Zhipu AI Releases GLM-5.1 Under MIT License — 744B MoE Model Claims to Beat Claude Opus 4.6 and GPT-5.4
Summary
Chinese AI lab Zhipu AI has released GLM-5.1, a massive 744-billion-parameter mixture-of-experts (MoE) model, under the permissive MIT license. The company claims GLM-5.1 surpasses both Anthropic’s Claude Opus 4.6 and OpenAI’s GPT-5.4 on the SWE-Bench Pro benchmark — a test designed to evaluate expert-level software engineering capabilities.
Alongside the flagship model, Zhipu also released GLM-5V-Turbo, a multimodal variant optimized specifically for coding tasks. Both models are freely available for commercial use under the MIT license, making them among the most capable fully open-weight models available today.
Source
Compiled from Fazm AI and WhatLLM.
Commentary
A 744B MoE model under an MIT license is a power move. Benchmark claims should always be taken with a grain of salt, especially SWE-Bench, which has become the new “we beat GPT-4 on MMLU” marketing badge, but the sheer scale and openness of GLM-5.1 make it significant regardless of whether the leaderboard numbers hold up.
What makes this notable isn’t just the model itself but the trajectory of Chinese open-source AI. With DeepSeek, Qwen, and now GLM-5.1, there’s a clear strategy of releasing frontier-competitive models under maximally permissive licenses while Western labs increasingly gate their best work. For developers and enterprises who want to self-host cutting-edge models without API dependency or licensing headaches, the options from Chinese labs are becoming hard to ignore.
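To make the self-hosting point concrete, here is a minimal serving sketch using vLLM, a common choice for running large open-weight checkpoints. The repository id is an assumption (Zhipu’s actual Hugging Face name for GLM-5.1 may differ), and the parallelism setting is illustrative: a 744B-parameter MoE needs a multi-GPU node at minimum, even though only a fraction of the experts are active per token.

```python
# Hypothetical self-hosting sketch for an MIT-licensed open-weight release.
# The repo id below is a placeholder; check Zhipu's Hugging Face org for
# the real checkpoint name before running.
from vllm import LLM, SamplingParams

llm = LLM(
    model="zai-org/GLM-5.1",   # assumed repo id, not confirmed by the source
    tensor_parallel_size=8,    # illustrative; size to your node's GPU count
    trust_remote_code=True,    # GLM releases have shipped custom model code
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(
    ["Write a Python function that merges two sorted lists."],
    params,
)
print(outputs[0].outputs[0].text)
```

The point of the sketch is the licensing story, not the code: under MIT there is no gated download, no usage policy rider, and nothing stopping you from baking the weights into a commercial product.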