Google Releases Gemma 4 Under Apache 2.0 — Open-Source AI Just Got Serious
Summary
Google has released Gemma 4, its latest family of open AI models, now under the commercially permissive Apache 2.0 license — a significant shift from the custom Google license used by previous Gemma versions. The release includes four model sizes: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture of Experts (MoE), and 31B Dense.
The performance numbers are impressive. The 31B model ranks as the #3 open model globally on the Arena AI text leaderboard, while the 26B model sits at #6 — both outperforming models 20 times their size. Gemma 4 supports multimodal input (text, image, and audio) and is optimized for deployment across everything from Raspberry Pis to multi-GPU servers.
The Gemma family has now surpassed 400 million downloads and 100,000 community-built variants since its initial launch in February 2024.
Source
Official announcement on Google Blog. Additional coverage from AI Business and Simon Willison.
Commentary
The Apache 2.0 move is the real story here. Google has been playing the “open but not really” game with Gemma for two years, and this finally removes the asterisk. For startups and developers building commercial products on top of open models, this eliminates a significant legal gray area.
The intelligence-per-parameter efficiency is also worth noting. A 26B model outperforming 500B+ models means the economics of self-hosted AI just got dramatically better. With Gemma 4 and Llama's latest releases, the gap between proprietary API models and what you can run on your own hardware continues to narrow, and that shift has implications for everything from data sovereignty to cost structures.
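To make the self-hosting economics concrete, here is a back-of-envelope comparison. Every number below (the per-token API price, the GPU rental rate, and the monthly workload) is a hypothetical placeholder for illustration, not a figure from the announcement or any provider's price list:

```python
# Back-of-envelope comparison of metered-API vs. self-hosted inference cost.
# All prices and the workload size are hypothetical assumptions.

def api_cost_per_month(tokens_per_month: float, price_per_million: float) -> float:
    """Cost of serving a workload through a per-token metered API."""
    return tokens_per_month / 1_000_000 * price_per_million

def self_hosted_cost_per_month(gpu_hourly_rate: float, hours: float = 730) -> float:
    """Flat cost of renting one GPU around the clock (730 h ~ one month)."""
    return gpu_hourly_rate * hours

# Hypothetical workload: 2 billion tokens per month.
workload = 2_000_000_000
api = api_cost_per_month(workload, price_per_million=3.00)    # assumed $3 / M tokens
hosted = self_hosted_cost_per_month(gpu_hourly_rate=2.50)     # assumed $2.50 / h GPU

print(f"API:         ${api:,.0f}/month")     # scales linearly with traffic
print(f"Self-hosted: ${hosted:,.0f}/month")  # fixed, regardless of traffic
```

The point of the sketch is the shape of the curves, not the exact numbers: the API bill grows linearly with token volume while a self-hosted GPU is a fixed cost, so the more capable a 26B-class model is per parameter, the lower the traffic level at which running it yourself wins.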