According to 1M AI News monitoring, Google has officially released the open-source model family Gemma 4, comprising four models of different sizes, all under the Apache 2.0 license. Google describes the licensing choice as a response to community feedback. Under Apache 2.0, developers may use the models commercially, modify them, and redistribute them without additional restrictions. Hugging Face co-founder and CEO Clément Delangue calls the release a "huge milestone."
The four models, 31B, 26B, E4B, and E2B, are each designed for a different class of hardware.
On the text leaderboard of Arena AI, an anonymous head-to-head evaluation platform for large models, the 31B model ranks third among open-source models worldwide, and the 26B ranks sixth. Google describes it as a model that "surpasses those 20 times its size." The family is built on the same research and technology as Gemini 3.
Core capabilities include multi-step reasoning and planning, native function calling and JSON structured output (for agent workflows), code generation, and image and video understanding across the whole series, along with native training coverage of more than 140 languages. The edge models support a 128K context window, while the large models support up to 256K. E2B and E4B, co-optimized with the Google Pixel teams, Qualcomm, and MediaTek, can run on devices such as phones, the Raspberry Pi, and the NVIDIA Jetson Orin Nano. Android developers can prototype agent applications via the AICore Developer Preview, in preparation for compatibility with the future Gemini Nano 4.
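To make the function-calling and structured-output capability concrete, here is a minimal sketch of the agent-side loop such features enable: the model emits a tool call as guaranteed-valid JSON, and the application parses, validates, and dispatches it. The tool name, schema, and reply format below are illustrative assumptions, not Gemma 4's actual wire format.

```python
import json

# Hypothetical tool registry: name -> required arguments and implementation.
# The "get_weather" tool is a made-up example, not a Gemma API.
TOOLS = {
    "get_weather": {
        "required": ["city"],
        "fn": lambda city: f"Sunny in {city}",  # stand-in for a real API call
    }
}

def dispatch(model_reply: str) -> str:
    """Parse a JSON function call emitted by the model and run the tool."""
    call = json.loads(model_reply)  # structured output: the model guarantees valid JSON
    tool = TOOLS[call["name"]]
    args = call["arguments"]
    missing = [k for k in tool["required"] if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return tool["fn"](**args)

# A reply a structured-output model might plausibly produce:
reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(reply))  # Sunny in Paris
```

In a real agent workflow, the tool result would be fed back to the model for the next reasoning step; the value of native JSON output is that the parsing step above never has to recover from malformed text.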
On the ecosystem side, the family shipped with day-one support in major frameworks including Hugging Face, vLLM, llama.cpp, MLX, Ollama, NVIDIA NIM, LM Studio, and Unsloth. The models can be tried directly in Google AI Studio (31B and 26B) and in the AI Edge Gallery (E4B and E2B). Since its first release, the Gemma series has been downloaded more than 400 million times and has spawned over 100,000 community-derived variants.