Google launches Gemma 4 open AI models based on Gemini 3
Google launches Gemma 4 open AI models with multimodal features, flexible licensing and support for over 140 languages.
Google has unveiled Gemma 4, a new family of open-weight artificial intelligence models derived from the research and technology behind its Gemini 3 systems. The release marks a notable step in extending advanced AI capabilities beyond proprietary platforms and into the wider developer community.
The announcement follows the launch of Gemini 3 Pro late last year, which represented a significant advancement in Google’s large language model development. With Gemma 4, the company is seeking to replicate some of that progress in an open format, allowing developers and organisations to access, adapt and deploy the models more freely.
Gemma 4 is positioned as a flexible, scalable solution designed to cater to a range of hardware capabilities. By offering multiple model sizes, Google aims to make advanced AI tools accessible across devices, from smartphones to high-performance computing systems.
A range of models designed for varied computing needs
The Gemma 4 family consists of four models, each defined by the number of parameters they contain. Parameters are the adjustable values within a model that influence how it generates outputs, and they typically correlate with performance and computational requirements.
For smaller, edge-based devices such as smartphones, Google has introduced 2-billion- and 4-billion-parameter “Effective” models, designed to deliver strong performance while remaining efficient on less powerful hardware. At the higher end, the company offers a 26-billion-parameter “Mixture of Experts” model and a 31-billion-parameter “Dense” model, both intended for more demanding applications on robust systems.
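The figures below are not from Google’s announcement, but a rough back-of-envelope illustration of why parameter count correlates with hardware requirements: the raw storage needed for a model’s weights scales with the number of parameters times the bytes used per parameter.

```python
def weight_size_gb(num_params: float, bytes_per_param: float) -> float:
    """Rough storage estimate for model weights alone, ignoring runtime overhead."""
    return num_params * bytes_per_param / 1e9

# A 4-billion-parameter model stored at 16-bit (2-byte) precision:
print(round(weight_size_gb(4e9, 2), 1))   # 8.0 GB
# The 31-billion-parameter model at the same precision:
print(round(weight_size_gb(31e9, 2), 1))  # 62.0 GB
```

In practice, on-device deployments typically quantise weights to 8-bit or 4-bit precision, shrinking these figures considerably, which is part of how multi-billion-parameter models fit on smartphones.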
Google claims that Gemma 4 achieves “an unprecedented level of intelligence-per-parameter”, suggesting the models can deliver competitive performance without the scale typically required by high-end AI systems. According to the company, the 31-billion and 26-billion variants secured third and sixth positions, respectively, on Arena AI’s widely referenced text leaderboard, outperforming models that are significantly larger.
The models are also designed with multimodal capabilities. All variants can process both images and video, enabling tasks such as optical character recognition and visual analysis. The smaller models extend this functionality further by supporting audio input and speech understanding, broadening their potential use cases across mobile and embedded environments.
Expanded capabilities and multilingual support
Beyond their core architecture, the Gemma 4 models introduce a range of features aimed at practical deployment. One of the key capabilities highlighted by Google is the ability to generate code offline. This allows developers to work without an active internet connection, which could be particularly useful in secure or remote environments.
The models have also been trained on more than 140 languages, reflecting an effort to support global usage and improve accessibility. This multilingual capability enables developers to build applications for diverse audiences without requiring extensive localisation work.
Google’s approach suggests a focus on versatility, enabling the models to be applied across industries such as education, customer service, software development and content creation. By combining multimodal inputs with multilingual understanding, the company is positioning Gemma 4 as a general-purpose AI toolkit rather than a narrowly defined solution.
The ability to operate across different types of data, including text, audio and visual inputs, aligns with broader industry trends towards more integrated AI systems. As organisations increasingly seek unified tools to handle complex workflows, models like Gemma 4 may help simplify development processes.
Open licensing aims to boost developer freedom
A significant aspect of the Gemma 4 release is its licensing model. Google has chosen to distribute the models under the Apache 2.0 licence, a widely used permissive open-source licence that allows modification, distribution and commercial use with minimal restrictions.
This represents a shift from the company’s earlier Gemma releases, which were distributed under a more restrictive licence. By adopting Apache 2.0, Google is granting developers greater autonomy to customise the models and integrate them into their own systems.
In a statement, the company said: “This open-source licence provides a foundation for complete developer flexibility and digital sovereignty; granting you complete control over your data, infrastructure and models. It allows you to build freely and deploy securely across any environment, whether on-premises or in the cloud.”
The move is likely to appeal to organisations seeking greater control over their AI deployments, particularly in sectors where data privacy and regulatory compliance are critical. Open-weight models also enable experimentation and innovation, as developers can fine-tune systems to meet specific requirements.
To support adoption, Google has made the Gemma 4 model weights available through several platforms, including Hugging Face, Kaggle and Ollama. This ensures that developers can access the models using familiar tools and workflows.
With the release of Gemma 4, Google is reinforcing its presence in the open AI ecosystem while continuing to develop its proprietary offerings. The strategy reflects a broader effort to balance commercial interests with community engagement amid intensifying competition in the AI sector.