Google has officially made Gemini 2.0 Flash generally available, expanding access to developers and users across multiple platforms. Initially launched as an experimental model, Gemini 2.0 Flash has been refined to offer better performance and efficiency. Developers can now use the updated model via the Gemini API in Google AI Studio and Vertex AI, while general users can access it through the Gemini app on desktop and mobile.
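As a rough sketch of how a developer might reach the model through the Gemini API's REST interface: the code below builds a `generateContent` request against the public `generativelanguage.googleapis.com` v1beta endpoint. The payload shape follows Google's documented REST conventions, but the exact model identifier string (`"gemini-2.0-flash"`) and response structure are assumptions based on this announcement, not verified against current docs.

```python
import json
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_generate_request(model: str, prompt: str, api_key: str):
    """Build the URL and JSON body for a generateContent call.

    The payload shape follows the public Gemini REST API; the model
    name passed in (e.g. "gemini-2.0-flash") is assumed from the
    announcement.
    """
    url = f"{API_BASE}/models/{model}:generateContent?key={api_key}"
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str, api_key: str) -> str:
    """Send the request and return the first candidate's text."""
    url, body = build_generate_request(model, prompt, api_key)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Response parsing assumes the documented candidates/content/parts shape.
    return data["candidates"][0]["content"]["parts"][0]["text"]

# Example (requires a real API key; not executed here):
# print(generate("gemini-2.0-flash", "Summarize this release.", "YOUR_KEY"))
```

Google also ships official SDKs for this API, so a raw HTTP call like this is mainly useful for understanding the wire format or for environments where the SDK is unavailable.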
Alongside this release, Google is introducing Gemini 2.0 Pro Experimental, its most advanced model yet for coding and handling complex prompts. With a 2-million-token context window, this model enhances understanding and reasoning, allowing for deep analysis of large datasets. It also supports tool calls, including Google Search and code execution, making it a powerful option for developers. Gemini 2.0 Pro is available as an experimental model in Google AI Studio, Vertex AI, and the Gemini app for Advanced users.
Another addition is Gemini 2.0 Flash-Lite, Google’s most cost-efficient AI model to date. Designed to maintain the speed and affordability of its predecessor, 1.5 Flash, the new model delivers improved performance at the same price point. With a 1-million-token context window and multimodal capabilities, it enables cost-effective AI applications, such as generating captions for thousands of images at minimal expense. Flash-Lite is now in public preview on Google AI Studio and Vertex AI.
Google also confirmed that future updates will introduce more multimodal functionalities, including image generation and text-to-speech capabilities. More details, including pricing, are available on the Google for Developers blog. As Gemini 2.0 evolves, it signals Google’s push to stay competitive in the AI space while offering scalable solutions for both businesses and individual users.