Google has rolled out Gemini 3 Flash, a faster and more efficient model in its Gemini 3 family, and made it the default model in the Gemini app worldwide. The update also extends to AI Mode in Google Search, where Gemini 3 Flash is now the standard model globally.
The launch builds on Google’s recent Gemini 3 expansion, which introduced Gemini 3 Pro in preview and announced Gemini 3 Deep Think, an advanced reasoning mode. With this update, Gemini 3 Flash replaces Gemini 2.5 Flash as the default option, allowing free users to access the latest Gemini 3 experience at no additional cost.
For developers, Gemini 3 Flash is available in preview through the Gemini API, including access via Google AI Studio, Vertex AI, Gemini Enterprise, Gemini CLI, Android Studio, and Google Antigravity. According to Google’s pricing documentation, Gemini 3 Flash is priced at $0.50 per million input tokens and $3.00 per million output tokens, positioning it below Pro-tier models for high-volume workflows.
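As a rough illustration of how those rates translate into per-request costs, here is a minimal sketch. It assumes only the prices quoted above ($0.50 per million input tokens, $3.00 per million output tokens); actual billing may differ once the model leaves preview.

```python
# Preview rates quoted in Google's pricing documentation (assumed here):
# $0.50 per 1M input tokens, $3.00 per 1M output tokens.
INPUT_PRICE_PER_M = 0.50
OUTPUT_PRICE_PER_M = 3.00


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the quoted preview rates."""
    return (
        (input_tokens / 1_000_000) * INPUT_PRICE_PER_M
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    )


# Example: a request with 10,000 input tokens and 2,000 output tokens
# costs roughly $0.005 + $0.006 = $0.011 at these rates.
print(f"${estimate_cost(10_000, 2_000):.3f}")
```

Because output tokens are priced six times higher than input tokens at these rates, high-volume workloads that generate long responses will see output cost dominate the bill.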
Google claims that, compared with Gemini 2.5 Pro, Gemini 3 Flash uses 30% fewer tokens on average for common tasks and, according to third-party benchmarks, delivers performance that is up to three times faster.
With its improved speed, efficiency, and competitive pricing, Gemini 3 Flash is positioned as a practical upgrade for both everyday users and developers building large-scale AI-powered applications.