Google unveils Gemini 2.5 Flash-Lite, its fastest and most affordable AI model

Google has announced stable releases of the Gemini 2.5 Pro and Gemini 2.5 Flash models, along with a preview of Gemini 2.5 Flash-Lite, which it claims is the fastest and most cost-effective model in the Gemini lineup. The developers emphasize that it is aimed at everyday tasks at the lowest cost per request.

Compared with Gemini 2.0 Flash-Lite, the updated version shows improved results in coding, math, science, and logical reasoning.
Google says the new model is well suited to large-scale, latency-sensitive tasks; it supports integration with tools such as Google Search and Think Mode, and offers a long context window of up to 1 million tokens.

A preview version of Gemini 2.5 Flash-Lite is already available for testing in Google AI Studio and Vertex AI. Using the model through the API costs $0.10 per million input text tokens ($0.50 per million input audio tokens) and $0.40 per million output tokens.
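To make the pricing concrete, here is a minimal sketch of a cost estimator based on the per-token rates quoted above. The function name and the example token counts are illustrative, not part of any official SDK:

```python
# Published preview pricing for Gemini 2.5 Flash-Lite (USD per 1M tokens)
PRICE_PER_M_INPUT_TEXT = 0.10   # incoming text tokens
PRICE_PER_M_INPUT_AUDIO = 0.50  # incoming audio tokens
PRICE_PER_M_OUTPUT = 0.40       # outgoing tokens

def estimate_cost(input_tokens: int, output_tokens: int, audio_tokens: int = 0) -> float:
    """Estimate the cost in USD of a single API request (hypothetical helper)."""
    return (
        input_tokens * PRICE_PER_M_INPUT_TEXT / 1_000_000
        + audio_tokens * PRICE_PER_M_INPUT_AUDIO / 1_000_000
        + output_tokens * PRICE_PER_M_OUTPUT / 1_000_000
    )

# Example: a request with 10,000 input tokens and 2,000 output tokens
print(round(estimate_cost(10_000, 2_000), 6))  # → 0.0018
```

At these rates, even a fairly large request costs a fraction of a cent, which is the "lowest cost per request" positioning Google is emphasizing.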
The article "Google unveiled the fastest and most affordable AI model, Gemini 2.5 Flash-Lite" was first published on ITZine.ru.