Google announced several new features for its Gemini 2.5 family of artificial intelligence (AI) models at Google I/O 2025 on Tuesday. The Mountain View-based technology giant introduced Deep Think, an enhanced reasoning mode powered by the Gemini 2.5 Pro model. It also unveiled native audio output, a new capability that generates more natural, human-like speech and will be available via the Live application programming interface (API). Additionally, the company is bringing thought summaries and thinking budgets to the latest Gemini models for developers.
Gemini 2.5 Pro Tops the LMArena Leaderboard
In a blog post, the technology giant detailed all the new capabilities and features that will ship to the Gemini 2.5 AI series over the next few months. Earlier this month, Google released an updated version of Gemini 2.5 Pro with improved coding capabilities. The updated model also ranked first on the WebDev Arena and LMArena leaderboards.
Now, Google is enhancing the AI model with a Deep Think mode. The new Gemini 2.5 Pro mode can consider multiple hypotheses before responding. The company says it uses a different reasoning technique compared to the thinking modes of older models.
Based on internal testing, the technology giant shared the Deep Think mode's benchmark performance across different parameters. Notably, Gemini 2.5 Pro Deep Think is claimed to score 49.4 per cent on the 2025 USAMO (United States of America Mathematical Olympiad), one of the hardest mathematics benchmarks. It also scores competitively on LiveCodeBench v6 and MMMU.
Deep Think is currently in testing, and Google says it is conducting safety evaluations and getting input from safety experts. For now, Deep Think is only available to trusted testers via the Gemini API. There is no word on its release date.
Google has also announced new capabilities for the Gemini 2.5 Flash model, which was released just a month ago. The company said the AI model has improved on key benchmarks for reasoning, multimodality, code, and long context. It is also more efficient, using 20-30 per cent fewer tokens.
The new version of Gemini 2.5 Flash is currently available in preview for developers via Google AI Studio. Enterprises can access it via the Vertex AI platform, and individuals can find it in the Gemini app. Notably, the model will be generally available for production in June.
Developers with access to the Live API will now get a new feature with the Gemini 2.5 series of AI models. The company is offering a preview version of native audio output, which can generate speech in a more expressive, human-like way. Google said the feature lets users control the tone, accent, and style of the generated speech.
The early version of the capability comes with three features. The first is affective dialogue, where the AI model can detect emotion in the user's voice and respond accordingly. The second is proactive audio, which enables the model to ignore background conversations and respond only when spoken to. Finally, thinking in the Live API allows generated speech to draw on Gemini's reasoning capabilities to verbally answer complex queries.
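For developers curious what enabling these three audio features might look like, the sketch below assembles an illustrative Live API session configuration. This is a minimal sketch, not confirmed documentation: the field names (`enable_affective_dialog`, `proactive_audio`, `speech_config`) and the voice name are assumptions derived from the announced feature names.

```python
# Hypothetical sketch of a Live API session configuration enabling the three
# announced native audio features. All field names below are assumptions
# drawn from the feature names, not a published schema.

def build_live_config(voice: str = "Puck") -> dict:
    """Assemble an illustrative Live API session configuration dict."""
    return {
        "response_modalities": ["AUDIO"],          # request spoken output
        "speech_config": {"voice": voice},         # control tone/style of speech
        "enable_affective_dialog": True,           # react to emotion in the user's voice
        "proactivity": {"proactive_audio": True},  # ignore background conversations
    }

config = build_live_config()
print(sorted(config.keys()))
```

A real session would pass a configuration like this when opening the Live API connection; the dict here only shows the shape such a request could take.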
Separately, the 2.5 Pro and Flash models will display thought summaries in the Gemini API and Vertex AI. These are essentially the model's raw thought process, which was previously visible only in Gemini's thinking models. Now, Google will display an organised summary with each response, including headers, key details, and information about model actions.
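In practice, a response that includes thought summaries would need to be separated into the summary portion and the final answer. The helper below is a hedged sketch of that step, assuming each response part carries a boolean flag (here called `thought`) marking summary content; that field name is an assumption, not confirmed API documentation.

```python
# Hypothetical sketch: splitting a response's parts into thought-summary text
# and answer text. The "thought" flag on each part is an assumed field name.

def split_thoughts(parts: list[dict]) -> tuple[list[str], list[str]]:
    """Return (thought_summary_texts, answer_texts) from response parts."""
    thoughts = [p["text"] for p in parts if p.get("thought")]
    answers = [p["text"] for p in parts if not p.get("thought")]
    return thoughts, answers

# Mock response parts, for illustration only.
parts = [
    {"text": "Plan: compare both options, then pick one.", "thought": True},
    {"text": "Option B is the better fit because it scales."},
]
thoughts, answers = split_thoughts(parts)
print(len(thoughts), len(answers))
```

This kind of separation lets an application show the summary in a collapsible panel while streaming only the answer to the user.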
In the coming weeks, developers will also be able to use thinking budgets with Gemini 2.5 Pro. This will let them cap the number of tokens the model consumes on reasoning before responding. Finally, computer-use agent capabilities from Project Mariner will be added to the Gemini API and Vertex AI soon.
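A thinking budget would presumably be passed as a generation parameter in the API request. The sketch below builds such a request payload; the nested field names (`generationConfig.thinkingConfig.thinkingBudget`) are assumptions based on the announced feature, not confirmed documentation, and the model name is a placeholder.

```python
import json

# Hypothetical sketch: building a generateContent request that caps the
# model's reasoning tokens. The thinkingConfig field names are assumptions.

MODEL = "gemini-2.5-pro"  # placeholder model identifier

def build_request(prompt: str, thinking_budget: int) -> dict:
    """Build an illustrative request payload with a reasoning-token cap."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # Limit how many tokens the model may spend thinking before it answers.
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

payload = build_request("Outline a three-step migration plan.", 2048)
print(json.dumps(payload["generationConfig"]))
```

Lower budgets trade reasoning depth for latency and cost, so developers could tune this value per request rather than per model.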