Published on: March 18, 2025 / last updated: March 18, 2025 - Author: Konrad Wolfenstein

Google AI model upgrade: the new Gemini 2.0 with Deep Research, Flash, Flash Thinking and Pro (experimental) - Image: Xpert.digital
Reasoning rethought: Gemini 2.0 lifts AI to the next level
Gemini Deep Research 2.0
Gemini Deep Research 2.0 has been accessible to all users worldwide since March 13, 2025. On this day, Google announced the broad availability of Deep Research, which now works with the improved Gemini 2.0 Flash Thinking Experimental model.
Important points for the availability of Gemini Deep Research 2.0:
- It can now be used free of charge in over 45 languages without a paid subscription.
- All Gemini users can use Deep Research for free several times a month.
- Gemini Advanced users continue to have unrestricted access to the function.
- The mobile version of Deep Research was introduced on February 18, 2025 for Android and iOS devices.
With this expansion, Google made Deep Research accessible to a broader user base and thus took an important step toward democratizing AI-supported research tools.
Suitable for:
- AI Deep Research tools put to the test: ChatGPT from OpenAI, Perplexity, or Google Gemini 1.5 Pro?
Gemini 2.0 Flash Thinking: The development of AI research and personalization
The recent leap in development of Google's AI assistant Gemini brings significant improvements in three core areas: Deep Research for all users, extended personalization functions, and more powerful reasoning through 2.0 Flash Thinking. These innovations change the way we interact with AI assistants and handle complex research tasks.
Deep Research: AI-based research for everyone
Deep Research, originally an exclusive feature for Gemini Advanced subscribers, is now available free of charge for all users in over 45 languages. This powerful function transforms Gemini into a personal research assistant that investigates complex topics independently and summarizes the results in clear, detailed reports.
From Gemini 1.5 Pro to 2.0 Flash Thinking
The decisive improvement is the switch from Gemini 1.5 Pro to the new 2.0 Flash Thinking Experimental model. This system uses a sophisticated chain of reasoning steps to break complex problems down into manageable intermediate steps, which significantly improves research quality in every phase, from planning through searching to analysis and reporting.
The research process in detail
Deep Research first transforms the search query into a personalized, multi-stage research plan. After the user approves this plan, the system begins to search the web autonomously and collect relevant information. Throughout the process, Gemini continuously refines its analysis, researching much the way a person would: it finds interesting information and then launches new searches based on those findings.
What sets Deep Research apart is the transparency of its thinking process: users can follow the system's reasoning and intervene if necessary. The end result is a comprehensive report with key findings and links to the original sources, created in a few minutes, replacing hours of manual research.
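The plan-then-search-then-refine loop described above can be sketched roughly as follows. This is a deliberately simplified illustration, not Google's implementation: `make_plan` and `search` are hypothetical stand-ins for the model's planning and web-search steps.

```python
# Hypothetical sketch of Deep Research's iterative loop (not Google's code):
# a query becomes a plan, each plan step triggers a search, and interesting
# findings spawn follow-up searches before a report is assembled.

def make_plan(query):
    # Stand-in: a real system would ask the model to draft a research plan.
    return [f"Find background on: {query}",
            f"Find recent developments on: {query}"]

def search(step):
    # Stand-in for a web search; returns (finding, follow_up_or_None).
    if step.startswith("Follow up"):
        return ("supporting detail", None)
    if "background" in step:
        return ("historical overview", "Follow up: identify primary sources")
    return ("latest news", None)

def deep_research(query, max_rounds=10):
    plan = make_plan(query)
    findings = []
    rounds = 0
    while plan and rounds < max_rounds:
        step = plan.pop(0)
        finding, follow_up = search(step)
        findings.append((step, finding))
        if follow_up:  # new searches launched based on earlier findings
            plan.append(follow_up)
        rounds += 1
    # Assemble a report listing each finding with its originating step.
    return "\n".join(f"- {f} (from: {s})" for s, f in findings)

report = deep_research("Gemini 2.0")
```

The `max_rounds` cap mirrors the practical need to bound an open-ended research loop; a real system would also rank sources and deduplicate findings.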
Increased personalization: Gemini understands individual needs
The second significant innovation is the experimental personalization function, which allows Gemini to adapt answers based on personal data from Google apps and services.
Integration with the Google ecosystem
With the consent of the user, Gemini can access the search history and other Google services in order to provide tailor-made answers. The system uses this data to better understand user activities and preferences and thus provide more relevant content.
Personalization begins with the integration of Google Search: Gemini can give recommendations based on previous search queries. In the near future, the system will also be able to draw context from other services such as Google Photos and YouTube, enabling even more comprehensive personalization.
Data protection and control
Google emphasizes the responsible handling of user data: Gemini only accesses the search history if this information is considered useful. The function is optional and can be deactivated at any time via a banner with the corresponding link. This personalization function is initially available for Gemini and Gemini Advanced users on the web, with expansion to mobile devices to follow soon.
2.0 Flash Thinking: The transparent thinking process
At the heart of these innovations is the 2.0 Flash Thinking Experimental model, which impresses with improved efficiency and speed and is now available to all users.
Transparency through visible thoughts
One of the outstanding properties of 2.0 Flash Thinking is its ability to disclose its thinking process. The model displays its reasoning as "thoughts" in the answer window, which enables a deeper understanding of how the AI works. This reasoning approach means that answers are checked several times before being output, leading to more precise and reliable results.
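In deliberately simplified code, this "think first, check, then answer" flow might look like the following. All functions here are hypothetical stand-ins, not Gemini's actual implementation; the point is only the shape of the flow: visible intermediate thoughts, plus a check pass before the answer is released.

```python
# Hypothetical illustration of a "thinking" model's output flow (stand-in
# logic, not Gemini's implementation): reasoning steps are produced first
# and shown to the user, and the draft answer is checked before output.

def reason(question):
    # Stand-in: produce intermediate "thoughts" plus a draft answer.
    thoughts = [f"Restate the question: {question}",
                "Identify what is actually being asked",
                "Draft an answer and compare it against the thoughts"]
    draft = f"Draft answer to: {question}"
    return thoughts, draft

def self_check(thoughts, draft):
    # Stand-in check: accept only if the draft is consistent with the thoughts.
    return draft.startswith("Draft answer") and len(thoughts) > 0

def answer_with_visible_thoughts(question, max_checks=3):
    for _ in range(max_checks):  # answers are checked before being output
        thoughts, draft = reason(question)
        if self_check(thoughts, draft):
            # The thoughts are returned alongside the final answer.
            return {"thoughts": thoughts,
                    "answer": draft.replace("Draft ", "Final ")}
    return {"thoughts": [], "answer": "Unable to verify an answer."}

result = answer_with_visible_thoughts("What changed in Gemini 2.0?")
```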
Performance and scope
The updated model offers impressive technical improvements:
- A context window of one million tokens for Gemini Advanced users, enabling the analysis of extensive texts
- Support for file uploads
- Improved performance in mathematics and science benchmarks
- Better consistency between thoughts and answers
Integration with apps and services
An important extension is the link with Gemini apps (formerly called extensions), which enables access to services such as Gmail, Google Calendar, Drive, Messages, and YouTube. This integration allows complex, multi-step queries in which the model grasps the overall context, breaks the task down into individual steps, and continuously evaluates its progress.
In the coming weeks, a Google Photos app will also become available that offers "Ask Photos" functionality: users can, for example, have photos of a trip analyzed to create an itinerary, or ask for specific information about images.
A new chapter for AI assistants
The introduction of Deep Research for all users, combined with the extended personalization functions and the powerful 2.0 Flash Thinking model, marks significant progress in the development of AI assistants. Google positions itself at the forefront of the competition and makes advanced AI functions accessible to a wider audience.
These innovations transform Gemini from a simple chatbot into a powerful personal assistant that can handle complex research tasks, understand individual needs, and make its reasoning transparent. Through integration with the Google ecosystem and increased personalization, Gemini is increasingly becoming a natural extension of the user, anticipating their needs and offering genuinely tailored support.
Suitable for:
- AI power from Google: AI Studio and Gemini, and how to use both optimally. Google AI riddles solved
Gemini 2.0: How Google's AI has evolved compared to previous versions
With the introduction of Gemini 2.0, Google has significantly advanced its AI model family. The new generation brings substantial improvements in speed, accuracy, and functionality over the previous versions. Below, the most important differences and innovations of Gemini 2.0 compared to its predecessors are analyzed in detail.
Performance improvements and main differences
Gemini 2.0 sets itself apart from its predecessors through several fundamental improvements. The most remarkable change is the increased speed: Gemini 2.0 Flash is about twice as fast as Gemini 1.5 Pro and surpasses it in numerous benchmarks. This increase in speed goes hand in hand with significantly improved accuracy across various tasks.
Precision in complex tasks has also been significantly increased. For example, Gemini 2.0 shows improved accuracy in tasks such as podcast summaries and detailed transcriptions. In addition, the model generates more nuanced and contextually relevant outputs, making it a more valuable tool for creative content creation and complex problem-solving.
Another important innovation is the introduction of extended multimodal capabilities. While Gemini 1.5 already offered multimodal functions, Gemini 2.0 can not only process text, image, audio, and video data, but can also analyze and understand it far more deeply.
Model variants from Gemini 2.0
Google has introduced Gemini 2.0 in different variants, each of which is optimized for specific applications:
Gemini 2.0 Flash
The basic model is now generally available and offers higher rate limits and improved performance. It is ideal for developers and can work efficiently with audio, image, video and text data. The model supports a context window of 1 million tokens.
Gemini 2.0 Pro Experimental
This is the most powerful model for complex tasks and coding. It has an extended context window of 2 million tokens, twice as much as the Flash variants. In internal benchmarks, Gemini 2.0 Pro achieves the best results in almost all areas.
Gemini 2.0 Flash-Lite
A new, inexpensive variant that still offers improved performance compared to Gemini 1.5 Flash. It is particularly interesting for developers looking for a cost-efficient solution without having to accept significant performance losses.
Gemini 2.0 Flash Thinking Experimental
This experimental model uses an additional thinking step before generating an answer, similar to OpenAI's o3 and DeepSeek-R1. It can also access external tools such as YouTube, Maps, and Google Search.
Extended technical skills
Multimodal processing
The multimodal capabilities of Gemini 2.0 are much more mature than in previous versions. The model can process and generate text, image, and audio data. This ability enables more complex applications such as medical diagnostics, where it can analyze and link written patient reports with imaging results.
Autonomous agents and tool use
Gemini 2.0 introduces the concept of autonomous agents that can carry out tasks independently by making decisions and planning actions. In Gemini 2.0 Flash, the Multimodal Live API and native tool use are particularly noteworthy, enabling the model to access and use external tools.
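The basic shape of native tool use can be sketched as a simple loop: the model emits a structured tool call, the host application executes it, and the result is fed back so the model can finish the task. The sketch below is a hypothetical stand-in (the `fake_model` function and the message format are invented for illustration, not the real Multimodal Live API).

```python
# Hypothetical sketch of a tool-use loop (stand-in logic, not the real
# Multimodal Live API): the model requests a tool, the host executes it,
# and the result is fed back so the model can produce a final answer.

def fake_model(messages):
    # Stand-in for the model: first request a tool, then answer with its result.
    last = messages[-1]
    if last["role"] == "user":
        return {"tool_call": {"name": "google_search",
                              "args": {"q": last["content"]}}}
    return {"text": f"Answer based on: {last['content']}"}

# Registry of tools the host makes available to the model.
TOOLS = {
    "google_search": lambda q: f"search results for '{q}'",
}

def run_agent(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_model(messages)
        if "tool_call" in reply:  # the model decided to use a tool
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]

answer = run_agent("Gemini 2.0 release date")
```

A production agent would add limits on loop iterations and validation of tool arguments; the loop structure itself is the core idea.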
Context window and token processing
An important technical difference is the size of the context window:
- Gemini 2.0 Flash and Flash-Lite: 1 million tokens for input
- Gemini 2.0 Pro: 2 million tokens for input
- All models: 8,192 tokens for output
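The limits above can be checked with a simple pre-flight budget calculation before sending a request. The sketch below uses the rough rule of thumb of about 4 characters per token; that ratio is only a common heuristic (a real client should use the API's token-counting facilities), and the model names in the dictionary are illustrative labels, not guaranteed API identifiers.

```python
# Rough pre-flight check against the context-window limits listed above.
# The ~4-characters-per-token ratio is only a heuristic, and the model
# names here are illustrative labels rather than official API identifiers.

CONTEXT_LIMITS = {
    "gemini-2.0-flash": 1_000_000,
    "gemini-2.0-flash-lite": 1_000_000,
    "gemini-2.0-pro": 2_000_000,
}
MAX_OUTPUT_TOKENS = 8_192  # same output limit for all variants

def estimate_tokens(text):
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def fits_context(model, prompt):
    """Return True if the prompt's estimated size fits the model's input window."""
    return estimate_tokens(prompt) <= CONTEXT_LIMITS[model]

# Example: a 60,000-line codebase at ~40 characters per line.
big_prompt = ("x" * 40 + "\n") * 60_000
```

Such a check is useful for deciding up front whether a large input (a long video transcript, a codebase) needs to be split or routed to the larger-window Pro variant.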
By comparison, Gemini 1.5 Pro was already able to process large amounts of data, including 2 hours of video, 19 hours of audio, codebases with 60,000 lines of code, or 2,000 pages of text.
Benchmark results in comparison
In benchmarks, Gemini 2.0 shows significant improvements compared to previous versions:
On mathematical tasks, Gemini 2.0 Pro achieves 91.8% on the MATH benchmark and 65.2% on HiddenMath, significantly more than the Flash variants. On OpenAI's SimpleQA test, the Pro model reaches 44.3%, while Gemini 2.0 Flash comes in at 29.9%.
The improvement is also evident in the analysis of complex content. When analyzing images, for example, Gemini 2.0 offers deeper analysis and more practical solutions compared to older versions.
Integration and availability
All Gemini 2.0 models are available on desktop and mobile devices via Google AI Studio and Vertex AI, as well as through Google's premium chatbot Gemini Advanced. Improved integration with Google services such as Google Search, Maps, and Workspace provides a unified user experience.
The new functions are also accessible to developers, and Google has become more flexible with API pricing. For example, the previous distinction between short- and long-context queries has been dropped, which can keep costs for mixed workloads (text and image) below those of Gemini 1.5 Flash despite the performance improvements.
Future developments
While Gemini 2.0 already shows significant progress, it should be noted that some announced functions are not yet available. Image and audio output as well as live video are expected to follow for Flash and Pro in the coming months. In addition, the flagship model "Gemini 2.0 Ultra" has not yet been announced.
Multimodal, fast, intelligent: what makes Gemini 2.0 unique
Gemini 2.0 represents an important evolutionary leap compared to its predecessor versions. With improved speed, expanded multimodal processing, larger context windows, and specialized model variants, Google offers an AI solution optimized for a wide variety of applications. The integration of autonomous agents and native tool use points to a paradigm shift in which AI systems can act increasingly independently and intelligently.
Your global marketing and business development partner
☑️ Our business language is English or German
☑️ NEW: Correspondence in your national language!
My team and I would be happy to serve as your personal advisors.
You can contact me by filling out the contact form or simply call me on +49 89 89 674 804 (Munich). My email address is: wolfenstein ∂ xpert.digital
I'm looking forward to our joint project.