

Google's Smart Glasses strategy with Project Astra and Gemini Live: The new era of Google's visual AI assistant

Published on: March 9, 2025 / updated: March 9, 2025 - Author: Konrad Wolfenstein


Smart Glasses Made Different: Google's vision for a new era of technology

Paradigm shift in sight: Google's path to smart AI in everyday life

Google is on the threshold of a significant technological evolution in wearable technology. The latest developments in Gemini Live, combined with concrete plans for new smart glasses, point to an upcoming paradigm shift that could fundamentally change the way we interact with artificial intelligence. The integration of visual recognition capabilities into Gemini Live on smartphones forms the technological basis for upcoming smart glasses and marks a strategic turning point in Google's vision of ubiquitous AI assistance in everyday life.


The second attempt: Google's return to the Smart Glasses market

Google's first attempt in the smart glasses market dates back more than a decade. Google Glass, presented in 2012 and discontinued for consumers in 2015, was ahead of its time in many ways. At only 42 grams, the glasses were relatively light, but they suffered from practical limitations such as a battery life of only two to three hours, clearly too little for a productive working day. In addition, the decisive element that could make today's smart glasses revolutionary was still missing at the time: advanced generative AI.

After Google Glass's commercial failure, the company focused on enterprise applications and largely withdrew from the consumer market. In the meantime, the underlying technology continued to develop. The acquisition of North, maker of the Focals smart glasses, already signaled a continued interest in this product category a few years ago. The new smart glasses Google is now developing are expected to be significantly slimmer and more comfortable than the Focals, taking the lessons of previous generations into account.

Current reports indicate that Google is in negotiations with established eyewear manufacturers such as EssilorLuxottica, the group behind Ray-Ban. This strategic decision could help Google avoid one of Google Glass's main problems: the lack of fashionable acceptance. Ray-Ban already has experience with smart glasses through its collaboration with Meta. Such partnerships could be crucial to positioning the new smart glasses as a fashion accessory rather than a conspicuous technology demonstration.


Project Astra: The basis for Google's visual AI assistant

At the center of Google's smart glasses strategy is “Project Astra”, an ambitious research project aimed at developing a universal visual AI assistant. Google demonstrated Project Astra for the first time at the I/O developer conference in May 2024, with an impressive technical demonstration that illustrated the potential of visual AI assistance.

In a significant organizational restructuring, Google recently integrated the team behind Project Astra into the Gemini team. This merger underlines the central importance of Gemini for Google's smart glasses vision and shows that both technologies are viewed as part of a unified strategy. Within the Gemini team, the Astra team is to work specifically on the Live functionalities and thus further expand Gemini's visual component.

The technological basis of Project Astra has advanced remarkably. In contrast to Google Glass, which a decade ago was more a vision of the future than a mature product, Project Astra builds on technical capabilities that are already available today. The demonstration at Google I/O showed how a user can look at his surroundings through smart glasses and talk about them with an AI assistant at the same time. What was considered wishful thinking eleven years ago is technically feasible today.

Gemini Live: The bridge between smartphone and smart glasses

The latest developments in Gemini Live form a decisive bridge between current smartphone applications and the upcoming smart glasses. In March 2025, Google announced significant extensions for Gemini Live that above all improve the visual capabilities of the AI assistant.

The new functions include live video input and screen sharing, which allow users to talk with Gemini in real time about what they see. These functions are powered by Gemini 2.0 Flash, a version of the multimodal model specially optimized for fast, mobile applications. From the end of March 2025, they will be available to Gemini Advanced subscribers on Android devices as part of the Google One AI Premium plan.

The new capabilities are remarkably intuitive to use: users can point their smartphone camera at an interesting object and ask Gemini about it directly. The AI assistant analyzes the video feed in real time and provides contextual information. Users can also share their screen with Gemini and discuss what they see during a smartphone session with the AI assistant.
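The interaction described above, a camera frame plus a spoken or typed question, can be approximated today with the public Gemini API. The following is a minimal sketch, not Google's Gemini Live implementation: the streaming protocol behind Gemini Live is not public, so this shows only the one-shot multimodal case, and the payload shape follows the documented Gemini `generateContent` REST format. The image bytes here are a placeholder, not a real camera frame.

```python
import base64
import json


def build_vision_request(image_bytes: bytes, question: str) -> dict:
    """Assemble a Gemini generateContent payload that pairs an image with a question.

    The structure follows the public Gemini REST API (v1beta): a "contents"
    list whose parts mix a text prompt with base64-encoded inline image data.
    This is an approximation of the camera-plus-question interaction the
    article describes, not the actual Gemini Live streaming protocol.
    """
    return {
        "contents": [{
            "parts": [
                {"text": question},
                {"inline_data": {
                    "mime_type": "image/jpeg",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ]
        }]
    }


# Example: a stand-in camera frame plus a question about what is in view.
frame = b"placeholder-jpeg-bytes"  # in practice, one JPEG frame from the camera
payload = build_vision_request(frame, "What object am I looking at?")
print(json.dumps(payload)[:80])
```

In a real client, this payload would be POSTed to the `generateContent` endpoint with an API key; for a continuous Gemini Live-style experience, frames would instead be streamed, which the public one-shot API shown here does not cover.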

These functions should not be regarded as isolated smartphone features, but rather as a direct forerunner of the planned smart glasses functionality. Google itself draws this connection clearly: "Gemini Live with its visual component is practically the interface that Google will soon want to use for smart glasses". The decisive difference between the smartphone application and smart glasses ultimately lies only in whether the smartphone's display or the camera image of a pair of smart glasses is shared; the technological basis is identical.

The upcoming Smart Glasses from Google

Google's new smart glasses are expected to be a significant advance over previous experiments. Gemini will act as the central element, constantly available to users both audibly and visually. The user's field of vision is to be permanently shared with Gemini, allowing the AI assistant to interact with the user in the real world.

As part of the “Gemini Sight” project, which was submitted to the Gemini API Developer Competition, concepts for AI-supported smart glasses were presented that could particularly help blind and visually impaired people. These AI-powered smart glasses are intended to integrate seamlessly with Google services and automate a variety of tasks through simple voice commands, from calendar management to sending e-mails to restaurant reservations.

A select circle of people has already had the opportunity to gain hands-on experience with the Gemini AI glasses. The reports indicate that the glasses actually deliver the Google Glass experience that Google could not realize more than a decade ago. Technological advances, especially in generative AI, make possible today what was still a distant prospect back then.

Integration with Google services and multimodal skills

A central aspect of the upcoming smart glasses is their comprehensive integration with existing Google services. Gemini can already be linked to numerous Google apps and services, including Gmail, Google Drive, Google Docs, Google Maps, YouTube, Google Flights and Google Hotels. These integrations enable the assistant to find relevant information faster and to automate complex tasks.

Gemini Live's multimodal capabilities are being continuously expanded. Originally available only in English, Gemini now supports over 45 languages, including German. This linguistic versatility is an important step toward the global market launch of the smart glasses. Particularly remarkable is the ability to hold conversations in up to two languages on the same device and even switch languages mid-sentence.

Gemini Live's visual capabilities go far beyond simple image analysis. Users can upload photos or watch YouTube videos and talk about them with Gemini at the same time. For videos, Gemini can summarize the content and answer questions about it, for example about a product review on YouTube. For PDF files, the AI can not only summarize the content and answer questions, but even create quizzes to test users' knowledge.


Market potential and social effects

The market potential for AI-based smart glasses is enormous. While Google Glass failed primarily because of privacy concerns and limited practical applicability, the integration of Gemini could at least partially overcome these challenges. The practical use cases are diverse, ranging from everyday aids and specialized professional applications to assistance systems for people with disabilities.

Nevertheless, important questions remain open, especially in the area of privacy. Permanently sharing one's field of vision with an AI raises new ethical and legal questions that Google must address in order to achieve broader acceptance than Google Glass did. Cooperation with established eyewear manufacturers could help make the technology more subtle and socially acceptable.

Google faces intense competition from other technology companies in the field of extended reality. While Apple is pursuing a more comprehensive XR solution with the Vision Pro, Google is focusing on a lighter, more everyday form of augmented reality with its smart glasses. Google has also announced the development of Android XR, a platform intended to support both smart glasses and more comprehensive VR headsets.

Gemini Live as a harbinger of a new era of human-AI interaction

The integration of visual capabilities into Gemini Live marks a decisive step in Google's long-term vision of omnipresent AI assistance. What begins on smartphones will probably culminate in the upcoming smart glasses. The technological foundations are already in place, and Google is using the wide distribution of smartphones as a testing ground for functions that are later to be implemented in smart glasses.

The development of Gemini Live illustrates Google's strategic approach: new AI functions are first introduced, tested and optimized on smartphones before they are integrated into specialized hardware such as smart glasses. This step-by-step approach could help Google avoid the mistakes of the past and develop a product that is both technologically mature and socially accepted.

The coming months will show how quickly Google moves from the extended Gemini Live functions on smartphones to a full-fledged smart glasses solution. The organizational restructuring, with the integration of the Project Astra team into the Gemini team, suggests an acceleration of this development. With the introduction of Gemini Live's visual functions at the end of March 2025, important foundations are being laid that will pave the way for Google's next big step in wearable AI technology.
