

Update: Google Smart Glasses Prototype with Gemini AI Presented at the TED Conference “Humanity Reimagined” in Vancouver

Published on: April 22, 2025 / update from: April 22, 2025 - Author: Konrad Wolfenstein


Google smart glasses prototype with Gemini AI at the TED conference “Humanity Reimagined” in Vancouver. Image: Xpert.digital

Google's next-generation smart glasses: a look into the future

Google's smart glasses with display and Gemini features: a new era of extended reality

Google recently presented an advanced prototype of its new smart glasses with an integrated display and Gemini AI functions at the TED conference “Humanity Reimagined” in Vancouver, Canada. The TED2025 conference took place from April 7 to 11, 2025. The presentation marks significant progress in Google's AR strategy and could fundamentally change the way we interact with digital information in everyday life. After the commercial failure of the original Google Glass, the company is returning with a technologically more mature and everyday-ready solution that combines impressive AI functions with an unobtrusive design.


Design and hardware of the new Smart Glasses

The smart glasses Google presented at the TED conference are notable for their almost conventional eyeglass design. In contrast to the conspicuous first generation of Google Glass, the new smart glasses are barely distinguishable from ordinary glasses at first glance. They have a black frame and are very light despite the integrated technology, which points to a well-thought-out hardware concept.

An essential design feature is the close integration with the user's smartphone. Shahram Izadi, head of Android XR at Google, explained during the demonstration: “These glasses work with your smartphone, streaming content back and forth, which allows the glasses to be very lightweight and to access all of your phone's apps.” This architecture is a fundamental departure from the original Google Glass concept, which attempted to pack as much hardware as possible into the glasses themselves.

Detailed technical specifications were not announced during the demonstration. It was confirmed, however, that the glasses contain a miniature display onto which information can be projected. A camera is integrated for functions such as recognizing objects and surroundings, and a microphone accepts voice commands and enables communication with the Gemini AI assistant.

Technological integration and lightweight design

The crucial technological advance that enables the compact design lies in relocating the computing power. Thanks to the constant connection between smartphone and glasses, along with a permanent cloud connection, almost all of the processing can be offloaded to the smartphone. The glasses themselves mainly need a camera, a microphone, and display technology, which results in a much lighter and more unobtrusive design than previous attempts in the AR glasses segment.

Gemini AI as the heart of the smart glasses

The central element that distinguishes the new Google smart glasses from previous AR glasses is the integration of Google's advanced AI model Gemini. The AI functions run on the smart glasses under the name “Gemini Live”, a version specially optimized for real-time interaction.

Gemini acts in the smart glasses as a constant companion that can be activated by natural voice commands and draws on visual and auditory context information. The AI can “see” and “understand” the user's environment, which enables entirely new types of interaction. With the recent expansion of Gemini Live to include camera and screen-sharing functions as well as the “Talk About” feature, which enables conversations about images or videos, Google has already laid the technological foundations for the smart glasses on smartphones.

Gemini is integrated into the smart glasses via the Android XR platform, which was developed specifically for XR (extended reality) devices. Google describes Android XR as “an open, unified platform for XR headsets and glasses” that should offer users “more choice of devices and access to apps they already know and love”.


Demonstrated functions and areas of application

During the TED presentation, Shahram Izadi and Nishta Bathia, product manager for Glasses & AI at Google, demonstrated several impressive functions of the smart glasses.

Memory function: The digital memory assistant

A particularly highlighted function is “Memory”, in which Gemini uses the integrated camera to follow what is happening around the user and remember where everyday objects were placed. In the demonstration, Bathia asked Gemini: “Do you know where I last put the card?”, to which the AI replied: “The hotel card is to the left of the record”, while the glasses highlighted the shelf behind her.

This function uses a “rolling context window, in which the AI remembers what you see without you having to tell it what to look out for”. This represents significant progress in the contextual awareness of AI systems and could noticeably simplify everyday problems such as searching for misplaced objects.
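The idea of a rolling context window can be sketched in a few lines. The following Python is purely illustrative: the class name, window size, and keyword-based recall are invented for the example and say nothing about how Gemini actually stores or queries visual context.

```python
from collections import deque

class RollingContext:
    """Conceptual sketch of a rolling context window: recent observations
    are kept, and the oldest fall out automatically once the window is full."""

    def __init__(self, max_observations=100):
        # deque with maxlen evicts the oldest entry when a new one arrives
        self.window = deque(maxlen=max_observations)

    def observe(self, timestamp, description):
        """Record what the camera currently 'sees'."""
        self.window.append((timestamp, description))

    def recall(self, keyword):
        """Return the most recent observation mentioning the keyword, if any."""
        for timestamp, description in reversed(self.window):
            if keyword.lower() in description.lower():
                return timestamp, description
        return None

# Toy usage: a window of three observations.
ctx = RollingContext(max_observations=3)
ctx.observe(1, "hotel card on the shelf")
ctx.observe(2, "coffee mug on the desk")
ctx.observe(3, "keys in the bowl")
ctx.observe(4, "phone on the sofa")   # evicts observation 1

print(ctx.recall("keys"))   # → (3, 'keys in the bowl')
print(ctx.recall("card"))   # → None (already rolled out of the window)
```

The key property the article describes, remembering what was seen without being told what to watch for, corresponds here to recording everything and only filtering at recall time.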

Real-time translation and text processing

Another demonstrated function is real-time translation, in which the smart glasses can record, translate, or transcribe conversations as they happen. Izadi performed a live translation from Farsi to English, with the translation appearing as subtitles on the glasses' display. Translation into other languages such as Hindi was also mentioned.

The glasses can also scan and process text, which was demonstrated by scanning a book during the presentation. These functions could be of great use, especially for international travel, multilingual work environments, or studying foreign-language texts.

Creative and practical everyday functions

The presentation also showed the writing of a haiku, illustrating the creative possibilities of the AI integration. In addition, the glasses can act as a navigation aid, as Google shows in its blog. The Gemini AI can display route suggestions directly in the user's field of vision and thus make navigation in unfamiliar surroundings easier.

The Gemini-powered glasses are intended to automate numerous everyday tasks via simple voice commands, from managing the calendar and sending e-mails to writing documents or reserving a table in a nearby restaurant.

Android XR: The new operating system for XR devices

The smart glasses presented are based on Android XR, a new operating system that Google announced in December 2024. Android XR was developed specifically for XR devices and is intended to offer a unified platform for different types of AR and VR hardware.

Cooperation with Samsung and expansion of the XR ecosystem

Google is developing Android XR in cooperation with Samsung, which points to broader industry support for the new ecosystem. In addition to the smart glasses, a mixed-reality headset called “Project Moohan”, also based on Android XR and developed in partnership with Samsung, was presented at the TED conference.

The mixed-reality headset uses passthrough video technology “to create a seamless overlay of the real and virtual world”. It is considerably bulkier than the smart glasses and targets more immersive mixed-reality experiences, while the glasses are designed for all-day wear.

Developer platform and app ecosystem

Android XR should enable developers to create applications for various XR devices with familiar Android tools. For developers, it promises a unified platform with the ability “to develop experiences for a wide range of devices with familiar Android tools and frameworks”. With this strategy, Google could quickly build up an extensive app ecosystem for its XR devices, since developers can draw on their existing Android know-how.


Market perspectives and future prospects

Although Google did not announce an official release date for the smart glasses during the TED presentation, several indications suggest that a market launch may not be far off. The South Korean news portal ETNEWS recently reported that a product with the codename “Haean” is still scheduled for this year and that its features and specifications are currently being finalized.

Media reactions and hands-on experiences

The fact that Google has already granted a 20-minute hands-on session with the new glasses points to an advanced stage of development. The first reactions were positive, and there seems to be genuine enthusiasm for the technology. Reading between the lines, Google has in effect pre-announced the product, and the official launch should not be far away.

Positioning in the growing AR market

Compared to the competition, especially the Ray-Ban Meta Smart Glasses, which are already on the market, Google seems to be aiming for a technological lead with the integration of advanced AI functions and a display. The Ray-Ban Meta Smart Glasses do not have a display, which limits the interaction options.

The combination of inconspicuous design, powerful AI and practical everyday functions could give Google a competitive advantage and redefine the market for AR glasses. In particular, deep integration with the Android ecosystem and Google services could be a convincing argument for many users.

Google Gemini features: a milestone for extended reality

The presentation of the new Google smart glasses with a display and Gemini features at the TED conference marks an important milestone in the development of augmented reality technologies. After the failed first attempt with Google Glass, the company appears to have learned the lessons of the past and now presents a product that is both technologically more mature and more socially acceptable.

The unobtrusive design, the close integration with the smartphone, and the powerful AI functions could help this new generation of smart glasses achieve broad adoption. Functions such as “Memory”, real-time translation, and navigation offer concrete added value in everyday life.

The development of Android XR as a unified platform for various XR devices and the cooperation with Samsung point to Google's long-term strategy of building a comprehensive ecosystem for extended reality. Within this ecosystem, different devices, from lightweight glasses to powerful headsets, could work together seamlessly and thus enable entirely new ways of interacting with digital content.

Although questions about the exact market launch, price, and battery life remain open, the progress shown so far suggests that Google has taken an important step towards the future of extended reality with these smart glasses. It remains to be seen how this technology will develop and which new areas of application will emerge.
