Google recently offered the public its first real-world look at its highly anticipated XR smartglasses during a TED Talk. While previous glimpses have been limited to polished promotional videos, this demonstration gave a more tangible sense of what the Gemini-powered smartglasses can actually do. Though exciting, the technology remains firmly in the “future” category.
Gemini’s Capabilities on Display
The majority of the 16-minute presentation, delivered by Shahram Izadi, Google’s VP of Augmented and Extended Reality, focused on demonstrating the smartglasses’ functionality. The glasses are built on the Android XR platform, which Google is developing with Samsung to bring Google Gemini to a range of XR hardware, including headsets and future form factors.
The demonstration showcased a pair of sleek, black-framed smartglasses equipped with a camera, microphone, and speaker, enabling Gemini to perceive the wearer’s surroundings. Connected to a phone, the glasses can handle calls, but their true distinction is the integrated color in-lens display. The demo highlighted Gemini’s ability to “remember” visual information: the assistant correctly recalled a book title and the location of a hotel keycard. This short-term visual memory suggests a range of applications, from memory aids and fact-checking to time management.
Gemini’s AI-powered vision also handled diagram explanation, text translation, and real-time spoken-language translation. A navigation demo to a local landmark showed off the in-lens display, with directions appearing seamlessly in the wearer’s view. Throughout the live demonstration, Gemini responded quickly and integrated smoothly with the smartglasses.
Android XR Headset Experience
Following the smartglasses demo, Android XR was shown running on a headset, offering a visual experience reminiscent of Apple’s Vision Pro. The wearer managed multiple windows with pinch gestures, but Gemini remained central, with the demonstration highlighting the AI’s ability to describe and explain on-screen content conversationally.
The Project Moohan headset.
Release Date Uncertainty
Izadi’s closing remarks painted a compelling vision of the future, one where AI converges with lightweight XR devices to provide instant access to information. He emphasized that AI is becoming more contextually aware, conversational, and personalized. While the presentation sparked excitement, it offered no concrete release dates.
A person holding the Ray-Ban Meta and Solos AirGo 3 smartglasses.
Reports suggest a 2026 launch for the Google-Samsung smartglasses, potentially later than earlier rumors indicated. While that timeline would still put them ahead of Meta’s Orion smartglasses, the delay raises the risk that rival AI-powered hardware will have advanced considerably in the meantime.
The Challenge of a Crowded Market
The demonstrated smartglasses appear to combine elements of Google Glass, Ray-Ban Meta, and other existing smartglasses with Google Gemini integration. However, the prolonged wait, coupled with a surge of AI-powered devices and Ray-Ban Meta alternatives, risks making Google’s offering feel less novel by the time it ships. The Android XR headset, Project Moohan, is expected to arrive sooner, with a launch anticipated in 2025. While the demonstrated technology holds immense promise, the long wait leaves potential users eager for a concrete release date and a chance to experience this augmented intelligence firsthand.