Ray-Ban Meta
I have always been interested in wearable technology, specifically head-mounted devices.
Over the years, I have tested a wide variety of solutions, from professional hardware such as RealWear, to consumer products like Google Glass and the Apple Vision Pro.
In my opinion, these devices have the potential to change the way we interact with technology, via a concept known as ubiquitous (or pervasive) computing, where technology is integrated into everyday objects and made to appear seamlessly, anytime and anywhere.
If implemented effectively, this concept can make technology more personal, delivering value only when it is required, without becoming a distraction (a common issue with smartphones). For example, imagine walking up to a sign in a foreign language and having it automatically translated.
Unfortunately, every attempt to deliver a viable head-mounted device has failed in one key area: social acceptance.
For example, the term “glasshole” was coined to describe anyone brave enough to wear Google Glass in public. Ironically, Google Glass was arguably the least offensive head-mounted device when compared with products such as the Apple Vision Pro.
Recently, I have been testing the Ray-Ban Meta smart glasses, specifically the Wayfarer model. What makes this head-mounted device interesting is that it looks and feels just like an ordinary pair of sunglasses.
In fact, I recently wore them to a family day out and not a single person noticed anything different, unique or special.
The embedded technology is fairly simple, specifically:
- Camera (12MP)
- Microphone
- Speakers
- Touchpad
- Capture Button
- Bluetooth and Wi-Fi
The Ray-Ban Meta smart glasses link to the Meta View app (iOS and Android), providing the ability to review photos/videos, as well as trigger the voice assistant.
Currently, the voice assistant is fairly limited (reading notifications, messages, etc.). However, this is an area where I expect to see continuous innovation, as Meta looks to integrate more Generative AI capabilities, leveraging the Meta Llama 3.1 language model.
At that point, I would expect the Ray-Ban Meta smart glasses to provide access to rich conversational content, similar to what we see from ChatGPT and Google Gemini.
As you would expect from a Meta product, the camera is tightly integrated with Meta social services, such as Facebook, Instagram, WhatsApp and Facebook Messenger.
I don’t use any social networks, so I had feared the camera would be rendered useless. Thankfully, it can be used independently and privately, including offline (no Internet connection required).
The short video below shows my daughter and me going down a slide together. It was fun to capture this moment, whilst remaining completely present to enjoy the ride (no need to hold a smartphone).
The battery life of the Ray-Ban Meta smart glasses is rated at four hours from a single charge, and up to 36 hours when combined with the included charging case. In my experience, I have been able to use the sunglasses on day trips without any issues, although admittedly with limited/occasional usage of the smart features (notifications, messages, camera, etc.)
As the sunglasses are built by Ray-Ban, they are also very comfortable and excellent at protecting your eyes from the sun. They are even available with prescription lenses, meaning they deliver value even when the technology is disabled or powered off. This is not the case with most wearable technology, which becomes useless when not active.
Finally, I think it is worth mentioning the newly announced Even Realities G1 smart glasses, which, similar to the Ray-Ban Meta smart glasses, were designed as glasses first, smart glasses second.
Instead of delivering value via a microphone and speakers, they include a system called HAOS (Holistic Adaptive Optical System). In short, Micro-LED optical engines transmit content onto a pair of waveguide lenses, which project information approximately two metres ahead of you. This can include notifications, messages, navigation, and even real-time translations (e.g., subtitles).
I am excited to test the Even Realities G1 smart glasses, but can see a world where these complementary audio/video/display capabilities are combined. This could be an incredible combination, one that feels feasible without compromising the design.
In my opinion, this is a turning point for head-mounted devices, achieved by acknowledging that “less is more”. For example, in most scenarios, users do not need the “ultra high tech” capabilities of an Apple Vision Pro. Instead, specific capabilities that directly target and enrich real-world scenarios are more valuable, whilst also making these devices viable for daily use and, critically, socially acceptable.