Tech
NO WAY, SAN JOSE
Gremlin messing with engine (Getty Images)

Just because Silicon Valley is pumped to make this junk doesn’t mean people will buy it

Big Tech is fixated on making computers for your face and portable AI companions. Turns out there may be bad ideas in brainstorming.

For all its innovation, Big Tech can sometimes feel like it’s coalescing around a single idea. Lately, Silicon Valley’s best idea for the next big thing has been the same thing: an AI-powered device that isn’t a phone.

Last week, OpenAI announced that it’s buying io, former Apple design lead Jony Ive’s AI hardware startup. The plan so far is to make a “companion” device that sits on your desk, in your pocket, or maybe around your neck, and works in addition to your phone or computer.

Most everyone else has settled on AI-powered smart glasses as the device du jour.

Apple itself plans to release AI-enhanced glasses at the end of next year. Like its competitors’ devices, Apple’s glasses would have cameras, microphones, and speakers, allowing them to assess the outside world and let users communicate with Apple’s voice assistant.

Last week, Google announced Android XR, a framework meant to bring AI to face computers, including those made by Samsung, Warby Parker, and Gentle Monster.

Amazon has been working on AI glasses for internal use that would provide turn-by-turn navigation within buildings to help it decrease delivery times.

Meta, which partnered with Ray-Ban owner EssilorLuxottica, is the furthest along, having already sold more than 2 million pairs of its smart glasses since they came out in October 2023. The company says it expects to produce 10 million a year starting next year, and it’s expected to offer a more deluxe, more expensive version of the glasses by the end of 2025.

The idea is roughly the same across the board: create a new form factor through which you can interact with AI. It’s meant to give users access to their phones without having to pick them up and to overlay useful information on the outside world. But it all raises some important questions.

Why is everyone having the same idea at the same time?

It’s easy to see why Big Tech companies might want this. Anything that enables them to sell you devices or services, serve you ads, collect data, and generally be closer to your everyday life is a big win for them.

And the tech is more ready for prime time than it’s ever been.

The general idea of AI glasses has been floating around since before Google’s first foray with Glass more than a decade ago, but with the rise of generative AI and other advances, we’re much closer now to having the technology to make it actually work. AI is better able to correctly identify what you’re looking at, chips are more powerful, and batteries are smaller.

Society, too, is more ready.

“With Google Glass, when they first came out, everybody was creeped out that all of a sudden there’s potentially somebody with a camera on you all the time,” Counterpoint Technology senior analyst Gerrit Schneemann told Sherwood News. “But now with social media, the mainstream user is more aware of the constant nature of being potentially on camera, so there’s less of a friction there.”

It’s also possible that a separate AI device is a genuinely good idea. In other words, a type of carcinization is going on where the form factor is so useful that a number of unrelated things are evolving to look like it (and it’s not just tech companies wanting you to use their proverbial crab).

While much of what these smart devices are offering is already possible on your phone, glasses or other hands-free devices open up a world of possibilities.

“If you look at insert company here — it could be Google, it could be Meta, it could be Microsoft or Apple or Jony Ive, etc. — there is a recognition of the potential conveniences that are unique to this device that you can’t get anywhere else,” Ramon Llamas, a research director at IDC who specializes in wearables, told Sherwood.

The companies themselves are quick to advertise use cases for these devices.

As Shahram Izadi, vice president of AR/XR at Google, spun it at Google’s developer conference last week, “Unlike Clark Kent, you can get superpowers when you put your glasses on.”

Those powers were on display in a demo where product manager Nishtha Bhatia sent and received texts without her hands, instead using those hands to double high-five basketball player Giannis Antetokounmpo. She asked her glasses to look up a band and then play songs from that band. The glasses recalled a logo from her coffee cup earlier, made a calendar invite with someone at that coffee shop, and provided step-by-step directions to the coffee shop. Perhaps most impressive: with only a small glitch, Bhatia and Izadi were able to see real-time translations of what each person was saying in Hindi and Farsi, respectively.

Remember, of course, that demos at tech conferences are not real life.

A lot of the draw of these devices is their potential. In other words, they will presumably get more powerful down the line, and the possibilities for how people might eventually use them are more interesting than anything we’ve come up with to date.

And that promise is so tantalizing that Big Tech doesn’t want to be left out. Everyone is keenly aware that it’s been almost two decades since the iPhone launched, and they don’t want to be BlackBerry.

“The great thing about starting now and everybody else starting now is the gold standard hasn’t been determined yet,” Llamas said. “Right now there’s a huge land grab. If you are one of those companies to help guide and shape that market as it gets started and as it grows, you dictate the terms.”

That means these companies might be willing to tolerate lots of losses — looking at you, Meta — to jockey for position.

Do people really want this?

In a word, no. It’s not like consumers are out in the streets clamoring to put a computer on their face.

Rather, these devices seem a bit like a solution in search of a problem.

In one Google Gemini Live demo, an employee walks around her neighborhood misidentifying objects (calling a garbage truck a convertible, for instance), only to have the AI assistant correct her. (The bit was reminiscent of HBO’s “Silicon Valley,” when Jian-Yang’s app disappointingly could only tell you whether something was or wasn’t a hotdog.) One can certainly think of single or niche use cases — especially for people with visual impairments — but it’s much more difficult to imagine wide-ranging, mass-market, everyday reasons to shell out hundreds or thousands of dollars to wear a computer on your face.

As Llamas put it, “They’re still trying to figure out what spaghetti sticks to the wall.”

It doesn’t yet make a meal.

“If the only ‘benefit’ is to have the content of your phone in eyesight at all times, or with the voice prompt,” Schneemann said, “I don’t see anything where people are going to say, ‘This is going to be so revolutionary that I need to have it.’”

Big Tech companies, it seems, are hoping that if they build it people will come — and make it useful. They’re providing a nurturing playground and hoping the big ideas come later. The hope is that, much like with the iPhone’s App Store, developers will come up with the killer use case and build the next Uber or Instagram for AI glasses. But notably, the iPhone was pretty useful before the apps: people already needed a phone and a computer, and it was good enough to be both.

Then there’s the big issue of whether the tech will actually work outside the narrow confines of the demos, which have been scripted and vetted. Look no further than notable AI device flops like Rabbit’s R1 and Humane’s AI Pin to know that if a device fails to perform in the wild, no one will want it. Perhaps Ive’s ChatGPT-powered device will break the losing streak, but we won’t know until it comes out in late 2026.

Apple’s Vision Pro is also instructive. In its first year on the market, the immersive headset sold fewer than 500,000 units. While considered a technical marvel, the device has so far failed to generate any killer apps or use cases — at least not at its steep $3,500 price.

Perhaps a relatively cheap price tag will help smart glasses sell, but they also need people to actually want to use them and other companies to figure out how to build businesses around them.

For now, these devices will likely be relegated to early adopters.

Even Meta’s relatively successful Ray-Bans, which start around $300, are nowhere near mass market. While 2 million sold sounds like a lot, it’s nothing compared with the 232 million iPhones or even 34 million Apple Watches sold last year. It’s hard to believe that a deluxe version costing more than 3x as much would somehow push substantially more units. Also, remember the Metaverse? Just because Meta, née Facebook, wants something to be a big deal doesn’t mean it will actually pan out.

Ultimately, whether such devices become commercial successes will depend on whether they meet a number of thresholds: they will have to be fast enough, light enough, cheap enough, accurate enough, long-lasting enough, and useful enough for people to decide it’s worth toting around a whole new expensive device. We don’t know yet how much enough is enough, but we’re willing to bet we’re still at least a few years away.
