
Amazon and Apple are struggling with their AI voice assistants

Two separate pieces on the AI assistants suggest good versions are a ways off.

Rani Molla

As we wrote last year, voice assistants have seemingly gotten worse as Big Tech companies try to transition the technology underlying them to compete in a ChatGPT world. The idea is to supercharge Apple’s Siri and Amazon’s Alexa with AI so they can finally become true assistants, advancing beyond just reliably playing music, setting timers, and telling you the weather.

The problem lies in the switch from natural language processing — a rigid but consistent system that detects what you’re trying to say using preprogrammed intent models — to large language models, which are more expansive and generate probable answers by analyzing vast amounts of text. In the process, something seems to have broken, and these tools are now often worse at the basic tasks they used to handle well.
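
To make that distinction concrete, here is a minimal, purely illustrative sketch, not Alexa's or Siri's actual code: an old-style assistant matches the utterance against a fixed table of preprogrammed intents and either acts or refuses, while an LLM-backed assistant hands the raw utterance to a model and returns whatever it generates (the call_llm function below is a hypothetical stand-in for a real model API).

```python
# Purely illustrative sketch of the two architectures described above;
# it is not how Alexa or Siri is actually implemented.
import re

# Old approach: a fixed table of preprogrammed intents. Rigid, but deterministic,
# so "set a timer for 15 minutes" always reaches the same handler.
INTENT_PATTERNS = {
    "set_timer": re.compile(r"set a timer for (\d+) minutes?"),
    "get_weather": re.compile(r"weather in ([a-z ]+)"),
}

def handle_with_intents(utterance: str) -> str:
    text = utterance.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match and intent == "set_timer":
            return f"Timer set for {match.group(1)} minutes."
        if match and intent == "get_weather":
            return f"Fetching the weather for {match.group(1).strip()}."
    # Requests outside the table fail loudly instead of being guessed at.
    return "Sorry, I can't help with that."

# New approach: pass the raw utterance to a large language model and return
# whatever it generates. Far more flexible, but the reply is a probable
# continuation rather than a guaranteed action, which is where hallucinated
# facts and ignored commands can creep in. call_llm is a hypothetical
# stand-in for a real model API.
def handle_with_llm(utterance: str, call_llm) -> str:
    prompt = f"You are a helpful voice assistant. Respond to: {utterance}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(handle_with_intents("Please set a timer for 15 minutes"))
    print(handle_with_intents("Cancel my alarm"))  # not in the intent table
```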

It turns out, things aren’t really getting better with time. Behold: two recent pieces on the state of Apple’s and Amazon’s voice assistants:

The New York Times’ Kevin Roose:

“The good news is that the new Alexa+ is, in fact, more fun to talk to than the old one, with more realistic synthetic voices and a more humanlike cadence. (There are eight voices to choose from; I used the default setting, an upbeat female voice.)

And I liked some of Alexa+’s new capabilities, such as booking a table at a restaurant and generating long stories and reading them to my 3-year-old.

The new Alexa is also better at handling multistep requests. ‘Set three kitchen timers for 15, 25 and 45 minutes’ and ‘write a one-day itinerary for a trip to San Diego and send it to my email’ were two prompts that worked for me. And Alexa+ doesn’t require you to say its wake word every time you talk to it, so you can go back and forth or ask it follow-up questions, which is a nice change.

The bad news is that despite its new capabilities, Alexa+ is too buggy and unreliable for me to recommend. In my testing, it not only lagged behind ChatGPT’s voice mode and other A.I. voice assistants I’ve tried, but was noticeably worse than the original Alexa at some basic tasks. When I asked Alexa+ to cancel an alarm the other morning — a request I had made to the old Alexa hundreds of times with no issues — it simply ignored me.

When I emailed a research paper to alexa@alexa.com, in order to hear Alexa+ summarize it while I washed the dishes, I got an error message saying the document couldn’t be found.

Alexa+ also hallucinated some facts and made some inexplicable errors. When I asked it to look up Wirecutter’s recommended box grater and add it to my Amazon cart, it responded that ‘according to Wirecutter, the best box grater is the OXO Good Grips Box Grater.’ Wirecutter’s actual box grater pick is the Cuisipro 4-Sided Box Grater. Luckily, I caught the mistake before ordering. When I asked Alexa+ to walk me through installing a new A.I. model on my laptop, it got tripped up and started repeating, ‘Oh, no, my wires got crossed.’”

Bloomberg’s Mark Gurman:

“The plan now is to ship [App Intents, a tool that lets you operate your iPhone by voice with precision] alongside a broader Siri infrastructure overhaul in the spring and market it heavily. But there’s some concern inside the company, I’m told. Engineers have been struggling to ensure that the system works with a sufficient number of apps and is accurate enough to handle high-stakes scenarios. There are worries about the software failing in categories where precision is nonnegotiable, like in health or banking apps.

For years, users have struggled with Siri not understanding them. It’s annoying but not critical if your phone misunderstands the city you want a weather report from or tries to navigate you to the wrong restaurant. But letting the Siri brain of today control all of your apps would obviously be a lot riskier.

That’s why Apple is waiting on the new Siri and won’t roll it out universally on day one. Testing is underway with select third-party apps, including Uber, AllTrails, Threads, Temu, Amazon, YouTube, Facebook, WhatsApp, and even a few games, in addition to Apple’s own apps. For banking and other sensitive categories, Apple is considering sharply limiting what Siri can do — or excluding those areas altogether.

This isn’t just about making Siri smarter. It’s about giving Apple’s ecosystem a new, voice-first interface. If the company is actually able to bring it to market (and that’s a gigantic if), it could potentially be a hit that many users didn’t see coming.”

So Amazon’s AI voice assistant hasn’t been able to take on its new talents without fumbling the old ones, and it’s buggy all around. And Apple doesn’t expect its revamped Siri to arrive until the spring, as it works out problems internally.

More Tech

AI agent fatigue may be hitting enterprise customers

You may have noticed that recently, every piece of business or productivity software seems to have an “AI agent” feature that keeps getting pushed in front of you, whether you want it or not.

That’s leading to AI agent fatigue among enterprise customers, according to The Information.

Companies like Salesforce, Microsoft, and Oracle have been pushing their AI agent features to help with tasks such as customer service, IT support, and hiring. But many of those features are all powered by AI services from OpenAI and Anthropic, leading to a similar set of functions, according to the report.

As companies race to tack on AI agents to their legacy products, it remains to be seen which functions will become the “killer app” for enterprise AI.

Google’s Waymo has started letting passengers take the freeway

Waymo has taken a slow-and-steady approach to robotaxi expansion, which is why the Alphabet-owned autonomous ride-hailing service, launched to the public in 2020, is only just now taking riders on freeways.

On Wednesday, Waymo announced that “a growing number of public riders” in the San Francisco Bay Area, Phoenix, and Los Angeles can take the highway and are no longer confined to local routes. The company said it will soon expand freeway capabilities to Austin and Atlanta. It also noted that its service in San Jose is now available, meaning Waymos can traverse the entire San Francisco Peninsula.

Waymo’s main competitor, Tesla, so far operates an autonomous service in Austin as well as a more traditional ride-hailing service across the Bay Area, where a driver uses Full Self-Driving (Supervised). On the company’s last earnings call, CEO Elon Musk said Tesla would expand its robotaxi service to 8 to 10 markets this year.
