
Hey Google! What happened to my assistant?

Google Assistant is bugging out (Bronson Stamp / Sherwood)

Google Assistant is bad now. But why?

They promised us flying cars, and all we got were mediocre virtual assistants that seem to be getting worse.

Recently I brought Google Assistant to its cognitive limits by asking it to tell jokes to my toddler.

Our Google Nest Hub Max got through three jokes before repeating the one where the left eye says to the right eye, “Between you and me, something stinks.” When I prompted it to tell us another, the same way I had previously, it said it didn’t understand. Another what? 

These days Google Assistant seems more and more confused.

I first adopted a number of smart-home devices in 2018, when Google (and Amazon and Apple) were hyping their voice assistants nonstop, and every developer conference or trade show Google was part of seemed designed to indoctrinate you into the ways of the Assistant.

Back then, Google Assistant was pretty good and getting better. There was a high-water mark for Google’s virtual assistant, perhaps a year or two ago. I would ask and Google would do a decent job answering. Now I’d say one out of three times it gets it wrong.

It frequently gives me the weather for the wrong town, despite having my address in its settings. When I ask it to play music on both Google speakers, it only sometimes obliges. The same goes for the TV and lights. There’s a very specific kind of shame that comes from arguing with a smart assistant for five minutes before getting off the couch.

It’s not just me. Plenty of Google’s more than 500 million monthly active users have noticed too. Reddit is full of people complaining that Google Assistant is no longer as helpful as it used to be and seems to be degenerating. The same thing seems to be happening to Amazon’s Alexa and Apple’s Siri.

As Computerworld’s JR Raphael put it last summer: “It's not only the apparent lack of focus, emphasis, and ongoing investment internally around the service. It's an apparent deterioration of the existing functions that made Assistant worth relying on.”

What's going on? 

Google and Amazon have said they’re still committed to their assistants, but both are in the process of making them into something fundamentally different than what they were — by implementing generative AI.

The previous generation of assistants that we know and love(d) worked by using natural language processing, or NLP, to detect what you were trying to say, matching that to a predefined intent model, then spitting out a predetermined and vetted answer.

“You could say anything you wanted on the input, but the output was always going to be something that existed,” Bret Kinsella, founder and CEO of voice tech and AI publications Voicebot.ai and Synthedia, explained. “Which means that you had to have forethought about everything everybody would ask. And then you had to come up with some sort of canned response.”

If the assistant figured out you were asking about the weather, it would read to you from the weather app. If you asked it a question, it would pull an answer straight from a top Google search result or Wikipedia. If it didn’t know, it said so.

This certainly had its limits, and assistants were terrible conversationalists, but they got the job done.

The large language models, or LLMs, on which newer generative AI assistants are based, use machine-learning algorithms to go through large amounts of text and then output a probable answer that sounds much more like a human response.

“A generative AI bot will respond to your question by reviewing 5 to 10 webpages, and then synthesizing that information. Its word-prediction algorithm will produce a response based on those webpages using the model's training on human language to form the response,” Kinsella said. “It will respond to you like it's an expert. That's a much richer experience that results in an answer complete with source references, compared with 10 blue links and the expectation the user wants to be an information archeologist.”
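The word-prediction idea Kinsella mentions can be illustrated with a deliberately tiny model. Real LLMs are neural networks trained on vast corpora; this bigram counter is only a toy (the training text is invented) that shows the basic move of picking a probable next word from observed frequencies.

```python
# A toy word-prediction model: count which word most often follows
# each word in the training text, then predict by frequency.
from collections import Counter, defaultdict

training_text = (
    "the assistant answers the question "
    "the assistant reads the weather "
    "the assistant sets the timer"
)

# Build a bigram table: follows[w] counts the words seen right after w.
follows: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'assistant' (follows "the" three times)
```

An LLM does this over thousands of tokens of context with billions of learned parameters rather than a frequency table, which is what lets it synthesize those webpages into fluent prose instead of regurgitating one canned answer.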

Unless, of course, the generative AI makes up nonsense.

Google didn’t respond to several requests for comment about the decline of its assistant. Neither did Amazon or Apple.

What we know is that Google and Amazon made big cuts to their assistant teams after the tech slump in 2022 and after years of losing money. That was also right about the time ChatGPT launched and changed the game for what interactions between human and machine might be. In other words, it’s been a one-two punch for existing assistants.

Axios reported on an internal Google email last summer saying that the company was reorganizing its assistant team to create a “supercharged Assistant, powered by the latest LLM technology.” That process has begun with Google’s Android mobile phones, where you can now opt in to use Gemini (Google’s answer to ChatGPT, formerly called Bard) as your main assistant. (Android Authority reported that new downloads of Assistant now come with Gemini by default.) Gemini can do things like generate captions for pictures you take but, so far, Gemini is having trouble doing some of the basic Assistant functions. So it looks like Google Assistant still has to handle bread-and-butter requests like setting timers and controlling your smart devices. 

At its hardware event in September, Amazon previewed a “smarter and more conversational Alexa” that would be powered by generative AI. So far Amazon has revealed three AI-powered Alexa experiences: Character.AI (chat with fictional characters and historical figures), Splash (make your own music), and Volley (a 20-questions game). Fun, but not exactly a game changer yet, especially if older capabilities aren’t working as well as they once did. 

Apple, outwardly lagging the other tech giants, is reportedly in talks with Google and Baidu to outsource generative AI for its iPhones.


It seems we’re in some liminal state where these tech companies are promising a better future with assistants supported by generative AI, but at the same time the existing technology that hundreds of millions of people use — and some, including those who are visually impaired, rely on — has languished. It’s also possible that in the decade since these smart assistants first came out and since the advent of generative AI like ChatGPT, people’s expectations for communicating with technology have gotten higher. 

Ishan Shah, a founding engineer at an AI startup who for fun recently built a generative AI bot that completes actions for you online, said newer LLMs are better than older NLP models at figuring out what we want. When these generative AI assistants are fully overlaid onto existing ones — the ones that are connected to your smart speakers, lights, and TV — the experience could be “extremely powerful,” as these tools strike an ideal balance between creativity and competence.
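One plausible shape for the overlay Shah describes: a generative model parses free-form speech into a structured action, and the existing, vetted smart-home layer executes it. This is a rough sketch under stated assumptions — `parse_with_llm` is a stub standing in for a real LLM call, and the device names and JSON schema are invented for illustration.

```python
import json

def parse_with_llm(utterance: str) -> str:
    """Stub for an LLM that turns a request into a JSON tool call.
    A real system would prompt a model to emit this structure."""
    if "lights" in utterance:
        return json.dumps({
            "action": "set_power",
            "device": "living_room_lights",
            "state": "off" if "off" in utterance else "on",
        })
    return json.dumps({"action": "unknown"})

def execute(tool_call: str) -> str:
    """The existing assistant layer: runs only known, vetted actions."""
    call = json.loads(tool_call)
    if call["action"] == "set_power":
        return f'{call["device"]} turned {call["state"]}'
    return "Sorry, I can't do that yet."

print(execute(parse_with_llm("hey, could you switch the lights off?")))
# living_room_lights turned off
```

The appeal of this split is that the LLM handles the messy language ("hey, could you switch the lights off?") while the old, reliable layer keeps control of what actually happens to your devices.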

Shah said bridging the old and new technology won’t necessarily be that hard, but it will take time for big companies to complete.

“I would be surprised if by this time next year all of the home assistants are not drastically better,” he added. 

But we’re definitely not there yet. For now, my toddler has suggested we unplug Google Assistant and plug it back in.

More Tech

Tesla’s 45 Austin Robotaxis now have 14 crashes on the books since launching in June

Since launching in June 2025, Tesla’s 45 Austin Robotaxis have been involved in 14 crashes, per Electrek reporting that cites National Highway Traffic Safety Administration data.

Electrek analysis found that the vehicles have traveled roughly 800,000 paid miles in that time period, amounting to a crash every 57,000 miles. According to the NHTSA, US drivers crash once every 500,000 miles on average.

The article says Tesla submitted five new crash reports in January of this year for crashes that happened in December and January. Electrek wrote:

“The new crashes include a collision with a fixed object at 17 mph while the vehicle was driving straight, a crash with a bus while the Tesla was stationary, a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.”

Tesla updated a previously reported crash that was originally filed as only having damaged property to include a passenger’s hospitalization.

Last month, Tesla shares climbed after CEO Elon Musk said in a post on X that the company’s Austin Robotaxis had begun operating without a safety monitor.

Jon Keegan

Ahead of IPO, Anthropic adds veteran executive and former Trump administration official to board

Anthropic is moving to put the pieces in place for a successful IPO this year.

Today, the company announced that Chris Liddell would join its board of directors.

Liddell is a seasoned executive who previously served as CFO for Microsoft, GM, and International Paper.

Liddell also comes with experience in government, having served as the deputy White House chief of staff during the first Trump administration.

Ties to the Trump world could be helpful for Anthropic as it pushes to enter the public market. It’s reportedly not on the greatest terms with the current administration, as the startup has pushed back on using its Claude AI for surveillance applications.

Rani Molla

Meta is bringing back facial recognition for its smart glasses

Meta is reviving its highly controversial facial recognition efforts, with plans to incorporate the tech into its smart glasses as soon as this year, The New York Times reports.

In 2021, around the time Facebook rebranded as Meta, the company shut down the facial recognition software it had used to tag people in photos, saying it needed to “find the right balance.”

Now, according to an internal memo reviewed by the Times, Meta seems to feel that it’s at least found the right moment, noting that the fraught and crowded political climate could allow the feature to attract less scrutiny.

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” the document reads.

The tech, internally called “Name Tag,” would let smart-glasses wearers identify and surface information about the people they see through the glasses, using Meta’s artificial intelligence assistant.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.