Tech

HALLUCIN-AI-TIONS

SYNTAX ERROR

A close-up of a red robot

AI hallucinations appear to be creeping into consulting reports

AI-detection firm GPTZero says a recent report from consultancy EY looks suspect, joining other big firms with likely hallucinations.

Chris Stokel-Walker

A 44-page report titled “Points of Attack: Uncovering Cyber Threats and Fraud in Loyalty Systems” looks like many others that management consultancy EY publishes every year. 

These types of reports are the bread and butter of the biggest consultancies on the planet — thought-leadership pieces that cement their position as the organizations companies commission to understand their place in the world, and how to maintain it. EY’s consulting arm generated over $16.4 billion in revenue last year.

There’s just one problem. Dig into this particular report, published in December 2025 — particularly its “resources” section, which provides links to sources of the data and claims cited throughout the document — and things don’t add up. 

“It’s riddled with hallucinations,” said Edward Tian, a former employee at journalism website Bellingcat and Princeton University researcher who set up GPTZero, an AI-detection firm that shared its investigation exclusively with Sherwood News.

EY didn’t respond to multiple requests for comment.

The document is a standard advisory report describing the state of cybersecurity weaknesses in the travel industry’s loyalty points ecosystem. But it appears to cite sources that don’t exist.

On page 4, it describes the global loyalty program economy as a $200 billion business, with between 30% and 50% of loyalty points being unused — something that makes them “a prime target for exploitation” by cyber criminals. One of the sources of this claim is a Forbes article cited in the report’s Resources section titled, “The $200 Billion Loyalty Economy.” The report’s link to that story, purportedly published in October 2023 by writer Blake Morgan, a customer experience futurist, returns an error that says, “We can’t find the page that you are looking for.” It’s not clear that Morgan published a story with that headline in October 2023, and the URL has not been indexed on the Internet Archive’s Wayback Machine.

That’s not the only thing that raised the suspicions of the team at GPTZero.

Included in the report’s references are citations to a broad range of articles and reports that appear not to exist. One link to a WIRED story, titled “AI Voice Deepfakes Targeting Call Centers,” also returns a 404 error. A second, to a story purportedly titled “AI Security Gaps,” also leads nowhere. A link to a CyberNews report does the same.

And the claim that 30% to 50% of loyalty points are never redeemed is attributed to McKinsey & Co. — though the references section cites a “Loyalty Economics Report” that GPTZero says doesn’t exist. Instead, GPTZero suspects the report incorrectly hoovered up a fictional reference from a separate story on another website, FinancialIT.net. “This is what we would call secondhand hallucinations,” said Tian. McKinsey didn’t respond to a request for comment.

In all, GPTZero’s investigation alleges that 60% of the references in EY’s report are hallucinated.

“We thought this was unbelievable,” Tian said. “This report went into the spotlight as one of the most egregious reports,” he said, out of more than 3,000 consulting PDFs the firm has scraped and run through a hallucination-detection workflow. 

All too aware of the risks of AI making stuff up, GPTZero backs up its process with human review to check whether citations exist and whether they actually support the claims being made. The EY report is one of six reports from various consultancies that GPTZero has found so far that it says are full of fake citations, broken URLs, and contradictory statistics.

So what’s going on?

“There is the technical reason why this is happening, and the societal reason why this is happening,” said Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute at the University of Oxford. The technical reason is perhaps the simpler one: AI remains unintelligent. “It doesn’t understand the question that you’re asking,” Wachter said. “It doesn’t understand the concepts that you are trying to convey. It doesn’t go back to a well-curated library to help you find an answer to your question.”

Yet people think it does when they use it, which leads to the second — societal — reason for the embarrassing snafu. “I’ve described them as bullshitters,” said Wachter. “It’s just not built for truth, and it’s very convincing, and so very persuasive, and also designed to be persuasive,” she said.

“It’s very ironic that the biggest experts in AI technology in the world were generating AI-hallucinated citations,” said Tian. “Even the most prestigious companies in the world, such as these big four companies, are not immune to AI generating hallucinations.”

Tian said he expected EY to have better quality checks to prevent such hallucinations from making it through to published work. That’s important because that work ends up being spread elsewhere — the EY report was referenced in a Canberra Times article that wound up being syndicated to more than 60 newspapers in Australia. GPTZero also says the data appears to have crept into AI tools like ChatGPT, Claude, and Perplexity and is being cited as a reputable source. (GPTZero hasn’t brought its findings about the report to EY; as noted above, Sherwood put multiple requests for comment to EY.)

“I think we have to say goodbye to the idea that we can quickly decide if something is truthful or not,” said Wachter. “If you want to figure out what is truthful, you probably need more time than you used to, and there are no perfect or completely trustworthy sources anymore, because everything can be fake now.”

EY isn’t the only organization to face this kind of issue. GPTZero has previously reported on issues, including 19 hallucinations, in a report published by Deloitte, an EY competitor. Those issues were first identified by University of Sydney academic Christopher Rudge. Deloitte said it used generative AI in the report, and offered a partial refund to the Australian government. And last month, major law firm Sullivan & Cromwell acknowledged hallucinations in a filing it submitted to a federal bankruptcy court, saying the firm’s internal policies on using AI had not been followed.

Tian’s worry — along with others’ — is that this problem is going to get worse before it gets better. “The use of [AI] technology is getting ahead of processes like hallucination detection and quality checks to ensure AI is used properly.” Tian is particularly concerned that his early insights into the reports he’s scraped from a number of organizations are just the tip of the iceberg. “We certainly don’t think it’s an isolated case,” he said. “And there’s no evidence yet this problem is self-correcting or getting better.”

Some AI experts argue it will never actually get better, and that AI is doomed to repeatedly fabricate information because of the way it’s designed. A recent preprint study from researchers at the National University of Singapore suggested that hallucinations are inevitable because of the way the large language models that power AI systems are designed — even with perfect training data or guardrails to dissuade them.

“It’s not like a calculator you can trust where the rules are clear and where the answer is always correct,” Wachter said. “It’s more like a broken clock that is correct twice a day, but very often, is just not.”

tech

Bloomberg: Relationship between OpenAI and Apple has deteriorated and legal action may be imminent

The two-year-old alliance between Apple and OpenAI has deteriorated, Bloomberg reports, with the AI giant now consulting legal counsel about issuing a potential breach of contract notice.

OpenAI executives allege that Apple failed to adequately integrate and promote ChatGPT on the iPhone, causing the AI firm to lose out on billions a year in subscriptions and hurt its brand, according to the report.

Meanwhile, Apple has expressed concerns over OpenAI’s privacy protection, and has been miffed that OpenAI has been working on its own hardware with former Apple design lead Jony Ive.

More recently, Apple, which has trailed its peers in developing AI, has decided to offer users their choice of AI models, rather than aligning exclusively with OpenAI’s.

tech

Report: Mythos is used to crack MacOS

Apple’s MacOS has long been considered to have some of the strongest cybersecurity protections in the industry.

But researchers using a preview release of Anthropic’s Mythos AI model were able to take control of a Mac, in a significant example of the unreleased AI model’s cyber capabilities, according to a report from The Wall Street Journal.

It took two security researchers five days to pull off the feat, which chained together bugs to corrupt the Mac’s memory, per the report. The researchers told the Journal that human expertise was required to use Mythos, and it would not be able to execute the attack on its own. The researchers reportedly said some of the Mythos hype was “overblown.”

Apple said it was taking the bug report “very seriously” and has not yet issued a fix.

tech

Survey: 70% of Americans don’t want data centers in their community

America loves a good boogeyman, and data centers have become one.

It was once easy for the hyperscalers to sidle up to state legislators, utility executives, and local officials with the promise of jobs and the high-tech glow of AI for their economically challenged areas without much local opposition.

But now the script has been flipped, and public opposition to data centers is starting to solidify. A new Gallup survey asked 1,000 Americans for their thoughts on data centers, the first such survey for the polling company. Among the findings:

  • 70% of survey respondents opposed local construction of AI data centers.

  • Opposition to local data centers was much stronger than opposition to local nuclear power plants.

  • Dislike for data centers is bipartisan — majorities of both Democrats and Republicans were opposed to data centers, but more so for Democrats.

  • Among those opposed to data centers, the impact on the environment and energy usage were top concerns.

Local communities and state governments around the US have introduced bans or moratoriums on data center construction. Senators have also introduced similar legislation in Congress.

Last month, Maine Governor Janet Mills vetoed legislation that would have enacted the first statewide bill to pause data center construction.

$100B
Jon Keegan

Each day of the Musk v. Altman trial in Oakland, California, more details of Microsoft’s complicated $13 billion partnership with OpenAI emerge from the courtroom.

Yesterday, Microsoft executive Michael Wetter said that the company has spent over $100 billion on the OpenAI partnership. A big chunk of that came from the fact that Microsoft needed to build the costly infrastructure before OpenAI could use it, according to Wetter.

Microsoft’s investment looks like it was worth it, as OpenAI is currently valued at $852 billion, making Microsoft’s stake worth about $135 billion. OpenAI is planning for an IPO later this year.


Sherwood Media, LLC and Chartr Limited produce fresh and unique perspectives on topical financial news and are fully owned subsidiaries of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Money, LLC, Robinhood U.K. Ltd, Robinhood Derivatives, LLC, Robinhood Gold, LLC, Robinhood Asset Management, LLC, Robinhood Credit, Inc., Robinhood Ventures DE, LLC and, where applicable, its managed investment vehicles.