AI hallucinations appear to be creeping into consulting reports
AI-detection firm GPTZero says a recent report from consultancy EY looks suspect, joining work from other big firms that appears to contain hallucinations.
A 44-page report titled “Points of Attack: Uncovering Cyber Threats and Fraud in Loyalty Systems” looks like many others that management consultancy EY publishes every year.
These types of reports are the bread and butter of the biggest consultancies on the planet — thought pieces that help cement their position as the organizations companies commission to understand their place in the world, and how to maintain it. EY’s consulting arm generated over $16.4 billion in revenue last year.
There’s just one problem. Dig into this report, published in December 2025 — specifically its “resources” section, which provides links to the sources of the data and claims cited throughout the document — and things don’t add up.
“It’s riddled with hallucinations,” said Edward Tian, a former employee of the journalism website Bellingcat and a Princeton University researcher who founded GPTZero, an AI-detection firm that shared its investigation exclusively with Sherwood News.
EY didn’t respond to multiple requests for comment.
The document is a standard advisory report describing the state of cybersecurity weaknesses in the travel industry’s loyalty points ecosystem. But it appears to cite sources that don’t exist.
On page 4, it describes the global loyalty program economy as a $200 billion business, with between 30% and 50% of loyalty points being unused — something that makes them “a prime target for exploitation” by cyber criminals. One of the sources of this claim is a Forbes article cited in the report’s “resources” section titled “The $200 Billion Loyalty Economy.” The report’s link to that story, purportedly published in October 2023 by writer Blake Morgan, a customer experience futurist, returns an error that says, “We can’t find the page that you are looking for.” It’s not clear that Morgan published a story with that headline in October 2023, and the URL has not been indexed on the Internet Archive’s Wayback Machine.
That’s not the only thing that raised the suspicions of the team at GPTZero.
Included in the report’s references are citations to a broad range of articles and reports that appear not to exist. One link to a WIRED story, titled “AI Voice Deepfakes Targeting Call Centers,” also returns a 404 error. A second, to a story purportedly titled “AI Security Gaps,” also leads nowhere. A link to a CyberNews report does the same.
And the claim that 30% to 50% of loyalty points are never redeemed is attributed to McKinsey & Co. — though the references section cites a “Loyalty Economics Report” that GPTZero says doesn’t exist. Instead, GPTZero suspects the report hoovered up a fictional reference from a separate story on a second website, FinancialIT.net. “This is what we would call secondhand hallucinations,” said Tian. McKinsey didn’t respond to a request for comment.
In all, GPTZero’s investigation alleges that 60% of the references in EY’s report are hallucinated.
“We thought this was unbelievable,” Tian said. The report, he said, “went into the spotlight as one of the most egregious reports” out of the more than 3,000 consulting PDFs the firm has scraped and run through its hallucination-detection workflow.
All too aware of the risks of AI making stuff up, GPTZero backs that workflow with human review to check whether citations exist and whether they actually support the claims being made. The EY report is one of six reports from various consultancies that GPTZero has found so far that it says are full of fake citations, broken URLs, and contradictory statistics.
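GPTZero hasn’t published the internals of that workflow, but the automated half of this kind of check is straightforward to sketch: pull the URLs out of a report’s reference list and flag any that no longer resolve. A minimal illustration in Python (with made-up placeholder URLs, and emphatically not GPTZero’s actual pipeline) might look like this:

```python
# Illustrative sketch only: flag reference URLs that fail to resolve.
# The URLs below are hypothetical placeholders, and this is not a
# reproduction of GPTZero's actual detection pipeline.
import requests

references = [
    "https://www.forbes.com/sites/example/loyalty-economy/",  # hypothetical
    "https://www.wired.com/story/example-voice-deepfakes/",   # hypothetical
]

for url in references:
    try:
        # HEAD is cheap, but some servers reject it, so fall back to GET.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        status = resp.status_code
    except requests.RequestException:
        status = None  # DNS failure, timeout, connection refused, etc.

    if status is None or status >= 400:
        # A dead link is only a lead; a human still has to check it.
        print(f"SUSPECT ({status}): {url}")
```

A 404 on its own proves nothing, since ordinary link rot produces the same errors; that is why the human-review step, and checks like the Wayback Machine lookup described above, matter.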
So what’s going on?
“There is the technical reason why this is happening, and the societal reason why this is happening,” said Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute at the University of Oxford. The technical reason is perhaps the simpler one: AI remains unintelligent. “It doesn’t understand the question that you’re asking,” Wachter said. “It doesn’t understand the concepts that you are trying to convey. It doesn’t go back to a well-curated library to help you find an answer to your question.”
Yet people think it does when they use it, which leads to the second — societal — reason for the embarrassing snafu. “I’ve described them as bullshitters,” said Wachter. “It’s just not built for truth, and it’s very convincing, and so very persuasive, and also designed to be persuasive,” she said.
“It’s very ironic that the biggest experts in AI technology in the world were generating AI-hallucinated citations,” said Tian. “Even the most prestigious companies in the world, such as these big four companies, are not immune to AI generating hallucinations.”
Tian said he expected EY to have better quality checks to prevent such hallucinations from making it through to published work. That’s important because that work ends up being spread elsewhere — the EY report was referenced in a Canberra Times article that wound up being syndicated to more than 60 newspapers in Australia. GPTZero also says the data appears to have crept into AI tools like ChatGPT, Claude, and Perplexity, where it is being cited as a reputable source. (GPTZero hasn’t brought its findings about the report to EY; as noted above, Sherwood put in multiple requests for comment to EY.)
“I think we have to say goodbye to the idea that we can quickly decide if something is truthful or not,” said Wachter. “If you want to figure out what is truthful, you probably need more time than you used to, and there are no perfect or completely trustworthy sources anymore, because everything can be fake now.”
EY isn’t the only organization to face this kind of issue. GPTZero has previously reported on issues, including 19 hallucinations, in a report published by Deloitte, an EY competitor; those issues were first identified by University of Sydney academic Christopher Rudge. Deloitte said it used generative AI in producing the report and offered a partial refund to the Australian government. And last month, the major law firm Sullivan & Cromwell acknowledged hallucinations in a filing it submitted to a federal bankruptcy court, saying the firm’s internal policies on using AI had not been followed.
Tian’s worry — along with others’ — is that this problem is going to get worse before it gets better. “The use of [AI] technology is getting ahead of processes like hallucination detection and quality checks to ensure AI is used properly.” Tian is particularly concerned that his early insights into the reports he’s scraped from a number of organizations are just the tip of the iceberg. “We certainly don’t think it’s an isolated case,” he said. “And there’s no evidence yet this problem is self-correcting or getting better.”
Some AI experts argue it will never actually get better, and that AI is doomed to repeatedly fabricate information because of the way it’s designed. A recent preprint study from researchers at the National University of Singapore argued that hallucinations are an inevitable consequence of how the large language models that power AI systems are built, and would persist even with perfect training data or guardrails meant to discourage them.
“It’s not like a calculator you can trust where the rules are clear and where the answer is always correct,” Wachter said. “It’s more like a broken clock that is correct twice a day, but very often, is just not.”
