Reddit sinks on heavy volume amid social media chatter of reduced referral traffic from ChatGPT
RBC Capital Markets ties the drop to a recent change made by Google; ChatGPT itself thinks OpenAI's latest model tweak is a plausible explanation.
Reddit is getting smoked amid elevated trading activity, deepening a rout that has seen shares sink 15% in the trailing eight sessions heading into today.
The consternation appears to be driven by reports on social media that suggest ChatGPT is using Reddit content as a source much less often.
The best example:
Apparently ChatGPT is not using Reddit much anymore for their answers. I guess they realized that what random people say can’t be considered a trusted source after all. You can all stop spamming it with your fake brand mentions now. pic.twitter.com/PrDuFJhpNz
— Andrea Bosoni (@theandreboso) September 30, 2025
RBC Capital Markets analysts tied this loss of citation share to Google's disabling of the "&num=100" URL parameter, which had allowed a single search-results page to return 100 results; without it, crawlers feeding large language models see far fewer results per query.
They write:
This is consistent with our note from last week where we noted a 3p [third party] study posted online by agency showed that RDDT's citation share on ChatGPT had dropped significantly (from 29.2% range to 5.3% just since September 10th). At the time, we'd concluded that Google changing its indexing parameters from 100 (num=100) to 10 per page was causing 3p LLM's to essentially see 1/10th of their prior results as they indexed Google's search data. We are not yet clear as to whether companies like ChatGPT could increase their compute costs in accessing more pages to restore that indexing data or if that's even an option. A key unknown here is how much of RDDT's traffic was actually coming from ChatGPT which could have this type of an impact. To our knowledge, nothing about GOOGL's index parameter changes would have affected AI Overviews citations which we'd have thought would have been necessary to drive this pronounced of a decline in DAU's (per SimilarWeb).
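The mechanics RBC describes reduce to simple arithmetic: a crawler that once pulled 100 results in a single page fetch now gets Google's default of 10 per page, so at the same request budget it sees roughly a tenth of the results it used to. The sketch below (hypothetical function and numbers, not any vendor's actual crawler) just makes that ratio explicit:

```python
# Sketch (illustrative only): how removing Google's "&num=100" parameter
# changes what an indexer sees. With the parameter, one page fetch returned
# up to 100 results; without it, the default is 10 per page. At a fixed
# request budget, visible results shrink by the same factor.

def results_visible(page_fetches: int, results_per_page: int) -> int:
    """Total results an indexer can see for a fixed number of page fetches."""
    return page_fetches * results_per_page

before = results_visible(page_fetches=1, results_per_page=100)  # "&num=100" era
after = results_visible(page_fetches=1, results_per_page=10)    # default paging

print(before, after, after / before)  # 100 10 0.1
```

The flip side, which RBC flags as the open question, is compute cost: restoring the old visibility would take ten times the page fetches per query.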
OpenAI, for its part, updated its model on September 12 — just about the time these referrals apparently began to slide. I asked ChatGPT (who better?) to explain whether any tweaks made could have produced such an effect. Here’s what it told me:
What is possible/plausible from indirect/architectural changes
Even though there is no explicit statement, the kinds of changes made (or the framing of the update) could allow or encourage behavior shifts that indirectly reduce Reddit referrals. Some possibilities:
Stricter guardrails/stronger authority constraints
Because the update emphasizes clarity and stricter authority levels, internal policies might now more heavily vet or filter which external sources the model is allowed to cite, especially user‑generated content (like Reddit).

Reprioritization of source quality/authority levels

The model spec changes might shift the implicit priorities, favoring more authoritative, vetted sources (academic, official, news) over forums, social media, or user‑generated content by default.

Nonpublic internal adjustments

Even without a public note, internal (nonpublic) rule changes might have been made alongside the September 12 rollout — e.g. adjustments in how citations are ranked, filters applied, or which domains are considered “safe” to reference.

Cascading effect from other policies/safety rules

The new spec may create constraints (or higher “cost” in internal scoring) for referencing less controlled sources (where misinformation risk is higher). Over time, that could reduce those referrals as a side effect.
Update:
We asked ChatGPT whether Google's change or OpenAI's tweak was playing the bigger role in reducing Reddit's share of citations. Here's what it told us:
“I think it is quite probable that the mid‑September shift is largely driven by OpenAI’s internal changes (model / retrieval / policy), rather than being purely a downstream effect of Google’s n=100 removal.”
