Mimeograph machine (Getty Images)

AI “plagiarism engines” like Perplexity cannot be the future of the web

We can still have the internet we want — but we have to try new business models.

Casey Newton

I.

For a while now, I’ve been gloomy about the state of the web. Plagiarism engines like Perplexity and Arc Search have attracted millions of users by ripping off other people’s work, depriving publishers of the traffic and advertising revenue that once sustained them. The results have been successful enough that Google is following them.

Today, I want to talk about a more positive vision for the future of the internet — one where AI companies and creators work hand in hand to grow the web again, sharing the wealth they create with one another.

Before I get there, though, it’s worth taking a moment to reflect on how bad the status quo has gotten.

Earlier this month, Forbes noticed that Perplexity had been stealing its journalism. The AI startup had taken a scoop about Eric Schmidt’s new drone project and repurposed it for its new “pages” product, which creates automated book-report style web pages based on user prompts. Perplexity had apparently decided to take Forbes’ reporting to show off what its plagiarism can do.

Here’s Randall Lane, Forbes’ chief content officer, in a blog post.

“Not just summarizing (lots of people do that), but with eerily similar wording, some entirely lifted fragments — and even an illustration from one of Forbes’ previous stories on Schmidt. More egregiously, the post, which looked and read like a piece of journalism, didn’t mention Forbes at all, other than a line at the bottom of every few paragraphs that mentioned “sources,” and a very small icon that looked to be the “F” from the Forbes logo – if you squinted. [...]

Perplexity then sent this knockoff story to its subscribers via a mobile push notification. It created an AI-generated podcast using the same (Forbes) reporting — without any credit to Forbes, and that became a YouTube video that outranks all Forbes content on this topic within Google search. 

Any reporter who did what Perplexity did would be drummed out of the journalism business. But CEO Aravind Srinivas attributed the problem here to “rough edges” on a newly released product, and promised attribution would improve over time. “We agree with the feedback you've shared that it should be a lot easier to find the contributing sources and highlight them more prominently,” he wrote in an X post.

In person, Srinivas can come across as earnest and a bit naive, as I learned when he came on Hard Fork in February. But any notion that Perplexity’s problems stem from a simple misunderstanding was dashed this week when Wired published an investigation into how the company sources answers for users’ queries. In short, Wired found compelling evidence that Perplexity is ignoring the Robots Exclusion Protocol, which publishers and other websites use to grant or deny permissions to automated crawlers and scrapers.

Here are Dhruv Mehrotra and Tim Marchman:

Until earlier this week, Perplexity published in its documentation a link to a list of the IP addresses its crawlers use—an apparent effort to be transparent. However, in some cases, as both Wired and Knight were able to demonstrate, it appears to be accessing and scraping websites from which coders have attempted to block its crawler, called Perplexity Bot, using at least one unpublicized IP address. The company has since removed references to its public IP pool from its documentation. [...]

Wired verified that the IP address in question is almost certainly linked to Perplexity by creating a new website and monitoring its server logs. Immediately after a Wired reporter prompted the Perplexity chatbot to summarize the website's content, the server logged that the IP address visited the site. This same IP address was first observed by Knight during a similar test.
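
For those unfamiliar with it, the Robots Exclusion Protocol is just a plain-text robots.txt file that a site serves and that crawlers are supposed to consult before fetching pages. The catch is that compliance is entirely voluntary. The sketch below, a hypothetical example using Python's standard library rather than anyone's actual configuration, shows what a well-behaved crawler is expected to do and why nothing technically stops a bot that chooses to ignore the answer.

```python
# A minimal sketch of how a compliant crawler consults the Robots Exclusion
# Protocol before fetching a page. The robots.txt contents and bot names here
# are hypothetical; real publishers serve their own rules at /robots.txt.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a publisher might serve to turn away one bot
# while allowing everyone else.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler asks first; a non-compliant one can simply skip
# this check and fetch the page anyway, which is what Wired's test suggests.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```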

Forbes sent Perplexity a cease-and-desist letter, and I imagine it won’t be the last publisher to do so. There are open legal questions about whether copyrighted material can be used to train large language models or answer chatbot queries, but I see no legal way Perplexity can get away with one of its other core techniques for building pages: using copyrighted images from Getty, the Wall Street Journal, Forbes and others. You simply are not allowed to re-publish other people’s copyrighted photos and illustrations without permission, even if your plagiarism engine is new and has “rough edges.”

Perhaps Perplexity will clean up its act; once it came under fire, the company ran to Semafor to promise that it is “working on” deals with publishers. In the meantime, though, I’ve come to think of it as the Clearview AI of generative artificial intelligence companies: scraping billions of pieces of data without permission and daring courts to stop it. 

Like Clearview, Perplexity’s core innovation is ethical rather than technical. In the recent past, it would have been considered bad form to steal and repurpose journalism at scale. Perplexity is making a bet that the advent of generative AI has somehow changed the moral calculus to its benefit. 

“I think we need to work together to build all these things, rather than trying to see it as, hey, like you’re taking my stuff and using it,” Srinivas told us in February. 

But then he just kept taking everyone’s stuff and using it. The working together part, I guess, is meant to come later.

II.

One path forward for the web, as I shared on a recent episode of Search Engine, is the Fediverse. Decentralized, federated apps; portable identities and follower graphs; permissionless innovation on open protocols: this is a way journalists can once again begin to build audiences — stable ones! — rather than simply courting traffic. This is a years-long project, and I can only barely see the outlines of it taking shape. But it’s an appealing alternative to a world where all content is subsumed into a large language model and accessed by an opaque and proprietary set of algorithms. 

But this is a long-term solution, and a partial one. And it carries with it the embedded assumption that today’s AI systems cannot be reshaped in ways that actually grow the web, and pay for the labor of the people who make it. The Fediverse is about giving up on the consumer internet as we know it today — the big walled gardens, the metastasizing LLMs — and trying to build something different.

Tim O’Reilly is thinking differently. As a publisher, investor, and open source advocate, O’Reilly sits at the intersection of many of the business problems and opportunities presented by AI. On Tuesday, he offered his solution to parasitic companies like Perplexity: developing new business models for AI companies that pay creators based on the amount of material that the companies use.

O’Reilly is starting with his own publishing business, sharing a portion of subscription revenue with (or paying a fixed fee to) authors when it uses AI to generate summaries, test questions, translations, or other derivative works based on their writing. 

He concludes:

When someone reads a book, watches a video, or attends a live training, the copyright holder gets paid. Why should derivative content generated with the assistance of AI be any different? Accordingly, we have built tools to integrate AI-generated products directly into our payment system. This approach enables us to properly attribute usage, citations, and revenue to content and ensures our continued recognition of the value of our authors’ and teachers’ work.

And if we can do it, we know that others can too.
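
O’Reilly hasn’t published the mechanics of those tools, but the underlying arithmetic is simple enough to sketch. What follows is a hypothetical toy example of usage-based revenue sharing, not a description of O’Reilly’s actual system: log which authors’ works each AI-generated product draws on, then split an earmarked revenue pool among those authors in proportion to usage.

```python
# A toy illustration of usage-based royalty sharing. Everything here is
# hypothetical (author names, numbers, and the idea that usage is logged per
# derivative work); it is not O'Reilly's actual implementation.
from collections import Counter

# Hypothetical usage log: each entry records the author whose work an
# AI-generated summary, quiz, or translation was derived from.
derivative_usage = [
    "author_a", "author_a", "author_b", "author_c", "author_a", "author_b",
]

# Hypothetical slice of subscription revenue earmarked for AI royalties.
revenue_pool = 1_000.00

counts = Counter(derivative_usage)
total_uses = sum(counts.values())

# Pro-rata split: each author's share of the pool matches their share of usage.
payouts = {author: revenue_pool * uses / total_uses for author, uses in counts.items()}
print(payouts)
# {'author_a': 500.0, 'author_b': 333.33, 'author_c': 166.67} (approximately)
```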

To O’Reilly, this view of AI is a natural extension of the modern web, which is built on what he calls an “architecture of participation.” The earlier web consisted of giant walled gardens like AOL and MSN, which sought to keep as much activity within their own borders as possible. In this view, companies like Google, OpenAI, and Perplexity are all competing to become the next AOL. It is a vision in which most of the benefits of AI are reaped by a very small number of companies.

But this would be a mistake, he writes, if only because the current AI business models are ultimately self-defeating. “If the long-term health of AI requires the ongoing production of carefully written and edited content — as the currency of AI knowledge certainly does — only the most short-term of business advantage can be found by drying up the river AI companies drink from,” O’Reilly writes. “Facts are not copyrightable, but AI model developers standing on the letter of the law will find cold comfort in that if news and other sources of curated content are driven out of business.”

We know that AI companies are running out of data to train their frontier models on. Given that fact, it seems ludicrous that companies like Perplexity are building systems that all but ensure they will have less data to train on in the future.

O’Reilly is taking the opposite approach. And while it remains to be seen whether the average writer on his platform benefits meaningfully from AI royalties, if nothing else he has gotten the incentive structure right. Pay people to create high-quality writing and other content; use that content with permission to train powerful AI systems; and share the wealth that those systems create to fund and incentivize the production of further high-quality writing.

If Srinivas meant it when he said “we need to work together to build all these things,” he can now look to O’Reilly for a powerful example of what working together actually looks like.


Casey Newton writes Platformer, a daily guide to understanding social networks and their relationships with the world. This piece was originally published on Platformer.

More Tech


Anthropic projections for 2028: Up to $70 billion in revenue, could be profitable by 2027

Anthropic’s Claude API business is doing so well with enterprise customers, the company is upping its revenue forecasts significantly. According to a report from The Information, the company’s robust corporate sales have caused it to revise its most optimistic forecast up to $70 billion in sales by 2028.

Anthropic estimates its API business will be double that of OpenAI’s API sales. OpenAI is currently burning through much more money per month than Anthropic, and reportedly expects to spend as much as $115 billion through 2029, while Anthropic is forecasting that it could be cash positive by 2027, per the report.


Amazon, which is developing AI shopping agents, doesn’t want Perplexity’s AI shopping agents on its site

Amazon has sent a cease and desist letter to Perplexity AI, demanding that it stop letting its AI browser agent, Comet, make online purchases for users, Bloomberg reports.

Amazon, which is developing its own AI shopping agents and is having “conversations” with builders of third-party agents, accused the AI startup of “committing computer fraud by failing to disclose when its AI agent is shopping on a user’s behalf, in violation of Amazon’s terms of service.”

Perplexity, in response, said Amazon is attempting to “eliminate user rights” in order to sell more ads.


Apple to challenge Google Chromebooks with low-cost Mac laptop, Bloomberg reports

Apple is designing a new sub-$1,000 Mac laptop aimed at the education market, Bloomberg reports.

Google’s low-cost Chromebooks currently dominate the K-12 education market, and Apple’s reentry into a segment it once owned could disrupt the sector’s status quo.

According to the report, Apple plans on using the custom mobile chips it currently uses in iPhones to power the more affordable devices.

Apple’s recent earnings demonstrated that iPhone sales have been steady, and the tech giant is looking to find new areas of growth, like services. A low-cost Mac could be popular with consumers, in addition to education buyers.


Getty Images suffers partial defeat in UK lawsuit against Stability AI

Stability AI, the creator of image generation tool Stable Diffusion, largely defended itself from a copyright violation lawsuit filed by Getty Images, which alleged the company illegally trained its AI models on Getty’s image library.

Lacking strong enough evidence, Getty dropped the part of the case alleging illegal training mid-trial, according to Reuters reporting.

Responding to the decision, Getty said in a press release:

“Today’s ruling confirms that Stable Diffusion’s inclusion of Getty Images’ trademarks in AI‑generated outputs infringed those trademarks. ... The ruling delivered another key finding; that, wherever the training and development did take place, Getty Images’ copyright‑protected works were used to train Stable Diffusion.”

Stability AI still faces a separate, ongoing lawsuit from Getty in US courts.

A number of high-profile copyright cases are still working their way through the courts, as copyright holders seek to win strong protections for their works that were used to train AI models from a number of Big Tech companies.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.