Tech
Jon Keegan

Don’t get AI high on its own supply

A new study published in Nature provides fresh evidence of a serious problem that AI researchers have been warning about: “model collapse.”

According to the study, when an AI model is trained on AI-generated content, it “becomes poisoned with its own projection of reality,” and after a few cycles of such training it starts to produce gibberish.
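The intuition behind the finding can be shown with a toy sketch (this is an illustration of the general mechanism, not the methodology of the Nature study): treat a “model” as nothing more than the pool of outputs it was trained on, and “retrain” each generation by sampling from the previous generation’s output. Rare items that fail to be sampled vanish and can never come back, so diversity only shrinks.

```python
# Toy sketch of recursive training on a model's own output.
# The "model" here is just an empirical distribution over tokens;
# each generation is "trained" by sampling with replacement from
# the previous generation's output. Illustrative only.
import random

random.seed(42)

pool = list(range(20)) * 5          # generation 0: 20 distinct "tokens", 100 samples
diversity = [len(set(pool))]        # how many distinct tokens survive

for _ in range(2000):               # each pass = training the next model on the last one's output
    pool = random.choices(pool, k=len(pool))   # sample with replacement
    diversity.append(len(set(pool)))

print(diversity[0], "->", diversity[-1])       # distinct tokens only ever shrink
```

Because a generation can only reproduce tokens that are still present, the count of distinct tokens is non-increasing, and over enough generations the pool drifts toward a handful of dominant values: a crude analogue of a model forgetting the tails of the real data distribution.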

AI companies are feverishly searching for troves of human-generated text and images (breaking rules and possibly laws along the way), and the well is running dry: they have already scraped and ingested a significant portion of our output as a species. This has led to a race among AI companies to lock up content deals with major publishers to slake their thirst for more content.

AI companies are considering using “synthetic,” AI-generated text to help train the next generation of AI models, which could lead to exactly the kind of nonsense the study describes.

Tech
Jon Keegan

Judge blocks Pentagon’s move to blacklist Anthropic

A federal judge in Northern California has granted a preliminary injunction blocking the Pentagon from labeling Anthropic as a national security supply chain risk.

The ruling temporarily prevents the Defense Department from restricting the AI company’s access to federal contracts amid a dispute over its refusal to allow certain military and surveillance uses of its technology. The designation could also have shifted lucrative government work toward competitors, including OpenAI.

Earlier this month, Anthropic, the company behind Claude, sued 17 federal agencies and their heads, alleging the government exceeded its statutory authority.
