
DeepSeek’s $6 million AI model just blew a $1 trillion hole in the market. Here’s the only explainer you’ll need on this “Sputnik moment”

A fast-moving story is shaking up the AI industry in many different ways.

Over the weekend, the DeepSeek AI story exploded. It has many threads, and each one strikes at the heart of the AI frenzy consuming the world’s biggest tech companies. Let’s break this complicated but fascinating story down.

To catch you up, Chinese startup DeepSeek released a group of new “DeepSeek R1” AI models, which have burst onto the scene and caused the entire AI industry (and the investors giving them billions to spend freely) to freak out in different ways. These models are free, mostly open-source, and appear to be beating the latest state-of-the-art models from OpenAI and Meta.

Faster, cheaper, better

What makes these models so noteworthy? Unlike OpenAI’s and Anthropic’s AI models, they are free for anyone to download, refine, and use for any purpose. Meta did something similar with its Llama 3 model, making it free for anyone to download, modify, and use. DeepSeek’s latest models were actually based on Llama. But there are plenty of free models you can use today that are all pretty good.

The big thing that makes DeepSeek’s latest R1 models special is that they use multistep “reasoning,” just like OpenAI’s o1 models, which up until last week were considered best in class. The reasoning process is a bit slower, but it leads to better responses and reveals a “chain of thought” that shows the steps it takes.

DeepSeek is offering up models with the same secret sauce that OpenAI is charging a significant amount for. And OpenAI offers its models only on its own hosted platform, meaning companies can’t just download and host their own AI servers and control the data that flows to the model. With DeepSeek, you can host this on your own hardware and control your own stack, which obviously appeals to a lot of industries with sensitive data.

DeepSeek does offer hosted access to its models, too, but at a fraction of the cost of OpenAI. For example, OpenAI charges $15 per 1 million input “tokens” (the chunks of text a model processes, each roughly a word or part of a word). DeepSeek’s hosted model charges just $0.14 per 1 million input tokens. That’s a jaw-dropping difference if you’re running any real volume of AI queries.
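To make that pricing gap concrete, here’s a quick back-of-envelope sketch using the two per-million-token rates quoted above. The 500-million-token monthly workload is a hypothetical, chosen just for illustration:

```python
def input_cost(tokens: int, rate_per_million: float) -> float:
    """Dollar cost for a number of input tokens at a per-1M-token rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: 500 million input tokens per month
monthly_tokens = 500_000_000

openai_cost = input_cost(monthly_tokens, 15.00)   # $15 per 1M tokens
deepseek_cost = input_cost(monthly_tokens, 0.14)  # $0.14 per 1M tokens

print(f"OpenAI:   ${openai_cost:,.2f}")    # $7,500.00
print(f"DeepSeek: ${deepseek_cost:,.2f}")  # $70.00
print(f"Ratio:    {openai_cost / deepseek_cost:.0f}x cheaper")  # 107x
```

At those list prices, the hosted DeepSeek model works out to roughly 107 times cheaper per input token.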

Another crazy part of this story — and the one that’s likely moving the market today — is how this Chinese startup built this model. DeepSeek’s researchers said it cost only $5.6 million to train their foundational DeepSeek-V3 model, using just 2,048 Nvidia H800 GPUs (which were apparently acquired before the US slapped export restrictions on them).

For comparison, Meta has been hoarding more than 600,000 of the more powerful Nvidia H100 GPUs, and plans on ending the year with more than 1.3 million GPUs. DeepSeek’s V3 model was trained using 2.78 million GPU hours (a sum of the computing time required for training) while Meta’s Llama 3 took 30.8 million GPU hours.
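A quick sanity check on the training-compute figures cited above (2.78 million GPU hours for DeepSeek-V3 versus 30.8 million for Llama 3, on a reported cluster of 2,048 H800s):

```python
# Figures as reported in the text above
deepseek_v3_gpu_hours = 2.78e6
llama3_gpu_hours = 30.8e6
cluster_size = 2048  # Nvidia H800 GPUs

# How many times more compute Llama 3's training consumed
ratio = llama3_gpu_hours / deepseek_v3_gpu_hours
print(f"Llama 3 used ~{ratio:.1f}x the GPU hours of DeepSeek-V3")  # ~11.1x

# Implied wall-clock training time on the reported cluster
days = deepseek_v3_gpu_hours / cluster_size / 24
print(f"~{days:.0f} days of training on 2,048 GPUs")  # ~57 days
```

In other words, the reported numbers imply roughly two months of wall-clock training on a cluster a tiny fraction the size of Meta’s.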

And this faster, cheaper approach didn’t just result in a model that matched the leaders’ models; in some cases, it beat them. DeepSeek’s R1 models are beating OpenAI o1 in some math and coding benchmarks.

Did we bet on the wrong horse?

So a better, faster, cheaper Chinese AI model just dropped, and it could upend the industry’s big plans for the next generation of AI models. The biggest tech companies (Meta, Microsoft, Amazon, and Google) have been bracing their investors for years of massive capital expenditures because of the consensus that more GPUs and more data lead to exponential leaps in AI model capabilities. Recently, there have been signs that this “AI scaling law” may have reached a plateau, and that Nvidia’s place at the top of the AI food chain may be in peril.

A lot of the success DeepSeek had was a result of its using other AI models to generate “synthetic data” to train its models, rather than hunting for new stores of human-written texts.

If that bet on zillions of GPUs, Manhattan-size data centers, and hundreds of billions in AI infrastructure investment is wrong, what are we doing here? Cue the massive freak-out in the market today.

Top of the App Store

As if this story couldn’t get any crazier, this weekend the DeepSeek chatbot app soared to the top of the iOS App Store “Free Apps” list. Observers are calling this a “Sputnik moment” in the global race for AI dominance, but there are a lot of things we don’t know.

One thing we do know is that for all of Washington’s freak-out over TikTok leaking Americans’ personal data to China, this AI chatbot is absolutely sending your data to China, and is even subject to Chinese censorship policies. So don’t go asking DeepSeek about Tiananmen Square, the plight of Uyghurs in China, or Taiwan’s pro-democracy movement, and who knows what else.

Fallout

This weekend, The Information reported that inside Meta they’re indeed freaking out, setting up war rooms and rethinking AI strategy.

The new Trump administration won’t like this, either: it has championed a vision of American dominance in AI, with plans to expedite approvals for the new power plants and infrastructure needed to build massive data centers.

It’s unclear how the admin and lawmakers will react to these developments, but events are moving much faster than any branch of government can.

$100M

Salesforce is using AI to handle customer service, and it’s saving the company $100 million per year, CEO Marc Benioff said at the company’s Dreamforce conference, per Bloomberg reporting. Benioff also announced that 12,000 customers are using its “Agentforce” AI-driven customer service platform.

$100 million seems impressive, but to put that number in perspective, last quarter, the company reported over $10 billion in revenue.

Benioff has enthusiastically embraced the use of AI and has slashed thousands of positions as the company automates roles.


Sam Altman says OpenAI fixed ChatGPT’s serious mental health issues in just a month. Anyway, here comes the erotica

Well, that was quick. Just over a month ago, OpenAI CEO Sam Altman announced a 120-day plan to roll out new protections for identifying and helping ChatGPT users suffering a mental health crisis, following a series of reports of such users harming themselves and others after using the company’s AI chatbot.

Today, Altman says the company has built new tools to address these issues and “mitigated” these problems.

Altman is so confident that they’ve addressed mental health safety that the company is reverting ChatGPT’s behavior so it “behaves more like what people liked about 4o.” Altman essentially apologized to users for the changes that were made to address mental health problems that arose with use of the chatbot:

“We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”

Separately, the company announced the members of its Expert Council on Well-Being and AI, an eight-person council of mental health experts.

As a reward for the adults who aren’t suffering mental health issues exacerbated by confiding in the chatbot, Altman says that erotica is on the way.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”

In response to Altman’s post on X, Missouri Senator Josh Hawley quoted Altman’s post with this message:

“You made ChatGPT ‘pretty restrictive’? Really. Is that why it has been recommending kids harm and kill themselves?”


Meta says Instagram teen accounts will default to a PG-13 content limit

Meta is introducing new guidelines for the content on Instagram teen accounts, turning to the film industry’s well-known PG-13 standard from the Motion Picture Association.

Any user under the age of 18 will have their content limited to PG-13.

Parents who administer their child’s teen account will have the ability to change the settings — including placing their child in a more restrictive level than PG-13 — but that assumes the teen hasn’t just tried to sign up on their own using a fake birthday.

To counter those wily kids, Instagram will use “age prediction technology” to set content restrictions, according to the company.

In a blog post announcing the new policy, Meta acknowledged the new settings may not catch all prohibited content:

“Just like you might see some suggestive content or hear some strong language in a PG-13 movie, teens may occasionally see something like that on Instagram — but we’re going to keep doing all we can to keep those instances as rare as possible.”


Smartphone upgrades grew for Apple and Samsung last quarter

The global smartphone market grew 2.6% in the third quarter, thanks in part to interest in the latest phones from Apple and Samsung, according to new shipment data from market intelligence firm IDC.

“Apple and Samsung posted strong results as their latest devices encouraged consumers to upgrade in the premium segment, while new, affordable AI-enabled smartphones also drove high upgrades in more affordable price categories,” IDC Vice President of Client Devices Francisco Jeronimo said in a press release for the data, which would include roughly half a month of new iPhone sales. “Demand for Apple’s new iPhone 17 lineup was robust, with pre-orders surpassing those of the previous generation. At the same time, Samsung’s Galaxy Z Fold 7 and Galaxy Z Flip 7 outperformed all earlier foldable models, creating renewed momentum for the foldables segment.”

Here’s the year-over-year growth in third-quarter shipments:

And here’s how the absolute number of shipments compared last quarter:

The “other” bin is made up of dozens of smaller, often regional and low-cost manufacturers.

Jon Keegan

Sora’s ghoulish reanimation of dead celebrities raises alarms

OpenAI’s video generation app Sora has spent its first two weeks at the top of the charts.

The startup’s fast-and-loose approach to enforcing intellectual property rights has seen the app flooded with videos of trademarked characters in all sorts of ugly scenarios.

But another area where Sora users have been pushing the limits involves videos that reanimate dead celebrities.

And we’re not talking just JFK, MLK, and Einstein. Videos featuring more recently deceased figures such as Robin Williams (11 years ago), painter Bob Ross (30 years ago), Stephen Hawking (seven years ago), and even Queen Elizabeth II (three years ago) have been generated. Some of the videos are racist and offensive, shocking the relatives of the figures.

OpenAI told The Washington Post that it is now allowing representatives of “recently deceased” celebrities and public figures to request that their likenesses be blocked from the service, though the company did not give a precise time frame for what it considered recent.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.