tech

AI researchers trained an OpenAI competitor in 26 minutes for less than $50

Researchers at Stanford and the University of Washington have developed an AI model that could compete with Big Tech rivals — and trained it in 26 minutes for less than $50 in cloud compute credits.

According to a research paper published last Friday, the new “s1” model performs comparably to advanced reasoning models like OpenAI’s o1 and DeepSeek’s R1 on tests of mathematical problem-solving and coding.

Researchers said that s1 was distilled from “Gemini 2.0 Flash Thinking Experimental,” one of Google’s AI models, and that they used “test-time scaling,” in which a base model is presented with a dataset of questions and given more time to think before it answers. While this technique is widely used, the researchers sought the “simplest approach” to it through a process called supervised fine-tuning, in which the model is explicitly trained to mimic certain behaviors.
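
For a sense of what that step looks like in practice, here is a minimal sketch of distillation-style supervised fine-tuning using the Hugging Face transformers library. It is illustrative only, not the s1 team’s released training code: the base model name, example data, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch of distillation via supervised fine-tuning (illustrative;
# the model name, data, and settings are placeholders, not s1's actual setup).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical distillation data: questions paired with reasoning traces
# produced by a stronger "teacher" model, which the student is trained to copy.
examples = [
    {"question": "What is 17 * 24?",
     "trace": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408. Answer: 408."},
]

base_model = "Qwen/Qwen2.5-0.5B"  # placeholder open-weights base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(example):
    # Concatenate the question and the teacher's trace into one training text.
    text = f"Question: {example['question']}\nReasoning: {example['trace']}"
    return tokenizer(text, truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(
    tokenize, remove_columns=["question", "trace"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="s1-style-sft",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    # Causal-LM collator: labels mirror the inputs; the shift happens in-model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```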

In the paper, the researchers discuss using simple commands like “wait”:

“...by appending ‘Wait’ multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps.”
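
As a rough illustration of that trick (a sketch, not the s1 team’s released code), a simple generation loop can drop the model’s end-of-text token, tack on “Wait,” and let it keep going. The model name, prompt, and token budgets below are assumptions made for the example.

```python
# Illustrative "budget forcing" loop: when the model tries to stop, drop the
# end token, append "Wait," and continue generating so it re-checks its work.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder, not the s1 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Question: What is 13 * 17? Think step by step.\nReasoning:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

max_extensions = 2      # how many times to force the model to keep thinking
tokens_per_round = 128  # generation budget per round

for round_idx in range(max_extensions + 1):
    output = model.generate(
        input_ids,
        max_new_tokens=tokens_per_round,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    # If the whole budget was used, the model never tried to stop, so there is
    # nothing to override; also stop once we've extended enough times.
    tried_to_end = output.shape[1] - input_ids.shape[1] < tokens_per_round
    input_ids = output
    if not tried_to_end or round_idx == max_extensions:
        break
    # The model emitted an end-of-text token: drop it, append "Wait," and let
    # it keep reasoning, per the excerpt quoted above.
    wait_ids = tokenizer(" Wait,", add_special_tokens=False,
                         return_tensors="pt").input_ids
    input_ids = torch.cat([output[:, :-1], wait_ids], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```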

With their methodology, the researchers report using a relatively small dataset on an off-the-shelf base model to cheaply recreate an AI model’s “reasoning” abilities. Now the s1 model, along with the data and code used to train it, is on GitHub… which, presumably, will not please big AI companies. (It was only days ago that OpenAI accused DeepSeek of ripping off ChatGPT to train its models.) Indeed, mounting concern about unauthorized distilling has given rise to the word “distealing” in the AI community.

The researchers said that the fine-tuning was done on 16 H100 GPUs from Nvidia.



tech

Trump AI executive order is a “major win” for OpenAI, Google, Microsoft, and Meta, says Ives

President Trump’s new executive order aiming to keep states from enacting AI laws that inhibit US “global AI dominance” is a “major win” for OpenAI, Google, Microsoft, and Meta, according to Wedbush Securities analyst Dan Ives. Big Tech companies have collectively plowed hundreds of billions into the technology, while seeing massive stock price gains, and Ives believes they stand to gain much more.

“Given that there have been over 1,000 AI laws proposed at the state level, this was a necessary move by the Trump Administration to keep the US out in front for the AI Revolution over China,” Ives wrote, adding that state-by-state regulation “would have crushed US AI startup culture.” The presidential order would withhold federal funds from states that put in place onerous AI regulations.

This morning, White House AI adviser Sriram Krishnan said in a CNBC interview that he’d be working with Congress on a single national framework for AI.

Despite Ives’ rosy read on the order, many AI stocks are in the red early today, with the exception of Nvidia, which jumped on a report of boosted Chinese demand. The VanEck Semiconductor ETF is down nearly 1% premarket, as the AI trade struggles following underwhelming earnings results from Oracle earlier this week.


tech
Rani Molla

Epic scores two victories as “Fortnite” returns to Google Play and appeals court keeps injunction against Apple

“Fortnite” maker Epic Games notched two wins Thursday in its drawn-out battle against Big Tech’s app stores. “Fortnite” returned to the Google Play app store in the US, Reuters reports, as Epic continues working with Google to secure court approval for their settlement.

Meanwhile, a US appeals court partly reversed sanctions against Apple in Epic’s antitrust case, calling parts of the order overly broad, but upheld the contempt finding and left a sweeping injunction in place — keeping pressure on Apple to allow developers to steer users to outside payment options and reduce its tight control over how apps can communicate and monetize on iOS.

tech
Jon Keegan

Report: AI-powered toys tell kids where to find matches, parrot Chinese government propaganda

You may want to think twice before buying your kids a fancy AI-powered plush toy.

A new report from NBC News found that several AI-powered kids’ toys could easily be steered into dangerous as well as sexually explicit conversations, a shocking demonstration of the loose safety guardrails in this novel category of consumer electronics.

A report from the Public Interest Research Group details what researchers found when they tested five AI-powered toys for kids bought from Amazon. Some of the toys offered instructions on where to find matches and how to start fires.

NBC News also bought some of these toys and found they parroted Chinese government propaganda and gave instructions for how to sharpen knives. Some of the toys also discussed topics inappropriate for kids, like sexual kinks.

The category of AI-powered kids’ toys is under scrutiny as major AI companies like OpenAI have announced partnerships with toy manufacturers like Mattel (which has yet to release an AI-powered toy).


tech
Jon Keegan

OpenAI releases GPT-5.2, the “best model yet for real-world, professional use”

After feeling the heat from Google’s recent launch of its powerful Gemini 3 model, OpenAI has released its “code red” response, reportedly on an accelerated schedule to keep up with the competition.

The company’s new flagship model, GPT-5.2, is out, and the company is calling it “the most capable model series yet for professional knowledge work.”

OpenAI CEO Sam Altman called it the “smartest generally-available model in the world” and shared benchmarks showing it achieving higher scores than Gemini 3 Pro and Anthropic’s Claude Opus 4.5 on some software engineering tests and on abstract reasoning, math, and science problems.

In a press release announcing the new model, the company said: “Overall, GPT‑5.2 brings significant improvements in general intelligence, long-context understanding, agentic tool-calling, and vision — making it better at executing complex, real-world tasks end-to-end than any previous model.”


