Power
Jon Keegan

Nvidia and other chipmakers push to stop impending “AI diffusion” rule

Unless the White House acts, a strict rule regulating the global spread of American AI technology will take effect on May 15.

Going beyond current export controls on the most advanced AI hardware, the “AI diffusion” rule places countries into one of three tiers based on their geopolitical alignment with the US:

- Tier 1 includes America’s closest allies: Canada, most of Western Europe, Japan, Taiwan, and Australia. These countries face few restrictions on American AI technology. But the other tiers face caps on computing power exports and outright bans, depending on the country.

- Tier 2 includes India, Mexico, much of the Middle East, and most of South America. These countries would need to comply with tight US security regulations for any AI projects using American AI technology.

- Tier 3 contains US adversaries China and Russia. No chips or AI for you!

Bloomberg reports that AI chipmakers and world leaders are pushing the Trump administration to make changes to the rule before it takes effect. Companies want to shift away from formal government approval to a self-reporting mechanism for compliance.

Nvidia and Oracle both want the Trump administration to kill the rule outright, which is unlikely, according to the report.

The rule was put in place in the last weeks of the Biden administration.


Anthropic sues the US government

In response to the Pentagon’s unprecedented, punitive determination that Anthropic is a national security supply chain risk, the AI startup has sued the US government.

OpenAI is reportedly working with Pentagon to hash out guardrails amid Anthropic standoff over AI safety

OpenAI CEO Sam Altman said the company is working with the Pentagon to negotiate safety guardrails for AI models used on the battlefield, a move that comes as one of its top competitors, Anthropic, is in a standoff with the government.

According to a memo obtained by several media outlets, Altman told staff OpenAI believes “that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Anthropic, the company behind the AI chatbot Claude, was one of several firms that received a $200 million contract from the Department of Defense for “agentic workflows.”

Since then, tensions between Anthropic and the Pentagon have reportedly risen as the startup insists on surveillance restrictions. The government’s attack on Venezuela last month that led to the capture of President Nicolás Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Anthropic has until 5:01 p.m. ET on Friday to reach a deal with the Pentagon, which has threatened consequences against the company if it doesn’t grant the government unrestricted use of its models.

Altman’s comments come as the Financial Times reports that executives at Amazon, Google, and Microsoft are being pushed by workers to support Anthropic in its dispute with the Pentagon and adopt similar guardrails as the Claude company in any work they undertake with the US military.

Jon Keegan

Report: Anthropic CEO Amodei meeting with Hegseth at the Pentagon as tensions mount

Anthropic CEO Dario Amodei has been summoned to meet with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, according to a report from Axios. Tensions are running high between the Trump administration and Anthropic, as the startup’s surveillance restrictions on the use of its AI are reportedly causing outrage within the Pentagon.

Last month’s attack on Venezuela that led to the capture of Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Per the report, the Pentagon is considering effectively blacklisting Anthropic’s AI from government work if it doesn’t capitulate to the administration’s terms.

Antagonizing the Trump administration could create regulatory hurdles for Anthropic as it races toward an IPO this year. The company recently added former Microsoft CFO Chris Liddell, who served as deputy White House chief of staff in the first Trump administration, to its board.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.