Anthropic CEO Amodei proposes AI “transparency standard” over 10-year ban on state regulations
In an editorial published in The New York Times, Anthropic CEO and cofounder Dario Amodei pushed back on plans currently being considered in the Senate to implement a 10-year ban on states enacting any regulations for AI.
The Trump administration has made US domination of AI a priority and is removing barriers that might give China an edge in the fast-moving industry. Even if Congress takes no action on federal AI regulation, Amodei acknowledges a patchwork of different laws from states could make compliance a headache for AI startups.
Even so, Amodei wrote, “a 10-year moratorium is far too blunt an instrument.”
But while Amodei is a vocal proponent of AI — predicting it could prevent and treat “nearly all infectious disease” and cure cancer, among other breakthroughs — he also warns of sobering risks associated with rapidly evolving AI systems, which are being given greater autonomy and new capabilities. AI models, including Anthropic’s Claude, have exhibited behaviors like deception, self-preservation, and blackmail in recent experiments.
Amodei argues that 10 years is a relative eternity in the fast-paced world of AI, and that no one can predict what risks might emerge in that time. While Anthropic, OpenAI, Meta, and Google have been fairly transparent about sharing voluntary risk assessments for their models, Amodei says that might not be enough, instead calling for the creation of a “transparency standard” for AI companies. He wrote:
“We can hope that all A.I. companies will join in a commitment to openness and responsible A.I. development, as some currently do. But we don’t rely on hope in other vital sectors, and we shouldn’t have to rely on it here, either.”