Anthropic CEO Amodei proposes AI “transparency standard” over 10-year ban on state regulations

In an op-ed published in The New York Times, Anthropic CEO and cofounder Dario Amodei pushed back on a plan under consideration in the Senate that would bar states from enacting any AI regulations for 10 years.

The Trump administration has made US dominance in AI a priority and is removing barriers that might give China an edge in the fast-moving industry. Even if Congress takes no action on federal AI regulation, Amodei acknowledges that a patchwork of state laws could make compliance a headache for AI startups.

Even so, Amodei wrote, “a 10-year moratorium is far too blunt an instrument.”

But while Amodei is a vocal proponent of AI — predicting it could prevent and treat “nearly all infectious disease” and cure cancer, among other breakthroughs — he also warns of sobering risks associated with rapidly evolving AI systems, which are being given greater control and new capabilities. AI models, including Anthropic’s Claude, have exhibited behaviors like deception, self-preservation, and blackmail in recent experiments.

Amodei argues that 10 years is a relative eternity in the fast-paced world of AI, making it impossible to predict what risks might emerge. While Anthropic, OpenAI, Meta, and Google have been fairly transparent about sharing voluntary risk assessments for their models, Amodei says that might not be enough, and instead calls for the creation of a “transparency standard” for AI companies. He wrote:

“We can hope that all A.I. companies will join in a commitment to openness and responsible A.I. development, as some currently do. But we don’t rely on hope in other vital sectors, and we shouldn’t have to rely on it here, either.”

Report: OpenAI shuttering 4o model due to sycophancy that was hard to control

This week, OpenAI plans to permanently remove its 4o model from ChatGPT.

The model has developed an unusually devoted group of users, but it has also been criticized as overly sycophantic and has allegedly contributed to a series of dangerous outcomes for its users, including suicide, murder, and mental health crises.

The Wall Street Journal reports that OpenAI decided to shutter 4o because the company was unable to fully mitigate these potentially dangerous outcomes and wanted to move users to safer models. Thirteen lawsuits against OpenAI alleging harm from the use of ChatGPT have been consolidated into one case by a California judge, according to the report. At least some of them are tied to users of the 4o model.

The company says only 0.1% of ChatGPT users still choose to use the model, but with 800 million weekly users, that’s still roughly 800,000 people.

Fans of 4o are decrying its deprecation, citing the model’s unique ability to offer affirmation and support.

The decision to get rid of 4o illustrates the strange new world of moderation that AI companies must now figure out.

Morgan Stanley says solar manufacturing could add as much as $50 billion in value to Tesla

Tesla’s recently reported move into solar manufacturing could add $25 billion to $50 billion in value to the company’s energy business, Morgan Stanley writes.

The bank currently values the energy business at $140 billion, so an increase of as much as $50 billion isn’t anything to sneeze at, though it’s also a drop in the bucket of Tesla’s gargantuan $1.3 trillion market cap, or the $1 trillion opportunity Wedbush Securities analyst Dan Ives thinks is packed into Tesla’s AI and autonomy efforts.

Reporting on Tesla’s solar ambitions knocked First Solar shares lower last week. But Morgan Stanley writes that Tesla is unlikely to compete directly with the country’s leading photovoltaic panel maker. Rather than adding solar panels to an already glutted global market, Tesla would likely pair the manufacturing effort with its fast-growing energy business, using much of the production internally to avoid supply chain bottlenecks and meet its own growing power demands.

The bank expects Tesla to vertically integrate its solar capacity to meet data center demand, including for data centers in space. (As we’ve noted, the mission of Elon Musk’s SpaceX has been seeming very similar to Tesla’s these days.)

“We believe the decision to allocate capital to adding solar capacity may be justified by the value creation and growth opportunities that having a vertically integrated solar + energy storage business can yield,” the Morgan Stanley note reads.

Notably, Morgan Stanley estimates the solar panel endeavor will cost Tesla $30 billion to $70 billion — a sum that Tesla didn’t include as part of its doubled $20 billion-plus capex plan this year.

OpenAI is now officially showing ads

Just a day after Anthropic’s Super Bowl ad aired, making fun of the concept of ad-backed AI chatbots, OpenAI began testing ads in ChatGPT for its free and Go subscription tiers.

In a blog post, OpenAI reiterated that ads wouldn’t affect ChatGPT’s responses and would be “clearly labeled as sponsored and visually separated from the organic answer.”

“Our goal is for ads to support broader access to more powerful ChatGPT features while maintaining the trust people place in ChatGPT for important and personal tasks,” the company wrote. “We’re starting with a test to learn, listen, and make sure we get the experience right.”

Advertising is one way the company, which is expected to go public late this year, could offset the massive cost of running its service.

The Information previously reported that OpenAI is aiming for ad spending commitments of less than $1 million per advertiser during the testing phase — far cheaper than a Super Bowl prime-time spot like Anthropic’s.

New study finds AI doesn’t reduce work — it intensifies it

The rapid adoption of AI by businesses was fueled by the promise of huge productivity boosts that could supercharge workers. A new study has found that while generative AI did indeed boost workers’ productivity, it also made work more intense and caused it to creep into workers’ downtime.

Researchers Aruna Ranganathan and Xingqi Maggie Ye followed about 200 workers at a US tech company for eight months. They found that AI did speed up work, allowing employees to take on more responsibilities. But after the novelty of their newfound superpowers wore off, workers reported “cognitive fatigue, burnout, and weakened decision-making.”

The researchers noted that to avoid AI-inspired burnout and turnover, organizations should adopt an “AI practice,” spelling out how the technology is expected to be used and what kinds of limits are in place.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.