AI’s new rulebook… Yesterday, President Biden introduced a first-of-its-kind executive order targeting AI “safety, security, and trust.” The eight-part order aims to address fears around AI’s potential role in propagating biases, displacing workers, and undermining national security. In July, tech giants including Meta, Amazon, OpenAI, and Alphabet pledged to monitor AI safety, but the Biden admin says that’s not enough. Now:
Red report: Companies building advanced AI systems must run safety tests (dubbed “red teaming”) and report the results to the gov’t before rolling out any products.
AI labels: The Commerce Department must create guidelines for detecting and watermarking AI-generated content so it can be identified (picture: less “fake news”).
Protecting consumers: Health and Human Services must create a program to evaluate the safety of AI use in healthcare, including diagnoses.
Worker support: Biden ordered a report on how AI could disrupt the labor market, and on ways the gov’t could support people who lose their jobs to the tech. A Goldman Sachs report estimated that AI could eventually fill the equivalent of 300M full-time roles worldwide.
Threat watch: The Energy Department and Homeland Security must address potential AI threats in critical sectors like infrastructure, nuclear energy, and cybersecurity.
Rocky rise… GenAI has exploded since ChatGPT’s viral public rollout last fall, but the tech has been met with legal battles and lawmaker scrutiny. Also, only 10% of Americans think AI helps more than it hurts. While some tech titans have supported AI regulation, VC heavyweights like Marc Andreessen think it should run free (see: “the techno-optimist manifesto”).
With great tech comes great regulation… The pace of AI innovation has governments scrambling to draft rules. The EU passed the first comprehensive AI regulation, the AI Act, in June, and China passed its own AI regulations in August. While an executive order usually lasts only as long as the administration, it could provide a blueprint for lawmakers trying to pass new legislation before the year ends.