Bots on the hill… VP Kamala Harris met last week with the bosses of Alphabet, Microsoft, OpenAI, and Anthropic as AI innovation draws greater scrutiny in DC. The White House said AI developers are expected to have their products reviewed at August’s Defcon cybersecurity conference, and debuted a $140M plan to build AI-research hubs. As large language models (LLMs) like ChatGPT go mainstream, Washington says companies have an “ethical, moral, and legal” responsibility to ensure their tech is safe. A few concerns:
Data privacy: There are worries LLMs could leak the sensitive user data they collect. Last week Samsung barred its staff from using ChatGPT after employees reportedly pasted sensitive internal info into the chatbot.
Misinfo: LLMs are known for spewing false or biased info. Governments are concerned bad actors could use LLMs to spread propaganda.
Copyright: Legal issues have emerged over AI-related infringement, from cover art to fake Drake songs. It’s also an element of the recent WGA strike: writers worry that studios could use their scripts to churn out AI-generated stories without them.
Plagiarism: Teachers are concerned, too, as students lean on LLMs for homework. Last week Chegg shares fell 40%+ after the edu-tech company noted a spike in students flocking to ChatGPT.
Workin’ 9-to-AI… Another trigger of AI anxiety: job security. Last week IBM froze hiring for nearly 8K roles that it said could be replaced by AI, and said the tech could replace almost a third of its non-customer-facing roles. By some estimates, 300M jobs could be affected by AI, while Goldman Sachs projects it could boost global GDP by 7%.
With great tech comes great responsibility… From fake Drake to fake news and AI-fueled propaganda campaigns, the risks of unregulated AI are clear. As major companies start to integrate the tech into products that billions of people use (like: search engines), we can expect governments to get involved in regulating it. But the faster the tech evolves, the harder it may be for regulators and companies to move responsibly.