Meta wants the US to use its AI for defense and national security
The company declares it wants “to play its part to support the safety, security, and economic prosperity of America.”
Meta wants the US government to use its Llama AI model for defense and national-security applications.
In a press release announcing the new initiative, Nick Clegg, Meta’s president of global affairs, wrote:
“We are pleased to confirm that we are also making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work.”
The announcement noted that some private-sector partners are already using Llama for defense-related work, such as Scale AI’s use of the model for “identifying adversaries’ vulnerabilities.”
Governments around the world are racing to secure “sovereign AI,” a term for homegrown AI software and hardware that a nation can rely on to advance its interests without depending on other countries’ resources.
AI-computing juggernaut Nvidia has been pitching its products to foreign governments as the key to creating sovereign-AI resources.
In Meta’s very patriotic announcement, Clegg wrote:
“In a world where national security is inextricably linked with economic output, innovation and job growth, widespread adoption of American open source AI models serves both economic and security interests. Other nations — including China and other competitors of the United States — understand this as well, and are racing to develop their own open source models, investing heavily to leap ahead of the US. As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America — and of its closest allies too.”
Meanwhile, both OpenAI and Anthropic have already signed agreements with the National Institute of Standards and Technology to grant the government agency access to early models for testing and assessment before wide release.