A screengrab from Mark Zuckerberg’s Instagram post demonstrating Llama AI. (Instagram)
Platformer

Meta’s radical plan for AI is Mark Zuckerberg’s love letter to nerd culture

Its open-source AI is now roughly on par with rivals from OpenAI and Google, but the company is taking a decidedly more hacker-first approach to the future.

Casey Newton

Today, Meta released the largest open-source large language model ever built for powering generative artificial intelligence applications. Llama 3.1 represents an ambitious effort to shape the development of the AI industry in ways that favor Meta. It is also likely to spark new debate over whether open-source models generate more harm than their closed counterparts.

In January, I wrote about Meta’s unusual plan to give away for free a technology that it has spent more than a decade and tens of billions of dollars to build. Unlike its peers, including Google and OpenAI, Meta is not selling subscriptions to its AI for individuals or teams, and says it has no plans to do so.

In the short term, Meta says that open-sourcing Llama helps its own systems improve more quickly and cheaply than they otherwise would. Inviting a global ecosystem of developers to iterate on its models, and funneling those improvements back into its core systems, accelerates the development of Meta’s own products and can save the company money.

In the long term, owning one of the most powerful LLMs could provide the company with a basis for all manner of money-making products. It could also — and Meta lets this part go unsaid — thwart the development of competitors like OpenAI by giving away for free the technology at the core of their business. (One way to think of Llama 3.1 is that it is the free Google Docs to Microsoft’s paid Office 365.)

To make that happen, of course, Meta’s AI needs to be among the best in its class. The company said today that it is making strides in that direction: Llama 3.1’s largest model, with 405 billion parameters, outperformed OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet on some benchmarks, the company said.

“If you compare them to the GPT family, if you compare them to the Claude family, if you compare them to Gemini — I'll let the numbers speak for themselves,” Chris Cox, Meta’s chief product officer, said in an interview with Platformer. “But we're pretty proud now that some of them we’re beating, and then on some of them we’re just in range of best-in-class models. So that's really cool.”

And while developers have had it in their hands for only a few hours, the early reception has been positive. In comments on Hacker News, some developers reported that Llama 3.1 performed at or near GPT-4o’s level.

People with extremely high-end machines may be able to run the largest Llama 3.1 model themselves. But most developers will want to use a hosted service, and to that end Meta today announced partnerships with 25 companies that will make Llama 3.1 available through their cloud computing platforms, including Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, and Dell.

The fact that an open-source model now rivals closed alternatives speaks to the way that every major AI developer’s LLMs are converging on one another in quality. Seemingly every few weeks now, one of the big AI players releases a new model, or a variation of a model, with slightly improved cost, performance, or other attributes. And at least in my testing of the major players, the resulting products are mostly indistinguishable from one another.

That’s bad news if you’re selling a subscription for $20 a month, as OpenAI, Google, and Anthropic all are. But it’s great news for Meta, which can expedite the development of its own systems and undercut its rivals at the same time, all without damaging its core advertising business.

If there are risks to Meta here, they lie in the potential for regulation and damage to its reputation. That’s because open-source systems, by their nature, allow anyone to take them and repurpose them for their own ends, no matter what they might be. Fears that the open development of superhuman systems could create existential risk for humanity are among the reasons that OpenAI abandoned that approach in favor of a closed one.

Over the past two years, open-source vs. closed development has become one of the most hotly debated issues in tech. The Biden administration has not definitively favored one approach over the other. But in its executive order last year, the administration did call for projects that exceed a certain computing threshold to disclose that fact to the government and to perform safety testing. Some open-source advocates argue that the administration is laying the groundwork for further restrictions that will effectively outlaw open-source development, and in so doing create a tiny cartel of vastly powerful AI companies that will reap most of the benefits of the technology.

Among the voices making this argument is Marc Andreessen, the venture capitalist and longtime Meta board member. It was among the reasons that he and his business partner Ben Horowitz offered a full-throated endorsement of Donald Trump for president last week.

Zuckerberg has reached a similar conclusion. In a kind of manifesto published today, Meta’s CEO argued that open source development is the best approach not only for Meta’s business, but for the world at large.

He writes:

It’s worth noting that unintentional harm covers the majority of concerns people have around AI – ranging from what influence AI systems will have on the billions of people who will use them to most of the truly catastrophic science fiction scenarios for humanity. On this front, open source should be significantly safer since the systems are more transparent and can be widely scrutinized. Historically, open source software has been more secure for this reason.

Zuckerberg also suggests that there are few harms that can be surfaced through AI today that you can’t already Google. “We must keep in mind that these models are trained by information that’s already on the internet, so the starting point when considering harm should be whether a model can facilitate more harm than information that can quickly be retrieved from Google or other search results,” he writes.

These arguments seem sound enough today, when the models are barely capable of doing more than regurgitating their training data. It’s less clear how they will sound in a future where open-source software can reason, scheme, and execute complex plans — no matter what those plans might be.

And Meta is among the companies building toward that future. Cox told me that the next radical advancements in LLMs will come when they can master three skills: solving problems that require multiple steps; personalizing themselves to their users and developing a more complete sense of who those users are and what they need; and successfully taking action on users’ behalf online.

Should LLMs develop those skills, and should they become part of the Llama of the future, Meta will surely attempt to put safeguards around its model to prevent misuse. But the nature of open-source technology is that the moment Llama 3.1 goes online, it is out of Meta’s hands. And where its rivals will retain broad power to shut down rogue systems, it’s less clear that Meta will be able to build a similar kill switch.

Open source development has served Meta’s interests well so far, and the relatively benign AI tools we have today give us little reason to fear what Llama 3.1 can do. But it’s hard to make good policy for a technology that is improving in exponential leaps every few years. As AI grows more powerful, Meta’s case for open-source development will be worth revisiting. 

tech

Apple stock takes a hit on report it’s pushing back AI Siri features — again

Apple customers may have to wait even longer for the company’s long-awaited AI Siri, Bloomberg reports.

The iPhone maker had been planning to include a number of upgrades to Siri in a March operating system update, but the company is now planning to spread those out over future versions. That means some features first announced in June 2024 — an AI Siri that can tap into personal data and on-screen content — might not arrive until September with iOS 27.

The postponements happened after “testing uncovered fresh problems with the software,” Bloomberg said, including instances where Siri didn’t properly process queries or took too long to respond.

The stock, which had been trading up more than 2% today, has pared some of those gains on the news.

For what it’s worth, Apple’s iPhone sales — a record last quarter — don’t appear to be suffering for lack of AI.


tech

Meta breaks ground on massive $10 billion AI data center — and the costs won’t stop there

Meta announced today that it broke ground on a new, giant AI data center. This one is located in Indiana, has 1 gigawatt of capacity, and will cost more than $10 billion.

In a press release, the company touted the 4,000 construction jobs and 300 operational positions Meta expects to bring to the area. It did not disclose any tax incentives tied to the project.

But much like with the company’s Hyperion data center in Louisiana — where we calculated incentives in the billions — the number of long-term jobs is likely small relative to any public subsidies the company ultimately receives.

The $10 billion build represents a notable chunk of Meta’s planned $115 billion to $135 billion in capital expenditure this year. And operating costs will add substantially to that total over time.

Earlier this year, President Trump warned tech giants to “pay their own way” when it comes to energy, as data centers have driven up electricity costs in some regions. Meta’s announcement appears to anticipate that criticism, dedicating significant space to explaining how it will mitigate the energy and water impact of the facility:

“With all our data centers, we strive to be good neighbors. We pay the full costs for energy used by our data centers and work closely with utilities to plan for our energy needs years in advance to ensure residents aren’t negatively impacted. To help support local families in need, we’re providing $1 million each year for 20 years to the Boone REMC Community Fund to provide direct assistance with energy bills, and funding emergency water utility assistance through The Caring Center. We also pay the full cost of water and wastewater service required to support our data centers. Over the course of this project, Meta will make investments of more than $120 million, toward critical water infrastructure in Lebanon, as well as other public infrastructure improvements including roads, transmission lines and utility upgrades.”

Unlike hyperscalers such as Google and Microsoft, which can offset infrastructure costs by selling cloud capacity to customers, Meta bears those expenses largely on its own. That dynamic could make the economics of AI infrastructure more challenging for the company as its AI spending continues to accelerate.


tech

Humanoid robot maker Apptronik raises $520 million

Apptronik, an Austin, Texas-based robot manufacturer, said it has closed its Series A fundraising round at $520 million. The round is an extension of a $415 million raise from last February and included investments from Google, Mercedes-Benz, AT&T, and John Deere. Qatar’s state investment firm, QIA, also participated.

Apptronik makes Apollo, a humanoid robot designed for warehouse and manufacturing work. The company is one of several US robotics companies racing to apply generative-AI breakthroughs to humanoid robots, in anticipation of a new market for robots in homes and workplaces.


tech

Ives: Microsoft and Google’s giant capex plans are worth it

Don’t mind the AI sell-off, says Wedbush Securities analyst Dan Ives, who thinks fears around seemingly unfettered Big Tech capex budgets are unfounded, especially in the case of Microsoft and Google. Together, the two hyperscalers are slated to spend around $300 billion on property and equipment this year as they double down on AI infrastructure, but he says both have already shown that they can turn the spending into revenue and growth.

“They are reshaping cloud economics around AI-first workloads that carry higher switching costs, deeper customer lock-in, and longer contract durations than before,” Ives wrote, adding that these giant costs will be spread out over time and set the companies up for success in the long run. Per Ives:

“While near-term free cash flow optics remain noisy, the platforms that invest early and at scale are best positioned to capture durable share, pricing power, and ecosystem control as AI workloads mature. Over time, we expect utilization leverage to turn today’s elevated investment into a meaningful driver of long-term value creation.”



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.