
Big Tech’s strategy for selling AI: Dogfooding

I’m not only the AI CEO, but I’m also a client.

Meta’s Mark Zuckerberg wants you to know he’s building an AI agent to help him be CEO — and that eventually everyone should have one. Jensen Huang is broadcasting that he’d be “deeply alarmed” if Nvidia’s $500,000 engineers weren’t burning through $250,000 in AI tokens a year. Salesforce keeps talking about “digital labor” like it’s already a line item in your budget.

You can take all of this at face value. Or you can recognize a familiar move: the people selling the future are making a point of telling you they’re living in it first. It’s a little like Hair Club for Men. They’re not just pitching the product — they’re the testimonial.

But what’s interesting isn’t just the marketing; it’s how closely the messaging aligns with these companies’ business interests and the billions they’ve already poured into AI.

Across Big Tech, CEOs are starting to define what “good” looks like in an AI world. At Meta, that means flattening teams and pushing employees to use internal AI tools so aggressively that it shows up in performance reviews. At Nvidia, it means tying productivity to token consumption: if you’re not spending enough on AI, something’s wrong. At Salesforce, CEO Marc Benioff tells anyone who will listen that companies will soon manage fleets of “digital workers” alongside humans.

Of course, tech has a long history of “dogfooding,” or using its own products internally to make them better. But this feels different. AI is still poorly understood by most of the people being asked to buy it, even as they’re told it’s inevitable. The companies selling it are pushing internal adoption, then pointing to that adoption as proof it works.

To be clear, this doesn’t mean they’re wrong. The uncomfortable part is that they might be early and self-interested at the same time. AI probably can make individuals more productive. Agents probably will change how work gets done. Compute probably will become a core input, like cloud spend before it.

Overall business spending on AI has been growing, and the size of those contracts has grown as well, according to data from Ramp, a corporate card and expense management platform, suggesting that companies are finding the tools useful.

“Companies are not irrational actors that are spending money with no return on investment,” Ara Kharazian, Ramp’s economist, told Sherwood News. “When they’re buying these sort of verticalized specific software solutions, it’s because they’re expanding their contracts and seats in order to capture more gains.”

But so far, the external data showing AI productivity is limited. Federal Reserve Bank of St. Louis Real-Time Population Survey data shows that while about 40% of adults use AI at work, the time saved amounts to only about 2% of total work hours. A survey of 1,000 hiring managers by Resume.org found that AI’s impact on jobs has been minimal so far, with 9% saying it had fully replaced certain roles and 45% saying it had little to no impact on staffing. In one of the first large real-world studies, researchers found that AI boosted productivity among customer service workers by about 15% on average — though gains were uneven and concentrated among less experienced employees.

In the absence of robust proof, marketing fills the gap.

This works because companies don’t just buy software — they copy norms. If the CEO of Nvidia says serious engineers should be using massive amounts of compute, that doesn’t stay contained to Nvidia. It seeps into how other companies evaluate their own teams. If Zuckerberg says the future org chart is flatter and agent-driven, that becomes less of a Meta experiment and more of a managerial benchmark.

Nvidia benefits from a world where “good” engineers use a lot of tokens. Salesforce benefits from a world where every company believes it needs AI “employees.” Meta benefits from a world where agents are everywhere — and always running up the tab. In each case, the definition of competence conveniently expands demand for the thing they sell.

There’s also a simpler explanation for the urgency: AI is really, really expensive. The biggest tech companies are collectively spending hundreds of billions of dollars on data centers, chips, and power. That kind of fixed cost only works if usage keeps climbing. So the message shifts from, “This might help,” to, “You should be doing a lot more of this already.” The faster AI becomes table stakes, the faster those investments start to look justified.

And it’s a very effective way to sell a very expensive product.

More Tech


Tesla’s Model Y just cleared a new federal safety bar

The National Highway Traffic Safety Administration announced today that Tesla Model Ys manufactured after November 12 were the first to pass the agency’s new advanced driver assistance system tests, which are now part of the New Car Assessment Program.

“By successfully passing these new tests, the 2026 Tesla Model Y demonstrates the lifesaving potential of driver assistance technologies and sets a high bar for the industry,” NHTSA Administrator Jonathan Morrison wrote in the press release. “We hope to see many more manufacturers develop vehicles that can meet these requirements.”

The new tests include:

  • Pedestrian automatic emergency braking

  • Lane-keeping assistance

  • Blind spot warning

  • Blind spot intervention

The milestone offers Tesla highly coveted regulatory validation, as it seeks to spur usage of its Full Self-Driving (Supervised) tech. The NHTSA didn’t immediately respond to a request for comment.

80x

We knew Claude Code was driving crazy growth at Anthropic, but the growth may be far bigger than even the company expected.

Speaking at the company’s developer conference yesterday, Anthropic CEO Dario Amodei said that while the company is planning for 10x growth this year, it could see as much as 80x. He called the overwhelming demand “crazy” and said he looked forward to more modest growth, because growth at that pace is “too hard to handle.”

The demand is so great that Anthropic partnered with Elon Musk’s xAI to buy up the bulk of the computing capacity at its Colossus data center in Tennessee.


Tesla’s made-in-China vehicle sales jumped 36% in April

Tesla’s sales of made-in-China vehicles — sold across China, Europe, and other international markets — rose 36% year over year to 79,478 units in April. The increase marks the sixth straight month of annual growth in sales of vehicles made in the world’s largest manufacturing economy, suggesting the EV maker’s overseas business may be stabilizing after a difficult stretch.

That said, China wholesale deliveries fell from March, even as overall new energy vehicle sales rose 7% during the period.

Later this month, the China Passenger Car Association will report China-only sales, offering a clearer picture of performance in Tesla’s second-largest market.


Anthropic’s scramble for compute now includes rival xAI

Another day, another major partnership with an AI rival. This time, Anthropic signed a deal with Elon Musk’s xAI to access compute from its Colossus 1 data center, helping it add capacity for Claude Pro and Claude Max subscribers. Just yesterday, The Information reported that Anthropic planned to spend $200 billion on Google Cloud services over the next five years. As Sherwood News’ Luke Kawa wrote:

“Anthropic has been a victim of its own success: the popularity of Claude Code and Cowork have revealed compute constraints and left users frustrated by caps. In response, the Claude developer has embarked upon a mad scramble for compute, striking or expanding deals with CoreWeave, Amazon, Google, and Broadcom.”

Now, it’s adding xAI to the list — even as the Elon Musk company builds a competing model.

In less terrestrial news, xAI said that as part of the agreement, Anthropic “expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity.”


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.