Anthropic’s making AI boom again — and picking the winners
More compute = more revenues, and going custom carries distinct consequences for different AI stocks.
Anthropic is boosting the AI hardware trade today, and it’s not just because of its AI compute deal with CoreWeave.
This announcement came after Bloomberg reported that OpenAI pitched investors on its competitive advantage over the Claude developer, citing the greater computing power it has secured. If you buy into the idea that “compute equals revenues,” as Nvidia CEO Jensen Huang has argued, that gap matters. And it means Anthropic has some work to do to catch up.
Hence today’s pact with CoreWeave, which further entrenches demand for compute and for the AI accelerators, networking equipment, memory, and power needed to provide it.
That’s one reason sorted, and helps explain why the likes of Applied Optoelectronics, POET Technologies, IREN, Coherent, Nebius, Oklo, Applied Digital, Cipher Mining, Super Micro Computer, and more are ripping today.
Another reason may be tied to how Anthropic could go about sourcing compute, with Reuters reporting that the firm is “exploring the possibility of designing its own chips.”
The ripple effects from “going custom” seem to be leaving their mark within tech stocks on Friday.
Astera Labs is the belle of the ball, up more than 14%. The silicon connectivity company’s products let chips communicate with one another within racks (scale-up) as well as across racks (scale-out). Shares are still down on the year, however, with traders more attracted to pure-play photonics opportunities.
Astera has a tight partnership with Amazon; Anthropic’s latest models are trained on Trainium chips, according to AWS’s CEO, and Astera’s products are used to scale them up in data center environments. Custom chip specialist Marvell Technology, which also helps design Trainium, is outperforming peers on Friday.
And, of course, Anthropic announced an expansion of its partnership with Google earlier this week, which will see the firm access 3.5 gigawatts of TPU-based AI compute capacity beginning in 2027, a boon for Broadcom, which co-designs Google’s TPUs.
