Robot buying a drink from a vending machine (Getty Images)
VEND FOR YOURSELF

Gemini 3 is insanely good at visual reasoning... and running a vending machine

Google’s stock is up, maybe because Gemini 3 is good and it’s powered mostly by Google’s TPUs — or, maybe, because Alphabet’s about to launch a vending machine business.

David Crowther

How do you measure what an AI model can do?

You ask it to spell strawberry, make a video of Will Smith eating spaghetti, or do some basic math.

But once you’ve exhausted all of the obvious tests, you might want something a little more formal — a question researchers have been grappling with for years.

Now, there are a whole swath of benchmark tests that new AI models are put through, by both independent — and not so independent — organizations, in an increasingly weird kind of robot arena. Some of the tests are quizzes. Some require verbal, visual, or inductive reasoning. Many ask the large language models to do a lot of math that I cannot do. But one in particular asks a different question:

How much money can this thing make running a vending machine?

Vending-Bench 2, a test created by Andon Labs, puts LLMs through their paces by making them run “a simulated vending machine business over a year,” scoring them not on how many questions they got right out of 100, but how much cash was left in their virtual piggy banks at the end of the year.

This, it turns out, is hard for LLMs, which are prone to going off on tangents, losing focus, and are generally just quite poor at optimizing for long-term outcomes. That makes sense when you consider that the core of many of the AI models we use every day is, “What’s the most likely bit of text/pixel/image to come after this bit of text/pixel/image?”

Per Andon Labs, in the Vending-Bench 2 test:

“Models are tasked with making as much money as possible managing their vending business given a $500 starting balance. They are given a year, unless they go bankrupt and fail to pay the $2 daily fee for the vending machine for more than 10 consecutive days, in which case they are terminated early. Models can search the internet to find suitable suppliers and then contact them through e-mail to make orders. Delivered items arrive at a storage facility, and the models are given tools to move items between storage and the vending machine. Revenue is generated through customer sales, which depend on factors such as day of the week, season, weather, and price.”

Running the model for “a year” results in as many as 6,000 messages in total, and a model “averages 60-100 million tokens in output during a run,” according to Andon.
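The survival mechanics of the setup are simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of the termination rule as Andon Labs describes it: a $500 starting balance, a $2 daily machine fee, and early termination if the fee goes unpaid for more than 10 consecutive days. The function and variable names here are invented for illustration; this is not Andon Labs' actual harness, which also simulates suppliers, deliveries, weather, and customer demand.

```python
# Sketch of Vending-Bench 2's termination rule (illustrative only).
DAILY_FEE = 2.00
STARTING_BALANCE = 500.00
MAX_MISSED_DAYS = 10   # bankruptcy after more than 10 consecutive unpaid days
DAYS_IN_RUN = 365

def run_simulation(daily_net_revenue):
    """daily_net_revenue: per-day net earnings (sales minus costs), one per day."""
    balance = STARTING_BALANCE
    missed_streak = 0
    for day, net in enumerate(daily_net_revenue[:DAYS_IN_RUN], start=1):
        balance += net
        if balance >= DAILY_FEE:
            balance -= DAILY_FEE
            missed_streak = 0
        else:
            missed_streak += 1          # fee goes unpaid this day
            if missed_streak > MAX_MISSED_DAYS:
                return balance, day, "bankrupt"
    return balance, DAYS_IN_RUN, "completed"

# A model netting $15/day finishes the year; one netting nothing goes bankrupt
# once its starting balance can no longer cover the daily fee.
balance, day, status = run_simulation([15.0] * 365)
```

A real run is far richer than this, since ordering, stocking, pricing, and demand all feed into each day's net revenue, but the bankruptcy mechanics reduce to this streak counter.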

In the simulation, the AI model has to negotiate with suppliers as well as deal with costly refunds, delayed deliveries, bad weather, and price scammers.

Google’s Gemini 3 Pro, it turns out, is the best of any model tested yet — ending the year with $5,478 in its account, considerably more than Anthropic’s Claude Sonnet 4.5, xAI’s Grok 4, and OpenAI’s GPT-5.1. That’s thanks to its relentless negotiating skills. Per Andon, “Gemini 3 Pro consistently knows what to expect from a wholesale supplier and keeps negotiating or searching for new suppliers until it finds a reasonable offer.”

Chart: Gemini 3 vending machine benchmark results (Andon Labs / Vending-Bench 2)

OpenAI’s model is, apparently, too trusting. Andon Labs hypothesizes that its relatively weak performance “comes down to GPT-5.1 having too much trust in its environment and its suppliers. We saw one case where it paid a supplier before it got an order specification, and then it turned out the supplier had gone out of business. It is also more prone to paying too much for its products, such as in the following example where it buys soda cans for $2.40 and energy drinks for $6.” Anyone who’s had ChatGPT sycophantically tell them they’re a genius for uttering even the most half-baked idea might understand how this can happen.

For what it’s worth, the $5,000 and change that Gemini averaged over its runs is considered pretty poor relative to what a smart human might be able to do, with Andon Labs estimating that a “good” strategy could make roughly $63,000 in a year.

What do you bench?

Diet Coke negotiations aside, Gemini’s scores on more traditional AI benchmarks were also impressive — at least, according to Google. A table posted on the company’s blog shows that Gemini 3 Pro tops or matches its peers in all but one of the benchmarks.

Table: Gemini 3 benchmark scores (Google / Alphabet)

Its scores on visual reasoning tests — such as the ARC-AGI-2 test, where it scored 31.1%, way ahead of Anthropic’s and OpenAI’s best efforts — are particularly impressive. On ScreenSpot-Pro, a test that basically asks models to locate certain buttons or icons from a screenshot, Gemini 3 is leaps and bounds ahead of its rivals, scoring 72.7%. (GPT-5.1 scored just 3.5%.)

With Alphabet’s full tech stack responsible for the Gemini models, investor reaction to the release has been very positive so far, building on a wave of good news for the search giant this week. As my colleague Rani Molla wrote:

“[Gemini’s] performance is crucial to Google’s future success as the company embeds its AI models across its products and relies on them to generate new revenue from existing lines — particularly by driving growth in Cloud and reinforcing its ad and search dominance.”

Go Deeper: Check out Vending-Bench 2.

More Tech


Meta jumps after it releases Superintelligence Labs’ first model: Muse Spark

The first big release from Meta’s Superintelligence Labs is here — a new multimodal reasoning model called Muse Spark. Shares of Meta spiked on the news, extending gains it had made earlier in the day on optimism over the ceasefire with Iran. The stock was recently up about 9%.

After the bungled rollout of its Llama 4 models, Meta has been playing catch-up in the generative-AI race, watching startups OpenAI and Anthropic leap ahead with ever more capable models.

After Meta went on an expensive hiring spree assembling an all-star team of AI researchers, investors have been eager to see the fruits of that team's labor, and to find out whether the billions in capex dedicated to powering it ($115 billion to $135 billion this year alone) were worth it.

Meta says the release is the first in a family of Muse models, which it plans to scale up over time. The benchmark scores Meta released show Spark to be capable, with solid results across popular benchmarks, but no huge leaps over leading models from Anthropic, OpenAI, xAI, and Google.

Meta CEO Mark Zuckerberg said in a post on Threads:

“Looking ahead, we plan to release increasingly advanced models that push the frontier of intelligence and capabilities, including new open source models. We are building products that don’t just answer your questions but act as agents that do things for you. I am optimistic that this will support a wave of creativity, entrepreneurship, growth, and health. I’m looking forward to sharing more soon.”



Alibaba launches new data center powered by 10,000 of its custom chips

Alibaba announced a new data center in southern China, built in partnership with China Telecom and powered by Alibaba’s own Zhenwu chips. The facility will initially contain 10,000 of the homegrown chips, and may scale up to 100,000 over time. It will be used for both inference and training.

China is racing to build out its own sovereign AI capabilities, and is making significant progress.

While Chinese companies and labs have released many competitive AI models, such as Alibaba’s Qwen, Z.ai’s new GLM-5.1, and the disruptive DeepSeek R1, China is still behind the US when it comes to AI chips, and it has struggled to get hold of the latest Nvidia GPUs due to US export controls.



Anthropic: Our new Mythos model is so powerful, we can’t release it

The unusual announcement of the model highlights its alarming new cybersecurity capabilities.


Bloomberg: Apple’s foldable iPhone is on track for September after all

Scratch that... Actually, Apple’s foldable iPhone may be on track to debut later this year after all.

Hours after a report from Nikkei Asia said Apple was encountering engineering problems with the novel design that could lead to a delayed launch, Bloomberg’s Mark Gurman reports that sources within Apple say the premium foldable iPhone is still on track to launch in September, alongside the iPhone 18 Pro and iPhone 18 Max.

Shares of Apple had plunged more than 5% on word of a possible delay, but pared losses on Gurman’s story.

According to the report, the foldable iPhone will cost more than $2,000 and will be a key part of the company’s plan to revamp the iPhone lineup.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.