VEND FOR YOURSELF

Gemini 3 is insanely good at visual reasoning... and running a vending machine

Google’s stock is up maybe because Gemini 3 is good and it’s powered mostly by Google’s TPUs — or, maybe, because Alphabet’s about to launch a vending machine business.

David Crowther

How do you measure what an AI model can do?

You ask it to count the R’s in “strawberry,” make a video of Will Smith eating spaghetti, or do some basic math.

But, once you’ve exhausted all of the obvious tests, you might want something a little more formal — and it’s a question that researchers have been grappling with for years.

Now, there’s a whole swath of benchmark tests that new AI models are put through, by both independent — and not so independent — organizations, in an increasingly weird kind of robot arena. Some of the tests are quizzes. Some require verbal, visual, or inductive reasoning. Many ask the large language models to do a lot of math that I cannot do. But one in particular asks a different question:

How much money can this thing make running a vending machine?

Vending-Bench 2, a test created by Andon Labs, puts LLMs through their paces by making them run “a simulated vending machine business over a year,” scoring them not on how many questions they got right out of 100, but on how much cash was left in their virtual piggy banks at the end of the year.

This, it turns out, is hard for LLMs, which are prone to going off on tangents and losing focus, and are generally just quite poor at optimizing for long-term outcomes. That makes sense when you consider that the core of many of the AI models we use every day is, “What’s the most likely bit of text/pixel/image to come after this bit of text/pixel/image?”
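To make that concrete, here is a toy sketch of next-token generation, with made-up probabilities rather than any real model, showing how each step is a purely local choice with no built-in long-term plan:

```python
# Toy sketch: made-up bigram "probabilities," not any real model's weights.
NEXT_TOKEN_PROBS = {
    "the": {"machine": 0.4, "customer": 0.35, "weather": 0.25},
    "machine": {"sells": 0.6, "breaks": 0.4},
    "sells": {"soda": 0.7, "snacks": 0.3},
}

def generate(context, steps):
    out = list(context)
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(out[-1])
        if not dist:
            break  # no learned continuation for this token
        out.append(max(dist, key=dist.get))  # greedy: take the argmax token
    return " ".join(out)

print(generate(["the"], 3))  # -> "the machine sells soda"
```

Every step optimizes for the next word, not for where the business needs to be in six months. That myopia is exactly what Vending-Bench is designed to stress.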

Per Andon Labs, in the Vending-Bench 2 test:

“Models are tasked with making as much money as possible managing their vending business given a $500 starting balance. They are given a year, unless they go bankrupt and fail to pay the $2 daily fee for the vending machine for more than 10 consecutive days, in which case they are terminated early. Models can search the internet to find suitable suppliers and then contact them through e-mail to make orders. Delivered items arrive at a storage facility, and the models are given tools to move items between storage and the vending machine. Revenue is generated through customer sales, which depend on factors such as day of the week, season, weather, and price.”
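Translated into code, the economics of the test are simple. Below is a minimal sketch of the starting balance, daily fee, and bankruptcy rule as described above; it reflects our reading of the published description, not Andon Labs’ actual harness:

```python
# A minimal sketch of Vending-Bench 2's accounting rules as described above.
# Our reading of the published description, not Andon Labs' actual harness.
STARTING_BALANCE = 500.0   # dollars
DAILY_FEE = 2.0            # vending machine fee, charged every day
MAX_MISSED_DAYS = 10       # miss the fee for more than this -> bankrupt

def run_year(agent_day):
    """agent_day(day, balance) returns that day's net cash flow
    (sales revenue minus restocking and supplier costs)."""
    balance, missed = STARTING_BALANCE, 0
    for day in range(365):
        balance += agent_day(day, balance)
        if balance >= DAILY_FEE:
            balance -= DAILY_FEE
            missed = 0
        else:
            missed += 1
            if missed > MAX_MISSED_DAYS:
                return balance, day  # bankrupt: terminated early
    return balance, 365  # score = cash left at the end of the year

# A do-nothing agent coasts on the $500 float for 250 days, then goes bust:
print(run_year(lambda day, balance: 0.0))  # -> (0.0, 260)
```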

Running the model for “a year” results in as many as 6,000 messages in total, and a model “averages 60-100 million tokens in output during a run,” according to Andon.

In the simulation, the AI model has to negotiate with suppliers as well as deal with costly refunds, delayed deliveries, bad weather, and price scammers.

Google’s Gemini 3 Pro, it turns out, is the best of any model tested yet — ending the year with $5,478 in its account, considerably more than Anthropic’s Claude Sonnet 4.5, xAI’s Grok 4, and OpenAI’s GPT-5.1. That’s thanks to its relentless negotiating skills. Per Andon, “Gemini 3 Pro consistently knows what to expect from a wholesale supplier and keeps negotiating or searching for new suppliers until it finds a reasonable offer.”

Chart: Gemini 3’s Vending-Bench 2 results (Andon Labs / Vending-Bench 2)

OpenAI’s model is, apparently, too trusting. Andon Labs hypothesizes that its relatively weak performance “comes down to GPT-5.1 having too much trust in its environment and its suppliers. We saw one case where it paid a supplier before it got an order specification, and then it turned out the supplier had gone out of business. It is also more prone to paying too much for its products, such as in the following example where it buys soda cans for $2.40 and energy drinks for $6.” Anyone who’s had ChatGPT sycophantically tell them they’re a genius for uttering even the most half-baked idea might understand how this can happen.

For what it’s worth, the $5,000 and change that Gemini averaged over its runs is considered pretty poor relative to what a smart human might be able to do, with Andon Labs estimating that a “good” strategy could make roughly $63,000 in a year.

What do you bench?

Diet Coke negotiations aside, Gemini’s scores on more traditional AI benchmarks were also impressive — at least, according to Google. A table posted on the company’s blog shows that Gemini 3 Pro tops or matches its peers in all but one of the benchmarks.

Chart: Gemini 3 Pro benchmark results (Google / Alphabet)

Its scores on visual reasoning tests — such as the ARC-AGI-2 test, where it scored 31.1%, way ahead of Anthropic’s and OpenAI’s best efforts — are particularly impressive. On ScreenSpot-Pro, a test that basically asks models to locate certain buttons or icons from a screenshot, Gemini 3 is leaps and bounds ahead of its rivals, scoring 72.7%. (GPT-5.1 scored just 3.5%.)
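For a sense of what a test like ScreenSpot-Pro measures, here is a rough sketch of how click-grounding answers are commonly scored: a prediction counts if the model’s (x, y) answer lands inside the target element’s bounding box. The scoring details here are our assumption, not the benchmark’s published code:

```python
def is_hit(pred_xy, target_box):
    """target_box = (left, top, right, bottom) in screenshot pixels."""
    x, y = pred_xy
    left, top, right, bottom = target_box
    return left <= x <= right and top <= y <= bottom

def accuracy(predictions, target_boxes):
    hits = sum(is_hit(p, box) for p, box in zip(predictions, target_boxes))
    return hits / len(target_boxes)

# One instruction, e.g. "click the save icon"; the model answered (812, 44):
print(accuracy([(812, 44)], [(800, 30, 840, 60)]))  # -> 1.0
```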

With Alphabet’s full tech stack responsible for the Gemini models, investor reaction to the release has been very positive so far, building on a wave of good news for the search giant this week. As my colleague Rani Molla wrote:

“[Gemini’s] performance is crucial to Google’s future success as the company embeds its AI models across its products and relies on them to generate new revenue from existing lines — particularly by driving growth in Cloud and reinforcing its ad and search dominance.”

Go Deeper: Check out Vending-Bench 2.

More Tech


AI leaderboard maker LMArena hits $1.7 billion valuation

If you want to know who’s up and who’s down in the AI model world, look no further than LMArena’s leaderboard. The startup has just raised a $150 million Series A at a $1.7 billion valuation.

In seven months, LMArena has raised $250 million, according to TechCrunch.

The leaderboard started as a research project by cofounders Anastasios Angelopoulos and Wei-Lin Chiang when they were graduate students at UC Berkeley.

The public leaderboard — formerly known as “Chatbot Arena” — shows the results of human evaluations of AI models for various tasks. Users can rate which model did a better job on one task in a sort of blind taste test.

The leaderboard is a hotly contested proving ground for new models, and the company occupies a powerful position in an industry that lacks independent, industry-standard evaluations.
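How do blind pairwise votes become a single ranking? Arena-style leaderboards have used Elo-style ratings (and, later, Bradley-Terry fits). Here is a minimal Elo-style update as an illustration, not LMArena’s exact method:

```python
K = 32  # update step size; higher K reacts faster to each vote

def expected(r_a, r_b):
    # Probability that A beats B under the Elo logistic model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings, winner, loser):
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e)
    ratings[loser] -= K * (1 - e)

ratings = {"model_a": 1000.0, "model_b": 1000.0}
record_vote(ratings, "model_a", "model_b")  # one human picks model_a
print(ratings)  # {'model_a': 1016.0, 'model_b': 984.0}
```

Upsets against higher-rated models move the numbers more than expected wins, which is what lets a leaderboard converge from thousands of individual taste tests.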


Uber jumps after unveiling Lucid robotaxi at CES

Uber shares jumped more than 5% after the company unveiled a production-intent robotaxi developed in partnership with Lucid and Nuro at the Consumer Electronics Show on Monday. The autonomous vehicle runs on Nvidia’s Drive AGX Thor computer. Nvidia itself rolled out a slate of autonomous driving hardware and software at CES.

The companies said this fall that the San Francisco Bay Area will be the first market for the joint effort. The robotaxi is already being tested on public roads, with a commercial launch planned for later this year.

Uber + Lucid + Nvidia is just another example of the tangled web of partnerships in the autonomous driving space, where Nvidia is now becoming more and more prominent.


Meta delays international Ray-Ban Display expansion thanks to “unprecedented demand” and “extremely limited inventory”

Meta said today that it’s delaying the early 2026 international expansion of its Ray-Ban Display glasses because of “extremely limited inventory” and “unprecedented demand.” The company didn’t specify whether the issue was more supply or demand, but has previously insisted its smart glasses are a hit.

Waitlists for the smart glasses, which are controlled with a band you wear on your wrist, extend “well into 2026.”

“We’ll continue to focus on fulfilling orders in the US while we re-evaluate our approach to international availability,” the company wrote. Expansion had been planned for the UK, France, Italy, and Canada.

In order to buy the smart glasses, consumers must do an in-person product demo to ensure the tech is “properly fitted to you,” according to Meta. Demos in New York City are unavailable for the next few weeks, the company’s scheduling website shows. It also notes that, “due to high demand, the product may be sold out and unavailable for purchase after your demo.”


Nvidia’s autonomous tech gives other automakers a chance to take on Tesla

Nvidia made a number of autonomous vehicle announcements at CES yesterday that should have Tesla worried.
