
Rabbit R1 handheld device
Sherwood News

Rabbit’s r1 AI companion is not the future you were promised

The $199 “Large Action Model” device has big ambitions and a pretty face. And that’s about it.

If you've been paying attention to the technology space lately (and let's be honest, you have), you've probably noticed that the hype cycle around AI has reached a fever pitch, which historically is the high note before the crash. In one particularly interesting area of AI exploration, there's been something of a recent boom: hardware designed specifically around interacting with large language models (or LLMs, the backbone of most "AI" tech).

Most visible in this space right now is the disastrously reviewed AI “pin” from Humane, a company that received $230M in venture funding and was founded by two ex-Apple employees famed for leading interface design on products like the Apple Watch. But a close second is a curious device created by a company called Rabbit, dubbed the r1.

Introduced at the Consumer Electronics Show in January, the r1 shares some similarities with Humane's pin: both seek to alter the way we think about mobile computing through hardware differentiation and the belief that the software powering our future devices will rely heavily on artificial intelligence to help us navigate our daily lives.

Where Humane supposes that the future of mobile computing is an awkward pin that projects images onto your hand and relies completely on voice control, the r1 takes a decidedly different approach. It has a screen, and can be navigated with a scroll wheel as well as touch, and most importantly, doesn't seek to replace your phone. Instead, you can think of it as a companion to your phone, or a more simplified way to carry out tasks an AI might handle better than the mobile operating systems Apple and Google created nearly 20 years ago.

At the very least, the r1 looks really good

The r1 is a beautiful and fascinating object, with hardware and an interface created by Swedish design house Teenage Engineering, who have taken the mantle from Jony Ive as the most forward-thinking and exciting industrial designers of our age. Beyond creating a string of industry-defining audio and synthesizer products (including the iconic OP-1), Teenage Engineering have worked with the Nothing brand on its phone and earbud designs, created the Playdate handheld gaming device alongside developers Panic, and partnered with Ikea on a modular line of lights and speakers called Frekvens. So at the very least, the r1 looks really good.

The device is about half the size of a mobile phone, and is striking in its neon orange colorway. It touts a motorized camera that can swivel to view either the user or what the user is looking at, and it slips easily into just about any pocket. It is navigated largely through voice interaction, but it also has menus that let you jump through its OS (albeit sluggishly) without having to say anything.

Rabbit R1 menu
The Rabbit R1’s main menu.

If only the software here and "Large Action Model" (the company's name for the r1's method of interfacing with the world) were as developed and complete as its look and feel. 

From my perspective, the r1 wants to do two things for its users right now. 

The first is to connect disparate services together to create a unified “front door” to all of the things you’re typically doing with your phone that aren’t messaging or talking to people: ordering food, playing music, taking notes, getting directions, checking the weather, and so on. Right now, every time you want to do something like that with your phone, you’re jumping into and out of individual apps; the r1 supposes a future where all your apps connect into a central piece of AI-driven software, disappear, and then are acted on in the background when you request a specific action. For instance, you might say something like “I need an Uber where I am right now and I want to go home,” and then instead of opening the app and tapping in all your requests, the r1’s LAM will do it for you.

It said it was getting the music going, the screen went dark, and I never heard anything

This, on its face, is a brilliant idea, and most likely the future state of computing for the vast majority of users. However, the r1 is nowhere near ready to handle these tasks (in fairness, the company doesn’t seem to be presenting this as a mass-market device, but rather as an experiment in progress). At present, only four apps connect to the device (Uber, Spotify, DoorDash, and, confusingly, Midjourney, the AI art generation app), and using them either failed most of the time or delivered something other than what I wanted altogether. For instance, when I tried to play specific music on Spotify by voice command, it would often begin playing something other than what I asked for, or just do nothing at all. One time when I asked it to play the new Beyoncé album, it said it was getting the music going, the screen went dark, and I never heard anything. I also had to reboot the device to get the home screen to show up again. Womp.

In fairness, Google Assistant doesn’t seem to fare much better on this one.

The second way the r1 is trying to innovate in mobile computing is by adding a new spatial, or physical, awareness of you and your surroundings, and augmenting it with the capabilities of an AI assistant.

For instance, the r1 can use “vision” to do various tasks: describe what it’s looking at, transcribe text, or turn a hand-drawn spreadsheet into a CSV file. It does some of this fairly well, particularly telling you what it’s looking at, though it did say my chihuahua Ginger was alternately a cat and a rabbit several times. The company touts real-time translation using voice recognition, but I couldn’t get it to listen to and transcribe a video of Bradley Cooper speaking French even a little bit (it just gave me a failure message).

Rabbit r1 AI vision dog / cat pic
The r1 misidentifying Ginger.

Failure, in fact, seems to be a hallmark of the r1’s capabilities right now. It’s a whiz at taking notes and saving them to its cloud “rabbithole,” but it can’t set a reminder, takes forever to look up a basic weather report, doesn’t seem to know what time zone I’m in, and failed to fetch an Uber for me when I asked it several times (“Uber is unavailable, sorry for the inconvenience”). Add to this the fact that the battery drains at a breakneck rate — I watched it go from 88% to 71% in the span of an hour, while it wasn’t being used — and that it seems unable to hold onto a WiFi connection reliably, and it’s not a recipe for an enjoyable user experience.

What it demonstrates, like so much AI tech right now, is that the ambition of these products exceeds the capabilities of the underlying technology. LLMs (or LAMs) do not yet process data with the speed or accuracy needed to be an integral part of your daily digital life.

Sure, you can have a fun conversation with an AI chatbot, but it might just make up fake news reports

Sure, you can have a fun conversation with an AI chatbot, but if you ask it for factual information, it might just make up fake news reports about Vladimir Putin, or tell people that it has a disabled and gifted child. The r1 wants to be a nexus for all of these services to intertwine, but the underlying services themselves are years away from being commercially viable – even Mark Zuckerberg will tell you that. The other issue that stands out to me here is that I can’t see the r1 or the Humane pin doing anything novel that the phone already in your pocket couldn’t do with the right software. The idea of a secondary or replacement device seems silly when it’s clear all these products are doing is acting as a vehicle for cloud-processed language and vision models.

So where does that leave Rabbit? Clearly right now this is a hobbyist device – a v0.5 product that will need time in the hands of enthusiasts and lots of work on its backend software. It’s beautiful and interesting, and a lot of fun to mess around with if you don’t expect too much from it – but it’s not the future of computing. Not yet.

tech

Anthropic launches “Claude Design,” sending shares of Figma and Adobe down

Anthropic has been slowly and steadily gaining a leading share in the enterprise AI market by focusing on coding, spreadsheets, and other common productivity and workplace apps.

And now they are going after design apps.

Today Anthropic launched Claude Design, a dedicated app powered by its latest model, Claude Opus 4.7, that lets users build website designs, user-interface prototypes, presentations, and marketing materials from text prompts.

Shares of Figma and Adobe sank on the news.

While Claude has previously had the ability to create designs and user interfaces, breaking it out into a dedicated app signals a major new piece of its enterprise strategy alongside its popular Claude Code product.


tech

Apple’s China iPhone shipments surged 20% in Q1 even as overall smartphone shipments fell

Apple’s iPhone shipments in China jumped 20% last quarter, even as the country’s overall smartphone market fell 4%, according to new data from Counterpoint Research. Rising memory costs have pushed prices higher across the industry, weighing on demand.

Apple appears poised to ride out the broader smartphone slump. Its strength at the less price-sensitive high end of the market and its unusual leverage over suppliers, which helps keep costs in check, give it an edge over rivals.

Greater China remains a critical region for Apple, making up about 18% of its total revenue in the fourth quarter. The company accounted for 19% of China’s smartphone market in the first quarter, up from 15% a year earlier, per Counterpoint.

tech
Rani Molla

Anthropic has surged past OpenAI in capturing business spending on generative-AI software

Last quarter, Anthropic attracted the lion’s share of trackable business spending on generative-AI software, according to new data from Ramp, a fintech company that provides corporate cards and expense management software for small firms and Fortune 500 companies alike.

The data showed that in the first quarter, Anthropic saw 37% of spending, its biggest share yet, versus 33% for OpenAI. Notably, the dataset doesn’t capture spending by Google or Microsoft.

OpenAI, which makes ChatGPT, still leads in overall adoption at 81% of AI buyers, but Anthropic is catching up, at nearly 63% in March. Overall, more than half of Ramp’s customers currently pay for AI, up from just 18% two years ago.

Anthropic’s enterprise tools, including Claude Code and Cowork, have been making waves among the business class, sending its revenue soaring.

Anthropic’s revenue share is even higher among companies spending on AI for the first time.

“Anthropic has definitely been on a tear,” Ara Kharazian, Ramp’s economist, told Sherwood News. “Its increase in adoption rates has been driven by its ability to sell to less technical users and smaller contracts than it typically has.”

It’s notable that midway through the first quarter, Anthropic had a falling-out with one of its biggest customers, the US government, which near the end of February decided to shun Anthropic’s products and lean into working with OpenAI.

tech
Jon Keegan

Report: Google ditches its objection to defense work, pitches Gemini to Pentagon

In 2018, Google employees protested against the company’s tech being used for the US military’s Project Maven — a drone targeting program — reminding the company of its “don’t be evil” motto.

After the controversy, the company declined to renew the contract with the Pentagon, drawing a bright line between Big Tech and the national security establishment.

What a difference a few years makes.

Google is now actively working to get its Gemini AI model to be used in classified national security settings, according to a new report from The Information. Seeking a similar deal to the one OpenAI hashed out with the Pentagon, Google reportedly wants a contract that allows use of Gemini in classified work, but with a prohibition on mass domestic surveillance and autonomous lethal weapons.

But Google is playing catch-up in a major way. Amazon and Microsoft both have been widely used for classified defense work, and contractors are already experienced in working with their cloud systems, while Google’s services have never been used in classified work.



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.