
HOP TO IT

Rabbit R1 handheld device. (Sherwood News)

Rabbit’s r1 AI companion is not the future you were promised

The $199 “Large Action Model” device has big ambitions and a pretty face. And that’s about it.

If you've been paying attention to the technology space lately (and let's be honest, you have), you've probably noticed that the hype cycle around AI has reached a fever pitch: historically, the high note right before the crash. One particularly interesting area of AI exploration has seen something of a recent boom: hardware designed specifically around interacting with large language models (or LLMs, the backbone of most “AI” tech).

Most visible in this space right now is the disastrously reviewed AI “pin” from Humane, a company that received $230M in venture funding and was founded by two ex-Apple employees famed for leading interface design on products like the Apple Watch. A close runner-up is a curious device from a company called Rabbit, dubbed the r1.

Introduced at the Consumer Electronics Show in January, the r1 shares some similarities with Humane's pin in that it seeks to alter the way we think about mobile computing, through both hardware differentiation and the belief that the software powering our future devices will rely heavily on artificial intelligence to help us navigate our daily lives.

Where Humane supposes that the future of mobile computing is an awkward pin that projects images onto your hand and relies entirely on voice control, the r1 takes a decidedly different approach. It has a screen, can be navigated with a scroll wheel as well as touch, and, most importantly, doesn't seek to replace your phone. Instead, think of it as a companion to your phone, or a more simplified way to carry out tasks an AI might handle better than the mobile operating systems Apple and Google created nearly 20 years ago.

At the very least, the r1 looks really good

The r1 is a beautiful and fascinating object, with hardware and an interface created by Swedish design house Teenage Engineering, who have taken the mantle from Jony Ive as the most forward-thinking and exciting industrial designers of our age. Beyond creating a string of industry-defining audio and synthesizer products (including the iconic OP-1), Teenage Engineering have worked with the Nothing brand on its phone and earbud designs, created the Playdate handheld gaming device alongside developers Panic, and partnered with Ikea on Frekvens, a modular line of lights and speakers. So at the very least, the r1 looks really good.

The device is about half the size of a mobile phone and striking in its neon-orange colorway. It touts a motorized camera that can swivel to view either the user or what the user is looking at, and it can slip easily into just about any pocket. It is navigated largely through voice interaction, but it also has menus that let you jump through its OS (albeit sluggishly) without having to say anything.

The Rabbit R1’s main menu.

If only the software and the “Large Action Model” (the company's name for the r1's method of interfacing with the world) were as developed and complete as the device's look and feel.

From my perspective, the r1 wants to do two things for its users right now. 

The first is to connect disparate services together to create a unified “front door” to all of the things you’re typically doing with your phone that aren’t messaging or talking to people: ordering food, playing music, taking notes, getting directions, checking the weather, and so on. Right now, every time you want to do something like that with your phone, you’re jumping into and out of individual apps; the r1 supposes a future where all your apps connect into a central piece of AI-driven software, disappear, and then are acted on in the background when you request a specific action. For instance, you might say something like “I need an Uber where I am right now and I want to go home,” and then instead of opening the app and tapping in all your requests, the r1’s LAM will do it for you.
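To make the concept concrete: the “front door” idea boils down to mapping a natural-language request onto a structured action for one of the connected services. This is a purely illustrative sketch, not Rabbit's actual implementation — a real LAM would use a language model rather than keyword matching, and every function and field name below is hypothetical:

```python
# Hypothetical sketch of the "unified front door" idea: a spoken request is
# parsed into a structured intent, then dispatched to the matching service
# in the background. None of this reflects Rabbit's real code.

def parse_intent(utterance: str) -> dict:
    """Naive keyword-based intent extraction (a real LAM would use an LLM)."""
    text = utterance.lower()
    if "uber" in text or "ride" in text:
        return {"service": "rideshare", "action": "request_ride",
                "destination": "home" if "home" in text else "unknown"}
    if "play" in text:
        return {"service": "music", "action": "play",
                "query": text.split("play", 1)[1].strip()}
    return {"service": None, "action": None}

def dispatch(intent: dict) -> str:
    """Pretend to act on the intent; real handlers would call service APIs."""
    if intent["service"] == "rideshare":
        return f"Requesting a ride to {intent['destination']}"
    if intent["service"] == "music":
        return f"Playing '{intent['query']}'"
    return "Sorry, I can't do that yet"

print(dispatch(parse_intent("I need an Uber and I want to go home")))
print(dispatch(parse_intent("Play the new Beyoncé album")))
```

The hard part, of course, is everything this sketch waves away: reliably extracting the right intent from arbitrary speech, and actually driving third-party apps in the background — which is exactly where the r1 stumbles in practice.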

It said it was getting the music going, the screen went dark, and I never heard anything

This, on its face, is a brilliant idea, and most likely the future state of computing for the vast majority of users. However, the r1 is nowhere near ready to handle these tasks (in fairness, the company doesn’t seem to be presenting this as a mass-market device so much as an experiment in progress). At present, only four apps connect to the device (Uber, Spotify, DoorDash, and, confusingly, Midjourney, the AI art generator), and navigating those apps either failed most of the time or misunderstood what I wanted altogether. For instance, when I tried to play specific music on Spotify by voice command, it would often begin playing something other than what I asked for, or do nothing at all. One time, when I asked it to play the new Beyoncé album, it said it was getting the music going, the screen went dark, and I never heard anything. I also had to reboot the device to get the home screen to show up again. Womp.

In fairness, Google Assistant doesn’t seem to fare much better on this one.

The second way the r1 is trying to innovate in mobile computing is by adding a new spatial, or physical, awareness of you and your surroundings, and augmenting that awareness with the capabilities of an AI assistant.

For instance, the r1 can use “vision” to do various tasks: describe what it’s looking at, transcribe text, or turn a hand-drawn spreadsheet into a CSV file. It does some of this fairly well, particularly telling you what it’s looking at, though it alternately called my chihuahua Ginger a cat and a rabbit several times. The company touts real-time translation using voice recognition, but I couldn’t get it to listen to and transcribe a video of Bradley Cooper speaking French even a little bit (it just gave me a failure message).

The r1 misidentifying Ginger.

Failure, in fact, seems to be a hallmark of the r1’s capabilities right now. It’s a whiz at taking notes and saving them to its cloud “rabbithole,” but it can’t set a reminder, takes forever to look up a basic weather report, doesn’t seem to know what time zone I’m in, and failed to fetch an Uber when I asked it several times (“Uber is unavailable, sorry for the inconvenience”). Add to this the fact that the battery drains at a breakneck rate — I watched it go from 88% to 71% in the span of an hour while it wasn’t being used — and that it seems unable to hold onto a WiFi connection reliably, and it’s not a recipe for an enjoyable user experience.

What it demonstrates, like so much AI tech right now, is that the ambition of these products exceeds the capabilities of the underlying technology. LLMs (or LAMs) simply don’t process data with the speed or accuracy needed to be an integral part of your daily digital life right now.

Sure, you can have a fun conversation with an AI chatbot, but it might just make up fake news reports

Sure, you can have a fun conversation with an AI chatbot, but if you ask it for factual information, it might just make up fake news reports about Vladimir Putin, or tell people that it has a disabled and gifted child. The r1 wants to be a nexus where all of these services intertwine, but the underlying services themselves are years away from being commercially viable — even Mark Zuckerberg will tell you that. The other issue that stands out to me is that I can’t see the r1 or the Humane pin doing anything novel that the phone already in your pocket couldn’t do with the right software. The idea of a secondary or replacement device seems silly when it’s clear that all these products really do is act as a vehicle for cloud-processed language and vision models.

So where does that leave Rabbit? Clearly, right now, this is a hobbyist device — a V.5 product that will need time in the hands of enthusiasts and lots of work on its backend software. It’s beautiful and interesting, and a lot of fun to mess around with if you don’t expect too much from it — but it’s not the future of computing. Not yet.

More Tech


ICE agents arrest workers from Meta’s Hyperion data center site

Yesterday, US Immigration and Customs Enforcement (ICE) officers stopped and arrested two workers from Meta’s massive Hyperion data center construction site in Richland Parish, Louisiana.

According to the Richland Parish Sheriff’s Office, the two dump truck drivers were arrested during a traffic stop as they headed to the construction site, where thousands of people are working.

Bloomberg reports that unmarked vehicles at the perimeter of the construction site were stopping and checking the identification of workers. The Sheriff’s Office said ICE agents did not enter the Meta site at any time.



Two cofounders leave Thinking Machines Lab to return to OpenAI

A group of researchers have left Mira Murati’s Thinking Machines Lab to go back to OpenAI. Fidji Simo, OpenAI’s head of apps, posted on X that Thinking Machines cofounders Barret Zoph and Luke Metz, along with Sam Schoenholz, will be returning to the company.

In October, Thinking Machines cofounder Andrew Tulloch left to work for Meta.

Thinking Machines Lab was cofounded by Murati, a former OpenAI executive, and the startup has been raising large amounts of money, reportedly at a $50 billion valuation.



X says it’s stopping Grok from putting real people in bikinis on X

After public and government uproar over sexualized deepfakes of women and children, X’s Safety account posted Wednesday evening that it is no longer allowing the Grok account on X to generate “images of real people in revealing clothing such as bikinis.” The xAI-owned company also said it restricted image generation and editing via Grok on X more broadly to paid subscribers.

For what it’s worth, a subscriber reply to X Safety’s post asking Grok to put the tweet “in a bikini” prompted the chatbot to post an image of a woman in a bikini — though she does not appear to be a real person. I’m not a paid X subscriber, but in the process of reporting this piece, I was able to edit the image to be “younger” and “17 years old.”

The post also did not address what the changes mean for Grok’s stand-alone app, which currently ranks No. 5 among free apps in Apple’s App Store. Previous reporting from NBC News found that users could also still generate offensive images using the app.

Tesla and xAI CEO Elon Musk, for his part, said Wednesday that he was “not aware of any naked underage images generated by Grok.”



Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.