Sherwood News

Rabbit’s r1 AI companion is not the future you were promised

The $199 “Large Action Model” device has big ambitions and a pretty face. And that’s about it.

If you've been paying attention to the technology space lately (and let's be honest, you have), you've probably noticed that the hype cycle around AI has reached a fever pitch. Historically speaking, that's the high note before the crash. In one particularly interesting corner of AI exploration, there's been something of a recent boom: hardware designed specifically around interacting with large language models (or LLMs, the backbone of most "AI" tech).

Most visible in this space right now is a disastrously reviewed AI “pin” from the company Humane, which received $230M in venture funding and was founded by two ex-Apple employees famed for leading interface design on products like the Apple Watch. But a close runner-up is a curious device created by a company called Rabbit, dubbed the r1.

Introduced at the Consumer Electronics Show in January, the r1 shares some similarities with Humane's pin in that it seeks to alter the way we think about mobile computing, through both hardware differentiation and the belief that the software powering our future devices will rely heavily on artificial intelligence to help us navigate our daily lives.

Where Humane supposes that the future of mobile computing is an awkward pin that projects images onto your hand and relies entirely on voice control, the r1 takes a decidedly different approach. It has a screen, can be navigated with a scroll wheel as well as touch, and, most importantly, doesn't seek to replace your phone. Instead, you can think of it as a companion to your phone, or a more simplified way to carry out tasks an AI might handle better than the mobile operating systems Apple and Google created nearly 20 years ago.

At the very least, the r1 looks really good

The r1 is a beautiful and fascinating object, with hardware and an interface created by Swedish design house Teenage Engineering, who have taken up the mantle from Jony Ive as the most forward-thinking and exciting industrial designers of our age. Beyond creating a string of industry-defining audio and synthesizer products (including the iconic OP-1), Teenage Engineering have worked with the Nothing brand on its phone and earbud designs, created the Playdate handheld gaming device alongside developers Panic, and partnered with Ikea on Frekvens, a modular line of lights and speakers. So at the very least, the r1 looks really good.

The device is about half the size of a mobile phone (though it cannot replace your phone), is striking in its neon orange colorway, and can slip easily into just about any pocket. It touts a motorized camera that can swivel to face either the user or whatever the user is looking at. It is navigated largely through voice interaction, but it also has menus that let you jump through its OS (albeit sluggishly) without having to say anything.

The Rabbit R1’s main menu.

If only the software and the "Large Action Model" (the company's name for the r1's method of interfacing with the world) were as developed and complete as its look and feel.

From my perspective, the r1 wants to do two things for its users right now. 

The first is to connect disparate services together to create a unified “front door” to all of the things you’re typically doing with your phone that aren’t messaging or talking to people: ordering food, playing music, taking notes, getting directions, checking the weather, and so on. Right now, every time you want to do something like that with your phone, you’re jumping into and out of individual apps; the r1 supposes a future where all your apps connect into a central piece of AI-driven software, disappear, and then are acted on in the background when you request a specific action. For instance, you might say something like “I need an Uber where I am right now and I want to go home,” and then instead of opening the app and tapping in all your requests, the r1’s LAM will do it for you.

It said it was getting the music going, the screen went dark, and I never heard anything

This, on its face, is a brilliant idea, and most likely the future state of computing for the vast majority of users. However, the r1 is nowhere near ready for primetime on these tasks (in fairness, the company doesn’t seem to be presenting it as a mass-market device so much as an experiment in progress). At present, only four apps connect to the device (Uber, Spotify, DoorDash, and, confusingly, Midjourney, the AI art generation app), and navigating those apps either failed most of the time or misunderstood what I wanted altogether. For instance, when I tried to play specific music on Spotify by voice command, it would often begin playing something other than what I asked for, or do nothing at all. One time when I asked it to play the new Beyoncé album, it said it was getting the music going, the screen went dark, and I never heard anything. I also had to reboot the device to get the home screen to show up again. Womp.

In fairness, Google Assistant doesn’t seem to fare much better on this one.

The second way the r1 is trying to innovate in mobile computing is by adding a new spatial, or physical, awareness of you and your surroundings, and augmenting that with the capabilities of an AI assistant.

For instance, the r1 can use “vision” to do various tasks: describe what it’s looking at, transcribe text, or turn a hand-drawn spreadsheet into a CSV file. It does some of this fairly well, particularly telling you what it’s looking at, though it alternately identified my chihuahua Ginger as a cat and a rabbit several times. The company touts real-time translation using voice recognition, but I couldn’t get it to listen to and transcribe a video of Bradley Cooper speaking French even a little bit (it just gave me a failure message).

The r1 misidentifying Ginger.

Failure, in fact, seems to be a hallmark of the r1’s capabilities right now. It’s a whiz at taking notes and saving them to its cloud “rabbithole,” but it can’t set a reminder, takes forever to look up a basic weather report, doesn’t seem to know what time zone I’m in, and failed to fetch an Uber for me when I asked it several times (“Uber is unavailable, sorry for the inconvenience”). Add to this the fact that the battery drains at a breakneck rate — I watched it go from 88% to 71% in the span of an hour while it wasn’t being used — and that it seems unable to hold onto a WiFi connection reliably, and it’s not a recipe for an enjoyable user experience.

What it demonstrates, like so much AI tech right now, is that the ambitions of these products exceed the capabilities of the underlying technology. LLMs (or LAMs) simply don’t process data with the speed or accuracy needed to be an integral part of your daily digital life right now.

Sure, you can have a fun conversation with an AI chatbot, but it might just make up fake news reports

Sure, you can have a fun conversation with an AI chatbot, but if you ask it for factual information, it might just make up fake news reports about Vladimir Putin, or tell people that it has a disabled and gifted child. The r1 wants to be a nexus for all of these services to intertwine, but the underlying services themselves are years away from being commercially viable – even Mark Zuckerberg will tell you that. The other issue that stands out to me here is that I can’t see the r1 or the Humane pin doing anything novel that the phone already in your pocket couldn’t do with the right software. The idea of a secondary or replacement device seems silly when it’s clear all these products are doing is acting as a vehicle for cloud-processed language and vision models.

So where does that leave Rabbit? Clearly right now this is a hobbyist device – a V.5 product that will need time in the hands of enthusiasts and lots of work on its backend software. It’s beautiful and interesting, and a lot of fun to mess around with if you don’t expect too much from it – but it’s not the future of computing. Not yet.

8%

Some 8% of kids ages 5-12 have interacted with AI chatbots like OpenAI’s ChatGPT or Google’s Gemini, according to a new Pew Research Center survey of their parents. While that’s nowhere near the usage rates of other devices like smartphones or even voice assistants, it’s still notable for a relatively new technology — especially one that’s already had devastating consequences for young people.


Ives says he’s “relatively disappointed” in the price point of lower-cost Tesla models

On Tuesday, Tesla unveiled its long-awaited lower-cost cars, which turned out to be downgraded versions of the existing Model Y and Model 3. Tesla bull and Wedbush Securities analyst Dan Ives wasn’t particularly impressed with the price point, noting that it’s “still relatively high versus other vehicles on the market.”

The Model Y Standard and Model 3 Standard cost about $40,000 and $37,000, respectively. That’s more than the Model Y Premium and Model 3 Premium — what previous editions (or “trim levels”) are now called — cost last month, before the US federal government’s $7,500 tax credit expired. And the Standard models are missing a lot of Premium features, including Autopilot, second-row screens, and Tesla’s iconic glass roofs, among numerous other downgrades.

In other words, Tesla buyers will now be paying more for less, in what amounts to car-sized shrinkflation.

The stock closed down 4.5% yesterday on the news.

Ives doesn’t think it’s the end of the world but is “disappointed” in the price tag:

“We believe the launch of a lower cost model represents the first step to getting back to a ~500k quarterly delivery run-rate which will be important to stimulate demand for its fleet with the EV tax credit expiring at the end of September but we are relatively disappointed with this launch as the price point is only $5k lower than prior Model 3’s and Y’s.”


Nvidia helps boost xAI funding round to $20 billion

xAI’s latest funding round has now doubled to $20 billion from $10 billion a month ago, thanks in part to backing from Nvidia, which invested $2 billion in the equity portion of the transaction, Bloomberg reports. In an interview with CNBC, Nvidia CEO Jensen Huang confirmed the investment, adding that his “only regret” was that he didn’t give xAI more money.

The mix of $7.5 billion in equity and $12.5 billion in debt will finance a special purpose vehicle that will purchase Nvidia chips, which xAI will then rent. It’s one of many circular AI deals these days that are fueling chatter among some about an AI bubble, while being seen by others as a rational way for industry leaders to boost the potential size of the addressable market and lift their longer-term prospects in the process.

Investors in Elon Musk’s other company, Tesla, will vote next month at the company’s annual shareholder meeting on whether to invest in xAI as well — an outcome Musk has said he supports.

$1T
Jon Keegan

In the past few weeks, OpenAI has announced a flurry of massive deals with Oracle, Nvidia, CoreWeave, AMD, and others as hundreds of billions fly between technology partners racing to expand AI infrastructure at unprecedented scale. The Financial Times tallied it all up and found that the company has signed about $1 trillion worth of deals, and it isn’t clear at all that it will be able to fund them.

The “circular” nature of some of these arrangements is also one factor playing into fears that we’re in an AI bubble.


Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.