Designed in collaboration with Teenage Engineering and developed by a startup called Rabbit, the R1 is a small, AI-powered pocket device that the company hopes could eventually replace the smartphone.
Rabbit R1 Launch & First Look
The device is square in shape and features a 2.88-inch LCD touchscreen alongside a rotating camera that can swivel back and forth to capture photos and videos.
It also comes with a clickable scroll wheel, which is used to navigate the user interface and to talk to the R1’s built-in voice assistant.
As far as looks are concerned, the device resembles half of a flip phone.
Whereas a normal smartphone relies on apps to serve its user, the R1 takes a Humane AI Pin-like approach and lets users control the device with voice commands.
Besides this, the device also sports two microphones and a speaker.
What About Functioning?
Functionally, it is similar to the Humane AI Pin: users interact with the Rabbit R1 by pressing and holding the ‘Push-to-talk’ button, which activates the built-in voice assistant.
From there, users can ask the assistant whatever questions they like.
When it comes to the battery, Rabbit did not share details about its capacity, but the company claims the R1 offers ‘all day’ battery life, much like Humane.
Further, the Rabbit R1 is powered by an in-house operating system called RabbitOS, which is built around a ‘Large Action Model’ rather than a large language model like the one behind ChatGPT, company CEO Jesse Lyu said in a video demo.
While talking about the initiative, he said the company wanted to “find a universal solution that can trigger services” regardless of what platform or app people are using.
It seems to be more in line with voice assistants such as Google Assistant and Amazon Alexa, which can send messages, call contacts, and perform other actions on your behalf.
Further, the device uses a model trained to operate on top of existing apps’ interfaces rather than relying on the traditional approach of calling their APIs.
Moreover, Rabbit says its Large Action Model was trained on human interactions with apps such as Spotify and Uber.
This teaches it what a settings button looks like, how to check whether an order was confirmed, and where the navigation buttons are.
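To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what a recorded human demonstration might look like as a sequence of (screen observation, action) pairs. All names and data structures here are hypothetical assumptions for illustration; Rabbit has not published the actual format or interfaces its Large Action Model uses.

```python
from dataclasses import dataclass

@dataclass
class UIObservation:
    """A simplified snapshot of an app screen: which app, and what UI elements are visible."""
    app: str
    visible_elements: list

@dataclass
class UIAction:
    """A single interaction a human demonstrator performed on that screen."""
    kind: str      # e.g. "tap", "type", "scroll"
    target: str    # e.g. "settings_button", "search_field"
    value: str = ""

# A demonstration trace pairs what the screen showed with what the human did.
# A model trained on many such traces could learn to map a request like
# "play Discover Weekly on Spotify" to a similar action sequence,
# without the app exposing a dedicated API.
demo_trace = [
    (UIObservation("Spotify", ["search_field", "home_tab", "settings_button"]),
     UIAction("tap", "search_field")),
    (UIObservation("Spotify", ["search_field", "keyboard"]),
     UIAction("type", "search_field", "Discover Weekly")),
    (UIObservation("Spotify", ["result:Discover Weekly", "play_button"]),
     UIAction("tap", "play_button")),
]

def replay(trace):
    """Print the recorded steps; a real agent would execute them inside the app's UI."""
    for observation, action in trace:
        print(f"[{observation.app}] {action.kind} {action.target} {action.value}".strip())

if __name__ == "__main__":
    replay(demo_trace)
```

The point of the sketch is only the shape of the data: instead of wiring up each service’s API, the system learns from examples of people driving the apps’ own interfaces.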