As expected, CES 2024 features a diverse range of artificial intelligence (AI)-powered devices. But one launch in particular, by Rabbit Inc., is attracting attention: the Rabbit R1, a device that uses artificial intelligence to learn how to operate applications based on user interactions.
This means that we will finally be able to have a mobile device capable of performing actions in apps solely through voice commands, something that big tech companies have been pursuing for some time with their virtual assistants.
Meet the Rabbit R1
The Rabbit R1 is a pocket-sized mobile device powered by a MediaTek Helio P35 processor. It has a push-to-talk button, a microphone for voice commands, a computer vision-enabled 360-degree rotating camera called the Rabbit Eye, Wi-Fi, Bluetooth, and even a SIM card slot with 4G support.
The big innovation of this device is its artificial intelligence-based approach. Thanks to an operating system built on a Large Action Model (LAM), called Rabbit OS, the R1 can navigate between applications and learn the user's intentions and behavior.
The device can then reproduce this behavior later, on command. In other words, it can learn how to use the user's applications and perform actions through voice commands, all intuitively, even actions that require several steps.
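To make the idea concrete, here is a toy sketch of that learn-and-replay pattern: a recorded multi-step app workflow is bound to a spoken phrase and replayed on command. This is purely illustrative, it is not Rabbit's actual implementation, and the class, phrases, and app steps are all invented for the example.

```python
# Toy illustration (not Rabbit's actual implementation): record the steps
# a user takes across an app, then replay the whole sequence when a single
# voice command is heard.

class ActionRecorder:
    def __init__(self):
        self.learned = {}  # spoken phrase -> list of app steps

    def record(self, phrase, steps):
        """Associate a multi-step app workflow with a spoken phrase."""
        self.learned[phrase] = list(steps)

    def replay(self, phrase):
        """Return the recorded steps for a phrase, or a fallback prompt."""
        if phrase not in self.learned:
            return ["(unknown command -- ask the user to demonstrate it)"]
        return self.learned[phrase]


recorder = ActionRecorder()
recorder.record("order my usual coffee", [
    "open CoffeeApp",        # hypothetical app name
    "select 'large latte'",
    "add to cart",
    "confirm payment",
])
print(recorder.replay("order my usual coffee"))
```

The point of the sketch is the shape of the interaction: several taps and screens collapse into one phrase, which is what the LAM approach promises at much larger scale.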
“Our mission is to create the simplest computer, something so intuitive that you don't need to learn how to use it. The best way to achieve this is to move away from the app-based operating system that smartphones currently use. Instead, we envision a natural-language-centered approach,” Rabbit founder and CEO Jesse Liu explained during the device's presentation.
The Rabbit R1 aims to be a 'pocket companion'
As CEO Jesse Liu explains, Android and iOS devices currently run on an app ecosystem, which continues to proliferate.
“Do you want to order a car to go to the office? There is an app for that. Do you want to buy groceries? There is another app for that. Every time you want to do something, you have to browse different pages and folders to find the app you want to use, and there are always countless buttons you need to click: add to cart, go to the next page, and so on. The smartphone should be more intuitive,” Liu explained.
To change this, R1 has the ability to execute actions from different applications to satisfy one or more concurrent requests. In the examples the CEO gave in the presentation (from minute 13:00 in the video above), the device is used to order food and even to order a car, turning what might be several steps into a single voice command.
But the most fascinating example is asking the device to organize a trip to London, including searching for flights, hotels, car rentals, and activities to do in the city. What the user would normally do across several applications, the R1 performs within seconds. The device then presents the information it finds and asks the user to confirm each item individually.
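The trip-planning flow described above can be sketched as a simple fan-out with per-item confirmation: one request spawns several searches, and nothing is booked until the user approves each result. This is a hypothetical illustration of the interaction pattern only; the categories and queries are assumptions, not Rabbit's API.

```python
# Hypothetical sketch of the trip-planning flow: a single request fans out
# into several searches, and each result must be confirmed individually
# before anything is booked. Category names and queries are invented.

def plan_trip(destination):
    searches = {
        "flights": f"flights to {destination}",
        "hotels": f"hotels in {destination}",
        "car rental": f"car rentals in {destination}",
        "activities": f"things to do in {destination}",
    }
    # Each category yields a proposal the user must approve separately.
    return [
        {"category": cat, "query": q, "confirmed": False}
        for cat, q in searches.items()
    ]


proposals = plan_trip("London")
for p in proposals:
    # On the real device the user would answer yes or no by voice;
    # here we simply approve everything.
    p["confirmed"] = True

print([p["category"] for p in proposals])
```

The per-item confirmation step is the key design choice: the assistant acts quickly, but the user keeps the final say over every booking.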
“The computer we build, which we call a companion, must be able to talk, understand and, most importantly, do things for you,” says the CEO. “The future of human-machine interfaces must be more intuitive.”
The Rabbit R1 is now available for pre-order, priced at US$199.00 (€181.45), and orders to the US should start shipping in March. There is still no information about sales in other countries.
According to the company, the R1 will initially be trained to work with the most popular applications. However, the plan is that in the near future, the device will have features that allow users to train their Rabbit to perform tasks in other applications they frequently use.