Google DeepMind, Google’s AI research lab, on Wednesday announced new AI models called Gemini Robotics designed to enable real-world machines to interact with objects, navigate environments, and more.
DeepMind published a series of demo videos showing robots equipped with Gemini Robotics folding paper, putting a pair of glasses into a case, and performing other tasks in response to voice commands. According to the lab, Gemini Robotics was trained to generalize behavior across a range of different robotics hardware, and to connect items robots can “see” with actions they might take.
DeepMind claims that in tests, Gemini Robotics allowed robots to perform well in environments not included in the training data. The lab has released a slimmed-down model, Gemini Robotics-ER, that researchers can use to train their own models for robotics control, as well as a benchmark called ASIMOV for gauging risk in AI-powered robots.