Diego-san – humanoid robot toddler
While photos and videos of the robot have been presented at scientific conferences on robotics and infant development, the general public recently got its first glance at the expressive face of a robot named Diego-san. The project is led by a researcher from the University of California, San Diego (UCSD), and the robot will be used to study how babies “learn” to control their bodies and interact with other people.
Led by Javier Movellan, a full research scientist at UCSD, the research is conducted in collaboration with Professor Dan Messinger’s Early Play and Development Laboratory and Professor Emo Todorov’s Movement Control Laboratory at the University of Washington.
The Diego-san robot is a product of the Developing Social Robots project (started in 2008), in which the ability of infants to seamlessly solve problems during their first year of life inspired researchers to develop a platform for making progress on computational problems that elude even the most sophisticated computers and artificial intelligence approaches.
The robot is about 1.3 meters (4 feet 3 inches) tall and weighs 30 kg (66 pounds). The head of Diego-san builds on Hanson Robotics’ earlier work with the Machine Perception Lab, where an emotionally responsive Albert Einstein head was developed back in 2009; it has 27 moving parts. The body, developed by Japan’s Kokoro Co. (we wrote about their i-Fairy robot), has a total of 44 pneumatic joints.
It is equipped with two cameras, two microphones, inertial measurement units, 38 potentiometers, and 88 pressure sensors. The robot’s sensors and actuators were built to approximate the complexity of a human infant’s, including actuators that replicate dynamics similar to those of human muscles. The technology should allow Diego-san to learn and autonomously develop the sensory-motor and communicative skills typical of one-year-old infants. Movellan and his colleagues are developing the software that allows Diego-san to learn to control its body and to interact with people.
“We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults,” said Movellan, who directs the Institute for Neural Computation’s Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2).
While the robot arguably belongs to the uncanny valley (a hypothesis in robotics which holds that human replicas that look human but do not behave like actual human beings provoke a repulsive reaction in human observers), it may enable the computational study of infant development and potentially offer new clues for understanding developmental disorders such as autism and Williams syndrome.
Although I admire what they achieved, the face of the baby seems pretty unrealistic.
How come they expect test subjects to treat it as a human baby when it is repulsive?
I agree with you MissConception. And it definitely belongs to the uncanny valley.