Thinking Robots Learn Cooking by Watching Online Videos
Computer scientist Yiannis Aloimonos at the University of Maryland is developing robotic systems that can visually recognize objects and generate new behavior based on those observations.
It may not be long before a personal robot can cook your breakfast without any help from you, having learned every step of the process by watching YouTube videos.
Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS), together with a scientist from the National Information Communications Technology Research Centre of Excellence in Australia (NICTA), have developed robotic systems that can teach themselves.
These robots learn the grasping and manipulation movements required for cooking by watching cooking videos online. The key breakthrough is that the robots are capable of "thinking": they can determine the best combination of motions to accomplish a given task effectively.
The researchers will present their work at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas, on January 29, 2015. They achieved this feat by combining approaches from several research areas: artificial intelligence, which gives computers the capacity to make their own decisions; computer vision, which enables systems to identify shapes and movements; and natural language processing, which enables systems to understand spoken commands.
The team said that although the work involved was complex, they wanted practical results relevant to people's daily lives.
“We chose cooking videos because everyone has done it and understands it,” said Yiannis Aloimonos, UMD professor of computer science and director of the Computer Vision Lab, one of 16 labs and centers in UMIACS. “But cooking is complex in terms of manipulation, the steps involved and the tools you use. If you want to cut a cucumber, for example, you need to grab the knife, move it into place, make the cut and observe the results to make sure you did them properly.”
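The cucumber example above breaks a task into an ordered sequence of steps, each pairing an action with a tool and a target. As a purely illustrative toy (not the researchers' actual system), such a recognized plan might be represented like this, where the step names and structure are this sketch's own assumptions:

```python
# Toy illustration only: model a cooking step recognized from video
# as an (action, tool, target) triple, and a task as an ordered plan.
from dataclasses import dataclass


@dataclass
class Step:
    action: str  # e.g. "grasp", "move", "cut", "observe"
    tool: str    # e.g. "knife", "hand"
    target: str  # e.g. "cucumber"


def describe(plan):
    """Render a recognized plan as human-readable instructions."""
    return [f"{s.action} {s.target} with {s.tool}" for s in plan]


# The cucumber example from the article, as a hypothetical plan:
cut_cucumber = [
    Step("grasp", "hand", "knife"),
    Step("move", "knife", "cutting position"),
    Step("cut", "knife", "cucumber"),
    Step("observe", "eyes", "result"),
]

for line in describe(cut_cucumber):
    print(line)  # e.g. "grasp knife with hand"
```

A real system would of course have to extract these steps from raw video and map them onto motor commands; the point here is only that "cut a cucumber" decomposes into discrete, orderable manipulation steps.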
“We are trying to create a technology so that robots eventually can interact with humans,” said Cornelia Fermüller, an associate research scientist at UMIACS. “So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human’s actions and track them in real time. We are interested in understanding all of these components. How is an action performed by humans? How is it perceived by humans? What are the cognitive processes behind it?”
Aloimonos said this work could be a significant contribution to the next industrial revolution.