Teaching Robots How to Learn
Discover learning from demonstration (LfD), a method that transfers knowledge to robots.
Humans learn from observing, following, or imitating others. With learning from demonstration (LfD), robots do so as well. In this article, we take a closer look at LfD: the types of demonstrations and its various applications, from manufacturing and healthcare to mobile robots.
What is LfD?
LfD is “the paradigm in which robots acquire new skills by learning to imitate an expert.” Unlike more traditional robot programming methods, LfD does not require proficiency in coding or a huge time investment. Instead, humans teach the robot the skills that took them years to master.
What are the different methods of LfD?
Kinesthetic teaching
In kinesthetic teaching, the human physically moves the robot through the desired motions. This approach is intuitive, requires minimal training, and needs no additional sensors, inputs, or interfaces.
Teleoperation
In teleoperation, the human user controls the robot’s movements with a joystick or graphical user interface (GUI). Because the user does not have to be on-site, teleoperation can be done remotely and suits large-scale demonstrations.
Passive observation
Wearing sensors for tracking, the human teacher performs the task while the robot shadows him or her. Also referred to as imitation learning, this method requires almost no training for the human teacher.
(Also read: Man and Machine: Collaborative Robots)
What are the applications of LfD?
With LfD, manufacturers spend less time reprogramming robots. Typical tasks include peg insertion, pick-and-place, grasping, polishing, and assembly operations.
(Also read: COVID-19 and the Future of Electronics Manufacturing)
LfD helps robots acquire skills that are important in the medical industry, where they assist patients and provide healthcare services: from robotic surgery and rehabilitation to helping patients with their daily activities.
(Also read: How Robotics Is Transforming Healthcare)
(Also read: A Driverless Future: Autonomous Vehicles)
While LfD has many advantages and applications, it also has its drawbacks. Learning depends on the frequency and quality of the demonstrations, and imitating an expert alone may leave little room for the robot to generalize beyond what it was shown.
Last November, researchers from the University of Southern California (USC) presented a paper entitled “Learning from Demonstrations Using Signal Temporal Logic” at the Conference on Robot Learning (CoRL).
Signal temporal logic (STL) is “an expressive mathematical symbolic language that enables robotic reasoning about current and future outcomes.” With STL, robots can evaluate and rank the quality of demonstrations.
For example, if a driver runs a stop sign during a driving demonstration, the system ranks that demonstration lower. But if the driver brakes to avoid a crash, the robot can still learn from that action.
The system they have designed evaluates the quality of each demonstration, enabling the robot to learn from both successes and mistakes—much the same way a human does.
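To make the ranking idea concrete, here is a minimal sketch of how an STL-style robustness score could order driving demonstrations. The rule, speed traces, and names below are illustrative assumptions for this article, not the actual USC system.

```python
# Illustrative sketch: rank demonstrations by the robustness of an
# STL-style property. Positive robustness = satisfied with margin;
# negative = violated. All data here is hypothetical.

def robustness_eventually_stopped(speeds):
    """Robustness of the property 'eventually speed <= 0.5 m/s'
    (the car comes to a stop at some point in the trace)."""
    return max(0.5 - v for v in speeds)

# Hypothetical speed traces (m/s) as each demonstrator approaches a stop sign.
demos = {
    "full_stop": [3.0, 1.5, 0.4, 0.0, 0.0],   # stops completely
    "rolling":   [4.0, 2.5, 1.8, 1.2, 1.5],   # rolls through slowly
    "ran_sign":  [6.0, 5.5, 5.0, 4.8, 5.0],   # barely slows down
}

# Higher robustness = better demonstration; violations rank last.
ranked = sorted(demos, key=lambda d: robustness_eventually_stopped(demos[d]),
                reverse=True)
print(ranked)  # ['full_stop', 'rolling', 'ran_sign']
```

Because robustness is a number rather than a pass/fail verdict, even a flawed demonstration gets a graded score, which is what lets the robot learn from mistakes as well as successes.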
"When we go into the world of cyber physical systems, like robots and self-driving cars, where time is crucial, linear temporal logic becomes a bit cumbersome, because it reasons about sequences of true/false values for variables, while STL allows reasoning about physical signals,” Jyo Deshmukh, a former Toyota engineer and USC Viterbi assistant professor of computer science, told ScienceDaily.
Meanwhile, researchers from the Massachusetts Institute of Technology (MIT) have designed the Planning with Uncertain Specifications (PUnS) system. This system gives robots “the humanlike planning ability to simultaneously weigh many ambiguous—and potentially contradictory—requirements to reach an end goal.”
After a robot arm observed human demonstrations of setting a table with objects, researchers tasked it with setting the table in a specific configuration. Despite items being hidden, removed, or stacked, the arm completed the task correctly in real-world experiments and made only six mistakes in 20,000 simulated test runs.
During the real-world demonstrations where an object was hidden, the robot was not deterred. It would finish setting the table and then, when the missing item was revealed, would set it in the proper place.
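The core idea of weighing ambiguous, possibly contradictory requirements can be sketched in a few lines: keep a belief over several candidate specifications inferred from demonstrations, then pick the plan that scores best against that belief. The items, specs, and weights below are illustrative assumptions, not the actual PUnS implementation.

```python
# Toy sketch in the spirit of planning under uncertain specifications:
# several candidate rules, each with a belief weight, and a planner that
# maximizes the total weight of satisfied rules. All values are hypothetical.
from itertools import permutations

ITEMS = ["plate", "fork", "spoon", "cup"]

# (belief weight, requirement over a placement order)
CANDIDATE_SPECS = [
    (0.6, lambda plan: plan.index("plate") < plan.index("fork")),   # plate before fork
    (0.3, lambda plan: plan.index("fork") < plan.index("spoon")),   # fork before spoon
    (0.1, lambda plan: plan[-1] == "cup"),                          # cup placed last
]

def expected_score(plan):
    """Sum of belief weights of the requirements this plan satisfies."""
    return sum(w for w, spec in CANDIDATE_SPECS if spec(plan))

# Search all placement orders and keep the best-scoring one.
best_plan = max(permutations(ITEMS), key=expected_score)
print(best_plan)  # ('plate', 'fork', 'spoon', 'cup')
```

Because the planner maximizes a weighted score rather than enforcing one rigid recipe, it can still act sensibly when no ordering satisfies every requirement, much as the MIT robot kept setting the table when an item went missing.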
“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group. “That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home.”
From manufacturing and healthcare to transportation and household tasks, robots can learn from—and lend a helping hand to—humans, making our everyday lives potentially safer and more efficient.
As one of the Top 20 EMS companies in the world, IMI has over 40 years of experience in providing electronics manufacturing and technology solutions.
At IMI, we believe that humanity drives technology, and we direct our passion at solutions that enhance our way of living. With more than 400,000 square meters of factory space in 22 factories across 10 countries, we are positioned to build your business on a global scale.
Our proven technical expertise, worldwide reach, and vast experience in high-growth and emerging markets make us the ideal global manufacturing solutions partner.
Let's work together to build our future today.