As humans, our sense of touch is one of the most important tools enabling us to interact with our surroundings. Computer scientists have long been developing algorithms that allow robots to interact with their environments in similar ways. For example, impressive progress in computer vision has brought us machines that can read handwritten text. Recent work led by Alex Church and Professor Nathan Lepora at the University of Bristol, UK, applied similar techniques to teach a robot arm with an artificial sense of touch to type on a Braille keyboard.

The authors used deep reinforcement learning, a family of algorithms that allow an artificial intelligence to teach itself to perform specific tasks. Deep reinforcement learning has been used to train computer programs to play complex human games, most famously when AlphaGo beat professional players at the game of Go. These algorithms mimic how we as humans learn tasks through trial and error, remembering what works and what doesn’t. In this experiment, a robot arm with a tactile sensor on its fingertip was tasked with typing on a Braille keyboard. The robot arm explored the keyboard, touching and thus “reading” the letters. The deep reinforcement learning algorithm helped the robot learn the positions of the letters by rewarding it each time it pressed the correct key.
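
For readers curious what this reward-driven loop looks like in code, here is a minimal sketch in Python. It uses simple tabular Q-learning on a toy arrow-key task rather than the deep neural networks and tactile images of the actual study; the key names, learning rate, and episode counts are all illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

KEYS = ["UP", "DOWN", "LEFT", "RIGHT"]  # toy stand-in for the arrow-key task
ALPHA, EPSILON = 0.1, 0.1               # learning rate and exploration rate

# Q-table: for each target key, the estimated value of pressing each key.
q_table = defaultdict(lambda: [0.0] * len(KEYS))

def choose_action(target):
    """Epsilon-greedy: mostly exploit what has worked, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(len(KEYS))
    values = q_table[target]
    return values.index(max(values))

for _ in range(2000):
    target = random.choice(KEYS)                     # key the robot should press
    action = choose_action(target)
    reward = 1.0 if KEYS[action] == target else 0.0  # reward only correct presses
    # One-step learning update: nudge the estimate toward the reward received.
    q_table[target][action] += ALPHA * (reward - q_table[target][action])

# After training, the greedy policy presses the right key for each target.
for key in KEYS:
    best = KEYS[q_table[key].index(max(q_table[key]))]
    print(f"target {key} -> press {best}")
```

In the real system, the “state” is far richer (readings from the tactile fingertip sensor) and the lookup table is replaced by a deep neural network, but the underlying idea is the same: actions that earn rewards become more likely to be repeated.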

The robot was tested on four benchmark tasks, ranging from performing simple strokes with the arrow keys to typing complex sequences using the entire alphabet. The robot learned the first three tasks very efficiently; only the fourth and most complex task proved too difficult. Through this process, the team identified specific methods that helped the robot learn more efficiently from the tactile data. For example, the scientists found that first training the robot in a simulated environment, before switching to a real-world system, enabled a faster learning process. These improvements in training efficiency could allow the next generation of tactile robots to learn even more complex tasks, which could bring us closer to artificially replicating the human sense of touch. Eventually, this could lead to robots able to perform many of the tasks for which we currently rely on our sense of touch.
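
To make the simulation-first idea concrete, here is a hedged extension of the toy sketch above: the same learner first practices on a noise-free “simulated” keyboard, then briefly fine-tunes on a “real” keyboard where presses can slip. The environment class, the slip probability, and the episode counts are illustrative assumptions, not the authors’ actual setup.

```python
import random
from collections import defaultdict

KEYS = ["UP", "DOWN", "LEFT", "RIGHT"]
ALPHA, EPSILON = 0.1, 0.1

class KeyboardEnv:
    """Toy keyboard: reward 1.0 for pressing the target key, else 0.0.

    `slip` crudely models real-world noise: with that probability the
    press lands on a random key instead of the intended one.
    """
    def __init__(self, slip=0.0):
        self.slip = slip

    def press(self, target, action):
        if random.random() < self.slip:
            action = random.randrange(len(KEYS))  # noisy "real-world" press
        return 1.0 if KEYS[action] == target else 0.0

def train(q_table, env, episodes):
    for _ in range(episodes):
        target = random.choice(KEYS)
        if random.random() < EPSILON:  # explore
            action = random.randrange(len(KEYS))
        else:                          # exploit the best estimate so far
            action = max(range(len(KEYS)), key=lambda a: q_table[target][a])
        reward = env.press(target, action)
        q_table[target][action] += ALPHA * (reward - q_table[target][action])

q_table = defaultdict(lambda: [0.0] * len(KEYS))
train(q_table, KeyboardEnv(slip=0.0), episodes=5000)  # bulk of learning in "simulation"
train(q_table, KeyboardEnv(slip=0.2), episodes=500)   # brief fine-tuning on noisy "hardware"
```

The appeal of this schedule is practical: simulated key presses are fast, cheap, and risk-free, while every press on the physical robot takes real time and wears the hardware, so doing most of the learning in simulation and reserving the robot for fine-tuning saves both.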

Alex Church is a doctoral student in Engineering and Mathematics at the University of Bristol, UK. Nathan Lepora is Professor of Robotics & AI at the University of Bristol and Principal Investigator of the Tactile Robotics group at the Bristol Robotics Laboratory. The paper was also co-authored by Dr. John Lloyd, a senior research associate at the University of Bristol, and Dr. Raia Hadsell of DeepMind.

Managing Correspondent: Anne Hébert

Original article: Deep Reinforcement Learning for Tactile Robotics: Learning to Type on a Braille Keyboard – IEEE

Media coverage: Teaching AI agents to type on a Braille keyboard – Tech Xplore

Image credit: Stefan Malmesjö
