Can Robots Feel Pain?

Carl Strathearn writes about his research, which he hopes will help reduce some of the unsubstantiated claims made in the field of artificial intelligence and robotics.

Recently I was invited to give a talk on BBC Radio Stoke about new research published by researchers at Cambridge and Brussels universities on a robot arm that can self-heal and 'feel pain'. On reading the journal paper before the broadcast, I knew instantly that the research was misleading in its title and context. I have spent the last two and a half years of my PhD researching, developing and testing new ways of creating artificial muscles using organic fibres driven by electrostatic energy for use in a novel robotic eye system. I presented my work to the world's leading experts in the field of soft robotics at the SWARM conference in Japan, where I was awarded best paper and publication, so I have a sound grasp of recent developments and limitations in the field.

I went on air knowing that I would have to demystify many of the claims these scientists promoted, and address the potential impact and fear factor of suggesting that we are on the verge of creating robots that can feel pain and emotions like a human. Such claims are not uncommon in the field of AI and robotics, and have led to issues such as the granting of citizenship and human rights to an android named 'Sophia', long before any robot is human-like enough to be considered equal to humans. Similarly, a humanoid robot named 'Ai-Da' has amassed over a million pounds in sales of so-called 'AI art', yet is nothing more than an animatronic with a pen plotter for an arm. The problem with putting showmanship before academic rigour is that scientists present the field of AI and robotics as further advanced than it really is, which poses a serious problem when such work is reported to the public as fact. A key concern is that, because humanoid robots look and act like humans, we instinctively presume they can also think and feel like us, which is not the case. If I were to put a toaster oven inside a mannequin, would you think AI was making your breakfast? Of course not; yet this is the same as claiming a robot can be as creative in producing works of art as a sentient human being, and selling the result at twice the price because the robot looks slightly human!

I recently published an article in The Conversation outlining my PhD work on an evaluation procedure called 'The Multimodal Turing Test'. The objective of my research is to implement the first graded evaluation system for measuring the authenticity of humanoid robots, working towards androids that are perceptually indistinguishable from humans. I hope that this universal evaluation method will be used by future engineers to benchmark their progress towards higher modes of human emulation, and reduce some of the unsubstantiated claims coming out of the field of AI and robotics today. To test the validity of the Multimodal Turing Test, I have installed the robotic eyes in a humanoid robot named 'Euclid' (pictured) and will run a series of experiments starting this term.

Carl Strathearn is a PhD candidate at Staffordshire University studying AI and humanoid robotics. He can be contacted at:
carl.strathearn@research.staffs.ac.uk