Interview. Should We Fear Smart Robots?

Sophia the Robot / Courtesy of Hanson Robotics

Laurence Devillers is a researcher in engineering sciences at the CNRS (French National Centre for Scientific Research), a member of CERNA (Committee for the Study of Research Ethics in Digital Sciences and Technologies) and a specialist in human-machine interaction. She wrote Robots and Humans (Des Robots et des Hommes, Plon, 2017), a book in which she advocates for ethical and responsible robotics.

DirectIndustry e-magazine: In 2018, Google unveiled Google Duplex, which can make appointments by imitating a human voice. Do you find this frightening?

Laurence Devillers: All of my work rests on the idea that we need to keep a boundary between humans and robots. But Google is blurring the lines with this voice, and that opens the door to unethical applications. From a technological standpoint, the voice is currently the easiest thing to imitate. It's possible to deceive someone who doesn't know who, or what, they're communicating with. You can make someone say things they never said. You can even make the dead speak. It's a breach of trust.

DI e-mag: To what extent can a robot imitate a human?

L.D: As soon as we hear a machine speak, we assume that the machine understands, that it has the capacities of a human. That is not the case. Sophia from Hanson Robotics, for example, speaks in a way that is semantically coherent, but she is following a script; she is not at all autonomous and has no desires or intentions. She's a marionette. Engineers have given her a scripted dialog. Machine learning allows a system to learn from data, but without understanding. Such a machine is still nothing like even a living embryo.

DI e-mag: You start your book with a fictional future in which therapeutic robots are prescribed by doctors. Are robots useful then?

L.D: Yes, my point of view is utilitarian and functional. Robots have a lot to offer us. When I see what percentage of the population will be elderly in the next few years, it's clear that we won't have enough working people to look after them. So it could be interesting, for example, to have a robot in charge of monitoring these people at the end of their lives. Robots are useful, but we have to be aware that they should complement humans, not replace them.

DI e-mag: Yet we read everywhere that robots are going to take away our jobs and replace journalists, doctors and employees.

L.D: The goal is not to replace the human. The machine is, for the moment, incapable of it. Our robots are not very precise yet; we are making more cognitive advances than physical ones. Mechatronics is going to need a lot more research. Making entirely humanoid robots remains complicated.

DI e-mag: Robots are going to be increasingly present in our lives. What’s the best way to integrate them?

L.D: We first need a certain set of ethics for designers, which is why researchers are in the loop. Then we need to give people the ability to understand the systems they're using. I'm part of a nudging project that aims to explain these technologies to people in a gentle way. We'll also need to teach robotics at school so children can gain perspective on questions of artificial intelligence and robotics. And then we'll need legal rules. In the same way that a committee decides whether or not a medicinal product is put on the market, we'll need an ethics committee that validates a robot before its market launch. But not everything should be constrained by ethical reasoning; we can do business that is ethical and responsible. When I say ethics, I'm not talking about philosophy. I'm talking about making machines that respect our values. We have to ask ourselves where these new robots pose a danger to humans and, accordingly, how we can regulate them.

SEE ALSO: We met Sophia, Saudi Arabia’s Robot Citizen

DI e-mag: Are manufacturers ready to make ethical robots?

L.D: We understand ethics when it comes to data; this is why we need the new GDPR regulation. But when it comes to co-evolution with machines, we aren't talking about the changes it will bring to relations between humans. Psychiatrists are interested in this question, but the world of technology isn't yet. Certain manufacturers I work with, such as SoftBank Robotics [maker of the robots Pepper and Nao], are starting to understand the idea of ethics. But we still need to find a compromise between ethics and business. We need to agree together on a system of human values that will be respected by the robots that interact with us.

DI e-mag: How do we do this when not all countries have the same values?

L.D: Each country has its own values, and robots should comply with the values of the country they are in, rather like labor laws.

DI e-mag: Do you think ethics can be a competitive advantage?

L.D: Yes, I think this is where Europe, and also the United States, can make a difference. Price may be the main argument for someone buying a robot, but there may also be an ethical aspect that influences their choice of one robot over another. For this, we need to make ethics "fashionable".

DI e-mag: In your view, what are the three areas of robotics where we have to be careful?

L.D: Firstly, it is unconscionable to try to reproduce a human, because doing so makes it easy to deceive and manipulate people. Here I am thinking, for example, of Google Home and Sophia. Next, we must be careful with attachment and empathy: if we live with machines all the time, there will be consequences for our relationships with others. Finally, there will be a division between those who have access to this technology and understand it and those who don't. We are widening the gap of technological inequality, and in the long term that is anti-democratic.

DI e-mag: So when will there be a minister of robotics and artificial intelligence?

L.D: I suggest that a COP [conference of the parties] be held on artificial intelligence. It is urgent that we take an interest in the repercussions of what we're doing. It's our responsibility!


For more on these questions, read the CERNA Report on Research Ethics in Machine Learning.
