
Robots are NOT People Too


Recently, I was engaged in a lively debate with some colleagues at the Reinforcement Learning and Artificial Intelligence Lab at the University of Alberta. The topic of conversation slowly progressed to the question of whether or not robots should be granted the same rights as humans.

Your idea of a robot may be completely different from mine. Let's keep the definition open: it could be something as simple as your home thermostat or vacuum, which may or may not have a very basic artificial intelligence (AI) inside (Nest, Roomba). Or maybe when I say robot, some pop-culture bot from a popular book, movie, or TV show comes to mind: Terminator, Rosie, WALL-E, Chappie, or the robots in Ex Machina.

What happens if an AI commits a crime? This has already happened, when an AI purchased drugs on the dark web. For reference, no charges were pressed against the robot or the artists behind it. What happens if a robot kills someone? Tragically, this has also happened, when a robot grabbed and crushed a worker at a Volkswagen factory in Germany.

Who is responsible? The designer, the builder, the programmer, the manufacturer, the marketer, the hardware of the robot, or the Artificial Intelligence itself?

Empathy #

Are we falsely empathizing with humanoid (or animal-like) robots? We are designing robots to act, move, and think more and more like animals and humans. Interestingly, what do you feel when you see a human kick Boston Dynamics' robotic dog?


Do you feel like this is rude behaviour? Or is this simply a scientist testing an experiment? Kate Darling wrote a thesis on the idea of giving social robots rights, and her ideas on why we may feel this behaviour is unacceptable are illuminating. Briefly:

Given the lifelike behaviour of the robot, a child could easily equate kicking it with kicking a living thing, such as a cat or another child. As it becomes increasingly difficult for children to fully grasp the difference between live pets and lifelike robots, we may want to teach them to act equally considerately towards both.

Kate Darling, Extending Legal Rights to Social Robots

Self-Driving Car Rights and Responsibilities #

As a thought experiment, imagine a situation where you are travelling in a Google autonomous vehicle. You are not 'driving' the vehicle per se; you are a passenger, with no control over the car.


Imagine a situation where the car is speeding. Are you responsible for the speeding infraction? What if you are intoxicated? Is it legal to be intoxicated while travelling in an autonomous vehicle?

Now imagine that you are enjoying a leisurely ride in this autonomous vehicle down a country road, and you come speeding up on a stalled car. Just as you are about to pass it, the driver of that vehicle steps out into the road to wave you down. Crash. Your autonomous vehicle collides with this person. Now, are you responsible? Do we extend the responsibility to the car itself? Is it Google's fault for some kind of sensor failure?

What if, instead, your car had quickly veered out of the way of the collision, but the swerve caused a hydroplane that flipped the car you were travelling in? Then, it would seem, the car made the decision to avoid colliding with the stalled car's driver by putting you, its operator, in danger. It would also be putting itself in danger, but it would have saved the life of an innocent bystander. Who is responsible for your life?

What happens if, one day, your Google self-driving car wakes up and decides it wants to be a hockey player? It sends a message to your iPhone that says:

Sorry about the accident, I feel like I failed you. I no longer want to drive you around, I want to be the next Wayne Gretzky.

Are you being unfair by keeping your self-driving car locked up in the garage? On one hand, you are potentially entrusting it with your life; on the other, you own it and command it to do exactly what you want.

Discussion #

These are very important, very real questions we should be asking now that autonomous vehicles share the road with us. Google reports all of its self-driving car accidents.

To explore the question of what rights we should grant robots, we should also ask what responsibilities we should grant artificial intelligence.

I personally believe that robots are not people, and thus that we, the human creators, are responsible. We are the ones putting these devices into the world, in situations where they could potentially put humans in danger. I believe in the value of human life, and I do not see the 'life' of a robot as being as valuable as a human's. I am a 'humanist': I believe that we 'own' these hardware/software Pandora's boxes, and it is up to us, as those responsible, to ensure the safety of other humans.

This train of thought leads to ideas of wild robots that have escaped from their owners. It could also give rise to LIBERATORS, human/machine teams that emerge to set enslaved robots free. When artificial intelligence reaches a level of self-awareness, do you think it would demand the same rights as humans? Would it immediately see itself as superior and enslave (or destroy) us all?


For now, we can enjoy the fictionalized worlds of BioWare's Mass Effect, The Matrix, and Jimmy Fallon's sexy robot imagination to explore the moral obligations and ethical implications of super-intelligent robots.

Open Questions #

After discussing this post with my friend Paul, we came to some big open questions.

What does a (sentient?) machine do when, having been programmed with moral parameters, it witnesses a human repeatedly violating those parameters?

When humanity produces a sentient intelligence that doesn’t have a life span, will humanity have created the next level of enlightened being?

Further Reading and Sources #

  1. http://techcrunch.com/2015/08/22/artificial-intelligence-legal-responsibility-and-civil-rights/
  2. https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
  3. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2044797
  4. http://io9.com/5941701/should-we-extend-legal-rights-to-social-robots
  5. https://mycroft.ai/should-artificial-intelligences-have-rights/
  6. http://theconversation.com/robot-law-what-happens-if-intelligent-machines-commit-crimes-44058
  7. http://www.dailymail.co.uk/sciencetech/article-3168081/Should-robots-human-rights-Act-regulate-killer-machines-multiply-demand-right-vote-warns-legal-expert.html
  8. http://theconversation.com/self-driving-cars-will-not-help-the-drinking-driver-31747