Introduction
In the society we live in, robots and artificial intelligence get a lot of attention from the media and the public because of their controversy and mysteriousness. The question of robot ethics makes many people tense: they worry about machines lacking empathy, and some even feel sadness when something unkind happens to a machine. In this white paper I'm going to argue that these feelings should not misguide us when we are creating these intelligent machines.
Looking at the facts we know
When engaging with these topics, I often look at the different ways AI is influencing us today, for example the debate over robotic soldiers and whether they should ever be deployed. Whether to wage automated robot war is an ethical decision that belongs in human hands. And of course there are questions about whether a robot soldier is capable of making the right decisions, and about the use of autonomous weapons. Reference: (Yampolskiy, 2013)
Science fiction writer Isaac Asimov introduced the Three Laws of Robotics as engineering safeguards and built-in ethical principles, and many people today recognize them from movies, TV, novels, and stories all around. The laws are: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Reference: (Deng, 01 July 2015)
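To make that priority ordering concrete, here is a minimal Python sketch of how a robot might pick between candidate actions by checking the three laws in order. It is only an illustration: the Action fields and the example actions are invented for this paper, not part of any real robot software.

```python
# Minimal sketch, assuming invented Action fields; not real robot control code.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False             # First Law: would this action injure a human?
    prevents_harm_to_human: bool = False  # First Law, inaction clause
    ordered_by_human: bool = False        # Second Law
    endangers_robot: bool = False         # Third Law

def choose_action(candidates):
    # First Law comes first: rule out anything that would injure a human.
    options = [a for a in candidates if not a.harms_human]
    # First Law's inaction clause: if some remaining action prevents harm, prefer those.
    preventing = [a for a in options if a.prevents_harm_to_human]
    if preventing:
        options = preventing
    # Second Law: prefer actions ordered by a human, but only among what the First Law allows.
    ordered = [a for a in options if a.ordered_by_human]
    if ordered:
        options = ordered
    # Third Law: last of all, prefer actions that do not endanger the robot itself.
    safe = [a for a in options if not a.endangers_robot]
    if safe:
        options = safe
    return options[0] if options else None

# Example: the robot was ordered to fetch coffee, but a person nearby is about to fall.
actions = [
    Action("fetch coffee", ordered_by_human=True),
    Action("catch the falling person", prevents_harm_to_human=True, endangers_robot=True),
]
print(choose_action(actions).name)  # -> "catch the falling person"
```

In this toy version the First Law outranks the human's order, and the order outranks the robot's self-preservation, which is exactly the precedence Asimov described.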
Today, situations like the ones Asimov's laws were written for are starting to appear in real life. Many of the objects we build are autonomous enough to make life-threatening decisions. When we talk about self-driving cars, we quickly end up discussing how they would behave in a crisis. What should a vehicle do to save its own passengers? What if a self-driving car has to swerve to avoid hitting one person, but ends up hitting someone else in the process?
The problem with Nao
A well-known experiment involved Nao, a commercially available toy-like robot that was programmed to remind people to take their medicine. Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford, worked on this together with her husband Michael Anderson of the University of Hartford in Connecticut. At first glance it may seem that ethics has nothing to do with a machine like Nao, but the Andersons looked at how Nao should proceed when a patient refuses to take the medicine. If the robot were to pressure or force the patient into taking it, it might well end up hurting the patient in the process.
In order to teach Nao to act correctly in such situations, the Andersons ran many example cases through learning algorithms until patterns emerged that could guide the robot in situations it had never seen before. A learning robot like this could be very useful to us in many ways: the more ethical situations it encounters, the better it should become at ethical decisions. But many people fear that this advantage comes at a price, and I think they are right to, because the principles the machine ends up with are never written out in its computer code. You can never know exactly why a program arrived at a particular rule or decided that something is ethically correct or not. This is a point Jerry Kaplan also makes in his artificial intelligence and ethics classes at Stanford University in California. Reference: (Anderson, 15-12-2007)
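To give a rough idea of what "learning a pattern from example cases" can look like, here is a small, hypothetical Python sketch. It is not the Andersons' actual system: the features (benefit, harm, loss of autonomy), the ratings, and the training cases are all invented, and the learning rule is a plain perceptron-style loop.

```python
# Hypothetical sketch of learning an ethical rule from labeled example dilemmas.
# Each case: (benefit of the medicine, harm from skipping it, loss of patient
# autonomy if the robot insists), rated 0-2 by a human, plus the "right" decision.
training_cases = [
    ((2, 2, 1), "insist"),   # big benefit, serious harm if skipped
    ((1, 0, 2), "accept"),   # minor benefit, no real harm, insisting overrides autonomy
    ((2, 1, 1), "insist"),
    ((0, 0, 1), "accept"),
    ((1, 2, 2), "insist"),
    ((0, 1, 2), "accept"),
]

def learn_weights(cases, epochs=100, lr=0.1):
    """Find weights so that benefit*w0 + harm*w1 - autonomy*w2 > 0
    exactly for the cases labeled 'insist'."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (benefit, harm, autonomy), label in cases:
            score = benefit * w[0] + harm * w[1] - autonomy * w[2]
            target = 1 if label == "insist" else -1
            if score * target <= 0:          # misclassified: nudge the weights
                w[0] += lr * target * benefit
                w[1] += lr * target * harm
                w[2] -= lr * target * autonomy
    return w

weights = learn_weights(training_cases)

def decide(benefit, harm, autonomy):
    score = benefit * weights[0] + harm * weights[1] - autonomy * weights[2]
    return "insist" if score > 0 else "accept"

print(decide(2, 2, 0))  # a new situation the robot was never shown
```

The sketch also shows the worry raised above: the weights the loop ends up with are just numbers, and nothing in the code explains why they are the right ethical trade-off.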
The famous trolley problem
As I mentioned earlier, people often see militarized robots as dangerous, and there have been numerous debates on whether or not they should be allowed. But Ronald Arkin, who works on robot ethics software at the Georgia Institute of Technology in Atlanta, argues that such machines could be better than human soldiers in some situations: where robots can be programmed never to break the rules, humans can fail.
Computer scientists work rigorously on machine ethics today. They often favor code built from explicit logical statements: if a statement is true, move forward; if it is false, do not move. According to Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon, clear logical statements are the best way to code machine ethics. "Logic is how we reason and come up with our ethical choices," says Pereira.
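As a contrast with the learned rule above, here is a minimal Python sketch of that explicit if/then style. The predicates and sensor fields are invented for illustration; Pereira himself works with logical programming languages rather than Python.

```python
# Minimal sketch of explicit rule-based control; the sensor fields are invented.
def human_in_path(sensors):
    return sensors["person_detected"]

def path_is_clear(sensors):
    return not sensors["obstacle_ahead"]

def may_move_forward(sensors):
    # Every condition is a named, human-readable rule: the robot moves only
    # when each statement it must check is true, and never toward a person.
    if human_in_path(sensors):
        return False
    if not path_is_clear(sensors):
        return False
    return True

print(may_move_forward({"obstacle_ahead": False, "person_detected": False}))  # True
print(may_move_forward({"obstacle_ahead": False, "person_detected": True}))   # False
```

Unlike the learned weights earlier, every decision here can be traced back to a specific rule a human wrote down.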
But writing such code is an immense challenge. Pereira notes that the logical languages used in computer programming still have trouble reaching conclusions in life-threatening situations. One of the most discussed scenarios in the machine-ethics community is the trolley problem. Reference: (Deng, 01 July 2015)
In this scenario, a runaway railway trolley is about to hit and kill five innocent people working on the tracks. You can save them only by pulling a lever that diverts the trolley onto another track, but on that track there is one innocent bystander. In another set-up, the only way to stop the trolley is to push a bystander onto the tracks.
Now, what would you do?
People often answer that it is acceptable to stop the trolley by pulling the lever, yet immediately reject the idea of pushing the bystander onto the tracks, even though both choices have exactly the same result. Philosophers know this basic intuition as the doctrine of double effect: deliberately inflicting harm is wrong, even if it leads to a good outcome, but harm can be acceptable when it is not intended and is merely a side effect of doing good, because the bystander simply happened to be on the track.
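To show how this intuition could be written down as a rule, here is a small Python sketch. The option names and fields are invented for illustration; the only substance is that harm used as the means to the good outcome is treated differently from harm that is a side effect.

```python
# Toy sketch of a double-effect check over the classic trolley options (invented encoding).
options = {
    "do nothing":         {"deaths": 5, "harm_is_the_means": False},
    "pull the lever":     {"deaths": 1, "harm_is_the_means": False},  # bystander just happens to be there
    "push the bystander": {"deaths": 1, "harm_is_the_means": True},   # his body is what stops the trolley
}

def acceptable(option):
    # Double effect: harm deliberately used as the means to the good outcome is
    # ruled out, even if fewer people die overall.
    return not option["harm_is_the_means"]

def choose(options):
    allowed = {name: o for name, o in options.items() if acceptable(o)}
    # Among the permissible options, minimise the number of deaths.
    return min(allowed, key=lambda name: allowed[name]["deaths"])

print(choose(options))  # -> "pull the lever"
```

The sketch matches the common human answer: the lever is allowed, pushing the bystander is not, even though the body count is the same.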
Conclusion
I want to make clear that I am talking about the ethics of machines that weren't designed right or simply aren't finished yet. In this paper I talked about self-driving cars, autonomous war machines, and robots that remind us to take our medicine. The problem I want to draw attention to is the poor design of these machines and how it can cause bigger problems or greater risks to our safety. These machines should not be let into society if we are not 100% sure they are safe and programmed correctly.
I understand the excitement about these machines, and they will surely bring our technology to new heights in the future. But until then, I think we can all agree that we still need to work on our AI before we actually let it make important decisions.
Sources
- (Yampolskiy, 2013) https://link.springer.com/chapter/10.1007/978-3-642-31674-6_29
- (Hammond, 2016) https://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine
- (Keith Frankish, 12 June 2014) https://books.google.nl/books?hl=nl&lr=&id=RYOYAwAAQBAJ&oi=fnd&pg=PA316&dq=Ethics+and+Artificial+Intelligence&ots=A0X5wkfGqq&sig=6nyvUpOC5bUEdhOTBRsBPlaRNEY#v=onepage&q=Ethics%20and%20Artificial%20Intelligence&f=false
- (Garner, 2015) https://www.nature.com/polopoly_fs/1.17611!/menu/main/topColumns/topLeftColumn/pdf/521415a.pdf?origin=ppub
- (H, 07 August 2006) https://ieeexplore.ieee.org/abstract/document/1667948/references#references
- (C. Allen, 07 August 2006) https://ieeexplore.ieee.org/abstract/document/1667947/authors#authors
- (Anderson, 15-12-2007) file:///C:/Users/Alice/Downloads/2065-Article%20Text-2789-1-10-20090214.pdf
- (Goodall, 2014) https://link.springer.com/chapter/10.1007/978-3-319-05990-7_9
- (Deng, 01 July 2015) https://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881
- (Michael Anderson, 9 May 2011) https://books.google.nl/books?hl=nl&lr=&id=N4IF2p4w7uwC&oi=fnd&pg=PP1&dq=Ethics+and+Artificial+Intelligence&ots=5XYYqolYMl&sig=ANFdk6e8U_SOpq1s6l_od0f4tHc#v=onepage&q=Ethics%20and%20Artificial%20Intelligence&f=false