With Great Power...

30 Nov 2017

Ethics Reflection

Ethics is the code that we all hold ourselves to when deciding whether something is right, wrong, or somewhere in between. It usually applies to how we affect other people, places, or things. Other nouns, I suppose. Any action, or lack thereof, could theoretically fall under the burden of an ethical decision in the right context. The decision to breathe could be unethical if you were sharing a limited air supply with, say, the President of the United States.

The previous one, anyway.

Ethics wouldn’t really come into play for most people spending a day doing nothing but relaxing, unless perhaps they were in a group project where other people relied on them to have a certain amount finished by a certain time. Ethics in computer science and software engineering carries much the same burden: if whatever you do is going to affect someone else, you need to consider what that effect is, what it could be, and whether it is worth it.

Advances in technology have made these ethical questions unavoidable in modern society. This is also where some of that good-life contribution comes in: it doesn’t have to be all marketing, big data, and code maintenance for software engineers; they can improve people’s lives too. Consider Stack Exchange, a network of websites that exists to facilitate people helping other people out. Someone had to design and develop all that, and it’s really helped, well, all of us CS students at some point. As noted in the readings, however, good intentions do not always translate into good ethics, as I will discuss in the case of the self-driving car.

Autonomous cars are on the horizon, and really, they’re mostly here. It’s been quite a while since the general public used something for mass transportation that could think for itself, and even then, it wasn’t that reliable. Could you imagine trying to calm an autonomous car? Thankfully, we don’t have to program fits of hysterics into our self-driving cars (and it would be highly unethical to do so), but we do have to deal with the dilemma of decision. As discussed in the MIT Technology Review article, the software in an autonomous car must be able to decide to take some lives to save others if the situation demands it. The article brings up a scenario in which the car must choose between driving into ten people or into a wall: one way kills the pedestrians, the other kills the driver and occupants. The car must decide who dies, and therefore the software engineer has decided who dies. Of course, the overall truth of driving is that most deaths are due to human error, and this scenario should be unlikely, or at least, it’s never happened to me. The number of people saved by the wide-scale adoption of self-driving cars would almost certainly outweigh the number of lives lost in situations like this, but if the car hadn’t chosen that particular route on that day, driving exactly the speed limit, those people (and that poor wall) might never have been hit in the first place.
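To make that last point concrete, here is a deliberately naive sketch of the kind of logic such a car might run. Everything in it is hypothetical: the names (`CrashOption`, `expected_casualties`, `occupant_weight`) are mine, not from the article, and real systems are vastly more complicated. The point is simply that whoever writes the cost function has already decided who dies, long before the crash ever happens.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver in an unavoidable crash (hypothetical model)."""
    description: str
    pedestrian_deaths: int
    occupant_deaths: int

def expected_casualties(option: CrashOption, occupant_weight: float = 1.0) -> float:
    # The "ethics" live entirely in this one line: how heavily the engineer
    # weighs the occupants' lives against pedestrians' settles the outcome.
    return option.pedestrian_deaths + occupant_weight * option.occupant_deaths

def choose_maneuver(options: list[CrashOption], occupant_weight: float = 1.0) -> CrashOption:
    # A purely utilitarian chooser: take whichever maneuver minimizes
    # the weighted death toll.
    return min(options, key=lambda o: expected_casualties(o, occupant_weight))

options = [
    CrashOption("swerve into the crowd", pedestrian_deaths=10, occupant_deaths=0),
    CrashOption("drive into the wall", pedestrian_deaths=0, occupant_deaths=2),
]
print(choose_maneuver(options).description)  # -> "drive into the wall"
```

Bump `occupant_weight` up to, say, 11, and the same code plows into the crowd instead. That one parameter is the entire moral argument, and an engineer picked its value.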

Of course, autonomous cars come with ethical issues less dramatic than death. Millions of jobs rely on transportation, and many of those jobs will be lost with the adoption of autonomous automobiles, so that would be giving people the bad life. Advertisers may pay to have cars route past certain places, or subtly slow down when driving past them. Hell, used sparingly, they could be used to assassinate people a la The Sontaran Stratagem (https://www.youtube.com/watch?v=xCN9cEmgY90). The argument of “one software glitch among thousands of lives saved” might even be deployed against any tinfoil-hat conspiracists who noticed. These are just some of the points that need to be considered when discussing what might come up if (when) we decide to adopt autonomous cars.

When it comes to self-driving vehicles, it doesn’t take much of a jump to get from cars to tanks and planes. We’ve had drones in the sky for surveillance and seek-and-destroy missions for years already, but so far they’ve been controlled by people elsewhere. A drone with AI, however, could decide on its own whether the value of a target or targets outweighs the civilian lives that could be lost in a strike. No one feels the fear of these autonomous kill machines more keenly than those who live under their influence daily (https://youtu.be/K4NRJoCNHIs?t=617).

Autonomy, whether in cars or in active weapons, is a gargantuan ethical pothole. My stance on all things autonomous is that we need to have some serious discussions about these ethical issues and turn them into hard legal rules we can rely on; that would go a long way toward steering us in a safer and better direction. Sadly, all it really takes is one nation to mess it up for everyone. Russia has already said it will completely ignore any UN attempt to ban killer robots (http://www.businessinsider.com/russia-will-ignore-un-killer-robot-ban-2017-11). As a member of the Security Council, Russia could veto such a ban anyway, but as redditor husker_417 puts it: “I’m pretty sure they don’t mind if other countries don’t develop killer robots.” If Russia is going to have killbots, then America is going to have them. If America is going to have them, then its allies are going to have them. If all these nations have them, then they’re going to end up in the hands of terrorists, gangsters, human traffickers, and general scum. I think one of the most horrifying outcomes of this can be seen in the short film “Slaughterbots” (https://www.youtube.com/watch?v=HipTO_7mUOw).

Thankfully, we’ve had the strategy for combating killbots worked out since the late ’90s: (https://www.youtube.com/watch?v=EF3g4Ua5e7k)