Delving into an area that departs sharply from the norms of everyday life calls for a paradigm shift. Everyday life in America consists of people driving vehicles to and from places such as work, school, or entertainment. What would life be like if those same people, and perhaps a broader range of people, entered a vehicle that drove for them? The possibilities could be endless, but before considering them, we must ask whether consequences could arise, such as a moral dilemma that the artificial intelligence of a self-driving vehicle inherently cannot resolve.
Sven Nyholm and Jilles Smids introduce their research on the fundamentals of self-driving vehicles. In essence, self-driving vehicles promise a safer and more efficient way of navigating the streets. But exactly how much safer can self-driving vehicles be when thrown into unpredictable situations caused by humans or nature? Problems tend to arise when dealing with the ethical aspects of these vehicles: should they be designed to reduce deaths overall, or to save their passengers at all costs? (Nyholm p. 1). What is the proper way of implementing ethics in these autonomous vehicles, considering the serious risks involved? The risks include, but are not limited to, the liability of the company that produced the vehicle, the liability of the person(s) inside the vehicle, harm to the victim(s) of the vehicle's actions, and public opinion of autonomous vehicles as a whole. Starting with the design of the autonomous vehicle, the main difference between it and a regular vehicle on the road is the advanced sensor technology it uses to detect surrounding vehicles and obstacles on the road.
Along with that come sophisticated algorithms and information technology to predict the possible paths of other moving objects and broadcast that information to other nearby autonomous vehicles. That in itself is impressive, but is it enough to prevent accidents from ever occurring? According to physics, the answer is no. Cars travel at high speeds, carry a great deal of mass, and have limited maneuverability. According to Newton's first law of motion, the law of inertia, an object in motion stays in motion with the same speed and in the same direction unless acted upon by an unbalanced force. These unbalanced forces can include humans driving their own vehicles or a tree suddenly falling into the path of an autonomous vehicle. While the possibility of unavoidable crashes might deter a potential consumer from purchasing, these vehicles still have a better chance than humans at minimizing the impact of a crash. The problem now, however, is what decision the vehicle makes when choosing between saving its passengers and protecting other people in danger.
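The physical limit described above can be made concrete with the standard stopping-distance formula from kinematics, d = v²/(2μg). The sketch below is purely illustrative: the friction coefficient and speed are assumed example values, and it ignores driver or sensor reaction time, which only lengthens the real distance.

```python
# Illustrative sketch: minimum braking distance under ideal conditions,
# using the kinematics result d = v^2 / (2 * mu * g).
# The friction coefficient (mu) is an assumed example value.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms: float, friction: float = 0.7) -> float:
    """Distance in meters needed to brake from speed_ms (m/s) to a stop."""
    return speed_ms ** 2 / (2 * friction * G)

# A car at roughly highway speed (30 m/s, about 108 km/h) needs tens of
# meters to stop, no matter how quickly its sensors detect an obstacle.
print(round(stopping_distance(30.0), 1))
```

Even a perfect perception system cannot overcome this bound, which is why the discussion must turn to what the vehicle should do when a crash is unavoidable.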
Christine Connolly helps provide background on the systems that go into self-driving cars: "For reasons of safety, convenience and improved traffic flow, there has been a drive to develop autonomous road vehicles" (Connolly p. 1). It is important to know how these vehicles work internally and which technologies furthered the study of them. These technologies include vision systems, radar, inertial heading sensors, global positioning systems (GPS), parking sensors, adaptive cruise control (ACC), night vision aids, imaging collision warning systems, and inter-vehicle communications systems that detect vehicles not yet in sight.
These systems provide the basis for a much more complex system with an implementation of ethics at its core. Cameras are mounted all around autonomous vehicles to track any moving objects in the surrounding area that could come into contact with the vehicle. These vehicles constantly scan their surroundings for potential threats and continually relay that information outward. The benefit of this system of information exchange is that the more information is shared, the better these vehicles become at making the roads safer. The drawback appears when a system fails and causes injury or even death to passengers or innocent bystanders, which would be a major setback for the autonomous industry. The goal is for these collision prevention technologies to drive the crash rate as close to zero as possible.
Santoni de Sio describes two different viewpoints, from Bonnefon and Gerdes, on the implementation of ethics in autonomous vehicles. Bonnefon argues that the public must simply learn to accept the implemented ethics as one of the facts of new technology, and to accept the different options available for the vehicle's programming; these ethics can also be supported by experimental ethics. Gerdes argues that traditional ethics should be applied to minimize the negative effects of a situation as well as possible. While these theories may prove to be the way to go, can they be justified or excused under the law if the end result is the death of a bystander? To understand this, we have to look at current law. Laws that are broken out of necessity are justified. De Sio's example is a person who deliberately destroys private property, which is against the law, to escape from a fire. This crime is justifiable because of the circumstance.
With all the recording and accident-avoidance algorithms in autonomous vehicles, it is conceivable that certain circumstances involving unavoidable incidents would be excused under the law, but what would this mean for the families of victims? Yet again, another ethical issue is at hand. It appears that most laypeople would not be opposed to an autonomous vehicle swerving off the road to save more people while sacrificing fewer, and utilitarian philosophers agree that this would be the better thing to do. The simple utilitarian reading of the doctrine of necessity goes as follows:
- in the presence of a tragic unavoidable choice between two evils, one ought to choose the lesser evil;
- the lesser evil between intentionally causing the death of one and intentionally causing the death of more than one is intentionally causing the death of one;
- when confronted with an unavoidable choice between intentionally causing the death of one and intentionally causing the death of more than one, one ought to (and, a fortiori, can) intentionally cause the death of one person rather than that of more than one. (Sio p. 3)
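The three-step rule above reduces to a simple minimization over the available outcomes. The following Python sketch is purely illustrative of that reading; the outcome labels and fatality counts are invented for the example, and this does not describe any real vehicle's software.

```python
# Hypothetical sketch of the simple utilitarian reading quoted above:
# among a set of unavoidable outcomes, choose the one with the fewest
# deaths. Outcome names and counts are illustrative assumptions.

def lesser_evil(outcomes: dict) -> str:
    """Return the label of the outcome with the fewest fatalities."""
    return min(outcomes, key=outcomes.get)

# Facing three deaths on the current course versus one if it swerves,
# the rule selects the option causing one death rather than three.
choice = lesser_evil({"stay_course": 3, "swerve": 1})
print(choice)
```

The apparent simplicity of the rule is part of what makes the legal and moral questions that follow so difficult: the computation is trivial, but the authority to make it is not.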
In a YouTube interview, Tesla CEO Elon Musk stated that it would be made known that responsibility for the car and its actions remains in the hands of the driver at the time (Bloomberg, 2014). Perhaps it would not be wise to go fully autonomous at all times; if personal liability becomes a factor, there may be situations in which the vehicle's autonomous functionality should be limited.
Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor raise an interesting question: what if autonomous vehicles were fitted with an "ethical knob"? This ethical knob would be a device that switches the vehicle's moral or principled mode according to user preference. The authors give three possible preprogrammed modes:
- altruistic mode: preference for third parties;
- impartial mode: equal importance given to passenger(s) and third parties;
- egoistic mode: preference for passenger(s).
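One way to picture the knob is as a weighting applied when the vehicle scores possible outcomes. This is a hypothetical sketch only: the three mode names come from the list above, but the numeric weights and the cost function are my own assumptions, not something the authors propose.

```python
# Illustrative sketch of the "ethical knob": a user-selected mode that
# weights passenger harm against third-party harm when scoring outcomes.
# Mode names follow the text; the weights are assumed example values.

from enum import Enum

class KnobMode(Enum):
    ALTRUISTIC = "altruistic"  # preference for third parties
    IMPARTIAL = "impartial"    # equal importance to both
    EGOISTIC = "egoistic"      # preference for passenger(s)

# assumed (passenger_weight, third_party_weight) pairs per mode
WEIGHTS = {
    KnobMode.ALTRUISTIC: (0.2, 0.8),
    KnobMode.IMPARTIAL: (0.5, 0.5),
    KnobMode.EGOISTIC: (0.8, 0.2),
}

def outcome_cost(mode, passenger_harm, third_party_harm):
    """Weighted harm score; the vehicle would pick the lowest-cost outcome."""
    wp, wt = WEIGHTS[mode]
    return wp * passenger_harm + wt * third_party_harm
```

Under this scheme, an egoistic setting rates harm to third parties as cheaper than the same harm to passengers, and the altruistic setting does the reverse, which makes concrete why the choice of setting would expose a person's character.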
This would certainly cause some public outcry, along with judgement of each person for the mode they choose, whether to save themselves or others. Even if this may seem far-fetched, each person already has their own internal ethical knob. People have an instinctive response in a given utilitarian situation: one person might jump in front of a bullet for someone, while another would push someone into harm's way for personal protection. This idea would certainly expose a person's character outright, but perhaps these ethical knobs could be constrained by manufacturers to be very specific and not allow much customization. Perhaps liability would weigh more heavily on the driver depending on which setting they had preset.
So how seriously are autonomous vehicles taken today? Many companies and individuals have spent billions on testing in the industry. According to Electronics News, an Australian state was set to begin autonomous vehicle trials for its airport and one of its public universities in March of 2017. The South Australian Government announced that it would fund AU$5.6m to start trials of a driverless automatic shuttle in the capital. Perhaps it is better and safer to have these types of vehicles run only in a specified area whose paths have been mastered. Every time these vehicles enter a new area, they take in new information and risk even more unknown events. Driverless shuttles would without a doubt lessen the costs of public transportation, as there would be no operator to pay, but this would also cause a loss of jobs.
This author seems positive that cars will be driving themselves in the future. The only problem is no one seems certain on how safe these vehicles actually will be. When a pedestrian in Arizona was killed in March of 2018 by an autonomous vehicles, questions and public opinion certainly rose. But many are confident that over time these vehicles will eventually develop to make roads safer overall. A new ambitious idea called intelligent connectivity has been described as follows: it provides road users with information sensed, processed and distributed by machine-based intelligence built into the transport system. At the same time, this overarching intelligence allows us to also re-imagine the capabilities of self-driving vehicles.