User:Timot2016/sandbox

Adding a paragraph to the existing WP article: Autonomous cars

Moral issues / implications

Background

The introduction of autonomous vehicles raises not only technological and regulatory obstacles but also ethical ones that all players in this field have to consider and overcome. Ethical issues include questions such as: when the car gets into an accident and has the "choice" of either running over a person in front of the car to save the passenger's life or running into a tree, killing the passenger, what decision should the car make?

Maybe providing the reader with our society's current situation and solution to this whole problem of prioritization would be most beneficial. AMJM8 (talk) 16:36, 21 October 2016 (UTC)

Resource: Maurer, M., Gerdes, J. C., Lenz, B., & Winner, H. (Eds.). (2016). Autonomous driving: Technical, legal and social aspects. Springer.

Debates / controversy

There is a lot of controversy around the "decisions" the car has to make, as mentioned above. The ethical issues arising in this controversy can be summed up in the trolley problem: "There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the most ethical choice?" (from the Wikipedia article on the trolley problem)

One main argument against letting those issues block any legalization of autonomous driving is that accidents would probably decrease by up to 90% and, either way, many lives would be saved.

It might be interesting to investigate public opinion on those questions. MIT's "Moral Machine" could be used to get an impression of what most people would want the car's behavior to be.

This would truly be a fascinating thing to do; I believe issuing a survey to our Northeastern community could prove to be a very good first step. Or some statistics are probably already out there in terms of public opinion. AMJM8 (talk) 16:37, 21 October 2016 (UTC)

Emerging Research

Bringing autonomous vehicles onto public streets raises the question of liability in case of a crash. Will the car manufacturer, the software engineer, the owner of the car, or the supplier of the software be held liable? Even though the federal government has issued some guidelines for autonomous vehicles on public roads, many questions remain to be answered.

https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

Future Research

The question arises whether artificial intelligence offers a way of setting general moral standards that the car has to follow. It would then be the car's job to translate those standards into real-time decisions, just as a human driver would in an emergency.

Another question is whether the owner of the car, or the person being driven, should be able to decide which rules the car follows in case of an emergency, for example in a situation such as the trolley problem.

VIPs

Most car manufacturer executives have made public statements on the subject.

A little more detail on the matter would be most beneficial to the readers I believe AMJM8 (talk) 16:37, 21 October 2016 (UTC)

Overall comments: I think you made some very valuable changes and additions to the existing Wikipedia article. More information and details are, however, needed in certain sections. I am looking forward to reading the end product! AMJM8 (talk) 16:36, 21 October 2016 (UTC)

Possible additions: Perhaps you could add a paragraph on how research and development on this issue is going so far. Bring up both sides of the argument and how developers might approach such a situation in their programming. Gordon.re (talk) 16:21, 21 October 2016 (UTC)

With the emergence of autonomous cars, various ethical issues arise. While the introduction of autonomous vehicles to the mass market seems inevitable due to a potential reduction of crashes by up to 90%[1] and their accessibility to disabled, elderly, and young passengers, some ethical issues remain that have not yet been fully resolved. Those include, but are not limited to: (1) the moral, financial, and criminal responsibility for crashes, and (2) the decisions a car is to make right before a (fatal) crash.

(1) There are different opinions on who should be held liable in case of a crash, in particular when people are hurt. Many experts hold the car manufacturers themselves responsible for crashes that occur due to a technical malfunction or design flaw.[2] Besides the fact that the car manufacturer would be the source of the problem when a car crashes due to a technical issue, there is another important reason why car manufacturers could be held responsible: it would encourage them to innovate and invest heavily in fixing those issues, not only to protect their brand image, but also to avoid financial and criminal consequences. However, others argue that those using or owning the vehicle should be held responsible, since they ultimately know the risk involved in using such a vehicle. Experts suggest introducing a tax or insurance schemes that would protect owners and users of autonomous vehicles from claims made by victims of an accident.[2] Other parties that could be held responsible in case of a technical failure include the software engineers who programmed the code for the autonomous operation of the vehicle, and the suppliers of components of the AV.[3]

(2) Setting aside the questions of legal liability and moral responsibility, the question arises how autonomous vehicles should be programmed to behave in an emergency in which either passengers or other traffic participants are endangered. A vivid illustration of the moral dilemma that a software engineer or car manufacturer might face when programming the operating software is the trolley problem, an ethical thought experiment: the driver of a trolley can either stay on the planned track and run over five people, or divert the trolley onto a side track where it would kill only one person, who is on that track assuming that there is no traffic on it.[4] Two main considerations need to be addressed: first, on what moral basis the decisions an autonomous vehicle has to make would rest; second, how those could be translated into software code. Researchers have suggested two ethical theories in particular as applicable to the behavior of autonomous vehicles in emergencies: deontological ethics and utilitarianism.[5] Asimov's Three Laws of Robotics are a typical example of deontological ethics. Under this theory, an autonomous car would have to follow a strict set of written-out rules in any situation, as sketched below.
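
To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python of a deontological controller; the maneuver structure, the rules, and all names are invented for illustration and are not taken from any real vehicle software:

    # Hypothetical sketch of a deontological controller: fixed, written-out
    # rules that are checked in order and never weighed against outcomes.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        harms_human: bool         # would this maneuver injure a person?
        breaks_traffic_law: bool  # would it violate a traffic rule?

    RULES = [
        lambda m: not m.harms_human,         # rule 1: never harm a human
        lambda m: not m.breaks_traffic_law,  # rule 2: never break traffic law
    ]

    def permitted(m: Maneuver) -> bool:
        # A maneuver is allowed only if it violates no rule.
        return all(rule(m) for rule in RULES)

    def choose_by_rules(options: list[Maneuver]) -> Maneuver | None:
        # Return the first permitted maneuver, or None if every option
        # breaks a rule -- which is exactly the case in a trolley-style
        # dilemma, where each available maneuver harms someone.
        for m in options:
            if permitted(m):
                return m
        return None

Note that in a trolley-style emergency every option harms someone, so a pure rule set like this yields no decision at all; this gap is one reason researchers also consider outcome-based theories.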

Utilitarianism suggests that any decision must be made with the goal of maximizing utility. This requires a definition of utility, which could, for example, be maximizing the number of people surviving a crash. Critics suggest that autonomous vehicles should adopt a mix of multiple theories to be able to respond in a morally sound way in the instance of a crash.[5]
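
Continuing the hypothetical sketch, a utilitarian controller with "expected number of survivors" as the assumed utility measure could look like this; again, every name and probability below is invented for illustration:

    # Hypothetical sketch of a utilitarian controller: score each maneuver
    # by its expected number of survivors and pick the maximum.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        maneuver: str
        survival_probs: list[float]  # one survival probability per person

    def expected_survivors(o: Outcome) -> float:
        # Utility is defined here as the expected number of survivors.
        return sum(o.survival_probs)

    def choose_by_utility(options: list[Outcome]) -> Outcome:
        return max(options, key=expected_survivors)

    # Trolley-style example with six people: five on the main track and
    # one on the side track. Staying endangers the five; swerving
    # endangers the one.
    stay   = Outcome("stay on course", [0.1] * 5 + [1.0])
    swerve = Outcome("swerve",         [1.0] * 5 + [0.1])
    print(choose_by_utility([stay, swerve]).maneuver)  # -> swerve (5.1 vs. 1.5)

A mix of the two theories, as the critics cited above suggest, could, for instance, apply the deontological rules wherever they yield a decision and fall back to utility maximization only in dilemmas where every option breaks a rule.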

Further ethical questions include privacy issues and the possible loss of jobs due to the emergence of autonomous vehicles.


[1] Fagnant, D. J., & Kockelman, K. (2015, May 16). Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transportation Research Part A: Policy and Practice, 77, 167-181. doi:10.1016/j.tra.2015.04.003

[2] Hevelke, A., & Nida-Rümelin, J. (2014). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), 619-630. doi:10.1007/s11948-014-9565-5

[3] Marchant, G. E., & Lindor, R. A. (2012, December 17). The Coming Collision Between Autonomous Vehicles and the Liability System. Santa Clara Law Review, 52, 1321-1340. Retrieved October 26, 2016, from http://digitalcommons.law.scu.edu/lawreview

[4] Thomson, J. J. (1985, May). The Trolley Problem. The Yale Law Journal, 94(6), 1395-1415. Retrieved October 25, 2016.

[5] Meyer, G., & Beiker, S. (Eds.). (2014). Road vehicle automation (pp. 93-102). Springer International Publishing.