Berlin 2018 – Scientific Programme
SOE: Physics of Socio-Economic Systems Division (Fachverband Physik sozio-ökonomischer Systeme)
SOE 22: Focus Session: Computational Social Science
SOE 22.6: Talk
Thursday, 15 March 2018, 17:30–17:45, MA 001
Avoiding Ethical Dilemmas of Autonomous Vehicles — •Jan Nagler — ETH Zurich
Soon, artificial intelligence will decide many issues, including matters of life and death. How should autonomous systems faced with ethical dilemmas decide, and what is required from humans? We discuss this problem in connection with the accident management of autonomous vehicles. Today, more than one billion vehicles are on our streets worldwide. Within the next 10-20 years, self-driving cars are expected to largely replace these conventional vehicles. But how should we engineer autonomous vehicles and, more generally, design artificially intelligent systems for safety and other moral values? Self-driving cars will have to deal with situations that result in 'moral dilemmas' and will sometimes have to decide autonomously who will be harmed. The challenge is usually discussed by means of the popular (but unrealistic) 'Trolley problem', where a choice must be made, if an accident is unavoidable, between running into one group of people or severely harming another. This simple dilemma has been imported from moral philosophy into our thinking about systems engineering, policy and law (Deng, Nature 523: 24, 2015), but it has a number of pitfalls. Today's 'moral' algorithms are typically based on a deterministic minimization of harm. We challenge this myopic principle because, in the long term, it may increase harm rather than minimize it, in particular in times of crisis or in unsustainable environments. We cannot solve these dilemmas, or say exactly what to do. We wish to discuss, however, what not to do.
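A minimal sketch of the 'myopic' pitfall the abstract points to (an editorial illustration, not part of the talk; the option names and harm values are invented): a deterministic harm-minimizer applied to a recurring dilemma always sacrifices whichever group is marginally 'cheaper' to harm, so over many encounters that group absorbs all of the harm instead of harm being shared or reduced in the long run.

```python
# Hypothetical illustration: a myopic, deterministic harm-minimizing
# controller applied repeatedly to the same unavoidable-accident dilemma.
from collections import Counter

def myopic_choice(options):
    """Deterministically pick the option with minimal expected harm."""
    return min(options, key=lambda o: o["expected_harm"])

# Two groups that may be harmed; the numbers are made up for the example.
options = [
    {"group": "A", "expected_harm": 0.9},
    {"group": "B", "expected_harm": 1.0},  # only marginally worse
]

tally = Counter()
for _ in range(1000):                      # the same dilemma recurs many times
    tally[myopic_choice(options)["group"]] += 1

print(tally)  # Counter({'A': 1000}) -- group A bears every single harm
```

Under these assumed numbers, the per-incident decision looks optimal, yet the accumulated outcome is maximally one-sided, which is one way a short-sighted minimization rule can increase long-term harm.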