The AI car problem: to kill the child or the elderly person?

Would you kill one to save many? For centuries, philosophers have debated this question, to which there seems to be no objective answer. Or is there?

Researchers from MIT, Harvard University, the University of British Columbia and Université Toulouse Capitole tested millions of people from 200 countries in an experiment called “The Moral Machine”, probing their moral decision-making. Participants were confronted with scenarios based on the famous trolley problem: the person controls a lever that switches the track along which a trolley is moving. On each track a person, an animal or a group is blocking the route, so the participant has to decide: Do I switch? Whom do I kill?

According to The Hacker News, the research is meant to help in the development of algorithms for AI-driven cars.
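To give a purely hypothetical sense of how aggregated survey preferences of this kind might feed into a car's decision logic, the minimal sketch below encodes invented "spare" weights and picks the action with the lowest total harm when every available option causes harm. The categories, weights and function names are illustrative assumptions, not the researchers' actual model.

```python
# Hypothetical illustration only -- not the Moral Machine researchers' method.
# Aggregated survey preferences are represented as weights, and the decision
# rule picks the outcome judged "least bad" when every action causes harm.

from dataclasses import dataclass

# Invented preference weights (higher = stronger preference to spare).
SPARE_WEIGHT = {
    "child": 1.0,
    "adult": 0.7,
    "elderly": 0.5,
    "animal": 0.2,
}

@dataclass
class Outcome:
    action: str          # e.g. "stay on course" or "switch tracks"
    harmed: list[str]    # categories of those who would be harmed

def harm_score(outcome: Outcome) -> float:
    """Total 'moral cost' of an outcome under the hypothetical weights."""
    return sum(SPARE_WEIGHT.get(kind, 0.5) for kind in outcome.harmed)

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the action whose outcome carries the lowest total harm score."""
    return min(outcomes, key=harm_score)

if __name__ == "__main__":
    dilemma = [
        Outcome("stay on course", harmed=["child"]),
        Outcome("switch tracks", harmed=["elderly"]),
    ]
    decision = choose(dilemma)
    print(f"Chosen action: {decision.action} (score {harm_score(decision):.1f})")
```

Any real system would of course face far harder questions than this toy scoring rule suggests, which is precisely the dilemma the experiment highlights.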

Read more about whether AI cars are going to decide for or against you in the future here.
