Take the “Moral Machine” test

Should a self-driving car headed for an unavoidable accident swerve to hit – or save – a dog, a baby, or an older person?

The rise of artificial intelligence in everyday machines is bringing unintended consequences and moral dilemmas, especially as self-driving cars become more common. How can a self-driving car make the life-and-death decisions that a driver would ordinarily make?

Enter Moral Machine, an online platform developed in 2014 by Iyad Rahwan’s Scalable Cooperation group at the Massachusetts Institute of Technology. The platform presents variations on a terrible scenario: a car’s brakes fail, and the choices are to crash into a barrier and kill the passengers or to swerve into a crosswalk and kill pedestrians. Potential victims, whether passengers or pedestrians, range from doctors to the homeless, men and women, children and the elderly, kittens and puppies (as if even a human driver would know whether a pedestrian was a doctor). Interestingly, other factors include whether the pedestrians were jaywalking.

The platform is a sort of “crowdsourcing” of the decisions people make between two destructive outcomes. The exercise is billed as “a game of ethics,” but the data could be used in research about the kinds of decisions machines may need to make in the future. Participants also get a quick analysis of their responses at the end of the 20-question quiz.

Over the past four years, the project has garnered some 40 million decisions from participants in 233 countries and territories around the globe. If you’d like to see how your choices stack up, visit the Moral Machine platform and try the test yourself.
