The Trolley Problem and AI

TL;DR: An interesting discussion about the accountability of self-driving cars.

CaptainLazarus
2 min read · Feb 12, 2022
The trolley problem

I think everybody knows the trolley problem: two tracks, one trolley, and a switch that lets you divert it so that fewer people die, should you choose to pull it.

During a discussion the other day (actually about two weeks ago) on the accountability of AI, specifically self-driving cars and who should be held accountable when they fail, it hit me that this is really a real-life trolley problem.

To phrase it differently: would you allow an algorithm with a small but non-zero chance of failure to be deployed in real-world applications where failure means death?

In the case of self-driving cars, let’s assume that currently 10,000 people die every year in automobile accidents. If everyone used self-driving cars instead, let’s assume that number would drop to 50. The catch: the cars would be completely controlled by the AI.
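Just to make the scale of that trade-off concrete, here is a back-of-envelope comparison using the made-up numbers above (none of this is real accident data):

```python
# Hypothetical numbers from the thought experiment above -- not real accident statistics.
human_driven_deaths_per_year = 10_000  # assumed annual deaths with human drivers
self_driving_deaths_per_year = 50      # assumed annual deaths if everyone used self-driving cars

lives_saved = human_driven_deaths_per_year - self_driving_deaths_per_year
relative_reduction = lives_saved / human_driven_deaths_per_year

print(f"Lives saved per year: {lives_saved}")           # 9950
print(f"Relative reduction: {relative_reduction:.1%}")  # 99.5%
```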

Would you do it, and if you did, who would you blame when an accident happens?

In the discussion itself (and in real life too), the workaround was to call them AI-assisted cars, to avoid the idea of handing complete control of the vehicle to the almighty algorithm. (This smacks of blaming pedestrians for jaywalking, a framing the auto industry lobbied for to avoid paying settlements, although I concede I have no better alternative.) People would still be responsible for their cars; the AI just helps them drive, 99% of the way.

Now, let’s assume the number of deaths dropped to just one. You would have created a technology that benefits a huge number of people, but randomly kills one person each year.

Would you walk away from Omelas?
