First World (Trolley) Problems

Adam Sellke
Oct 25, 2016

I’ve been thinking a lot lately about artificial intelligence and, in particular, the “trolley problem.” Jason Kottke just shared Mercedes-Benz’s take on it.

Let me quickly add that I’m no technoethicist (and I’m sure this piece will only further prove the point), but I’m inclined to agree with Mercedes’ approach. Or at least it’s a start. I also think the article unfortunately chose to take a sensationalist route on the policy by insinuating that the luxury auto manufacturer had adopted some sort of elitist position on the matter.

I keep coming back to a few things:

In general, I feel the first rule for AI should be that it operates in service of its “client.” In Mercedes’ case, the client is the passengers of the car, chief among them the driver.

Why? Well, first off, it is the AI’s reason for existing. The idea of altruistic or superethical AI that serves ALL parties begins to stretch the scope and boundaries of a system in a way that puts the whole system at risk. In all my years of software development, I’ve seen scope creep and poorly defined user roles and system boundaries — no matter how well intended — quickly get teams into trouble. Plus, there’s another flaw in pursuing the superethical approach. At least at present, AI doesn’t fully “know” the decision processes of external “non-client” parties. Its power of prediction will be limited by the independent, free-thinking agents in the mix. Even with an altruistic or “greater good” disposition, total disaster may still result because of these random variables.

And here’s the thing: if the first rule is clear and absolute across the board, suddenly we have new fixed data points to use in order to further solve the problem.

Because, eventually, in a world where AI is ubiquitous and highly intelligent, AI will be able to use this collective knowledge and real-time information to communicate, collaborate, and negotiate with other AI for the benefit of all parties. Disaster can more readily be avoided with the relay and processing of near-perfect information — or, in a worst-case scenario where tragedy is inevitable, it can be delivered as an informed, “mutually self-interested” but “dispassionate” and “fair” outcome. Thanks to the ever-present, all-knowing, and constantly chatting robots, everyone’s right to life, liberty, and the pursuit of happiness remains intact.
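To make that negotiation idea concrete, here’s a purely illustrative toy sketch (every name and number below is hypothetical, not anything Mercedes or anyone else has built): each vehicle’s AI shares its available maneuvers along with the expected harm to its own client and to everyone else, and the “negotiation” simply selects the joint maneuver that minimizes total expected harm — the kind of dispassionate, fair outcome that near-perfect shared information would allow.

```python
from itertools import product

def negotiate(agents):
    """Pick the joint maneuver minimizing total expected harm.

    agents: list of dicts mapping maneuver name -> (client_harm, others_harm).
    Ties on total harm are broken in favor of lower harm to clients,
    reflecting the 'serve your client first' rule.
    """
    best = None
    for joint in product(*(a.items() for a in agents)):
        total = sum(c + o for _, (c, o) in joint)          # harm to everyone
        client_total = sum(c for _, (c, _) in joint)       # harm to clients only
        key = (total, client_total)
        if best is None or key < best[0]:
            best = (key, tuple(name for name, _ in joint))
    return best[1]

# Hypothetical two-car scenario: each car reports its options and
# the expected harm (client, others) of each maneuver.
car_a = {"brake": (0.1, 0.0), "swerve": (0.6, 0.0)}
car_b = {"brake": (0.2, 0.3), "swerve": (0.0, 0.1)}
print(negotiate([car_a, car_b]))  # -> ('brake', 'swerve')
```

The point of the sketch isn’t the arithmetic — it’s that once every AI’s options and estimates are on the table, a “fair” resolution becomes a shared computation rather than a unilateral gamble.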

This would also represent a viable amendment to Asimov’s Laws of Robotics when such dilemmas exist.

What do you think?

Note: I’ve glossed over the car vs. pedestrian aspect of the dilemma, where it’s currently not a fair fight. But that’s not to say that AI-enhanced personal technology (read: jet pants or ejector shoes) won’t be developed to give pedestrians a better chance in these scenarios.

Done-getter. Co-Founder | CEO of Evolve Labs (www.evolvelabs.com)
