Regulating Taxicab Safety by Algorithm: How to Build a Safetybot

Eric Spiegelman
6 min read · Jul 20, 2016


By now you’ve probably heard about “bots,” those little programs that run automated, repetitive and somewhat sophisticated tasks on various websites and devices. Bots are everywhere these days. On Facebook Messenger, you can order flowers from a chatbot. On Twitter, there’s a bot that crowdsources poetry and another that trolls Donald Trump supporters. If you use the smartphone app Burner (launched right here in Los Angeles), you can run a “ghostbot” to blow someone off for you via text message. Over at the Los Angeles Times, there’s a “quakebot” that writes and publishes articles on seismic activity.

In government, however, there are very few bots. (There are people who may seem to act like robots, but for the moment, those are still humans.) When lawmakers order tasks to be run, it generally falls to humans to execute those tasks. This is destined to change. Some of these tasks are so simple and routine that it makes more sense for bots to manage them. This is especially true of laws that apply to regulated industries. As public utilities get more connected to the Internet of Things, it becomes increasingly plausible for bots to regulate certain aspects of their behavior.

Consider, for example, ride hailing services. (“Ride hailing service” is the fresh new collective term for taxi, Uber and Lyft.) The cars dispatched through ride hailing services all connect to the Internet of Things. Their drivers receive dispatch requests through an Internet-connected device that also calculates fares. Some of their cars are equipped with telematics devices, which means the car’s on-board computer can broadcast a whole lot of information about how it feels and what it’s doing, over the Internet.

(Uber and Lyft employ telematics in some cars; New York City recently launched a telematics pilot program for its taxicabs; other cities are inching forward on similar programs.)

Taxicabs and other hailed vehicles are prime candidates for regulation by bot. If you were to design such a bot, it might look like this:

This flowchart is a blueprint for a “safetybot.” The safetybot protects the public from a taxicab that has something seriously wrong with its engine and shouldn’t be on the road. It does this, essentially, by sitting by the phone all day, waiting for a call from the taxicab’s telematics device. If there’s a problem with the taxicab’s engine, the telematics device sends the safetybot a whole lot of gobbledygook that looks like this:

Sample information pushed by the telematics service Automatic’s “MIL:on Websocket”

See where it says “mil:on”? MIL stands for “malfunction indicator lamp,” but you know it better as the check engine light. “On” means, well, that the light is lit. See where it says “vehicle”? That tells the safetybot which car is having the problem. Below that, where it says “dtc”? That means Diagnostic Trouble Code. There are hundreds of different DTCs, each one assigned automatically to a specific problem by the car’s engine. It used to be that only a mechanic could divine the DTC of a car with the check engine light on, but a telematics device makes this information available to anyone.
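To make that slightly less abstract, here is a simplified, made-up version of what such a payload might carry, using the field names described above. The vehicle ID and trouble code are invented, and Automatic’s real schema differs in its details; this is only a sketch of the shape of the thing.

```php
<?php
// Illustrative stand-in for a "mil:on" telematics event.
// Field names follow the description above; values are made up.
$sample = '{
  "type": "mil:on",
  "vehicle": "vehicle_1a2b3c",
  "dtc": { "code": "P0301", "description": "Cylinder 1 misfire detected" }
}';

$event = json_decode($sample, true);

echo $event['type'], "\n";          // "mil:on": the check engine light is lit
echo $event['vehicle'], "\n";       // which cab is having the problem
echo $event['dtc']['code'], "\n";   // the Diagnostic Trouble Code itself
```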

I showed the safetybot flowchart and the telematics gobbledygook to a good friend who understands how both government and computer programming work (and who prefers to remain anonymous). In just a few minutes, he wrote a PHP script that, if implemented, would breathe life into the safetybot.

My anonymous friend’s code for the safetybot.

The first three lines tell the safetybot to listen to the telematics device. Lines 6 and 7 determine whether the trouble reported by the telematics device is serious, and if it is, lines 9 through 27 turn off the car’s taxi meter. The rest of the script tells the safetybot to email the Los Angeles Department of Transportation, telling them which taxicab has a problem and what that problem is.
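My friend’s actual script appears only as an image above, but a rough sketch of the same structure might look something like this. The “serious codes” list, the dispatch-system endpoint, and the LADOT email address are all hypothetical placeholders, not real systems.

```php
<?php
// Sketch of a safetybot webhook handler. Not the actual script; all
// endpoints, addresses, and the severity list are placeholders.

// 1. Listen: read the "mil:on" event pushed by the telematics service.
$payload = json_decode(file_get_contents('php://input'), true);

// 2. Decide whether the reported trouble code is serious.
$seriousCodes = ['P0301', 'C0035'];                 // placeholder list
$dtc     = $payload['dtc']['code'] ?? '';
$vehicle = $payload['vehicle'] ?? 'unknown vehicle';

if (in_array($dtc, $seriousCodes, true)) {
    // 3. Turn off the cab's taxi meter via its dispatch system (hypothetical API).
    $ch = curl_init('https://dispatch.example.com/meters/' . $vehicle . '/disable');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    curl_close($ch);

    // 4. Tell the Los Angeles Department of Transportation what happened.
    mail(
        'taxi.enforcement@ladot.example',            // placeholder address
        'Safetybot alert: ' . $vehicle,
        "Vehicle {$vehicle} reported trouble code {$dtc} and its meter was disabled."
    );
}
```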

The safetybot code is a translation of regulatory code into computer code. Specifically, it’s based on rules 111 and 460 of the Los Angeles Taxi Commission rule book.

Rule 460 deals with the check engine light. “Any problem causing the ‘check engine’ light to be illuminated when the vehicle motor is running must be corrected within two business days or the vehicle may be removed from service.”

Rule 111 defines what it means to take a taxicab “out of service.” It means to put the vehicle “in a status such that no person shall operate the taxicab… except as may be necessary to return the taxicab to the residence or place of business of the owner or driver or to a garage.”

Clever readers will note that the safetybot’s functionality does not exactly mimic the rule book’s prescription. There is, perhaps, a lesson here. The regulatory code was written for human beings to enforce, so it is designed around the limitations of human ability.

The structure of a DTC identification.

For instance, there are probably few people who have memorized all of the diagnostic trouble codes that a car’s engine may report. Some of them signal relatively minor issues. Others refer to situations that create an imminent danger. A safetybot, however, can remember all of them, and we can program it to know which DTCs should get a car off the streets right away, and which should probably be taken care of sometime soon, you know, when you have the time.
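As a toy example of what that might look like, here is a sketch of a severity lookup. The specific codes and their groupings are illustrative guesses, not an official LADOT list.

```php
<?php
// Toy severity lookup for Diagnostic Trouble Codes.
// The codes and groupings below are illustrative, not an official list.
function dtcSeverity(string $code): string
{
    // Hypothetical examples of codes that should take a cab off the road immediately.
    $immediate = ['P0301', 'C0035'];
    // Hypothetical examples of codes that can wait a couple of business days.
    $soon = ['P0442', 'P0455'];

    if (in_array($code, $immediate, true)) {
        return 'remove from service now';
    }
    if (in_array($code, $soon, true)) {
        return 'repair within two business days';
    }
    return 'flag for the next inspection';
}

echo dtcSeverity('P0301');   // "remove from service now"
```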

Also, a human investigator can’t possibly know the exact moment a check engine light goes on without the help of a bot. It would be an absurd waste of resources (not to mention a profoundly boring use of them) to sit an LADOT official in the passenger seat of a taxicab, staring intently at the cab’s dashboard for the entire duration of a driver’s shift.

That lightning bolt looks serious. Go to the mechanic.

Since we don’t have bot-like abilities, Rule 460 was designed to put the onus on the driver to monitor his cab’s check engine light. If, during a routine inspection, a mechanic determines that the check engine light has been on for more than two days, we punish the driver. Good policy is efficient policy. This is the most streamlined way we’ve come up with to achieve this particular safety goal in an analog world managed by human beings with limited observational powers.

But in a world where bots are an option, efficiency’s guiding light can lead a policymaker to a different outcome. The starting point is the same. We have values we want to translate into law. Here, that value is bodily safety — specifically, protection from injury caused by a busted engine. A bot that protects us takes the responsibility off the shoulders of the driver. A compromise born of human fallibility, such as a rule that treats all engine problems the same way, becomes the legal equivalent of a tailbone or appendix. It no longer serves any purpose.

A safetybot is both law and enforcement. Frankly, it’s better enforcement. It’s more effective than a human at achieving the ends of the law, and it’s far less expensive. A safetybot can free up law enforcement resources to focus on those other dangers that can’t be prevented by a bot.


Eric Spiegelman

President of the Los Angeles Taxicab Commission. Opinions expressed here are personal to me and do not reflect the opinion of the Commission as a whole.