How Cops Will Do Stop-And-Frisk In An Era Of Self-Driving Cars

Lance Eliot
Nov 5

Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Have you ever looked in your rear-view mirror and watched anxiously as a police car came up behind you and signaled for you to pull over?

I’d dare say that most of us dread such a moment.

Why do police perform seemingly ad hoc traffic stops?

In theory, the traffic stop is intended to ensure the safety of the roadways.

There were an estimated 60,000 traffic stops made by police in Los Angeles last year, according to numbers released by the LAPD (Los Angeles Police Department).

That’s a lot of traffic stops.

And that’s only counting Los Angeles.

Nationwide, it’s estimated there are 20 million traffic stops per year. As most drivers know, a traffic stop consists of a police officer signaling you to pull over your car. You are then expected to find a safe spot to pull over reasonably soon, or the police might direct you to one.

Defining Traffic Stops

A traffic stop is considered a form of detention.

The police generally have the authority via law to detain you when you are in a public place, such as the case of driving your car on public roads.

These traffic stops are often referred to as a “Terry stop.”

That’s because of a famous 1968 Supreme Court case, Terry v. Ohio, which clarified aspects of the 4th Amendment of the United States Constitution about searches and seizures.

In brief, the Supreme Court ruled that an officer can conduct a stop if there are articulable facts to justify such an intrusion and the stop is based on a “reasonable suspicion” (note that reasonable suspicion is a less rigorous requirement than a “reasonable belief,” which is a higher standard involving probable cause that a criminal activity is taking place or has taken place).

There are all kinds of twists and turns legally about the nature of these stops.

I’m not going to drag you through my own version of law school herein. Let’s just agree for the moment that these stops happen; whether they are legally right or not, and whether they are suitable or not, they nonetheless occur.

I had earlier posed the question as to why the police will at times undertake these ad hoc traffic stops.

I had mentioned that it is presumably for driving safety and roadway safety. In addition, the police would be inclined to suggest that these stops are a valuable crime-fighting technique. There are those who argue that a traffic stop can be a powerful tool for crime suppression: a proactive policing tactic for catching crimes before they happen, or perhaps while they are happening, and for preventing further criminal escalation and danger to the public at large.

So far, I’ve opted to mention just the traffic stop part of this activity.

There is also a potential part of the activity that can be referred to as the frisk.

The frisk portion would be if the officer opts to not only detain a suspect, but also then perform a pat-down or other search of the suspect.

Once again, I’m not going to dig herein into the legal particulars of the frisk. Instead, for the remainder of this discussion, let’s assume that a traffic stop and a stop-and-frisk are one and the same, which means that there is a stop involved and there may or may not be a frisk involved.

Basis For Traffic Stops

Suppose a police officer pulls over a driver because their headlights aren’t on and it is nighttime. This is usually considered a valid stop since the laws tend to require that you use your headlights at night.

If there had been a report of a burglary that just happened nearby, or a report of a car possibly associated with a crime and the driver’s car matched the description, those reasons might also support the stop, and potentially a stop-and-frisk. Making the mental leap that headlights off therefore means burglar or burglary would be a less likely scenario and a stretch of the law.

Police working in areas with high crime rates would say that this kind of “intense policing” can make a big difference in catching criminals and nabbing gang members who are involved in criminal acts.

They would tend to say that the ability of police to undertake a stop, and possibly a stop-and-frisk, often detects criminals before they commit a worse criminal act, or serves to deter such criminals from committing those acts because the odds of getting nabbed are heightened.

The downside of these stops and stop-and-frisk actions is the potential for abuses of the authority to do so.

In Los Angeles, there is an ongoing and acrimonious debate about how, and who, gets selected for these stops. Some are concerned that the basis for deciding when to undertake such a traffic enforcement action, along with the outcome of the action, might be based on factors other than the ones that are considered lawfully bona fide.

In Texas, there was an interesting recent case of a police officer who had run the plates of a car and was informed that it had been involved in a drug bust a week or so earlier.

The officer followed the car for a little bit. The driver of the car apparently failed to signal for a left turn the sufficient legally required distance prior to making the turn. The officer then performed a traffic stop. Subsequently, one of the passengers made a run for it and there was a shooting involved. It was also discovered that there were illegal drugs in the car.

I bring up the case mainly to point out that the presumed basis for the stop was the failure to properly signal the turn. A run of the plates alone would be unlikely to be a sufficient basis for the stop. Now, some might say that the signal-usage aspect was flimsy and an obvious and troublesome pretext to stop the car, asserting that the officer wanted to stop the car and was seeking any basis to do so, no matter how much of a stretch. There are those who worry this might be the equivalent of the movie Minority Report.

I’m not going to explore the societal trade-offs involved in the matter of stops and stop-and-frisks. There are plenty of other avenues for that kind of assessment. Herein, the interest is that a car is involved in these traffic stops and/or stop-and-frisks, or even a possible stop-and-arrest.

A car doesn’t necessarily need to be involved in a stop, stop-and-frisk, or stop-and-arrest, since those actions can all take place while you are a pedestrian. You can be walking along and be stopped. You can be walking along and be stopped and frisked. You can be walking along and be stopped and arrested.

My focus is when a car is involved.

AI Autonomous Cars And Traffic Stops

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One question that sometimes comes up at conferences where I speak is what will happen with these kinds of car stops once there are AI self-driving cars.

I’ll start the discussion with a claim that is often stated as though it is a fact and yet is utter nonsense: some pundits say that there will never be a need for a traffic stop ever again.

Their logic seems to be that since they assume that all AI self-driving cars will be legally driven by the AI, there is no basis for stopping an AI self-driving car.

For example, consider the earlier scenario of driving at nighttime without headlights on. Presumably this will never happen with an AI self-driving car because the AI will realize that the headlights need to be on at night and will dutifully and without fail make sure that the headlights are indeed turned on.

That seems to settle the matter, at least in the minds of those pundits.

Wrong!

Suppose the headlights aren’t functioning on an AI self-driving car. Yes, this could happen.

Right now, it is unlikely to happen to AI self-driving cars because they are being pampered by the auto makers and tech firms. Today’s AI self-driving cars are carefully being maintained by a special team of mechanics and engineers. They make sure these AI self-driving cars are in top shape.

Once we have a prevalence of true AI self-driving cars, meaning Level 5, will all of those AI self-driving cars really be kept in such tiptop shape? I doubt it.

Let’s imagine that we end up with 250+ million AI self-driving cars in the United States alone. Does it seem reasonable to expect that all 250+ million will be kept in pristine condition, all of the time, without fail? Again, I doubt it.

I realize you might try to argue that these AI self-driving cars will mainly be in fleets of ridesharing services and other such entities. Those entities will want to keep their AI self-driving cars in good working order to make sure that revenue rolls in. Any of their AI self-driving cars that falters likely means lost revenue while it is not viable and performing on the road.

Those are certainly valid reasons to argue that AI self-driving cars will likely be kept in better shape than today’s conventional cars, but it seems a larger leap to say that we won’t have any on the road that perchance have something wrong with them. Any pundit who believes all AI self-driving cars will at all times and in all ways be perfectly functioning cars is living in a dream world. Not gonna happen.

Stopping A Self-Driving Car

I am therefore going to boldly proclaim that headlights being switched on while the bulbs aren’t working is a possibility for an AI self-driving car, and therefore presumably a cop could pull over the self-driving car on the basis of the headlights not functioning. That being said, I admit it’s a bit of a stretch that the headlights are out and yet the AI doesn’t know it.

Let’s consider, though, the case where the AI does know that the headlights aren’t functioning. Should it therefore refuse to drive at night? Perhaps I’ve driven to work and had hoped to head home before dark, but got stuck at work, and so I go out to take my AI self-driving car home and it says no-go. It refuses to drive me?

For those of you who are really strict on legality, you’d say that it should not allow you to proceed in the dark.

The AI self-driving car will be a hazard to itself and other nearby cars.
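To make the strict-legality stance concrete, here is a minimal sketch in Python of what such a pre-drive check might look like. Everything in it, including the VehicleStatus type and the may_begin_trip rule, is a hypothetical illustration rather than anything drawn from an actual self-driving system.

```python
from dataclasses import dataclass

# Hypothetical pre-drive check. The VehicleStatus type and may_begin_trip
# policy are invented for illustration, not taken from a real AV stack.

@dataclass
class VehicleStatus:
    headlights_ok: bool

def may_begin_trip(status: VehicleStatus, is_nighttime: bool) -> bool:
    """Strict-legality policy: decline a nighttime trip if the headlights are out."""
    return not (is_nighttime and not status.headlights_ok)

# The stuck-at-work scenario: broken headlights, trip requested after dark.
print(may_begin_trip(VehicleStatus(headlights_ok=False), is_nighttime=True))
# -> False, meaning the AI refuses the trip
```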

I could try to counter-argue that via V2V (vehicle-to-vehicle) electronic communications, the AI of my self-driving car could communicate with other nearby AI self-driving cars and let them know that its headlights aren’t functioning. It could also use V2I (vehicle-to-infrastructure) electronic communications to let the roadway infrastructure know that the headlights are out. This might allow other nearby AI self-driving cars to share their headlight beams, in a sense, with my self-driving car by driving near it, and the roadway infrastructure might be able to adjust too.
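As a rough picture of that idea, here is a simplified sketch of a V2V/V2I status broadcast. The message fields and the broadcast helper are placeholders I’ve made up; real V2V standards such as DSRC or C-V2X have their own message formats.

```python
import json
import time

# Hypothetical defect notice for V2V/V2I. The field names and broadcast()
# stand-in are illustrative only, not an actual DSRC or C-V2X message format.

def build_defect_notice(vehicle_id: str, defect: str) -> str:
    """Package a defect report that nearby vehicles or infrastructure could consume."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "defect": defect,                       # e.g., "headlights_inoperative"
        "timestamp": time.time(),
        "requested_assist": "share_headlight_beams",
    })

def broadcast(channel: str, payload: str) -> None:
    # Stand-in for an actual radio transmission; here we just print the message.
    print(f"[{channel}] {payload}")

notice = build_defect_notice("AV-1234", "headlights_inoperative")
broadcast("V2V", notice)  # nearby self-driving cars could adjust their driving
broadcast("V2I", notice)  # the roadway infrastructure could adjust as well
```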

I don’t want to get mired in the headlights argument alone. Suppose my self-driving car has expired tags on the license plate? Suppose my self-driving car has a tailpipe dragging on the street? There are various physical aspects of possible disrepair or concern that could potentially be used as a “reasonable” basis for opting to stop the AI self-driving car.

I mention these facets because the pundits who say there will never be any basis for making a traffic stop are seemingly forgetting that a self-driving car is a car. It will have various issues or failings that a car might ordinarily have. These then open the door to a basis for a stop.

I get that these pundits are focused on the presumed expunging of any illegal in-motion driving actions that a car might undertake. A human might forget to use their turn signal in the proper manner when making a turn. The AI is unlikely to make such a mistake. A human might be driving erratically and appear to be intoxicated, while presumably the AI will not do so. And so on.

I’ll go along with the overall notion that much of the time the AI won’t be making those kinds of human-foible driving mistakes, but I’ve also many times expressed that we cannot assume that the AI is going to be some “perfect” driver that strictly and unfailingly obeys traffic laws.

Suppose the car itself has an axle problem and the AI is trying to correct for it, meanwhile keeping the self-driving car moving ahead as reasonably safe to do so, planning to get the self-driving car to a repair shop after dropping off a ridesharing passenger at their destination. The self-driving car might weave in a manner that looks like drunk driving.

The same goes for illegal driving acts.

There are situations whereby an AI self-driving car might perform an illegal driving act, doing so for a variety of reasons. It could be that the AI system has a bug that, when encountered, causes the AI self-driving car to perform an illegal maneuver. It could be that the AI has “decided” that an illegal action is the best course of action: suppose that traffic has gotten blocked by a fire and the cars are making U-turns to get away from the fire. If the U-turn is not legally allowed there, does this imply that the AI should not make the U-turn, even though it is a prudent course of action at the time?
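One way to picture that tension is a toy decision rule that weighs legality against risk. The maneuver names, risk scores, and tolerance threshold below are all invented for this fire-blocking-the-road example; they are not how any production motion planner actually scores options.

```python
# Toy decision rule weighing legality against risk. The maneuvers, risk
# scores, and the 0.5 tolerance are invented for this example only.

def choose_maneuver(options: dict) -> str:
    """Pick the lowest-risk option, tolerating an illegal maneuver only when
    every legal alternative is judged substantially more dangerous."""
    legal = {name: o for name, o in options.items() if o["legal"]}
    best_overall = min(options, key=lambda name: options[name]["risk"])
    if not legal:
        return best_overall
    best_legal = min(legal, key=lambda name: legal[name]["risk"])
    if options[best_legal]["risk"] > options[best_overall]["risk"] + 0.5:
        return best_overall  # the illegal option is far safer; take it
    return best_legal

# The fire scenario from the text: the legal choices are risky, the U-turn is not.
options = {
    "proceed_toward_fire": {"legal": True,  "risk": 0.9},
    "wait_in_place":       {"legal": True,  "risk": 0.7},
    "illegal_u_turn":      {"legal": False, "risk": 0.1},
}
print(choose_maneuver(options))  # -> "illegal_u_turn" under these toy numbers
```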

Could Be The Occupants As A Basis

There’s another potential basis for opting to stop an AI self-driving car, namely that the occupants are doing something that could be construed as grounds for a stop.

Someone is in a true AI self-driving car. The AI self-driving car is driving flawlessly and fully legally. There is nothing wrong with any of the equipment on the self-driving car and it is in good shape. The passenger in the AI self-driving car holds up a gun and brandishes it at someone else in a nearby car. This is reported to the police. You might say it was a type of road rage.

Would you say that the police have a reasonable basis to stop the AI self-driving car?

Many would say so.

Let’s try another angle on this.

A person in an AI self-driving car tells the AI to take them to a certain part of downtown and proceed to wait at a street corner.

The AI obeys.

At the street corner, the occupant rolls down the window and proceeds to purchase a quantity of narcotics from a drug dealer standing there. After getting the illegal drugs, the passenger tells the AI to head over to a friend’s house. It turns out that the drug dealer is under watch by the police and they witness the drug buy. They let the AI self-driving car drive some distance away, so as to not tip their hand to the drug dealer, and have an officer stop the AI self-driving car to do a drug bust.

Is this a reasonable basis to stop the AI self-driving car?

Seems like it.

Will Be Stop-And-Frisks

All in all, I’d assert that traffic stops, stop-and-frisks, and stop-and-arrests can still take place even in light of the emergence of AI self-driving cars.

Some pundits have said that police will never need to do traffic enforcement once we have AI self-driving cars.

I’d be willing to vote that a lot less traffic enforcement will be needed, but not that it will be entirely eliminated.

We are going to have a mix of human-driven cars and AI-driven cars for quite a while, and so I’d suggest that the reduction in traffic enforcement will occur gradually and incrementally, not somehow miraculously overnight. There will still be traffic enforcement for human-driven cars, and a lesser proportion aimed at AI self-driving cars, and as the self-driving car numbers mount and the number of human-driven cars wanes, the traffic enforcement volume will diminish.

Most of the low-hanging fruit of traffic stops will likely no longer be available once we have a preponderance of AI self-driving cars.

Would someone do a drive-by shooting while inside an AI self-driving car?

Admittedly, the person doing the shooting has to be somewhat out-of-their-head to commit such a crime (in more ways than one). Besides the lack of a typical getaway effort, the odds are that the AI self-driving car would capture the entire act on its cameras and other sensors. In theory, it would be quite damning evidence against the perpetrator.

Autonomous Car Is Its Own Cop

There are some pundits who believe there is a good chance of AI self-driving cars acting as their own form of police.

Suppose the drug buyer kept the motor running and undertook the drug buy.

The AI catches the whole act via video.

The AI, using its sensor analysis programs, figures out that a drug buy just happened. The AI locks the passenger into the self-driving car so they cannot flee. The AI then, using V2V or V2I, calls for the closest police officer to come to the self-driving car to bust the passenger. Or maybe the AI opts to drive the passenger to the nearest police station to turn them in.
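Purely as a thought experiment, here is a sketch of how that “car as its own cop” flow could be structured. The classify_event, lock_cabin, and request_officer names are hypothetical placeholders, not real APIs from any self-driving platform, and whether a car should ever behave this way is exactly the societal question at issue.

```python
# Hypothetical "car as its own cop" flow. classify_event, lock_cabin, and
# request_officer are made-up placeholders, not real self-driving APIs.

SUSPICIOUS_EVENTS = {"suspected_drug_transaction", "weapon_brandished"}

def classify_event(sensor_frame: dict) -> str:
    # Stand-in for an on-board perception model labeling camera/sensor data.
    return sensor_frame.get("label", "normal")

def lock_cabin() -> None:
    print("Cabin doors locked so the passenger cannot flee.")

def request_officer(location: tuple) -> None:
    # Stand-in for a V2V/V2I call summoning the nearest police officer.
    print(f"Dispatch request sent for location {location}.")

def monitor(sensor_frame: dict, location: tuple) -> None:
    if classify_event(sensor_frame) in SUSPICIOUS_EVENTS:
        lock_cabin()
        request_officer(location)

# Example: the curbside drug-buy scenario described above.
monitor({"label": "suspected_drug_transaction"}, location=(34.05, -118.25))
```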

These are scenarios that we as a society will need to decide how best to deal with. Are we willing, or wanting, to have the AI examine the sensory data for these kinds of illegal acts? If so, does this provide a slippery slope toward a Big Brother kind of atmosphere that we will all be subject to?

This also raises the question of the AI self-driving car as a kind of tattletale.

Will the massive amount of sensory data being collected by the AI for purposes of driving the car be used for other purposes? Some of the data will be stored in the on-board AI systems and some of it will be stored in the cloud. Does the location of the data make a difference as to what is discovered versus what is not?
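To make the on-board versus cloud question a bit more tangible, here is a toy retention policy showing how the recorded data might be split and aged out. The categories, destinations, and retention windows are invented for discussion; they do not reflect any actual automaker’s practice.

```python
# Toy data-retention split between on-board storage and the cloud.
# Categories, destinations, and retention windows are invented for discussion.

RETENTION_POLICY = {
    # category: (destination, retention in days)
    "raw_camera_frames": ("on_board", 7),
    "cabin_audio":       ("on_board", 1),
    "driving_telemetry": ("cloud", 90),
    "trip_summaries":    ("cloud", 365),
}

def route_record(category: str) -> str:
    destination, days = RETENTION_POLICY.get(category, ("on_board", 1))
    return f"{category}: stored {destination}, kept {days} days"

for category in RETENTION_POLICY:
    print(route_record(category))
```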

There are already byzantine laws about what, where, how, and why the search of a car can be undertaken.

The added twist for AI self-driving cars is that a lot more data will be recorded and kept, more so than on conventional cars. It will be an arduous effort for the courts and the legislatures to ultimately figure out the proper balance between keeping the data private and allowing it to be used for crime-fighting efforts.

More Twists And Turns

I’ll add some additional twists that you might find of interest.

Roadblocks and sobriety checkpoints are generally legally allowed junctures at which you can be stopped while in your car. It is presumed that there is already a de facto kind of reasonable suspicion to stop your car.

Will being inside an AI self-driving car impact those legal stops?

If there isn’t a human driver, one might assert that the AI self-driving car should not need to stop at a sobriety checkpoint. For a roadblock of another kind, there is not necessarily that same get-out-of-jail-free card.

Okay, that was interesting; what about this one?

An AI self-driving car has no passengers in it. Someone wanting to buy some drugs sends the AI self-driving car to the street corner where the drug dealer hangs out. The AI self-driving car halts at the street corner. The drug buyer is on a smartphone and tells the drug dealer to go ahead and toss the drugs into the self-driving car, and meanwhile the drug buyer remotely rolls down the window. The drug dealer tosses in the drugs, and at the same time bitcoins are sent to the drug dealer to cover the cost of the buy.

The police were watching the drug dealer. They see the AI self-driving car drive away after making the drug buy. Can the police stop the AI self-driving car? If they do, and they find the drugs, is that sufficient to try and go find the buyer and bust that person?

Conclusion

A future of all and only AI self-driving cars is a long way off. We have time to be considering how our society might be changed by the advent of AI self-driving cars.

I’ve tried to make the case that we are still going to have traffic stops, along with stop-and-frisks and stop-and-arrests. This will certainly last as long as we also have conventional cars or AI self-driving cars at less than Level 5. Even once we have Level 5 AI self-driving cars, there are still many opportunities to potentially do a traffic stop.

It would be nice to think that the advent of AI self-driving cars would magically curtail crime, since using an AI self-driving car as your “accessory to the crime,” namely your getaway driver, is seemingly impractical.

Nonetheless, there is sadly still opportunity to commit crimes involving the use of AI self-driving cars, and it is as much a societal question as an AI systems question.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot, a renowned expert on Artificial Intelligence (AI), was a professor at USC, headed an AI research lab, and is a top executive in industry.
