In the future, you are your Uber rating.
I’ll take you there if you give me 5 stars!
[⚡️Time travel sound effects⚡️] The year is 2030. You’re still alive, so that’s cool. But enough about you. We’re here to talk about transportation. It’s now a service. Those crumbling strips of infrastructure that you used to have to physically navigate are now just the wiring in a massive packet-switching network of autonomous vehicles.
Let’s pause here. Perhaps the most remarkable aspect of this premise is that it doesn’t require a suspension of disbelief. Michael Dempsey recently wrote a great FAQ on the topic, and I wanted to add another consideration: rating systems. What might they look like in an AV future? I’d offer that clearing the technological and legislative hurdles for autonomous vehicles is not the difficult part; after all, a fleet of autonomous trucks already drove themselves across Europe. I think it can safely be argued that AVs are an inevitability. The humanity, though. Oh, the humanity. So pesky.
Back to the present!
[⚡️Time travel sound effects⚡️] It’s 2016 again. Your relationship to services like Uber still includes—brace yourself—a human being.
Here in the present, Uber’s rating system includes two actors: the driver and the rider. Their respective scores are treated asymmetrically as a matter of brand and revenue. The aims are simple:
- Remove drivers from the supply side who don’t meet a high standard of service.
- Only remove riders from the demand side if they’re, like, a potential risk to society, because money 💸.
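Uber doesn’t publish its deactivation logic, but the asymmetry probably looks something like the sketch below. The thresholds and names are made up for illustration.

```python
# Hypothetical sketch of an asymmetric rating policy.
# The thresholds and role names are invented; Uber's real logic isn't public.

DRIVER_MIN_RATING = 4.6   # drivers are held to a high bar of service
RIDER_MIN_RATING = 2.0    # riders are only cut loose in extreme cases

def should_deactivate(role: str, average_rating: float) -> bool:
    """Return True if the account should be removed from the network."""
    if role == "driver":
        return average_rating < DRIVER_MIN_RATING
    if role == "rider":
        return average_rating < RIDER_MIN_RATING
    raise ValueError(f"unknown role: {role}")

print(should_deactivate("driver", 4.5))  # True: below the service bar
print(should_deactivate("rider", 4.5))   # False: riders get far more slack
```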
The inherent quirks of the asymmetrical system are on full display in UberPOOL, the carpooling version of Uber that Travis Kalanick clearly, and understandably, wants us all to get behind.
That’s because the present-day rating system, with respect to UberPOOL, presents real problems for scaling quality in a way that’s equitable to drivers. Drivers often find themselves at the mercy of a system with uncontrollable variables. The following scenario isn’t just plausible; drivers complain about it all the time.
- Driver picks up Passenger 1 on time. Great start!
- Driver offers Passenger 1 water. Provides a comfortable ride to Passenger 2’s location. Nailing it so far!
- Passenger 2 isn’t there when the ride arrives. Driver waits the mandated 2 minutes. Calls Passenger 2, who says they’ll be “right out.” LOL that was a lie. Passenger 1 is pissed.
- Passenger 1 is asked to rate their driver at the end of the trip. Not exactly fair!
The driver is the default object of culpability—often wrongly so—in an UberPOOL. But that’s today. Uber’s future wasn’t meant for human drivers. Let’s travel back to the future. Hold on to your butts!
The future state of ratings
[⚡️Time travel sound effects⚡️] It’s 2030 again. Interesting choice of haircut. You’re in an Uber with three other people, none of whom are employed by the company. To get to this point, you’ve all tacitly agreed to a new social contract: you are all rating each other. As it turned out, removing the driver fundamentally changed the calculus of maintaining brand affinity. In this future, Uber agrees to move you around the city while you move each other around a massive spreadsheet that feeds an algorithm. Your co-passengers now constitute a large portion of your brand experience, so something had to give.
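What would that spreadsheet actually record? Nobody knows, least of all me, but the bookkeeping could be as simple as the sketch below; every name and number in it is invented.

```python
# Speculative sketch: after a shared driverless ride, co-passengers rate
# each other, and each rating nudges a running passenger score.
# All names and numbers are invented for illustration.
from collections import defaultdict

ratings = defaultdict(list)  # passenger id -> list of stars received

def record_trip(stars_given: dict) -> None:
    """stars_given maps (rater, ratee) pairs to a 1-5 star rating."""
    for (rater, ratee), stars in stars_given.items():
        if rater != ratee:
            ratings[ratee].append(stars)

def score(passenger: str) -> float:
    received = ratings[passenger]
    return sum(received) / len(received) if received else 5.0

# One pooled trip with three strangers and no driver:
record_trip({("ana", "ben"): 5, ("ben", "ana"): 4,
             ("ana", "cal"): 2, ("cal", "ana"): 5,
             ("ben", "cal"): 3, ("cal", "ben"): 5})
print(score("cal"))  # 2.5: cal is getting a reputation
```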
The user experience changes by subtraction
By most accounts, Uber has done a phenomenal job maintaining a solid service experience while scaling at such a ridiculous pace. But then the future came and changed the game up.
Much as the presence of a proctor mitigates cheating on tests, the presence of a driver does the same in an UberPOOL. Take that driver out of the equation and humanity shows its true self (or could, in theory. Call me cynical). Whether it’s real-time reporting or post-trip ratings, people will have to, even if passively, keep each other in check.
You might remember way back in 2015 when the idea for the Peeple app terrified actual people. The behaviors that app meant to encourage might become the most effective means to manage brand perception in a future that includes autonomous vehicles.
If this ends up being the case, it has tremendous implications for fairness and equality. How do you build an egalitarian algorithm when people are who they are? People have biases. We even have a tendency to put each other into classes—perhaps useful for tax bracketing and other legislative stuff, but less so when attempting to provide networked transportation to disparate neighborhoods.
This very scientific diagram shows the very real reality of a lot of commutes in the era of car ownership.
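Some of this is at least tractable. The narrowest kind of bias, how harshly a given person tends to rate, can be normalized away with a little arithmetic; here’s a toy sketch with made-up raters and numbers. The systemic kind is another story.

```python
# Toy sketch of blunting one narrow kind of bias: judge each rating
# relative to how the rater usually rates, instead of taking raw stars
# at face value. The raters and their histories are made up.
from statistics import mean

history = {  # rater -> every star rating they've ever handed out
    "generous_gary": [5, 5, 5, 4, 5],
    "harsh_harriet": [3, 2, 3, 3, 2],
}

def adjusted(rater: str, stars: int) -> float:
    """Express a rating as a deviation from the rater's own average."""
    return round(stars - mean(history[rater]), 2)

# The same 4-star rating means very different things from different raters:
print(adjusted("generous_gary", 4))   # -0.8: a quiet complaint
print(adjusted("harsh_harriet", 4))   # +1.4: glowing praise
```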

Disliking your co-passenger because they smack their gum is one thing. When Uber’s matching algorithm has to grok more systemic trends that don’t paint the most harmonious picture of society, that’s another, and it will be painfully difficult. How does the algorithm balance the priorities of the brand (providing a good experience for the rider) with adequately serving groups that are the subject of discrimination?
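One way to see the tension concretely is to put both pulls into a single score. Everything in the sketch below is invented: a compatibility number stands in for brand experience, and a fairness term rewards picking up riders from neighborhoods the network has been neglecting.

```python
# Speculative sketch of the trade-off: rank candidate pickups by predicted
# experience (the brand term) plus a bonus for serving neighborhoods that
# have been getting skipped (the fairness term). All numbers are invented.

def match_score(compatibility: float, wait_minutes: float,
                neighborhood_service_rate: float,
                fairness_weight: float = 0.5) -> float:
    """Higher is better. compatibility is a 0-1 predicted-experience score;
    neighborhood_service_rate is the share of recent requests served there."""
    brand_term = compatibility - 0.02 * wait_minutes
    fairness_term = fairness_weight * (1.0 - neighborhood_service_rate)
    return brand_term + fairness_term

candidates = {
    "rider_a": match_score(0.9, 3, neighborhood_service_rate=0.95),  # well-served area
    "rider_b": match_score(0.7, 5, neighborhood_service_rate=0.30),  # neglected area
}
print(max(candidates, key=candidates.get))  # rider_b: the fairness term flips the pick
```

The hard part, of course, is that the whole debate collapses into that one fairness_weight number.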
How this reality will be managed, I have no idea. And frankly, it might ultimately be a sausage-making process better left unseen. It is, however, an issue I’d add to any FAQ regarding the future of autonomous vehicles if for no other reason than to start a healthy dialogue in the present.
If you liked this, please rate me 5 stars. I’m trying to stay on the network.