What it means to be “AV-first” when designing a new mobility platform — part 2/2

David Geretti
Published in Bestmile
Nov 10, 2020 · 7 min read

This article is the second of two parts detailing some key differences between AVs and human drivers that we experienced while building the Bestmile Platform.
The first part can be found here.

Part 1/2

  • “What would a robotaxi do?”
  • Can robots and humans share a common language?
  • Different levels of “smartness”, different languages

Part 2/2

  • How much intelligence should be sent with the mission?
  • Routing and navigation: Programmed vs. common sense
  • From groom to field agent to remote control specialist: evolving roles for humans

How much intelligence should be sent with the mission?

As a human driver, when your GPS navigator lets you down, or when you are running out of gas, you have an instinctive fallback mechanism for your current “mission”. You can handle those cases without being instructed to.
This is even more true for professional drivers: whatever the degree of precision in their daily planning, they have the training, experience, and instinct to handle most situations.

When it comes to AVs, how do we bridge that gap? How much fallback intelligence will be built into the AVs themselves, and how much should be built into the platform?
(see also the “idling” question at the beginning of the article)

Just-in-time vs. foresight for the vehicle/driver

The initial version of Bestmile’s Hermes protocol had a fundamental design choice: it sent missions only when the orchestration platform thought it was time to execute them (a.k.a. just-in-time).
This design catered mainly, and perhaps too much, to AVs. Why would a robot need to know in advance what it will have to do? Shouldn’t it just “obey” what the transport operator wants it to do right now?

This didn’t account for one thing: withholding information is terrible for humans. We crave information, and we deserve information. We can adapt and find better solutions to problems if we have the whole context. This made Hermes difficult to adapt to human scenarios at first.

The new version of Hermes, however, changed this design. The protocol can now carry as many future missions or actions as the platform wants, and as many as the vehicle or human counterpart needs.
It is now up to the vehicle, or the app, to decide how much of that information to process.
For instance, in Bestmile’s Driver App, we use the future missions delivered through Hermes to show the driver what is coming next. Along with missions, we provide critical operational information, like when to start driving or what is expected of the driver after the drive.
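
To make this concrete, here is a minimal sketch of what a mission carrying future actions might look like, and how a consumer could decide how much of it to read. The field names and structure are illustrative assumptions, not the actual Hermes schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative sketch only: these names are assumptions, not the Hermes schema.

@dataclass
class Action:
    kind: str             # e.g. "pickup", "dropoff", "charge"
    location: str         # stop or depot identifier
    not_before: datetime  # earliest time the action should start

@dataclass
class Mission:
    mission_id: str
    actions: List[Action] = field(default_factory=list)

def visible_actions(missions: List[Mission], horizon: int) -> List[Action]:
    """Let the consumer decide how much foresight it actually processes."""
    upcoming = [action for mission in missions for action in mission.actions]
    return upcoming[:horizon]

# A driver app might display the next three actions, while an AV stack
# might only consume the first one, just-in-time style.
```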

As for AVs, some information, like brand-specific events or model-specific capabilities, might never go through the Hermes pipe.
But if the information has an impact on orchestration, it definitely should…

Manufacturers will always know their vehicles better than an agnostic orchestration platform. It is safe to assume, though, that giving them more context to handle specific situations is beneficial. The fallback mechanism or the alerting system might work differently if the AV is currently on a mission to pick someone up or drop someone off, or on a maintenance trip to a charging station.
That behavior is specific to the brand, model, and software version of the AV, though, and the need is hard to shape and predict.
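
As a purely hypothetical illustration of such context-dependent behavior, a vehicle’s fallback logic might branch on the purpose of its current mission. The mission purposes and strategy names below are invented for the example and are not part of Hermes or any manufacturer’s stack.

```python
# Hypothetical example: mission purposes and fallback strategies are
# invented for illustration, not taken from any real vehicle stack.

def choose_fallback(mission_purpose: str) -> str:
    """Pick a fallback strategy based on what the vehicle was doing."""
    if mission_purpose in ("pickup", "dropoff"):
        # Passengers are on board or waiting: stop safely and raise a
        # high-priority alert to the operations center.
        return "safe_stop_and_alert_high_priority"
    if mission_purpose == "charging":
        # Empty maintenance trip: a lower-priority alert and a
        # rescheduled charging slot may be enough.
        return "reschedule_and_alert_low_priority"
    # Default to the most conservative behavior.
    return "safe_stop_and_alert_high_priority"
```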

Summary

  • More data shared with the mission means more context. It’s critical for humans, but might also be useful for AVs in some specific cases.
  • The data shared through missions should — and probably will — remain very orchestration-centric. This means that any information that is not directly useful for the efficiency and quality of service might have to go through other channels.

Routing and navigation: Programmed vs. common sense

We all know the kind of challenges humans can face when using a GPS navigator. Trusting it a little too much might land you in strange situations, whereas not using it at all might lead you into a traffic jam you could have avoided.

But whatever the guidance from the navigator, a wrong turn can usually be recovered from with common sense. Humans also have road signs, well-known infrastructure, and implicit knowledge of their vehicle to avoid a road they cannot handle, or to reverse a bad decision. Making routing mistakes doesn’t mean it is the end of the road. Humans will find a way out, with or without GPS navigation, and still reach their destination at some point.

For instance, think of construction work started in the middle of the day without notice, not impacting normal traffic much but narrowing one of the streets. A human might see the construction and automatically reroute (or manage to get through by using “experienced” driving and knowledge of the vehicle’s limits).
For AVs, it would be a very different story, and anyone who has ever tried to train a machine learning (ML) model will understand why.

AVs rely on a set of complex algorithms, sensors, and computers. Even with modern machine learning or artificial intelligence (AI), they are still constrained to do what computer scientists “planned” for their abilities.

In essence, the difference between an AV and a human, when it comes to navigating or routing, can be summarized as follows:

A human can make routing mistakes and still succeed; an AV could get “stuck” without making any mistakes.
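
From the orchestration side, one practical consequence is that the platform has to notice when a vehicle is “stuck” even though every individual decision it made was valid. Below is a minimal sketch of such a check; the thresholds and parameters are invented for illustration and are not part of the Bestmile Platform.

```python
from datetime import datetime, timedelta

# Invented threshold: a vehicle with an active mission that has made no
# progress for a while gets flagged for human attention.
STUCK_AFTER = timedelta(minutes=5)

def is_probably_stuck(has_active_mission: bool,
                      speed_mps: float,
                      last_progress_at: datetime,
                      now: datetime) -> bool:
    """Heuristic stuck detection on the orchestration side."""
    idle_too_long = now - last_progress_at > STUCK_AFTER
    return has_active_mission and speed_mps < 0.1 and idle_too_long

# When this returns True, the platform could alert a remote operator or
# send a human on site rather than wait for the AV to recover itself.
```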

From groom to field agent to remote control specialist: evolving roles for humans

So the driver, as traditionally known, might be a disappearing role.

But a driver is more than just a wheel-spinning, pedal-pushing human. Tasks that are inherently part of the driver’s role will not be replaced by lidars, orchestration platforms, and AI (at least not that fast).

  • What happens when a vehicle breaks down? (At the fleet and operation level, not at the vehicle level.)
  • What happens when a vehicle takes the “wrong” road and gets stuck? (See also the previous section about routing.)

At Bestmile we have been designing for, and dealing with, three different evolutions of the driver’s role (so far):

  • Grooms
  • Attendants
  • Field agents

In some cases, several of these roles might be held by the same person. But the subtle differences between these roles, and the paths they might take in the future, are enough to distinguish them:

A groom takes care of the passengers and supervises the vehicle, mostly from inside. The role serves mainly as a backup for today’s technology gaps, while keeping the operation in line with (most of) the legal framework for AVs: the obligation to have a human inside the machine who can take over.

The groom will most likely still exist, but maybe only for bigger vehicles (just as the train driver still exists, but might not be driving that much…). Caring for passengers and intervening first-hand in case of an incident will be the main incentives to keep grooms.

An attendant mostly takes care of the vehicle with regard to operations. When “grooms” or drivers are no longer legally obligated to be in the vehicle, the vehicles might still need some attention, although not necessarily from inside.
Tasks like service preparation, maintenance, and the manual operations still mandated by the current state of the technology, such as calibrating sensors and software before driverless operation, will remain relevant.
In the end, one attendant might oversee multiple vehicles.

The field agent would be a natural evolution of the attendant, or a complementary role, especially for larger operations. It could be seen as a mobile “supervisor”, as opposed to the remote supervisor role. The field agent could be a contractor, “certified” to take care of many brands and models of AVs, receiving alerts and missions through an integrated platform. They would mainly intervene on vehicle issues, such as complex mechanical interventions that cannot be handled by remote operations center staff.

Back to the driver: remember that most of those tasks are in the hands of the driver today!

  • Taking care of passengers
  • First-hand mechanical intervention in case of breakdown
  • Cleaning (in some operation types, like taxis or private chauffeurs)
  • Making sure the vehicle is fit for the start of operations
  • Refueling

Those tasks will still have to be taken care of when AVs become mainstream. Being AV-first means understanding those roles in order to build the features that will have a direct impact on orchestration, planning, and incident management.

Summary

  • The driver will still be an active part of the operations; the role is simply evolving fast. Steering the vehicle and deciding when to leave an idling stop might just no longer be on a human’s task list, but part of the commands sent to robots.
  • Orchestrating a fleet, and interfacing humans with robots in the context of a transport operation, requires accounting for this change of role and its potential evolution.

Conclusion

At first sight, vehicles will be vehicles. It is too easy to think that existing systems can easily adapt to the transport revolution, including pre-existing software systems, on the assumption that software is flexible. Here too, the devil is in the details.

For humans, new roles appear while well-known ones are being partially replaced by software. This process also puts a spotlight on what it means to be “a driver” in the different kinds of mobility operations across the world.

Humans have implicit behaviors; AVs have only explicit ones. At Bestmile we build for the explicit, and add UX on top to make the whole thing friendly for carbon-based beings. Doing so helps our customers prepare for a promised future while supporting the transition.
