The Courchevel Test for self-driving vehicles — the impossible challenge?

Toby Simpson
9 min read · Jun 22, 2020

--

It has been two decades since I was last in Courchevel. I am reliably informed it’s still there, still has its terrifying airport (the 7th most dangerous in the world), and that skiing Les Trois Vallées remains just as awesome as ever. Looking forward to returning some day soon! (Image credit: Hugues Mitton)

If you’re unfortunate enough to have got into a self-driving car conversation with me that involved wine, then I will have bored you with my Courchevel Test. It’s pretty simple: I’ll believe that we’ve cracked AI well enough to do general-purpose self-driving cars¹ when the CEO of the car company is able to complete these breezily simple steps:

  • Fly into Lyon airport in France during the ski season,
  • Get into the back seat of their self-driving car — alone — with a nice big book that they’ve not read before,
  • Have the car drive them to a hotel at the Courchevel 1850 ski resort whilst they read the book.

There must be no other human beings in the car. The CEO will be tested on the book on arrival. The car must drive at the appropriate speed for the conditions, so no creeping along at 10 km/h. Repeat at dawn, dusk, and in varying weather conditions.

This is precisely the kind of problem that modern AI can’t — yet — solve safely, because the decision space is too broad, too unpredictable, and requires context and understanding, not just pattern recognition. No number of radars, sensors and cameras can provide enough information to break down the world around the vehicle accurately enough to make the real-time decisions necessary to avoid driving off a cliff, flattening a cyclist, or colliding with another vehicle.

Awwwww! Don’t you want one too? Couple of months and he’s available in LEGO! Image credit: me. Because everyone should own a baby Yoda. Do you?

Without extra magic, a safe solution requires general-purpose intelligence, and that is something that remains (no matter what anyone tells you) far, far away, like the galaxy in Star Wars and its cute baby Yoda.

Passing the Courchevel Test² consistently would, to me, be the pinnacle of achievement for a self-driving car’s AI. It would indicate a grand enough understanding of the surrounding environment to deal with the extraordinary range of conditions that will be encountered. Let’s take just a handful of variables: unpredictable reflections on partially iced or wet roads, weather that can change faster than you can describe it, car-sized holes in roadside walls with perilous drops beyond them, and hairpin bends with large, sudden gains or losses in altitude. Then there are small villages where pavements are missing, with narrow and wide bits around buildings that were engineered when horses were the premier tip-top way of getting around, constant animal and rock-fall hazards, and insane speeding drivers³ who roar up these roads like they are invincible (those car-sized gaps in the protective wall would indicate otherwise). It’s a puzzler, for sure.

Revising the Turing Test

Let’s try something that’s surely easier. Steve Wozniak, co-founder of Apple, said he’d believe that AI had arrived when a robot could enter a strange house and make a decent cup of coffee. The get-dressed test was suggested to me by Steve Grand: get an AI-powered robot to get dressed, appropriately, with what’s in the wardrobe and drawers. There are others, like assembling some IKEA flat-pack furniture or picking strawberries, and they sound deceptively simple until you start to break them down into their component parts. Simple becomes “uh oh, yeah, I see the problem”.

The Courchevel Test, the make-a-coffee test and the get-dressed test are possibly better indicators of how we are getting along with true AI than most other contrived ones. Sure, it’s impressive to be able to solve puzzles, play games, and recognise, classify and transform images orders of magnitude better and faster than humans, but humans can solve problems that leave computers standing. We are absolutely, totally, brilliant and don’t let anyone tell you otherwise. Indeed, we’re not alone in being awesome: watch a bee navigate in 3D space and tell me you’ve seen AI do that better.

The thing is, AI can’t see human-level intelligence from where it is, even armed with the Hubble Telescope. I’ve written about this before in the context of the vast gap between where you might think we are and where we actually are. None of this, of course, takes away any of the life-changing benefits that we get every single day from the incredible advances that artificial intelligence, and in particular machine learning, continues to deliver. We do, though, need to be both pragmatic and realistic about the chasm between AI and Artificial General Intelligence (AGI).

Stacking Cats

But, as always, there is more than one way to stack cats… and one of them is to make the world so easy to understand that a drunken baboon could make sense of it.

Let’s touch on two approaches:

Job: keep the cows fed. Trivial for humans to optimise, really difficult for a robot. Farming’s a great example of a business with too many layers between the value provider and consumer. Decentralisation will, at last, eat at these layers.
  1. Take the options out of the scenario. When you see little robots trundle peacefully around a factory or warehouse, they are usually achieving it in one of two ways: either by detecting a metal ribbon in the floor, or by following a yellow stripe on the ground. This is inflexible and doesn’t respond well to changes (like a paint spill over the stripe, a break or fault in the metal ribbon, or just a surprise like goods spilt in the way or a person not paying attention). It’s great for factories, and also for some city applications, but won’t pass our Courchevel Test due to the cost of deployment. If you want this, take a train.
  2. Have the world describe itself. Rather than using cameras and other sensors to attempt to do a paint-by-numbers interpretation of the surrounding world, where lighting, weather and other things can destroy the reliability of the picture, turn the problem inside out. Cameras make mistakes that humans do not, because we’re paranoid, and we see and understand context. If all the objects in the world describe themselves, then you know what and where things are.

Option 2 could solve the Courchevel Test, but it requires a critical mass of things to announce themselves, an architecture to support such things, and a method of receiving the required information and building a picture of this augmented reality. If these issues were solved, you gain a load of interesting upsides: you can see through fog. You can see in the dark. You can see through snow. You can see around corners. It’s the ultimate cheat mode for reality.
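To make the inside-out idea concrete, here’s a minimal sketch of a world that describes itself: objects publish small structured self-descriptions, and a vehicle assembles them into a picture instead of inferring everything from pixels. All names and fields here are illustrative assumptions, not any real protocol.

```python
from dataclasses import dataclass

@dataclass
class SelfDescription:
    """A hypothetical broadcast from one object in the world."""
    object_id: str
    kind: str        # e.g. "stop_sign", "vehicle", "wall_gap"
    position: tuple  # (latitude, longitude)
    detail: str = "" # free-form context, e.g. "bridge out ahead"

class WorldPicture:
    """A vehicle's view, assembled from broadcasts rather than pixels."""
    def __init__(self):
        self.objects = {}

    def receive(self, desc: SelfDescription):
        # Latest broadcast wins; in practice stale entries would expire.
        self.objects[desc.object_id] = desc

    def hazards_near(self, kind: str):
        return [d for d in self.objects.values() if d.kind == kind]

picture = WorldPicture()
picture.receive(SelfDescription("sign-17", "stop_sign", (45.41, 6.63)))
picture.receive(SelfDescription("wall-3", "wall_gap", (45.42, 6.64),
                                "car-sized hole, 300 m drop beyond"))

print([d.object_id for d in picture.hazards_near("wall_gap")])  # ['wall-3']
```

Note that fog, darkness and snow don’t degrade this picture at all, which is exactly the “cheat mode” point: the cameras are freed up to watch only for things that didn’t announce themselves.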

If everyone takes part, you have a visual on what’s going on that is exceptionally accurate, so much so that you can dedicate the cameras and sensors to just picking up the surprises and those that refuse to play. Your average fox, hedgehog, deer or drunken adult staggering back from a bar having dropped his phone in the toilet is unlikely to be represented in our digitally augmented world. But, in one fell swoop, we’ve hugely reduced the scope of the problem, and brought it within grasp of what AI can realistically do today.

Bringing the world to life

When I’m talking about AI and alternative approaches to understanding and navigating the world at conferences, I often show images like the one below. How many trees are in that picture? Humans can make a better guess than computers because we can see which ones are probably the same tree, and we can imagine the missing parts and speculate realistically about what’s in the places we can’t see. But what about the road signs in that image? Only one, you say? Are you sure about that? What if one was knocked over and yet to be fixed? What if it said “Caution, thousands of hungry tigers 100 metres ahead at stop sign”, or “Bridge out: 500 metre vertical drop ahead”? Unless the sign itself talks to you, then you’re never going to know.

Humans can count trees more effectively because we know when two are in fact one. Plus, there’s the sign you can see in this picture. What about one that has fallen over? How would you know? What if it was important?

And this is where Autonomous Economic Agents come in. Clearly, you are never going to put computing devices on all the dumb, static street furniture out there, but thanks to comprehensive databases of such items, it is now possible to spawn digital representatives of everything, and for those representatives to be viable economic units. For small fractions of a cent, they are able to sell information about themselves, plus additional intelligence based on what they can glean from the type and frequency of other requests. This incentive to operate such things, coupled with decentralised incentives to play well, stay up to date and be accurate, is a powerful combination of technologies. We’re armed with some incredible pieces of this jigsaw puzzle that did not exist as recently as half a decade ago:
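The economics above can be sketched in a few lines: a street-sign agent that answers queries about itself in exchange for a tiny payment, accumulating a balance and a log of requests (the raw material for that “additional intelligence”). This is a hypothetical illustration; the identity, ledger and reputation plumbing that a real deployment would need is deliberately stubbed out.

```python
class StreetSignAgent:
    """Hypothetical digital representative of one dumb, static street sign."""
    PRICE = 0.0001  # a small fraction of a cent per query, illustrative

    def __init__(self, sign_id, position, text):
        self.sign_id = sign_id
        self.position = position
        self.text = text
        self.balance = 0.0
        self.query_log = []  # who asked: raw material for extra intelligence

    def query(self, requester_id, payment):
        """Sell information about the sign; refuse underpaid requests."""
        if payment < self.PRICE:
            return None  # no pay, no play
        self.balance += payment
        self.query_log.append(requester_id)
        return {"id": self.sign_id,
                "position": self.position,
                "text": self.text}

sign = StreetSignAgent("sign-17", (45.41, 6.63),
                       "Caution: hairpin bend, rockfall")
info = sign.query("car-42", 0.0001)
print(info["text"])      # Caution: hairpin bend, rockfall
print(sign.balance > 0)  # True
```

The interesting part is the query log: an agent that is asked about ice conditions a hundred times an hour knows something worth selling that no single requester does.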

  • Mobile devices with GPS and permanent data connections are ubiquitous. We all have them. They can run a representative of themselves, and this provides intelligence information on where they are, how fast they are going, in what direction, and at what altitude. This brings almost every person, vehicle and building to life: an unprecedented source and quantity of real-time knowledge.
  • Digital representatives can operate without any centralised entity being involved and can control their own value generation. Thanks to, wait for it, because you knew it was coming, blockchain, individual entities can take part in the digital economy autonomously. They can create their own “account”, have it populated with tokens/cryptocurrency and can take part in the economy around them immediately. If we can figure out self-bootstrapping, token-powered e-SIMs, then this becomes even more powerful.
  • Blockchain, coupled with verifiable credentials, digital identity and other cryptographic technologies can provide decentralised, self-service trust. You need not take anyone’s word for it, you can establish enough about reputation yourself to decide if any given interaction fits your risk profile.
  • Again, blockchain (it keeps on giving) enables scale through decentralisation, and incentives for taking part in providing the computing power. Now, the dream of having, say, every single street sign represented by a digital entity with its own identity, account, reputation and knowledge is no longer laughable science fiction.

The issue of discovering and organising such a population of autonomous digital entities remained, or at least it did, until we at Fetch.ai created our Agent Framework and decentralised agent search and discovery mechanism. We’ve demonstrated several examples of how the real and the digital worlds can be connected, with both decentralised ride-sharing and autonomous agents representing each and every train and station in the UK, and we’re many slices into the street-furniture-as-agents cake, too. There has been a large population of agents representing the space around Cambridge, UK, in our Digital World for some time now, and it’s growing all the time.

These networks-of-agents are the building blocks for decentralising the delivery business, optimising congestion out of travel, increasing capacity and efficiency of supply chains, providing an augmented, smart reality for self-driving cars and more.

It’s possible for anyone to take part in this digital economy: bringing the static, dumb world to life, and building a collective intelligence about what’s going on that’s owned by and for the benefit of everyone. As I am fond of saying, life need not be a zero-sum game, and the incentive mechanisms blockchain provides make it more profitable to be good than bad, limit damage, and tend away from the tragedy of the commons one can often see with shared resources.

I’m still not sure if I’d take part in the Courchevel Test myself, at least not first, or sober, but I do look forward to a day when it is possible. And by augmenting the environment with autonomous agents, we close the gap substantially. AGI, though? True human-level intelligence in a machine? Well, there’s a way to go, but here’s a little clue: pattern generation is as important, if not more so, than pattern recognition when it comes to intelligence, because it enables imagination, effective goal selection and execution.

It won’t surprise you to hear that we’re on the way.

-

Twitter: @pretzelsnake

[1] — that’s actual SAE level 5 with knobs on, not “Elon Musk tweet level 5”.

[2] — then there’s the advanced version of the Courchevel Test: same scenario, but the steering wheel and pedals are removed from the vehicle to stop the CEO from leaping between the seats to save him or herself when things go… well, you know.

[3] — mostly French, of course, who demonstrate a miraculous knack for surviving overtaking on a blind hairpin corner, at speed, on ice, with the sun in their eyes, whilst smoking. Then you eventually find them at the hotel five beers ahead of you looking astonishingly, and unreasonably, cool.

--


Toby Simpson

Opinions my own, yours may be different, and that’s cool.