Seeing differently: Five’s powerful sensors

By Team Five

Five Blog
8 min read · Dec 14, 2018


Driving is one of the few complex skills that almost all of us humans are capable of learning. So, for autonomous vehicle developers, it can be tempting to draw analogies between how humans sense and drive and how our systems sense and drive. This is especially true when we’re seeking to explain highly innovative new technologies and bring them to life for the wider public. In reality, however, humans and autonomous systems are really quite different. And these differences are both meaningful and valuable.

Humans have the most incredible ‘wetware’: eyes with the equivalent of over 130 million pixels each and a vast dynamic range, and brains with 100 billion neurons. No camera exists that matches the performance of the human eye and, though computers can outpace the human brain on narrow tasks, no one is close to recreating the kind of intelligence that any human with a driving licence possesses.

But despite lacking many of the powers humans have, autonomous technology — when developed and deployed diligently — has the potential to reduce road casualties by delivering consistent and dependable situational awareness. This means driverless cars can be aware of everything that’s happening around them, at all times. They never get distracted, they never drive drunk, and they always follow the rules of the road.

In short, we can be safer by doing things differently. Our sensors are our car’s eyes and ears, though only of a sort. We can’t mimic human sensory abilities directly, so we sense the world differently, using more than just the visible light spectrum.

In a previous post, we shared how we combine data from our vehicle’s sensors — such as cameras, lidar scanners, radar and GPS — with map data to make it possible for our vehicle to deduce where it is in the world, how fast it’s moving, and what ‘objects’, behaviours and scenarios it can expect to encounter. These same sensors also enable us to identify and localise other road users, pedestrians, and the rest of the world around us. And they give us the information we use to predict how a scene in front of us will unfold and plan what we should do.

Five’s perception system has to be fit for the domain in which we operate. That means our sensors need to be a match for London’s rainy, gloomy, sometimes snowy and occasionally sunny conditions, as well as its narrow streets and its large numbers of pedestrians and cyclists. These environmental factors have a profound impact on the sensors we choose and the ways we use them. While humans are able to apply their senses to a wide range of contexts, an autonomous system that can navigate fair conditions may be completely unsuitable for safe operation in bad weather.

In short, our sensors are multi-talented, crucial, and always in play. While they may mostly be hidden away beneath the exterior of the car, they’re integral to the safe and successful operation of our autonomous system, and to making Five’s vision a reality. With this importance in mind, we have rigorous, carefully researched processes around how we select, integrate and monitor them.

Here’s what makes our sensors so powerful:

They’re interconnected

As humans, our senses never work in isolation. They’re interconnected. The same is true of our car’s sensors. We ensure that, at every opportunity, they work together to allow us to accurately see the environment, irrespective of weather conditions.

Unlike a human, however, our cars won’t be distracted by the wrong sense at the wrong time. And unlike human drivers, our cars have the right senses for the job. While they lack a sense of smell, which would be of limited (though not zero) use on the roads, they do have radar, which has no biological analogue but vastly improves their perception of the world around them. And while our cars have no sense of taste, they do have lidar.

The senses our vehicles do share with humans, like vision, we’ve adapted: increasing the number of cameras, for instance, gives us continuous, all-round awareness rather than sight in a single direction.

Our system has been engineered specifically to be good at what it does — getting our passengers safely from the start to the end of their journeys.

The physical world is three dimensional and, especially in urban environments, highly dynamic. Our cameras, for instance, give us detailed images of the world around us, which can be passed to the parts of our software stack that deal with identifying the components of a scene. And, by using carefully synchronised images from two cameras with slightly different viewpoints, we can calculate the distances to what we see.
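
To give a feel for the geometry involved, here is a minimal sketch of recovering depth from stereo disparity. It is a textbook relation rather than a description of our stereo pipeline, and the focal length, baseline and disparity values are made up for illustration.

```python
# A minimal sketch of depth from stereo disparity (illustrative numbers,
# not Five's pipeline). For a rectified pair, Z = f * B / d, where f is the
# focal length in pixels, B is the baseline between the cameras in metres,
# and d is the disparity in pixels between matched points.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Distance in metres to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 1400 px focal length, 30 cm baseline, 21 px disparity
print(depth_from_disparity(21.0, 1400.0, 0.30))  # -> 20.0 metres
```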

Those distances are confirmed using our highly accurate lidar units. These give us multiple measurements of a significant proportion of the world around us. Both camera and lidar data tell us where moving objects are in our scene, but our radar system gives us a precise measurement of how fast these objects are moving, so we can take the right action.
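
The reason radar is so good at measuring speed is worth a one-line illustration. The sketch below uses the standard Doppler relation with hypothetical numbers; it is not a description of our specific radar units.

```python
# Standard Doppler relation for radar (textbook physics, hypothetical numbers,
# not a description of Five's radar units). The radial velocity of a target is
# v = f_d * c / (2 * f_0), where f_d is the measured Doppler shift and f_0 is
# the radar's carrier frequency.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Speed of a target towards the radar, in m/s, from its Doppler shift."""
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)

# Hypothetical example: a 77 GHz automotive radar measuring a 5.1 kHz shift
print(radial_velocity(5_100.0, 77e9))  # roughly 9.9 m/s, about 36 km/h
```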

Our cameras also enable us to estimate our speed from the changing scene that they see, just as a human driver develops the ability to sense how fast a vehicle is going by looking out the window. This speed can be cross-referenced against the speed reported by our GPS system and sensors integrated into the car’s wheels.

The seamless teamwork between our sensors gives us a system of checks and balances. Each sensor checks the others and all this data is shared and correlated, ensuring accuracy and safety.
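
A highly simplified sketch of that kind of cross-checking might look like the following. The source names, units and tolerance are invented for illustration and are not taken from our system.

```python
# A toy consistency check across independent speed estimates. The source names,
# units and tolerance are invented for illustration; this is not Five's logic.
from statistics import median

def disagreeing_sources(speeds_mps: dict[str, float], tolerance_mps: float = 1.0) -> list[str]:
    """Return the names of any speed sources that differ from the consensus (median)."""
    consensus = median(speeds_mps.values())
    return [name for name, speed in speeds_mps.items()
            if abs(speed - consensus) > tolerance_mps]

# Hypothetical readings from visual odometry, GPS and wheel sensors, in m/s
readings = {"camera": 13.2, "gps": 13.4, "wheel_odometry": 16.0}
print(disagreeing_sources(readings))  # -> ['wheel_odometry']
```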

They’re self-aware

It’s vital that our system has the ability to be aware of its own performance. We call this ‘introspection’ and we ensure it runs throughout our entire system architecture.

At runtime, we employ our own sophisticated suite of monitoring software which allows us to ensure all our sensors — in fact, all parts of our system — are performing in the way we expect. These checks range from making sure our cameras are producing images at the correct frame rate, to monitoring the temperature of our battery system, to recording how much computing power we’re using.
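
To give a flavour of what one such check might look like, here is a small sketch in the spirit of the frame-rate monitoring described above. The sensor names, fields and limits are hypothetical, not our monitoring suite.

```python
# A hypothetical health check in the spirit of the monitoring described above.
# The sensor names, fields and limits are invented for illustration.
from dataclasses import dataclass

@dataclass
class CameraStatus:
    name: str
    frame_rate_hz: float      # frame rate measured over the last second
    expected_rate_hz: float   # rate the camera is configured to deliver

def camera_warnings(cameras: list[CameraStatus], tolerance: float = 0.1) -> list[str]:
    """Flag cameras whose frame rate deviates more than `tolerance` (fraction) from expected."""
    return [f"{cam.name}: {cam.frame_rate_hz:.1f} Hz (expected {cam.expected_rate_hz:.1f} Hz)"
            for cam in cameras
            if abs(cam.frame_rate_hz - cam.expected_rate_hz) > tolerance * cam.expected_rate_hz]

print(camera_warnings([CameraStatus("front_left", 29.8, 30.0),
                       CameraStatus("rear", 21.4, 30.0)]))
# -> ['rear: 21.4 Hz (expected 30.0 Hz)']
```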

This means we can catch and solve problems without having to waste valuable time on the test track. And in the case of unexpected behaviour in tests, these diagnostics help us focus our attention on the component that’s responsible for the issue. On the roads, these monitoring tools help us ensure our system is safe, by alerting us to potential problems and giving our human safety drivers the information they need to make an informed decision about whether to proceed with a test.

By comparison, human drivers very often make mistakes because they fail to appreciate their own vulnerability in a given situation on the roads, whether by not allowing themselves time to react, by driving whilst tired, or by overlooking unfamiliar risks on a familiar road. Our cars always know their own capabilities.

They’re all unique

While it takes a young human many years to mature to the point where they can drive safely on the roads, our cars can be out on the roads helping us gather data, or on the track testing the latest version of our software, very soon after we finish installing their sensors. To be able to deploy the vehicles so quickly, we have to take full account of the differences between each of our cars and sensors.

Individual camera lenses, for instance, however well made, differ slightly and distort images in slightly different ways. Similarly, even our precision mounts leave our sensors in ever-so-slightly different positions and orientations, making each sensor and each car unique. To account for these differences, we put every single one of our vehicles through a thorough commissioning process before it’s used for development work.

First, we test each individual sensor we receive to make sure it’s operating in the way we expect and is free from mechanical or electrical defects that might cause problems on the roads.

Then, we determine the individual characteristics of our sensors, such as the distortion effects of the lenses. With our lenses, we typically set up calibration targets with known properties, such as a chequerboard pattern with squares of a known size. By comparing what we see with what we expect to see, we can deduce the distortion added by the lens.
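
For readers curious what that looks like in practice, the standard chequerboard recipe in OpenCV follows the same idea. This is a generic sketch rather than our in-house tooling, and the board size, square size and image paths are placeholders.

```python
# The standard chequerboard calibration recipe with OpenCV (a generic sketch,
# not Five's in-house tooling). The 9x6 board, 25 mm squares and image paths
# are placeholders.
import glob
import cv2
import numpy as np

pattern = (9, 6)        # inner corners per row and column of the chequerboard
square_size = 0.025     # 25 mm squares, in metres

# 3D positions of the corners on the flat board, in the board's own frame
board_points = np.zeros((pattern[0] * pattern[1], 3), np.float32)
board_points[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

object_points, image_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        object_points.append(board_points)
        image_points.append(corners)

# Recover the camera matrix and the lens distortion coefficients
_, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print(dist_coeffs)  # the distortion this particular lens adds
```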

Finally, we determine the precise position and orientation of our sensors in much the same way — by exposing them to a known scene and updating our position and orientation estimates until what our sensors tell us agrees with our ‘ground truth’. We use a sophisticated suite of software calibration tools to automate this process.
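
Conceptually, that last step is a pose-estimation problem: given points whose positions in the world we know, and the places a sensor observes them, solve for the sensor’s position and orientation. In OpenCV terms it looks roughly like the sketch below; this is a generic illustration with placeholder numbers, not our calibration suite.

```python
# Generic pose estimation against a known scene with OpenCV (an illustrative
# sketch; the target points, detections and camera parameters are placeholders).
import cv2
import numpy as np

# Positions of calibration target features in the 'ground truth' frame (metres)
known_points_3d = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0],
                            [1.0, 1.0, 5.0], [0.0, 1.0, 5.0]], dtype=np.float32)

# Where the camera actually observed those features (pixel coordinates)
observed_points_2d = np.array([[610.0, 420.0], [890.0, 418.0],
                               [892.0, 140.0], [612.0, 142.0]], dtype=np.float32)

# Intrinsics from the lens calibration step described above (placeholder values)
camera_matrix = np.array([[1400.0, 0.0, 960.0],
                          [0.0, 1400.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Solve for the rotation and translation that map the known scene into the camera frame
ok, rotation_vec, translation_vec = cv2.solvePnP(
    known_points_3d, observed_points_2d, camera_matrix, dist_coeffs)
print(rotation_vec.ravel(), translation_vec.ravel())
```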

Before we release our vehicle for development work, these calibrations are tested thoroughly on our test track to confirm that the data we get from our sensors is self-consistent. We continually monitor these calibrations when the car is in operation, to catch and fix any developing problems.

They’re always on the move…

Our teams are continually working to upgrade our cars’ software, rigorously testing the results at every stage. We don’t just replace failing components with new ones of the same type. We relentlessly seek out ways to boost performance and add new functionality, in much the same way that a good driver continuously looks to improve their own driving based on experience. But there are, of course, differences.

Human drivers each have to improve separately and slowly, with society often tragically seeing the same lessons taught again and again at too high a cost. We’re able to take what we learn on one car and quickly deploy it to our whole fleet, speeding our development and making everyone’s roads safer.

The result of our approach to selecting and integrating sensors is a car with amazingly powerful senses. These senses are very different from those of a human, and it’s those differences that make them so powerful.

Want to join our team and help us build the future?
Email talent@five.ai

We’re building self-driving software and development platforms to help autonomy programs solve the industry’s greatest challenges.