Find the lion: robotic navigation and wayfinding

QUT Science & Engineering
Published in The LABS · 4 min read · Sep 20, 2020

Robotic systems are becoming better at navigation all the time, especially when integrated with GPS and other supporting technology — but how can robots find their way around unmapped, unknown spaces, like office buildings, university campuses, and hospitals?

Dr Ben Talbot from QUT’s Centre for Robotics has developed wayfinding robots that can read signs, understand directions and contextual clues, and navigate complex environments relying on cues and signposts rather than detailed maps.

Choosing a novel way to test their systems, Talbot’s team set up an abstract zoo on the floor of the robotics lab, and asked the robot to find animals around the space.

Image: ThorMitty via Getty

Mapping and navigating from cues

Researchers tested the navigational task with robot and human test subjects to benchmark the robotic system’s performance against innate human navigation and wayfinding skills.

“We used AprilTags — similar to QR codes — that the robot could scan and translate into navigation cues, and humans used a mobile app that could access the same directions to ensure there was a level playing field,” said Talbot.

“Both robots and humans were given a hierarchical model to ensure consistency in contextual understanding — like knowing that lions might be with the other African animals.”

By engaging with directional cues (‘to the left’, ‘past the giraffe’, ‘behind the penguins’, etc), the robot outperformed human test subjects who attempted the task using the same information.
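To make the experimental setup a little more concrete, the sketch below shows one hypothetical way scanned tag IDs could be translated into symbolic navigation cues and combined with a hierarchical context model. The tag IDs, cue phrases and zone groupings are illustrative assumptions only, not the encoding used in the actual experiment.

```python
# A minimal, hypothetical sketch of the cue-translation step: AprilTag IDs
# (as produced by any off-the-shelf AprilTag detector) are looked up and
# turned into symbolic navigation cues, while a simple hierarchy supplies
# the contextual knowledge that, say, lions live with the other African
# animals. All IDs, phrases and groupings below are invented for illustration.

# Tag ID -> (spatial relation, reference place, target place)
TAG_CUES = {
    17: ("left of", "entrance", "African animals"),
    23: ("past", "giraffe", "lion"),
    42: ("behind", "penguins", "polar bear"),
}

# Hierarchical context: zone -> animals found in that zone
ZONES = {
    "African animals": ["lion", "giraffe", "zebra"],
    "Polar animals": ["penguin", "polar bear"],
}

def cue_for_tag(tag_id):
    """Translate a detected tag ID into a symbolic navigation cue."""
    relation, reference, target = TAG_CUES[tag_id]
    return f"{target} is {relation} the {reference}"

def zone_for(animal):
    """Use the hierarchy to guess which zone an animal belongs to."""
    return next((z for z, members in ZONES.items() if animal in members), None)

print(cue_for_tag(23))   # -> "lion is past the giraffe"
print(zone_for("lion"))  # -> "African animals"
```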

Dr Ben Talbot in the QUT Centre for Robotics. Image: QUT Media

Much of the existing mapping and navigation technology requires robots to base their wayfinding decisions on prior knowledge, or on maps and models that have been pre-uploaded to the system.

“We don’t work that way as humans — navigation cues are central to how humans operate in built environments, so our research focuses on how we can replicate that within robotic systems,” explained Talbot.

“For example, someone can ask you to meet them in their office. Even if you’ve never been there before, you have ways of finding it, and you certainly don’t need someone to hold your hand or walk you through the environment to understand how to find a new place.”

A map that moves as you learn

The navigation process is built around a novel navigation tool called the abstract map, which embraces the use of symbols in navigation cues for describing relationships between places: directions like down, past or behind, or visual symbols like an arrow or a pointing gesture.

There’s a big difference between ‘down the hall’ and ‘down the highway’, though, so the abstract map is founded on a malleable spatial model, which gives the robot the flexibility to change and reconfigure its understanding of these symbols as it interprets them in the environment.

“The malleable spatial model is based on a system of simulated spring dynamics, which can be stretched, pulled and adapted as the robot sees more information and builds its modelled understanding of the location,” explained Talbot.

“The spring is an analogy here: just like a spring can be pushed, pulled and moved around as you add more forces to the system, the new observations the robot makes as it explores impact the spatial model that it’s building.

“If we say something is ‘down the hallway’, the robot uses some default parameters to take an initial guess at its location, but there’s a lot of ambiguity around what ‘down’ could actually mean in relation to the goal location.

“The spring places the estimated location in a flexible way, so if the robot reaches the location and the goal isn’t there, the spring can be stretched in both distance and direction — this enables the robot to explore further and widen the scope of the navigation.”

Concrete instructions around distance and direction add tighter springs to constrain the guessed location, which gives them a higher priority than more abstract directions like ‘down’, ‘past’ or ‘up the road’.
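To make the spring analogy concrete, here is a minimal sketch of a spring-based spatial model: vague cues add soft springs with default rest lengths, concrete cues add stiff ones, places the robot has actually observed are pinned in place, and a simple damped relaxation settles the remaining estimates. The place names, stiffness values and distances are assumptions for illustration, not parameters from the actual abstract map.

```python
# A rough sketch of a "malleable spatial model" built from simulated spring
# dynamics, loosely following the description above. All values are
# illustrative assumptions.
import numpy as np

class SpringModel:
    def __init__(self):
        self.positions = {}   # place name -> 2D position estimate
        self.fixed = set()    # places the robot has actually observed
        self.springs = []     # (place_a, place_b, rest_length, stiffness)

    def add_place(self, name, guess=(0.0, 0.0)):
        self.positions[name] = np.array(guess, dtype=float)

    def add_cue(self, a, b, rest_length, stiffness):
        # Concrete cues ("2 m past the end") -> stiff springs; vague cues
        # ("down the hallway") -> soft springs with a default rest length.
        self.springs.append((a, b, rest_length, stiffness))

    def observe(self, name, position):
        # Pin a place once the robot has actually seen it.
        self.positions[name] = np.array(position, dtype=float)
        self.fixed.add(name)

    def relax(self, steps=2000, dt=0.01):
        # Damped relaxation: each spring pulls or pushes its endpoints
        # toward its rest length; observed (fixed) places do not move.
        for _ in range(steps):
            forces = {n: np.zeros(2) for n in self.positions}
            for a, b, rest, k in self.springs:
                d = self.positions[b] - self.positions[a]
                dist = np.linalg.norm(d) + 1e-9
                f = k * (dist - rest) * d / dist
                forces[a] += f
                forces[b] -= f
            for n in self.positions:
                if n not in self.fixed:
                    self.positions[n] += dt * forces[n]

m = SpringModel()
m.add_place("robot"); m.observe("robot", (0.0, 0.0))
m.add_place("hallway end", guess=(5.0, 0.0))
m.add_place("lion", guess=(6.0, 1.0))
m.add_cue("robot", "hallway end", rest_length=5.0, stiffness=0.5)  # vague: "down the hallway"
m.add_cue("hallway end", "lion", rest_length=2.0, stiffness=5.0)   # concrete: "2 m past the end"
m.relax()
print({name: pos.round(2) for name, pos in m.positions.items()})
```

If the robot later observes that the lion is not at the guessed spot, pinning the newly seen places and relaxing again stretches the soft springs, widening the search in both distance and direction, as described above.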

These spatial models could then be shared between robots, contributing to a flexible navigational knowledge base across systems.

The malleable spatial model in action. Video source: QUT Centre for Robotics

Navigation in the real world

The biggest challenge for robots reading navigation cues out in the real world is the context — understanding the cue in relation to the environment.

“Imagine if a robot goes past someone wearing a shirt that says ‘I love New York’: how does the robot know that it’s just an article of clothing and not a sign that it’s suddenly in New York City?” said Talbot.

“We had to be mindful of this in our experiment, and set it up in an abstract way and with limited peripheral cues so that humans couldn’t get extra contextual information about where the animals were.”

The research formed part of an Australian Research Council (ARC) grant to explore navigation in unseen built environments where GPS can’t support wayfinding.

“It’s impractical to map the world down to that granularity,” said Talbot.

“Humans can get where they need to go without too much fuss, so why can’t we make robots work in a similar way?”

More information

Explore more research at the QUT Centre for Robotics

Contact Dr Ben Talbot

Access the abstract map resources

Read the research paper in IEEE Xplore
