Robots get their sea legs

It’s a big deal when a child first walks. A similar breakthrough is taking place in robotics, where researchers are working on legged robots, especially those capable of walking on non-stationary rigid surfaces. These robots could aid in a wide range of critical tasks, from fighting a fire on a ship to inspecting, maintaining, disinfecting or conducting surveillance on a moving vehicle, freeing people from tedious, trivial and sometimes dangerous work.

Such a moving surface is commonly referred to as a dynamic rigid surface (DRS). Although a number of theoretical and commercial control approaches can achieve reliable legged locomotion on a stationary surface (e.g., pavement, a hallway, sand or gravel), these approaches may not handle DRSs well or prevent the robot from falling. This is because existing methods typically assume the ground or walking surface is static, which is not the case with, for example, ships, aircraft, public transportation vehicles, trains and elevators.

The robot dynamics associated with locomotion on a DRS are fundamentally different from those of locomotion on a static surface. We’re tackling this challenge at our Terrain Robotics Advanced Control and Experimentation (TRACE) Lab, with a primary research focus on legged robots, nonlinear control (for scenarios that don’t move in a straightforward manner from one step to the next), and hybrid systems (which must account for both continuous and discrete actions).

We want to model, estimate and control legged robot systems that navigate autonomously on non-stationary surfaces. It’s a huge mathematical and computational hurdle, requiring our group to draw on nonlinear control theory, theory of hybrid systems, dynamic modeling, and optimization.

Enabling such functionality demands reliable robot estimation and control. This is particularly difficult owing to the high complexity of associated robot behaviors, which are hybrid (involving continuous leg-swinging motions and discrete foot-landing events), and subject to dynamic rigid surface movements that vary over time.
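To make the hybrid nature of legged locomotion concrete, here is a deliberately tiny toy sketch, not the TRACE Lab’s actual model: a point mass follows continuous dynamics between touchdowns, and a discrete reset map fires at each foot-landing event. Every function name and number below is an assumption chosen for illustration.

```python
# Toy hybrid-system sketch (illustrative only): continuous free-fall/rise
# dynamics punctuated by discrete foot-landing events that reset the state.

def simulate_hops(h0=1.0, restitution=0.8, n_events=3, dt=1e-3, g=9.81):
    """Integrate the continuous phase until the height crosses zero,
    then apply a discrete impact map that reverses and damps velocity.
    Returns the apex height reached after each landing event."""
    h, v = h0, 0.0
    apex_heights = []
    for _ in range(n_events):
        # Continuous phase: forward-Euler integration of the motion.
        while True:
            v -= g * dt
            h += v * dt
            if h <= 0.0 and v < 0.0:
                break
        # Discrete event: landing reset map (instantaneous velocity jump).
        h = 0.0
        v = -restitution * v
        # Apex height after the event, from energy conservation: v^2 / (2g).
        apex_heights.append(v * v / (2.0 * g))
    return apex_heights

print(simulate_hops())
```

The interplay shown here, where a smooth differential equation hands off to an instantaneous jump condition, is the defining feature of hybrid systems; analyzing stability across those jumps is what makes legged-robot control mathematically hard.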

State estimation is especially crucial for planning and control. The challenge is to combine onboard sensors with mathematical and computational tools to reliably estimate the robot’s “state” — a mix of position, rate of movement, orientation, foot contact points, and other characteristics.
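As a minimal illustration of the idea of state estimation, the sketch below fuses a motion-model prediction with noisy position measurements using a 1-D Kalman filter. Real legged-robot estimators fuse IMUs, joint encoders and foot-contact events in far higher dimensions; every parameter and number here is a made-up assumption.

```python
# Minimal 1-D Kalman filter sketch (illustrative only): blend dead-reckoned
# position (from an assumed constant velocity) with noisy position readings.

def kalman_1d(z_measurements, velocity=1.0, dt=0.1,
              q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Return filtered position estimates for a constant-velocity model.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in z_measurements:
        # Predict: propagate the state with the motion model.
        x = x + velocity * dt
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of a robot moving at 1 m/s, sampled every 0.1 s.
zs = [0.15, 0.18, 0.35, 0.37, 0.55, 0.58, 0.72, 0.81, 0.92, 1.02]
print(kalman_1d(zs))
```

The filtered trajectory tracks the true motion more smoothly than the raw measurements do, which is the essential benefit a robot’s planner and controller get from a good estimator.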

Our project has been funded through the National Science Foundation (NSF) Faculty Early Career Development (CAREER) program.

We are pursuing four main objectives to reach this research target.

So far, we have achieved success in enabling stable quadrupedal walking on a rocking treadmill that emulates a boat floating on sea waves. We’re using three robots in our research: a Digit bipedal humanoid robot developed by Agility Robotics; a Vision 60 quadrupedal ground robot from Ghost Robotics; and a Laikago four-legged robot developed by Unitree Robotics.

Image tiles of different walking phases during stable quadrupedal walking on a rocking treadmill enabled by the control methods developed at Purdue’s TRACE Lab. (Purdue University images/TRACE Lab)

By collaborating with a group of engineering and computer science faculty members at the University of Massachusetts Lowell led by Professor Holly Yanco, we also recently have begun to extend our state estimation and control algorithms for enabling robots to simultaneously perform walking and manipulation tasks on moving vehicles. This effort has been funded by the Office of Naval Research (ONR).

Our lab is working with Purdue’s Institute for Control, Optimization and Networks (ICON) to leverage the institute’s multidisciplinary expertise in complex, connected and autonomous systems. ICON continually provides well-designed opportunities for affiliated faculty members to explore research collaboration and connect with funding agencies.

For instance, ICON’s New Faculty Orientation Workshop and weekly seminars have allowed new College of Engineering faculty members like me to present our research and exchange ideas with a wide audience across and outside the college. I also attended the ICON Summit on Trusted Autonomy with the U.S. Department of Defense, leading to numerous thought-provoking research brainstorming sessions with ICON and DOD colleagues.

Robotics is a rich field, ripe for investigations down many avenues.

In addition to developing the usefulness of legged robots, we’re focused on uncovering the fundamental principles of the physical interaction between a human and an assistive device. The goal is to advance the design and control of adaptive robotic exoskeletons in order to enhance human locomotion and human-intent inference.

In another project, we’re combining 5G technology with legged locomotion for fast teleoperation during time-critical missions, such as search and rescue and other emergency responses. Through a 5G network, the human operator can access reliable, high-resolution, real-time video streaming to make feasible, timely decisions on which the robot can act.

It’s time for robots to “step up” to a whole new range of functionalities and capabilities.

Verizon’s 5G Lab in Cambridge, Massachusetts, interviewed Yan Gu in 2019 after her research group won Verizon’s 5G Robotics Challenge by combining Verizon’s 5G technology and legged robotics methods to enable robot teleoperation. (Verizon photo/5G Lab, Cambridge)

Yan Gu, PhD

Director, Terrain Robotics Advanced Control and Experimentation (TRACE) Lab

Associate Professor

School of Mechanical Engineering

Faculty Contributor, Institute for Control, Optimization and Networks (ICON)

College of Engineering

Purdue University


