‘Inertial Navigation System’ — an alternative to GPS-based tracking?

Venkatesh Bharadwaj Srinivasan
Published in Spider R&D · Jul 16, 2020

Consider, for example, cars, autonomous vehicles, smartwatches, mobile phones, and similar devices. One utility common to all of them is the ‘Global Positioning System (GPS)’, which can be used to track people and objects.

Introducing GPS:

The GPS relies on signals from a constellation of 32 satellites to compute the position of a person or object in the Earth frame. Each of the 32 satellites carries an atomic clock, synchronized with the others, and continuously broadcasts radio signals containing the transmission timestamp and the satellite’s position. Using the transmission timestamp and the time of reception, it is possible to compute the distance between the sender and the receiver, i.e. between a satellite and a GPS-compatible device.
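
As a rough illustration of the idea (not actual receiver code, and with made-up timestamps), the distance to a single satellite follows from the signal’s travel time multiplied by the speed of light:

```python
# Illustrative sketch only: distance to one satellite from its timestamps.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pseudorange(t_transmit: float, t_receive: float) -> float:
    """Distance (in metres) implied by the signal's travel time.

    A real receiver must also solve for its own clock offset relative to
    the satellites' atomic clocks, which is one reason extra satellites
    are needed for a fix.
    """
    return SPEED_OF_LIGHT * (t_receive - t_transmit)

# A signal that travelled for ~70 ms corresponds to roughly 21,000 km.
print(pseudorange(0.000, 0.070))
```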

GPS — an intuitive understanding:

Trilateration on a plane. Courtesy: Fleetminder

The algorithm that is used to perform this operation is called ‘Trilateration’, which makes use of the known signal speed and the measured travel time of the signal from each satellite to the receiver. In order to have a good estimate of the location, at least four satellites must be in direct view of the receiver; having more in view (typically 7–8) improves the estimate. This concept is often confused with ‘triangulation’, which instead makes use of angles relative to reference points to compute the position.

The concept that is used in GPS, trilateration, determines the position either from 3 known distances on a plane, or from 4 known distances in three dimensions (intersecting spheres).
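
To make trilateration on a plane concrete, here is a minimal numpy sketch (the anchor positions and distances are made-up values): subtracting one distance equation from the other two turns the problem into a small linear system.

```python
import numpy as np

def trilaterate_2d(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """Position on a plane from 3 known anchor points and measured distances.

    anchors:   (3, 2) array of known (x, y) positions
    distances: (3,) array of measured distances to each anchor
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtracting the first circle equation from the other two gives a
    # linear system A @ [x, y] = b.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: a receiver at (3, 4) relative to three known anchors.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate_2d(anchors, dists))  # ≈ [3. 4.]
```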

To compute the location accurately, the GPS receiver needs a clear and direct view of the sky, known technically as the Line of Sight. If the satellites do not have a clear view of the receiver, the GPS signals may not be detected at all, resulting in an erroneous estimate of the receiver’s location.

The accuracy of GPS depends on various factors such as the Signal-to-Noise Ratio (SNR), satellite geometry, and obstructions like buildings, trees, mountains, tunnels, etc., which cause multi-path reflections.

An alternative technique for tracking:

To track people indoors, or in places where there is no proper Line of Sight for GPS signals, we will explore the concept of ‘Indoor Positioning’. It is a constantly evolving field that can be pursued in different ways. One Indoor Positioning technique that is cost-effective and simple in its design is the ‘Inertial Navigation System’ (which can be used indoors or outdoors). We will delve deeper into the intricacies of this concept, along with a video demonstration of the foot-mounted Inertial Navigation System built by a group of students in Spider R&D back in 2017.

Moreover, an Inertial Navigation System doesn’t require any knowledge of the outside world, which makes it even more accessible.

While exploring various gadgets that move, right from your smartphone to autonomous cars, you will observe that they make use of an Inertial Measurement Unit (IMU), which helps the device keep track of its location and its movements. To implement tracking using Inertial Navigation, we opted for the MPU-9150, a 9-axis Inertial Measurement Unit (‘9-axis’ refers to its three sensors, each measuring along three axes, often described as 9 Degrees of Freedom). The three sensors are an accelerometer, a gyroscope, and a magnetometer.

It is important to note that the values obtained from the sensors in the IMU are in the sensor/body frame.

Components of an IMU — a brief description

The accelerometer measures the acceleration (specific force, including gravity) it experiences along the X, Y, and Z directions. The gyroscope measures the angular velocity about the same three axes. The magnetometer helps in estimating the orientation using the Earth’s magnetic field; it works much like a magnetic compass.

The sensor values must be transformed from the sensor/body frame to the Earth frame for computing the orientation & position.

We need the orientation of a person’s movement (theta) and the distance moved per step (r) to compute the trajectory using r and theta, via the basic parametric equations of coordinate geometry.
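
As a minimal sketch of this idea (not the exact code from our implementation), the trajectory can be accumulated step by step with x += r*cos(theta) and y += r*sin(theta); the stride lengths and headings below are made-up values:

```python
import numpy as np

def dead_reckon(strides, headings, start=(0.0, 0.0)):
    """Accumulate a 2-D trajectory from per-step stride lengths (r) and
    headings (theta, in radians, expressed in the Earth frame).

    Returns an (N + 1, 2) array of positions, starting at `start`.
    """
    positions = [np.asarray(start, dtype=float)]
    for r, theta in zip(strides, headings):
        step = np.array([r * np.cos(theta), r * np.sin(theta)])
        positions.append(positions[-1] + step)
    return np.vstack(positions)

# Example: four 0.7 m strides while gradually turning to the left.
path = dead_reckon([0.7] * 4, np.deg2rad([0, 15, 30, 45]))
print(path.round(2))
```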

Orientation

Courtesy: Apple

Generally, the Euler angles, i.e. the yaw, pitch, and roll angles, are computed to estimate the orientation of an object moving in three-dimensional space. Pitch, roll, and yaw are defined as the rotations around the X (lateral), Y (longitudinal), and Z (perpendicular) axes respectively.

By integrating the angular velocity over time, we can estimate the changes in yaw, pitch, and roll. However, because these angles are obtained by integration, they are susceptible to ‘gyro drift bias’, which keeps accumulating over time, particularly about the perpendicular axis, i.e. yaw. The magnetometer values are therefore used as an absolute reference to correct the integrated gyroscope angles.

The values from the three sensors are fused together to compute the direction and position of an object, in a process called Sensor Fusion.
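
Sensor fusion can be implemented in many ways, from Kalman and Madgwick filters down to a simple complementary filter. The toy single-angle complementary filter below only illustrates the idea of blending the integrated gyroscope rate with a drift-free reference angle (the magnetometer heading for yaw, or the accelerometer tilt for pitch and roll); the blending factor and sample values are assumptions, not values from our implementation.

```python
def complementary_filter(angle_prev, gyro_rate, ref_angle, dt, alpha=0.98):
    """One update of a toy complementary filter for a single angle.

    angle_prev : previous fused estimate (degrees)
    gyro_rate  : angular velocity from the gyroscope (degrees per second)
    ref_angle  : drift-free reference angle, e.g. magnetometer heading (degrees)
    dt         : time step (seconds)
    alpha      : trust placed in the gyro; (1 - alpha) pulls the estimate
                 back toward the reference, limiting gyro drift
    """
    gyro_angle = angle_prev + gyro_rate * dt          # short term: integrate the gyro
    return alpha * gyro_angle + (1 - alpha) * ref_angle

# Toy run: a gyro with a constant 0.5 deg/s bias reads rotation even though
# the true heading stays at 10 degrees. Pure integration drifts without bound,
# while the fused estimate settles close to the magnetometer reference.
fused, gyro_only = 10.0, 10.0
for _ in range(6000):                                 # 60 s at 100 Hz (assumed)
    fused = complementary_filter(fused, gyro_rate=0.5, ref_angle=10.0, dt=0.01)
    gyro_only += 0.5 * 0.01
print(f"gyro only: {gyro_only:.1f} deg, fused: {fused:.1f} deg")  # ~40.0 vs ~10.2
```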

A pictorial depiction of an algorithm for estimating the orientation (attitude). Courtesy: http://philstech.blogspot.com/2016/02/quaternion-imu-drift-compensation.html. Check the link to learn more about the terminology mentioned in the image.

Euler angles are very easy to compute and analyze, but they suffer from a phenomenon named ‘gimbal lock’, which prevents them from representing the orientation unambiguously when the pitch angle approaches +/- 90 degrees. Hence, we adopted ‘Quaternions’: four-element vectors that can be used to encode the orientation of a body in a 3D coordinate system. The intuition behind quaternions is beyond the scope of this article.

To summarize, complex numbers extend the real numbers into two dimensions; quaternions extend them further into four dimensions.

All the computations must be performed either entirely in terms of Euler angles or entirely in quaternion space. Performing the computations in quaternion space helps us avoid gimbal lock and costly trigonometric calculations, since a valid orientation is available at all times.
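
To illustrate how a quaternion is used in practice, the sketch below rotates a body-frame vector (for example an accelerometer reading) into the Earth frame using v_earth = q * v * q_conjugate, where q = (w, x, y, z) is a unit quaternion. This is a standalone numpy sketch, not the exact routine from our implementation.

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_to_earth_frame(v_body, q):
    """Rotate a body-frame vector into the Earth frame: q * (0, v) * q_conj."""
    q = np.asarray(q, dtype=float)
    q /= np.linalg.norm(q)                      # ensure a unit quaternion
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v_quat = np.concatenate(([0.0], v_body))    # pure quaternion (0, v)
    return quat_multiply(quat_multiply(q, v_quat), q_conj)[1:]

# Example: a 90-degree rotation about Z maps the body X axis onto Earth Y.
q_90_about_z = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
print(rotate_to_earth_frame(np.array([1.0, 0.0, 0.0]), q_90_about_z).round(3))
# -> [0. 1. 0.]
```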

Position

Having discussed the orientation estimation, let’s discuss estimating the position in detail, once the sensor values have been transformed into the Earth frame. Before the recording of sensor data begins, the initial reference point is known. The position is then computed at each footstep, with the orientation obtained via the quaternion computation discussed earlier. Thus, we get an estimate of the trajectory of movement relative to the previously known position. This technique is known as Pedestrian Dead Reckoning.

Pedestrian Dead Reckoning with foot-mounted inertial sensors (mind-map). Courtesy: Inertial Elements. In our implementation, we placed the sensor on only one foot and computed the stride length accordingly; the image shows how the sensor is placed on the foot and gives a gist of the entire process.

As per the basic laws of motion, double integrating the acceleration values with respect to time yields the displacement (position).

With basic knowledge of integral calculus, you can see that every integration introduces a constant of integration, so the position picks up two extra bias terms (c1 + c2*t); on top of that, any constant accelerometer bias grows quadratically with time once integrated twice. The bias/error accumulates over time and becomes very large. This is true of Inertial Navigation for both pedestrians and autonomous vehicles.
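
To see how quickly this blows up, the sketch below double-integrates a perfectly stationary accelerometer signal that carries only a tiny constant bias (the sampling rate and bias value are assumed for illustration); the computed position drifts by many metres within a minute even though nothing has moved.

```python
import numpy as np

dt = 0.01                        # 100 Hz sampling (assumed)
t = np.arange(0, 60, dt)         # one minute of data
bias = 0.01                      # 0.01 m/s^2 constant accelerometer bias
accel = np.zeros_like(t) + bias  # the sensor is actually stationary

velocity = np.cumsum(accel) * dt       # first integration
position = np.cumsum(velocity) * dt    # second integration

print(f"Drift after 60 s: {position[-1]:.1f} m")  # ~0.5 * bias * t^2, about 18 m
```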

This error has to be minimized in order to track with better accuracy.

For a foot-mounted Inertial Navigation System,

To counter the error that accumulates over time, we need to realize that during the interval in which the foot stays flat on the ground, both the instantaneous velocity and the angular rate are equal to 0. This time period is called the zero-velocity interval. We use the Zero Velocity Update algorithm to exploit this fact.

Steps involved in the Zero Velocity Update Algorithm:

  1. We detect this zero-velocity interval at every footstep by

i) smoothing the acceleration values along three axes,

ii) removing the 0 Hz (DC) component so that there is no offset/bias, and

iii) comparing the resultant acceleration (the square root of the sum of the squared accelerations along the x, y, and z axes, also called the L2 or Euclidean norm) with an empirically (experimentally) set threshold.

2. When the resultant acceleration is less than the threshold, the person is considered stationary and the velocity is set to 0. For non-stationary points, the velocity is computed by single integrating the acceleration values, which brings along an integration constant. We do this for all sample points.

3. We then finally remove the drift (integration bias) from the velocity: we find the velocity difference between the start and the end of a non-stationary period, divide it by the number of sample points in that period, and multiply by the corresponding data index to get the drift value at each point. We subtract this value from the previously calculated velocity. Thus, the position and velocity errors diverge very slowly, which minimizes the drift and bias errors (see the sketch after the drift formula below).

Drift formula: drift(i) = i * (v_end - v_start) / N, where i = sample index and N = number of samples in the non-stationary period
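
Putting the three steps and the drift formula together, here is a hedged single-axis numpy sketch of the Zero Velocity Update (the sampling rate, threshold, and toy acceleration profile are assumptions; a real implementation works on the full 3-axis, Earth-frame resultant acceleration):

```python
import numpy as np

def zupt_velocity(accel, dt, threshold):
    """Single-axis sketch of the Zero Velocity Update algorithm.

    accel     : smoothed, offset-removed resultant acceleration samples
    dt        : sampling interval (seconds)
    threshold : empirically set stationarity threshold
    """
    stationary = np.abs(accel) < threshold    # step 1: zero-velocity detection
    velocity = np.cumsum(accel) * dt          # step 2: naive single integration
    velocity[stationary] = 0.0                #         forced to 0 when the foot is down

    # step 3: remove the linear drift inside every non-stationary stretch
    moving = (~stationary).astype(int)
    starts = np.flatnonzero(np.diff(np.concatenate(([0], moving))) == 1)
    ends = np.flatnonzero(np.diff(np.concatenate((moving, [0]))) == -1)
    for s, e in zip(starts, ends):
        n = e - s + 1
        drift = (velocity[e] - velocity[s]) * np.arange(1, n + 1) / n  # i * dv / N
        velocity[s:e + 1] -= drift
    return velocity

# Toy stride: foot at rest, then accelerating (+1 m/s^2) and decelerating
# (-1 m/s^2), then at rest again. Sampling rate and threshold are assumed.
accel = np.concatenate([np.zeros(50), np.ones(50), -np.ones(50), np.zeros(50)])
vel = zupt_velocity(accel, dt=0.01, threshold=0.05)
pos = np.cumsum(vel) * 0.01
print(f"final velocity = {vel[-1]:.3f} m/s, stride length = {pos[-1]:.2f} m")
```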

The step/stride length varies from person to person. Hence, implementing the algorithm with a fixed step distance is likely to perform poorly.

The zero velocity update can be applied only in packets, where each packet contains the sample points of one non-stationary period. Hence, there may be a slight lag in obtaining the track in real time.

For an autonomous Inertial Navigation System,

The concepts of Inertial Navigation, coupled with perception sensing via cameras, LIDAR, RADAR, etc., and with GPS as an external reference, can be very effective in accurately estimating the position and orientation of self-driving cars.

Several companies such as Analog Devices, Ford, NVIDIA, Cruise, etc. are pursuing research on combining the salient features of Inertial Navigation and perception-based navigation with the GPS tracking system.

Concluding notes:

That being said, both Inertial and Perception-based Navigation have their fair share of pros and cons. They can be combined with GPS signals to track people and objects accurately. Inertial Navigation is a fairly simple concept that can be implemented at very low cost using just an IMU and a microcontroller, by superimposing the estimated trajectory over a known map with a known reference starting point.

You can find our implementation video and our publication on Inertial Navigation in the References section for further technical details.

References:

  1. Our implementation videos of Inertial Navigation System — JARVIS
  2. Our publication — JARVIS (50 cm accuracy for Inertial tracking)
  3. OpenShoe — Foot-mounted INS for Every Foot
  4. OpenSource IMU Algorithms — x-io technologies
  5. Opensource GitHub code for plotting position and orientation estimates — x-io technologies
  6. Human activity recognition dataset containing accelerometer & gyroscope data — UCI ML repository. You could classify the IMU signals to predict the type of activity recorded, i.e. walking, running, sitting, standing, etc.
  7. Wi-Fi based tracking — a publication by my juniors in NIT Trichy which deals with the same concept
  8. Demystifying Quaternions — an article by Veejay Karthik, Spider R&D
  9. Beginner’s guide to IMU — an article by Robotics club, IIT Kanpur

Editorial note

My friends (PRAKASH SABESAN, Ashish Kumar Kola, Shrikrishna, Abhi Nabera, Rajiv Vaidyanathan, Avinash) and I worked on a personalized monitoring system for rescue missions, where the foot-mounted Inertial Navigation System was one of the modules. We won a few regional and national events with this project under the team name ‘Tinkerers’, representing Spider R&D. Special thanks to Hariharan Natesh for proofreading the technicalities in this article. Thanks to Sundara Paripooranan and Rakesh Vaideeswaran as well for additional proofreading.

This article is published as a part of the ‘Hardware Series’ under Spider Research and Development Club, NIT Trichy on a Tronix Thursday!
