Introduction to Radar: Part 3

How to Estimate Angle with a MIMO Radar

Isaac Berrios
7 min read · Jun 17, 2024

In this post, we will learn how to process raw Radar data to estimate Angle of Arrival (AoA). We will use a sample from the RaDICaL dataset, which can be downloaded from here. If you would like to follow along, the Jupyter notebook is located on GitHub. This post is the final part of an introductory series on Radar.


Range-Doppler Recap

In parts 1 and 2, we learned the basics of Range-Doppler estimation. It really just amounts to FFT operations over different dimensions of the Radar Data Cube; here's an example (see part 2 or the notebook for how to load the data).

import numpy as np

# perform Fast Time Range FFT over the ADC samples, then reorder the cube
# to (range bins, antennas, chirps)
range_cube = np.fft.fft(adc_data, axis=2).transpose(2, 1, 0)

# perform Slow Time Doppler FFT over the chirps, shifting zero Doppler to the center
range_doppler = np.fft.fftshift(np.fft.fft(range_cube, axis=2), axes=2)

Now let’s learn a bit about Angle of Arrival Estimation.

Angle of Arrival Estimation

In the scenario below, a Radar uses a Transmitting (Tx) antenna to transmit a signal. The signal reflects off a distant object, and the reflection is incident on two Receiving (Rx) antennas. Using lines to model the transmitted and reflected signals is known as Ray Tracing, which is a useful way to simplify how we model the signals.

Figure 1. Top: Reflection from a distant object incident on two Rx antennas. Bottom: detailed reflection incident on both antennas. Source.

Basic geometry tells us that the path distance from the object to each antenna will differ by $\Delta d = d\sin(\theta)$. This difference in path length amounts to a phase shift in the signal received at each antenna, and exploiting this phase shift is the key to determining the Angle of Arrival. The phase shift is also called the phase delay or phase difference and is defined as:

$$\Delta\phi = \frac{2\pi \Delta d}{\lambda} = \frac{2\pi d \sin\theta}{\lambda}$$

Where d is the antenna spacing and λ is the wavelength. We can express this as a complex phasor:

$$e^{j\Delta\phi} = e^{j 2\pi d \sin\theta / \lambda}$$
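As a quick worked example (not in the original post): with half-wavelength spacing $d = \lambda/2$ and a signal arriving from $\theta = 30°$,

$$\Delta\phi = \frac{2\pi}{\lambda}\cdot\frac{\lambda}{2}\cdot\sin 30° = \frac{\pi}{2}$$

so adjacent antennas receive the same signal a quarter cycle apart.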

We can actually extend this to multiple receive antennas in a Uniform Linear Array (ULA), which is the same type of array used to collect our Radar data.

Figure 2. Phase Difference at multiple receive elements. Source.

Notice how the phase difference scales with the receive element position, and notice that we only have one axis to deal with. We can estimate the angle with another FFT across the receive antenna dimension of the Radar Cube; this allows us to exploit the phase differences across antennas separated in space. The resulting FFT is sometimes called the Spatial Spectrum. We will see below that the FFT method doesn't work very well, but more sophisticated methods known as Beamforming will allow us to make better estimates of the Spatial Spectrum. These algorithms scan a virtual beam across multiple locations and compute a response; if the value of this response is high enough, we can declare a detection. The beam scanning is accomplished with something called the Steering Vector.
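To make the idea concrete, here is a small synthetic sanity check (not from the notebook): one simulated signal arriving from 20° on an 8-element half-wavelength ULA, recovered with an FFT across the antenna axis.

import numpy as np

# one snapshot of an 8-element ULA with a signal from 20 degrees
num_ant = 8
true_theta = np.deg2rad(20)
snapshot = np.exp(1j * np.pi * np.arange(num_ant) * np.sin(true_theta))

n_fft = 181  # zero-padded FFT for a finer angle grid
spectrum = np.fft.fftshift(np.fft.fft(snapshot, n=n_fft))

# for half-wavelength spacing, FFT bin frequency f maps to sin(theta) = 2f
sin_theta = 2 * np.fft.fftshift(np.fft.fftfreq(n_fft))
est_theta = np.degrees(np.arcsin(sin_theta[np.argmax(np.abs(spectrum))]))
print(f"estimated angle: {est_theta:.1f} deg")  # ~20.0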

Steering Vector

The steering vector describes the direction of a given signal or beam relative to the array; whether the signal is transmitted or received, the steering vector is the same. More formally, the steering vector gives us a complex phasor representation of the incident signal at each element, and it is built from the two vectors below:

$$a_m(\theta) = e^{j\,\mathbf{k}\cdot\mathbf{p}_m}$$

The element position vector $\mathbf{p}_m = [m d,\ 0]^T$ describes the position of each receive element, and the wavevector $\mathbf{k} = \frac{2\pi}{\lambda}[\sin\theta,\ \cos\theta]^T$ describes the direction of the incident signal at a given frequency.

The term wavevector can be confusing; for simplicity, you can think of it as a vector that describes the direction of a signal at a certain frequency.

Here's the Steering Vector derivation for our Radar data:

$$a_m(\theta) = e^{j\,\mathbf{k}\cdot\mathbf{p}_m} = e^{j \frac{2\pi}{\lambda} m d \sin\theta} = e^{j\pi m \sin\theta}, \quad m = 0, 1, \ldots, M-1$$

This derivation assumes a half-wavelength element spacing ($d = \lambda/2$), which is what our Radar actually uses. The resulting equation provides a complex phasor for an incident signal at angle θ for each antenna element m, which lets us conveniently describe the phase difference at each element with the Steering Vector.
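The notebook provides a compute_steering_vector helper that we will use later; a minimal sketch under the half-wavelength assumption might look like the following (the notebook's actual implementation may differ).

import numpy as np

def compute_steering_vector(num_ant=8, angle_res=1.0, angle_rng=90):
    """Returns steering vectors of shape (num_angles, num_ant)
    for angles from -angle_rng to +angle_rng degrees."""
    angles = np.deg2rad(np.arange(-angle_rng, angle_rng + angle_res, angle_res))
    m = np.arange(num_ant)
    # a_m(theta) = exp(j * pi * m * sin(theta)) for half-wavelength spacing
    return np.exp(1j * np.pi * m[None, :] * np.sin(angles)[:, None])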

MIMO Basics (Optional)

The Radar helps us perceive the 3D world, but so far we only have one spatial axis to work with. In this case the Radar data only allows us to resolve objects in Azimuth, because we are using a 1D ULA. We would need to stack ULAs on top of each other in order to resolve objects in Elevation and get true 3D data. We can do this efficiently with MIMO. MIMO stands for Multiple Input, Multiple Output and is commonly used to increase the number of data channels. In the case of Radar, it allows us to increase the number of receive antennas via virtual antennas: the concept of operations uses multiple transmitters to produce multiple receive paths, which are deemed virtual antennas.

Figure 3. MIMO concept. Source.

Notice the phase sequence at each of the receive antennas: it produces a total of 8 virtual antennas. The array can be designed in a manner that produces unique integer offsets as seen above (a small sketch of this offset arithmetic follows), and the Radar data we are using is very similar, also having 8 virtual antennas. We can also design the array such that we stack ULAs to estimate Elevation angles.
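Here is a hypothetical 2-Tx, 4-Rx layout (an illustration of the idea, not the exact geometry in Figure 3) showing how each Tx/Rx pair acts like a receive element at the sum of the two positions.

import numpy as np

# positions in units of half-wavelength spacing
tx_pos = np.array([0, 4])         # Tx elements spaced 4 units apart
rx_pos = np.array([0, 1, 2, 3])   # Rx elements spaced 1 unit apart

# each (Tx, Rx) pair behaves like a receive element at position tx + rx
virtual_pos = (tx_pos[:, None] + rx_pos[None, :]).ravel()
print(virtual_pos)  # [0 1 2 3 4 5 6 7] -> an 8-element virtual ULA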

Figure 4. 2D MIMO configuration. Source.

For more information see this.

Implementing Angle Estimation

FFT Method

The first method we will try is the FFT method: we zero-pad the antenna dimension of the Range-Doppler Cube and take the FFT across it.

import matplotlib.pyplot as plt

# zero pad along the antenna axis: 86 + 8 + 87 = 181 azimuth bins
range_doppler_padded = np.pad(
    range_doppler, pad_width=[(0, 0), (86, 87), (0, 0)], mode='constant')

# perform FFT over azimuth
az_fft = np.fft.fftshift(np.fft.fft(range_doppler_padded, axis=1), axes=1)

# compute the Power Spectrum and average over the Doppler axis
az_power = np.abs(az_fft)**2
beamformed_img = np.flipud(np.mean(az_power, axis=2))

# display
plt.figure(figsize=(5, 10))
plt.imshow(np.log(beamformed_img))
plt.show()

Figure 5. Range Azimuth Spectrum from FFT method. Source: Author.

There's not much to look at here: we can see some energy returns around 9 meters and a slight return around 4.22 meters, both of which correspond to the objects detected in part 2. There isn't really any notion of angle separation displayed here; the next section introduces a new method that will allow us to separate the objects in angle.

Note: This view is essentially a B-Scope display

Capon Beamformer

This section introduces the Capon Beamformer (or Capon Spectrum), which greatly improves angular resolution at the cost of increased computation. It uses the Steering Vector to sweep a virtual beam across each range bin vector. It then processes the complex sample correlation matrix of the range bin vector (along with the steering vector) to obtain an estimate of the Spatial Spectrum at each range bin. Here's the code; see the notebook for details.

# steering vectors for 8 virtual antennas over ±90° at 1° resolution
steering_vector = compute_steering_vector(num_ant=8, angle_res=1.0, angle_rng=90)

n_range_bins = range_doppler.shape[0]
n_angles = steering_vector.shape[0]

range_azimuth = np.zeros((n_range_bins, n_angles), dtype=np.complex64)

# sweep the Capon beamformer across every range bin
for i in range(n_range_bins):
    range_azimuth[i, :] = aoa_capon(range_doppler[i, ...], steering_vector)

# orient the image so range increases upward and azimuth reads left to right
range_azimuth = np.flipud(np.fliplr(range_azimuth))

Figure 6. Capon Spectrum. Source: Author.
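The aoa_capon helper comes from the notebook. For intuition, here is a minimal sketch of a Capon (MVDR) spectrum for a single range bin, assuming the snapshot matrix is (antennas × chirps) and adding diagonal loading for numerical stability; the notebook's implementation may differ.

import numpy as np

def aoa_capon(x, steering_vector, diag_load=1e-3):
    """Capon/MVDR spatial spectrum for one range bin (sketch).
    x: (num_ant, num_chirps) snapshots; steering_vector: (num_angles, num_ant)."""
    num_ant, num_chirps = x.shape
    # sample correlation matrix across chirps
    R = x @ x.conj().T / num_chirps
    # diagonal loading keeps the inverse well conditioned
    R += diag_load * np.trace(R).real / num_ant * np.eye(num_ant)
    R_inv = np.linalg.inv(R)
    # Capon power: P(theta) = 1 / (a^H R^-1 a)
    denom = np.einsum('am,mn,an->a', steering_vector.conj(), R_inv, steering_vector)
    return 1.0 / denom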

Now we can distinguish the two people that we previously detected in part 2. We can also see strong returns from the metal doors around 9.8 meters and various other returns throughout the spectrum. Also notice how things tend to blur at the edges; this is because the angular accuracy is highest at 0° (head-on) and decreases toward the edges. See the notebook for code on how to make a video!

Figure 7. Range Azimuth Spectrum for each Frame. Source: Author.

To build on this, we can use this spectrum to generate a 2D point cloud of the environment by determining which areas have a strong enough return (e.g. with Peak Finding); furthermore, we can use the Doppler information to determine the speed of each point. A rough sketch of this idea follows.
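This sketch is not from the notebook: it thresholds the Capon spectrum with a crude percentile cutoff and converts the surviving bins to Cartesian points. The range resolution value is a placeholder (set it from your radar configuration), and the display flips applied above should be undone before mapping indices.

range_res = 0.05                              # meters per range bin (assumed)
angles = np.deg2rad(np.arange(-90, 91, 1.0))  # matches the steering vector grid

power = np.abs(range_azimuth) ** 2
mask = power > np.percentile(power, 99.5)     # crude detection threshold
r_idx, a_idx = np.nonzero(mask)

# polar (range, azimuth) -> Cartesian (x, y)
r = r_idx * range_res
x = r * np.sin(angles[a_idx])
y = r * np.cos(angles[a_idx])
points = np.stack([x, y], axis=1)             # one (x, y) point per detection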

Conclusion

In this post, we learned the basics of Angle of Arrival Estimation for Automotive Radars, but these concepts apply to many other situations. We have seen that the FFT method for estimating the angle of arrival is simple yet unreliable for practical applications. To achieve good angular resolution, we used the Capon Beamformer, which provides impressive results compared to the FFT method. The Range, Doppler, and Azimuth processing that we have performed in this series comprises the basic steps prior to actual Radar detection. We can visualize the resulting range azimuth plot in Figure 7 and determine for ourselves which points correspond to actual targets. We could even perform 2D peak finding to automate this, but even that would be a rudimentary approach. In a new series, we will cover how the Radar accurately and reliably detects objects.

