Building a Radar Simulator: A Practical Guide to Doppler Motion and 2D CFAR Detection
A Step-by-Step Tutorial to Learn How Radar Sees the World
When people think of sensors in autonomous systems such as self-driving cars, drones, and robots, they often picture flashy LiDAR units or smart cameras. But behind the scenes, radar is the silent powerhouse enabling some of the most critical perception functions.
In nearly a decade as an engineer in the AI and robotics space, I’ve seen radar outperform other sensors for long-range (200m+) detection and in challenging conditions: darkness, fog, heavy rain, or GPS-denied environments. That’s why radar plays a vital role in everything from autonomous vehicles to weather forecasting.
In this post, we’ll explore the fundamentals of radar — why it matters, where it shines, and how it’s used in real-world systems like Bosch’s automotive radar. Then, we’ll build a simple radar simulator in MATLAB to understand how radar detects motion using Doppler shifts and how targets are identified using 2D CFAR detection.
Whether you’re a student, engineer, or hobbyist, this practical guide will walk you through the fundamentals step by step. You can also find the full code on my GitHub, linked in the Further Reading & Resources section at the end.
Why Radar?
First, let’s look at some of the key advantages radar offers over other sensors:
- Direct Velocity Measurement: Unlike cameras and LiDAR, which estimate speed by differentiating position over time, radar directly measures radial velocity via the Doppler shift. This is critical for distinguishing static objects (like walls) from moving ones (like pedestrians or vehicles), improving both safety and decision-making.
- Weatherproof & Day/Night Ready: Radar doesn’t care if it’s raining, snowing, foggy, or pitch dark. The electromagnetic waves radar uses can penetrate atmospheric interference and even dust, making it far more dependable than vision systems like cameras, and even LiDAR, in adverse conditions, especially for detecting objects at very long range (200m+).
- Ideal for Safety-Critical Systems: Radar powers key features like Adaptive Cruise Control (ACC). In modern cars, it autonomously adjusts speed to maintain a safe following distance and can even apply emergency braking to prevent collisions.
Like all sensors, radar has its limitations. It can struggle to detect fine object shapes and often falls short in classifying closely spaced or overlapping objects in complex environments. That’s where sensor fusion becomes essential.
Sensor fusion combines data from multiple sensors — such as cameras, radar, LiDAR, and inertial units — to build a richer, more reliable understanding of the environment. Each sensor brings its strengths:
Cameras excel at classification and lane detection. LiDAR provides accurate 3D positioning. Radar delivers robust velocity and distance measurements. Together, they form a multi-modal perception stack that is far more accurate, resilient, and reliable than any single sensor on its own.
How Does Radar Work?
A typical radar system includes a wave generator, an antenna for transmitting and receiving signals, and a receiver with processing logic to extract object properties. From this, radar can compute radial distance, radial velocity, and angle.
Advanced sensors like Bosch’s automotive radar go further. Housed in a weatherproof dome, they contain antenna and RF circuits on a printed circuit board, as well as digital signal processing units for real-time analysis. Modern radar comes in short-range (typically wide field of view, lower range — ideal for parking or city driving) and long-range variants (typically narrower field, up to 250m — used for highway and adaptive cruise control).
At higher autonomy levels (L4/L5), radar doesn’t just detect but also classifies objects, producing higher-resolution outputs critical for decision-making in real time.
Range-Doppler Estimation: How Radar Estimates Range & Velocity
Range-Doppler estimation refers to a radar processing technique that determines both the distance (range) and relative speed (Doppler velocity) of detected targets. It creates a 2D representation, called a Range-Doppler Map, which shows:
- Range: how far the object is from the radar (the y-axis in our plots)
- Doppler: how fast the object is moving toward or away from the radar, i.e., its relative velocity (the x-axis in our plots)
Like LiDAR, radar measures distance based on the round-trip time of a signal. But instead of timing light pulses directly, radar uses frequency shifts in a linearly ramping chirp (FMCW waveform). By measuring the difference between transmitted and received frequencies (the beat frequency, Δf), radar calculates range:
range = (c * Δf) / (2 * sweep_slope)
Radar uses the Doppler effect to estimate target speed. The Doppler shift refers to the change in frequency of a wave due to the relative motion between the radar and the object. If an object is moving toward the radar, the frequency of the reflected wave increases. If it’s moving away, the frequency decreases.
velocity = (λ * Doppler_shift) / 2
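To make these two formulas concrete, here is a small MATLAB sketch with made-up example values (delta_f, sweep_slope, and doppler_shift below are placeholders, not measured data):
c = 3e8; % speed of light, in m/s
fc = 77e9; % carrier frequency, in Hz
lambda = c / fc; % wavelength, ~3.9 mm at 77 GHz
% Range from the beat frequency of an FMCW chirp
delta_f = 15e6; % example beat frequency, in Hz
sweep_slope = 2e13; % example chirp slope, in Hz/s
range = c * delta_f / (2 * sweep_slope) % ~112.5 m
% Radial velocity from the Doppler shift
doppler_shift = 1e4; % example Doppler shift, in Hz
velocity = lambda * doppler_shift / 2 % ~19.5 m/s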
Now that we’ve built an understanding of the fundamentals of radar, let’s build the full simulator in MATLAB!
Simulation Pipeline
Building a radar simulation pipeline allows us to prototype radar algorithms like Constant False Alarm Rate (CFAR) detection without hardware, visualize how radar actually sees moving targets (via range-Doppler maps), and test and tune detection methods under different conditions (e.g., clutter, SNR, target proximity). Most importantly, it builds a strong intuition for radar data and for the signal chain from waveform generation to reflection to detection, all in one loop.
It’s ideal for learning, experimentation, and even validating algorithms before deploying to real systems. The pipeline proceeds in the following steps:
- First, configure the FMCW waveform based on the system requirements.
- Define the range and velocity of the target and simulate its displacement.
- In the same simulation loop, process the transmit and receive signals to determine the beat signal.
- Perform a Range FFT on the beat signal to determine the range.
- Finally, perform CFAR processing on the output of the 2nd FFT to display the target.
System Requirements
System requirements define the design of a radar, and different driving scenarios in a sensor fusion design demand different configurations. In this project, we follow the system requirements below to design our radar.
The sweep bandwidth can be determined from the range resolution, and the sweep slope is calculated from the sweep bandwidth and the sweep time.
bandwidth(B_sweep) = speed_of_light / (2 * range_resolution)
The sweep time can be computed based on the time needed for the signal to travel the unambiguous maximum range. In general, for an FMCW radar system, the sweep time should be at least 5 to 6 times the round trip time. This example uses a factor of 5.5.
T_chirp = 5.5 * 2 * R_max / c
slope = bandwidth / T_chirp % slope of chirp signal
For the initial selection of target range and velocity: the range cannot exceed the maximum value of 200 m, and the velocity can be any value between -70 and +70 m/s.
Implementation Steps for 2D CFAR Process
The full implementation can be found in radar-target-generation-and-detection.m.
1. Radar specifications
max_range = 200; % in meters
c = 3e8; % speed of light, in m/s
range_resolution = 1; % in meters
fc = 77e9; % operating frequency, 77 GHz (in Hz)
2. Target specifications — here we define our own target’s initial position and velocity, assuming the velocity remains constant.
target_pos = 110; % initial position, in meters
target_speed = -20; % in m/s (negative means approaching the radar)
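As a small guard (my own addition, not part of the original script), we can assert that the chosen target respects the system requirements above:
% Sanity-check the target against the design limits (hypothetical guard)
assert(target_pos <= 200, 'Target range must not exceed 200 m');
assert(abs(target_speed) <= 70, 'Target speed must stay within +/-70 m/s');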
3. FMCW Waveform Generation — here our radar design is based on the aforementioned system requirements. We use the Max Range and Range Resolution specs to calculate the Bandwidth (B), Chirp Time (Tchirp), and Slope (slope) of the FMCW chirp.
B_sweep = c / (2 * range_resolution); % calculate the Bandwidth (B)
T_chirp = 5.5 * 2 * max_range / c;
slope = B_sweep / T_chirp;
We then simulate the signal propagation and the moving-target scenario.
%% Signal generation and Moving Target simulation
% Running the radar scenario over time. This assumes Nr samples per chirp,
% Nd chirps, and a time vector t = linspace(0, Nd*T_chirp, Nr*Nd) have been
% defined beforehand.
for i = 1:length(t)
    % For each time stamp, update the target range assuming constant velocity.
    r_t(i) = target_pos + (target_speed * t(i));
    td(i) = 2 * r_t(i) / c; % round-trip time delay
    % For each time sample, update the transmitted and received signals.
    Tx(i) = cos(2 * pi * (fc * t(i) + slope * (t(i)^2)/2));
    Rx(i) = cos(2 * pi * (fc * (t(i) - td(i)) + slope * ((t(i)-td(i))^2)/2));
    % Mixing the transmit and receive signals generates the beat signal.
    % This is element-wise multiplication of the two.
    Mix(i) = Tx(i) .* Rx(i);
end
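As an aside, the same computation can be vectorized, which is more idiomatic MATLAB. This sketch is equivalent to the loop above and assumes the same t, fc, slope, and c:
% Vectorized alternative to the per-sample loop
r_t = target_pos + target_speed * t; % target range over time
td = 2 * r_t / c; % round-trip delay
Tx = cos(2*pi*(fc*t + slope*t.^2/2));
Rx = cos(2*pi*(fc*(t - td) + slope*(t - td).^2/2));
Mix = Tx .* Rx; % element-wise beat signal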
4. Range measurements — to measure the first FFT output for the target, which starts at 110 meters:
- Reshape the vector into an Nr*Nd array. Nr and Nd here also define the sizes of the Range and Doppler FFTs, respectively.
Mix = reshape(Mix,[Nr,Nd]);
- Run the FFT on the beat signal along the range dimension (Nr) and normalize.
sig_fft1 = fft(Mix,Nr);
sig_fft1 = sig_fft1./Nr; % normalization
- Take the absolute value of FFT output
sig_fft1 = abs(sig_fft1);
- The output of the FFT is a double-sided signal, but we are interested in only one side of the spectrum, so we throw out half of the samples.
single_side_sig_fft1 = sig_fft1(1:Nr/2);
- Plotting the FFT output gives us the following simulation result.
figure('Name','Range from First FFT')
plot(single_side_sig_fft1);
axis([0 200 0 1]);
5. Range and Doppler measurements — the 2nd FFT generates a Range-Doppler Map (RDM), as produced by the code below.
% Range Doppler Map Generation.
% The output of the 2D FFT is an image that has response in the range and
% Doppler FFT bins. So, it is important to convert the axes from bin indices
% to range and Doppler based on their maximum values.
Mix = reshape(Mix,[Nr,Nd]);
% 2D FFT using the FFT size for both dimensions.
sig_fft2 = fft2(Mix,Nr,Nd);
% Take just one side of the signal in the range dimension.
sig_fft2 = sig_fft2(1:Nr/2,1:Nd);
sig_fft2 = fftshift(sig_fft2);
RDM = abs(sig_fft2);
RDM = 10*log10(RDM);
% Use the surf function to plot the output of the 2D FFT and to show axes
% in both dimensions.
doppler_axis = linspace(-100,100,Nd);
range_axis = linspace(-200,200,Nr/2)*((Nr/2)/400);
figure,surf(doppler_axis,range_axis,RDM);
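Note that the starter code hard-codes the Doppler axis to ±100. For reference, here is a sketch (my own addition) of how the velocity axis limits follow from the waveform parameters, assuming Nd = 128 chirps:
lambda = c / fc; % wavelength, ~3.9 mm at 77 GHz
v_max = lambda / (4 * T_chirp); % max unambiguous velocity, ~133 m/s
v_res = lambda / (2 * Nd * T_chirp); % velocity resolution, ~2.1 m/s
doppler_axis_derived = linspace(-v_max, v_max, Nd);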
6. CFAR Implementation — 2D CFAR is similar to 1D CFAR, but it is applied in both dimensions of the range-Doppler map. In 2D CA-CFAR, training cells occupy the cells surrounding the Cell Under Test (CUT), with a guard grid in between to prevent the target’s own signal from contaminating the noise estimate.
The first steps are to select the number of Training Cells and Guard Cells in both dimensions and to set the threshold offset:
% Select the number of Training Cells in both the dimensions.
Tr = 10;
Td = 8;
% Select the number of Guard Cells in both dimensions around the
% Cell Under Test (CUT) for accurate estimation.
Gr = 4;
Gd = 4;
% Offset the threshold by SNR value in dB
off_set = 1.4;
Last but not least, we slide the window through the complete Range-Doppler Map. We design a loop that slides the CUT across the map while leaving margins at the edges for the Training and Guard Cells. For every iteration, we sum the signal level within all the training cells; to sum correctly, we first convert each value from logarithmic to linear using the db2pow function.
We then average the summed values over the number of training cells used. After averaging, we convert back to logarithmic using pow2db and add the offset to determine the threshold. Next, we compare the signal in the CUT against this threshold: if the CUT level exceeds the threshold, we assign it a value of 1; otherwise we set it to 0.
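For reference, db2pow and pow2db are MATLAB built-ins that convert between decibels and linear power. A quick illustration:
x_db = 10; % example value in dB
x_lin = db2pow(x_db); % 10^(10/10) = 10, in linear power
x_back = pow2db(x_lin); % back to 10 dB
The full sliding-window implementation follows.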
% Use RDM[x,y], the matrix from the output of the 2D FFT, for implementing CFAR
RDM = RDM/max(max(RDM)); % normalizing
% The process below generates a thresholded block that is smaller than the
% Range Doppler Map, since the CUT cannot be located at the edges of the
% matrix. Hence, a few cells will not be thresholded. To keep the map size
% the same, those values are set to 0 after the loop.
% Slide the cell under test across the complete matrix. Note the start
% points: Tr+Gr+1 and Td+Gd+1.
for i = Tr+Gr+1 : (Nr/2)-(Tr+Gr)
    for j = Td+Gd+1 : Nd-(Td+Gd)
        % Accumulate the noise level over the training cells
        noise_level = 0;
        % Step through each bin in the window surrounding the CUT
        for p = i-(Tr+Gr) : i+(Tr+Gr)
            for q = j-(Td+Gd) : j+(Td+Gd)
                % Exclude the guard cells and the CUT itself
                if (abs(i-p) > Gr || abs(j-q) > Gd)
                    % Convert dB to power before summing
                    noise_level = noise_level + db2pow(RDM(p,q));
                end
            end
        end
        % Average over the number of training cells, convert back to dB,
        % then add the offset to form the threshold
        num_train = (2*(Tr+Gr)+1)*(2*(Td+Gd)+1) - (2*Gr+1)*(2*Gd+1);
        threshold = pow2db(noise_level/num_train) + off_set;
        % Measure the signal in the Cell Under Test (CUT) and compare
        % against the threshold
        CUT = RDM(i,j);
        if (CUT < threshold)
            RDM(i,j) = 0;
        else
            RDM(i,j) = 1;
        end
    end
end
% Cells at the edges were never tested; zero them so the map keeps its size
RDM(RDM~=0 & RDM~=1) = 0;
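Finally, we can visualize the thresholded map by reusing the doppler_axis and range_axis defined in the 2D FFT step; a correctly tuned CFAR leaves a single cluster of ones at the target’s range and velocity:
figure('Name','2D CFAR output'),surf(doppler_axis,range_axis,RDM);
colorbar;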
Evaluation of the Radar System
In this simulation, we built a radar signal processing pipeline from the ground up. Starting with system requirements, we designed an FMCW waveform by calculating key parameters like bandwidth, chirp time, and slope (≈2e13 Hz/s). We then simulated a moving target and generated the corresponding beat signal.
Applying a 1D FFT to this beat signal produced a range profile with a peak near the expected target position (within a ±10 m margin). Next, we extended the pipeline with a 2D FFT to generate a range-Doppler map and applied 2D CFAR to filter out noise and isolate the moving target.
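As a quick back-of-the-envelope check (my own, using the parameters above), the expected beat frequency and range-FFT bin for the initial target position can be computed as:
f_beat = 2 * target_pos * slope / c; % ~1.5e7 Hz for a 110 m target
bin_idx = f_beat * T_chirp; % = 2*R*B_sweep/c = R/range_resolution = 110
so the range-FFT peak should land near bin 110, matching the simulation.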
This end-to-end process not only demonstrates how radar “sees” motion but also mirrors how real radar systems extract actionable data. While it’s entirely simulated, this pipeline builds intuition for key concepts in modern radar — from waveform design to motion detection — and lays the foundation for more advanced signal processing and sensor fusion work.
Concluding Thoughts
Through this tutorial, we built a radar simulation pipeline that mirrors how real sensors operate: from waveform design to target motion, Doppler detection, and noise suppression using 2D CFAR to help enhance clarity in noisy environments.
This kind of simulation not only deepens technical intuition but also prepares us to work more confidently with real-world radar systems, whether in robotics, autonomous vehicles, or beyond. I hope this article helps you connect the dots between abstract radar theory and how sensors actually detect motion, and that it reminds us that even without hardware, simulation can teach us a lot about how machines perceive the world.
Further Reading & Resources
- MIT OpenCourseWare: Radar Systems — Free lectures and materials on radar!
- MATLAB Radar Toolbox — Great for simulating waveform design, Doppler, and CFAR in one environment.
- National Instruments: Understanding Radar CFAR — A clear explanation of CFAR principles and types.
- GitHub Repo — Full Source Code for This Project: https://github.com/moorissa/lidar-obstacle-detector
If you enjoyed this radar simulation article and want to expand your understanding of how different sensors contribute to perception, check out the further reading links above.
If you made it this far, you either really like radar or are avoiding something else — in either case, thanks for reading! I’d love to hear your thoughts, so leave a comment and don’t be shy to clap 50x 💛