Real-time image processing system with FPGA and DSP

AKASH PATIL
Image processing using FPGA
3 min read · Feb 28, 2021

Image processing has recently become key to solving problems in fields such as medicine, industry, security, and remote sensing. Most image processing systems, however, are developed on a desktop PC, which is the simpler and more generic way to implement image processing applications, but such systems may not meet the requirements of real-time applications.

This article describes a real-time image processing system built with an FPGA and a DSP. In this system the TMS320DM642 DSP serves as the image processing core, while a Xilinx FPGA chip handles image sampling and display. The TMS320DM642 board is used together with a CCD camera and a VGA display: the CCD camera captures the input frames, and the TMS320DM642 module, running at 600 MHz, executes the image processing algorithm.
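To get a feel for what real-time operation asks of the 600 MHz DSP, here is a rough cycle-budget sketch in Python. The frame size and frame rate are assumed PAL-like values used only for illustration; they are not figures given in the article.

# Rough real-time cycle budget for the DSP (illustrative numbers).
clock_hz = 600_000_000              # DM642 clock rate from the article
width, height, fps = 720, 576, 25   # assumed PAL-like frame size and rate

pixels_per_second = width * height * fps
cycles_per_pixel = clock_hz / pixels_per_second
print(f"about {cycles_per_pixel:.0f} DSP cycles available per pixel")   # ~58

With these assumed numbers the DSP has only a few tens of cycles per pixel, which is why the sampling and display work is offloaded to the FPGA.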

The system's block diagram shows how an image is processed. In the first stage an image is acquired from the CCD camera and the captured frame is converted to the YUV 4:2:0 format. In the next step the video decoder outputs the decoded frame data, which is stored in SDRAM. The TMS320DM642 reads each image frame from SDRAM and runs the image processing algorithm on it. The processed data is then fed to the video encoder, and the frame is finally displayed on the output screen.
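The format conversion mentioned above is handled by the capture hardware, but a minimal NumPy sketch (assuming standard BT.601 coefficients, which the article does not specify) shows what a YUV 4:2:0 frame stored in SDRAM looks like: a full-resolution luma plane and two chroma planes subsampled by two in each direction.

import numpy as np

def rgb_to_yuv420(rgb):
    # Illustrative RGB -> YUV 4:2:0 conversion (BT.601 coefficients assumed).
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # 4:2:0: keep every second chroma sample in both directions
    return (y.astype(np.uint8),
            u[::2, ::2].astype(np.uint8),
            v[::2, ::2].astype(np.uint8))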

In this system the FPGA acts as the control unit for image sampling and display. The EMIF (external memory interface) serves as the communication interface between the FPGA, the DSP, the SDRAM, and the flash memory.

Now let's discuss the edge detection application. Edge detection is the process of finding intensity transitions in an image. Several methods are available for edge detection, among them Sobel, Roberts, and Canny.

Let's focus on Sobel edge detection. It uses two masks, a horizontal mask and a vertical mask, which find intensity transitions in the horizontal and vertical directions. These two masks are applied one after the other to the image stored in SDRAM, generating two image frames: one shows the intensity variation in the horizontal direction, the other in the vertical direction, and the two are then combined to form the edge-detected image. The process is divided into three tasks: an input task, a process task, and an output task. The input task captures the image and stores it in SDRAM after converting the format. In the process task the algorithm is executed (i.e., the image is processed), and in the output task the image is displayed on the screen.
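A minimal Python/NumPy sketch of the process task is given below. It only illustrates the Sobel step, not the actual DSP code from the paper: the two 3x3 masks are applied to a grayscale frame, and the two resulting gradient images are combined into the edge-detected output.

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray):
    # Standard 3x3 Sobel masks; gx responds to intensity change in the
    # horizontal direction, gy to change in the vertical direction.
    gx_mask = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    gy_mask = gx_mask.T
    gray = gray.astype(float)
    gx = convolve(gray, gx_mask)        # frame 1: horizontal variation
    gy = convolve(gray, gy_mask)        # frame 2: vertical variation
    mag = np.sqrt(gx ** 2 + gy ** 2)    # combine the two frames
    return np.clip(mag, 0, 255).astype(np.uint8)

On the DM642 the same operation would typically be written as a fixed-point loop over the frame in SDRAM; the NumPy version above is just a reference for the behaviour.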

The performance and functionality of the system are not affected by lighting variations or complex backgrounds, and the system is flexible enough to implement different computationally intensive image processing algorithms.

Reference: M. V. Ganeswara Rao, P. Rajesh Kumar, and A. Mallikarjuna Prasad, "Implementation of Real Time Image Processing System with FPGA and DSP."
