An FPGA-based Hardware Accelerator for Multiple Convolutional Neural Networks

In the field of image processing and computer vision, convolutional neural networks (CNNs) have become very important in recent years. An FPGA-based CNN accelerator can serve multiple networks through an automated mapping flow. The automatic mapping flow involves three main steps (fig.1): input files, CNN mapping, and hardware accelerating. In the input-file step, the user provides the CNN model files, a .prototxt and a .caffemodel, together with test image files. In the second step, CNN mapping, the network configuration in the .prototxt file is parsed; the mapping flow extracts the layer information from the .prototxt file and emits it as a .c file. The .caffemodel format is not directly supported on the FPGA, so a data-processing step converts the weights from .caffemodel format into a .bin file. In the last step of the mapping flow, hardware accelerating, the design runs on the FPGA platform, and the required files are stored on an SD card, mainly .elf and .bin files. The .elf file is combined with the bitstream file, which carries the information about the hardware part. A minimal sketch of what this flow might produce is given below.
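The following sketch illustrates, in plain C, what the generated .c output of the mapping flow might look like: a per-layer configuration table extracted from the .prototxt, and a loader for the weight .bin file converted from the .caffemodel and placed on the SD card. All names (layer_cfg_t, load_weights, the example layer table) are illustrative assumptions, not taken from the actual tool.

/* Hypothetical per-layer configuration emitted by the mapping flow. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { LAYER_CONV, LAYER_POOL, LAYER_FC } layer_type_t;

typedef struct {
    layer_type_t type;   /* layer kind parsed from the .prototxt */
    int in_channels;     /* input feature-map channels           */
    int out_channels;    /* output feature-map channels          */
    int kernel;          /* kernel size, e.g. 3 or 7             */
    int stride;          /* convolution / pooling stride         */
} layer_cfg_t;

/* Example table the mapping flow could emit for a small network. */
static const layer_cfg_t net_cfg[] = {
    { LAYER_CONV,  3,  64, 7, 2 },
    { LAYER_POOL, 64,  64, 3, 2 },
    { LAYER_CONV, 64, 128, 3, 1 },
};

/* Load the weight blob (.bin converted from the .caffemodel) from the
 * SD card into a host buffer for the accelerator to consume. */
static float *load_weights(const char *path, size_t *count)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long bytes = ftell(f);
    fseek(f, 0, SEEK_SET);
    float *buf = malloc(bytes);
    if (buf && fread(buf, 1, bytes, f) != (size_t)bytes) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    if (buf) *count = bytes / sizeof(float);
    return buf;
}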

Fig. 1. Mapping flow process.

The CNN accelerator system mainly consists of two parts: the processing system and the programmable logic. In the processing system, the CPU sends the parameter configuration and the start signal to the programmable logic. The programmable logic is the main part of the CNN architecture (fig.2) and mainly consists of a memory controller, on-chip buffers, and processing elements. A sketch of the CPU-side control sequence follows.
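This is a minimal, bare-metal-style sketch of how the processing-system CPU could send the parameter configuration and the start signal to the programmable logic, assuming the logic exposes a memory-mapped register block. The base address and register offsets are made up for illustration; the real design may use a different interface.

#include <stdint.h>

#define ACCEL_BASE      0x43C00000u          /* hypothetical register base address  */
#define REG_LAYER_CFG   (ACCEL_BASE + 0x00)  /* layer parameter word                */
#define REG_START       (ACCEL_BASE + 0x04)  /* write 1 to start the layer          */
#define REG_DONE        (ACCEL_BASE + 0x08)  /* reads 1 when the layer has finished */

static inline void reg_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

static inline uint32_t reg_read(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

/* CPU side: configure one layer, raise the start signal, wait for done. */
void run_layer(uint32_t cfg_word)
{
    reg_write(REG_LAYER_CFG, cfg_word);  /* parameter configuration       */
    reg_write(REG_START, 1);             /* start signal to the PL        */
    while (reg_read(REG_DONE) == 0)
        ;                                /* poll until the PL finishes    */
}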

Fig. 2. CNN architecture.

A processing element basically consists of two units: a MAC unit and a pool unit. The MAC unit is the basic computational unit and supports multiple convolution configurations, such as 3x3 or 7x7 kernels. The pool unit supports two types of pooling operation: average pooling and max pooling. A simple software model of these two units is sketched below.
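The following C functions are a software model of the two units inside a processing element: a multiply-accumulate loop over one k x k window (k = 3 or 7) for the MAC unit, and a pooling function covering max and average pooling for the pool unit. The function names are illustrative, not part of the described hardware.

/* Multiply-accumulate over one k*k window: the basic MAC operation. */
float conv_window(const float *win, const float *kernel, int k)
{
    float acc = 0.0f;
    for (int i = 0; i < k * k; i++)
        acc += win[i] * kernel[i];   /* one MAC per weight */
    return acc;
}

/* Pooling over one k*k window: max pooling or average pooling. */
float pool_window(const float *win, int k, int use_max)
{
    float max = win[0], sum = 0.0f;
    for (int i = 0; i < k * k; i++) {
        if (win[i] > max) max = win[i];
        sum += win[i];
    }
    return use_max ? max : sum / (float)(k * k);
}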
