
ComBox Technology
ComBox Technology is a system integrator in the field of information technology and neural networks

ComBox Edge is a cross-platform video analytics system for the detection and recognition of license plates and of vehicle makes and models.

ComBox Edge can work as part of a centralized deployment, on a server in a data center, or at the last mile, directly on site near the place where the video camera is installed. Regardless of the installation location, you get a full-fledged video analytics unit with access to it via an open REST API, which allows you to integrate it with any information systems already in your infrastructure.
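The open REST API makes such integration straightforward. As a minimal sketch (the endpoint shape and field names below are our illustrative assumptions, not the documented ComBox Edge API), a client only needs to build a JSON request and parse the JSON response:

```python
import json

# Hypothetical request/response shapes -- illustrative only,
# not the actual ComBox Edge API contract.

def build_recognize_request(camera_id, frame_url):
    """Build a JSON payload asking the analytics unit to process one frame."""
    return json.dumps({"camera_id": camera_id, "frame_url": frame_url})

def parse_recognize_response(body):
    """Extract plate and vehicle info from a JSON response body."""
    data = json.loads(body)
    return [(d["plate"], d["make"], d["model"]) for d in data["detections"]]

# Example: parsing a response the unit might return for one frame.
sample = '{"detections": [{"plate": "A123BC77", "make": "Kia", "model": "Rio"}]}'
print(parse_recognize_response(sample))  # [('A123BC77', 'Kia', 'Rio')]
```

Because the surface is plain JSON over HTTP, the same client code works whether the analytics unit runs in the data center or on site.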

ComBox Edge


We work with transport and run neural network inference on in-vehicle devices. Let's talk about the problems of edge computing in this market. First of all, it is the operating temperature: −40…+85 °C. There are a lot of industrial devices rated for −20…+85 °C, but it is a big problem to find an industrial device for a Russian winter. The second problem is a cost-effective solution with high performance. For inference it is possible to use the Intel Myriad X VPU as an accelerator for edge computing, but all M.2 and other extension boards are rated only from 0 °C or −20 °C. For example, the AAEON AI Core X or VEGA-330.


We decided…

ComBox Technology is a system integrator in the field of information technology and neural networks. We offer a full range of services for the creation, training and implementation of neural networks. The company has its own unique technology stack. Besides, we also produce proprietary equipment of our own design for neural network inference based on Intel components. ComBox Technology is a member of the Intel® Internet of Things Solutions Alliance and holds Intel® Partner Alliance Gold status.

ComBox software is used inside a very effective, high-tech AAEON solution named Car Share Driver Monitoring…

We needed a cost-effective solution for neural network inference. Of course, we could use GPUs such as the Tesla T4 or Xeon CPU servers, but those solutions would be too expensive. An alternative way is using high-density Core i5/i7 CPUs. We have developed our own inference solution with 8 Intel NUCs inside a 1U server. This server has 8 Core i5 CPUs and 8 integrated Iris Plus 655 GPUs inside. In the first release we used the Intel NUC8i5BEK. In CNN inference (SSD MobileNet v2) we got 320 FPS from each NUC and 2560 FPS from the server.
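To make the high-density idea concrete, here is a minimal sketch (not the actual ComBox scheduler, which is not published) of how incoming camera streams could be spread round-robin over the 8 NUC nodes of such a 1U server, so that each node's roughly 320 FPS inference budget is loaded evenly:

```python
from itertools import cycle

# Illustrative sketch: 8 NUC nodes (Core i5 + Iris Plus 655 each) in one 1U chassis.
NODES = [f"nuc{i}" for i in range(8)]

def assign_streams(streams, nodes=NODES):
    """Map each incoming stream to a node, cycling through the nodes round-robin."""
    assignment = {}
    node_cycle = cycle(nodes)
    for stream in streams:
        assignment[stream] = next(node_cycle)
    return assignment

streams = [f"cam{i}" for i in range(10)]
mapping = assign_streams(streams)
print(mapping["cam0"], mapping["cam8"])  # nuc0 nuc0 -- the 9th stream wraps around
```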

Deployment steps are:

Some time ago we were looking for software and devices for neural network inference. It was interesting for us to find the cheapest and most uniform software solution for inference both in edge computing and in the data center. We found the Intel OpenVINO toolkit, and one of its advantages was the possibility to run inference on all Intel devices (CPU, iGPU such as Iris Plus, FPGA and others).

One of the devices we used was the Intel Neural Compute Stick 2 (Intel NCS2) with a Myriad X chip inside. It is a simple small USB stick with a VPU inside, and anyone can use this accelerator in any PC.

Intel NCS2

For edge computing…

Neural networks help us solve various problems in the field of AI and computer vision: for example, detection, classification, segmentation, object recognition and many others. In many cases, pre-trained models are used, which are further trained on the developer's own data to obtain a ready-made industry solution. In this case, both the dataset itself (the set of labeled data used for the additional training) and the resulting model are of value. …

One problem we have solved is model protection in edge computing. All classic protection methods have a layer at which the decrypted model is available through framework methods. For example, this one:

Intel OpenVINO models protection method

So, as we can see, on an edge device one can obtain the decrypted model by modifying some methods of the open-source framework.

In our projects we use Senselock hardware keys from Seculab. We built all the OpenVINO toolkit sources into a single stand-alone portable binary, encrypted with Virbox Protector. The model is encrypted too, and it can be opened only by this binary. …
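To show the principle rather than the product: the toy sketch below encrypts model bytes with an XOR keystream, so that the plaintext model exists only in the memory of the process that holds the key. The real solution relies on a Senselock hardware key and Virbox Protector, not on this scheme:

```python
import hashlib
from itertools import cycle

# Toy illustration of the principle only -- NOT the actual protection scheme.

def keystream(secret: bytes, length: int) -> bytes:
    """Derive a repeating keystream from a secret (stand-in for the dongle key)."""
    digest = hashlib.sha256(secret).digest()
    return bytes(b for b, _ in zip(cycle(digest), range(length)))

def xor_bytes(data: bytes, secret: bytes) -> bytes:
    """XOR data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(secret, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

model_bytes = b"<IR model weights>"
encrypted = xor_bytes(model_bytes, b"dongle-secret")   # what is stored on disk
decrypted = xor_bytes(encrypted, b"dongle-secret")     # exists only in RAM
assert decrypted == model_bytes
```

The weakness the article points out remains visible here: at the moment the framework consumes `decrypted`, the plaintext is in memory, which is why the binary itself must also be protected.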

We develop and produce inference accelerators with Intel Movidius (Myriad X) VPUs on PCIe boards, and we have the highest density of VPUs per PCIe slot. We are talking about this blade board, which carries 8 daughter boards with 8 Movidius VPUs on each one:

ComBox x64 Movidius PCIe blade board for neural networks inference

64 Movidius VPUs on one board can give us up to 2880 FPS on a CNN (we use SSD MobileNet v2). Some platforms (for example, the Supermicro 1029GQ-TRT) can take 4 such PCIe cards in 1U, which makes it possible to get up to 10,000 FPS per processing server for inference.
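A quick sanity check on these figures (the FPS numbers are from the text; the arithmetic is ours):

```python
# Back-of-the-envelope throughput check for the quoted SSD MobileNet v2 numbers.
vpus_per_board = 64
fps_per_board = 2880
fps_per_vpu = fps_per_board / vpus_per_board    # throughput of one Myriad X
boards_per_1u = 4                               # e.g. a Supermicro 1029GQ-TRT
fps_per_server = fps_per_board * boards_per_1u  # raw product; quoted as "up to 10,000"
print(fps_per_vpu, fps_per_server)  # 45.0 11520
```

So each VPU contributes about 45 FPS, and four boards give a raw ceiling of 11,520 FPS, consistent with the conservative "up to 10,000 FPS" per server quoted above.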

This solution may be interesting for data centers and cloud service providers who host cloud video archives where all stored data must be processed. In our case we have a video archive from buses, and we need to start inference on demand in the data center on the existing files.

Read more…

We are engaged in object video analytics and use neural networks to detect and classify objects. I would like to share our hardware accelerator solution: a board for inference (execution) of neural networks in the data center. Initially, using Intel Movidius VPUs (Myriad X) in edge solutions running OpenVINO, we noticed their high performance and efficiency when working with convolutional networks, CNNs (our often-used option is SSD MobileNet v2). Then the idea was born to implement a PCIe server board with VPUs placed at high density. The result and the first test batch are shown in the photo…

To help illustrate the advantages of using the OpenVINO toolkit, this paper will look at the case study from leading embedded Edge AI developers ComBox Technology and LARGA. These two companies partnered together to deploy a passenger counting system for public buses, powered by AAEON’s VPC-3350S with AI Core X, and utilizing the OpenVINO toolkit to help optimize and quickly deploy their software models —

Edge inference
