This Xilinx-Backed Tech Company Is Providing Video Processing Solutions in Data Centers

SV Insight
SV Insight Research
Oct 18, 2019

Aupera Technologies, a Vancouver-based company providing intelligent video processing solutions from the cloud to the emerging edge network, announced today that it has received a strategic investment from Xilinx Inc.

SV Insight interviewed Dr. Roy Liao, founder and CEO of Aupera Technologies. The technology innovator is building a next-generation video data center platform with its proprietary distributed computing and storage architecture, coupled with an AI acceleration engine.

Hundreds of millions of cameras are being deployed in cities, retail stores, railway stations, manufacturing lines, and beyond. Extracting insights from this tremendous volume of video data, however, has become more challenging than ever.

“Making video alive is Aupera’s core mission,” said Liao. “This simply means making every live video clip’s content understandable in real time using AI analytics. Today, after multiple years of development, Aupera’s high-density video processing and real-time analytics solutions have been deployed at tier-one customers and are now expanding at scale. At the same time, we are thrilled to announce that Xilinx has joined our group of strategic investors.”

Founded by a team of veteran engineers and domain experts, Aupera is building a highly efficient video processing system. “After 50 glorious years, Moore’s law is approaching its end. Computing power can no longer rely on ever-increasing clock speeds; the computing architecture has to shift toward agile, domain-specific designs, which is why FPGA+Arm heterogeneous computing was selected,” said Liao.

Aupera started with an ultra-high-density video transcoding system to address the compute demands of video decoding and encoding. An AI acceleration engine was then seamlessly embedded into the transcoding system, turning it into a highly efficient video analytics platform.

“Aupera’s high-density video transcoding products and high-density video analytics solutions have achieved commercialization with tier-one customers and are expanding in scale,” said Liao.

Building on these technical innovations and successful commercial deployments, Xilinx’s involvement goes beyond capital investment: the company is actively working with Aupera on market expansion and application deployment. The goal is to bring FPGA computing platforms more quickly to the rapidly changing data center landscape and to the fast-evolving AI workloads behind a wide range of AIoT applications. Together with Xilinx, Aupera is poised to be a strong force in the world of AIoT and video processing solutions.

Distributed Micro-Node Computing Architecture

Video stream processing workloads are typically distributed, composed of many small tasks, and unstructured. Aupera’s distributed micro-node computing architecture, with dozens or hundreds of processor modules connected through 10G to 100G Ethernet switches, can process video dynamically. It can also scale across chassis, clusters, and data centers without being limited by a central processor bottleneck or by a fixed physical location, allowing the system to scale roughly linearly.
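The article does not describe the dispatch logic itself, so the following is only a minimal Python sketch of the idea: incoming streams are hashed to micro-node endpoints, so capacity grows by adding nodes rather than by routing everything through a central processor. The node addresses and the hashing scheme are illustrative assumptions, not Aupera’s actual implementation.

```python
import hashlib

# Hypothetical micro-node endpoints; a real deployment would discover these
# dynamically across chassis, clusters, or even data centers.
MICRO_NODES = [
    "10.0.0.11:9000",
    "10.0.0.12:9000",
    "10.0.1.11:9000",
]

def assign_node(stream_id: str, nodes=MICRO_NODES) -> str:
    """Pin a camera/stream ID to one micro-node by hashing its ID.

    Because no central processor sits in the data path, adding nodes to
    the list grows aggregate capacity roughly linearly.
    """
    digest = hashlib.sha256(stream_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

if __name__ == "__main__":
    for cam in ("camera-001", "camera-002", "camera-003"):
        print(cam, "->", assign_node(cam))
```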

Each micro-node operates as a micro-server, on which an Arm processor handles task management at the control level while the FPGA fabric processes the data. The advantage of the FPGA lies in its ability to achieve high throughput and high performance while ensuring low latency and real-time response. More importantly, an FPGA can provide customized AI acceleration for almost any application and achieve system-wide optimization across pre-processing, post-processing, network inference, and storage of video data. Another highlight of the FPGA is its adaptability and all-programmability. With the complete system and software framework solution provided by Aupera, the time-to-market of various video AI applications is significantly shortened.
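As a rough illustration of that control/data split, here is a small Python sketch in which a control-plane loop (the role the Arm cores play) queues work and a data-plane worker (standing in for the FPGA fabric) consumes it. The fpga_infer stub and the queue-based structure are assumptions made for illustration, not Aupera’s firmware.

```python
import queue
import threading

tasks = queue.Queue()

def fpga_infer(frame_id: int) -> dict:
    # Stand-in for work the FPGA fabric would accelerate:
    # decode, pre-processing, and neural-network inference.
    return {"frame": frame_id, "detections": []}

def control_plane() -> None:
    """Arm-side role: accept incoming work and manage its lifecycle."""
    for frame_id in range(5):   # stand-in for frames arriving over Ethernet
        tasks.put(frame_id)
    tasks.put(None)             # sentinel: no more work

def data_plane() -> None:
    """FPGA-side role: process frames with low latency."""
    while (frame_id := tasks.get()) is not None:
        result = fpga_infer(frame_id)
        print("processed frame", result["frame"])

threading.Thread(target=control_plane).start()
data_plane()
```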

Converged Software Architecture for Video + AI Applications

To ensure that the distributed micro-node architecture runs efficiently while delivering high-density computing power, Aupera also launched the AupXStream Video+AI SDK, a comprehensive converged software framework that covers the full video processing pipeline, from codecs to real-time artificial intelligence applications.

The AupXStream Video+AI SDK provides a complete platform resource management system spanning the hardware layer, platform operations, and cloud services. It covers chassis status management, temperature alerting, fan control, system task allocation, load balancing, and cloud-based management, operation, and maintenance of the IoT applications running on Aupera’s edge devices.
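The article does not detail these management policies, but as a flavour of what temperature alerting and fan control might look like, here is a tiny sketch; the sensor read, thresholds, and fan curve are made-up placeholders rather than AupXStream behaviour.

```python
def read_node_temperature(node_id: str) -> float:
    # Placeholder: a real system would query chassis/board sensors.
    return 71.5

def manage_thermals(node_id: str, alert_at: float = 85.0) -> int:
    """Return a fan duty cycle (0-100) and raise an alert above a threshold."""
    temp = read_node_temperature(node_id)
    if temp >= alert_at:
        print(f"ALERT: {node_id} running at {temp:.1f} C")
    # Simple proportional fan curve between 40 C and 90 C.
    return max(0, min(100, int((temp - 40.0) * 2)))

print("fan duty:", manage_thermals("node-07"))
```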

Currently, the AupXStream SDK supports the following features (a minimal pipeline sketch follows the list):

  • H.264 and H.265 video codecs, plus JPEG hardware encoding
  • A standard audio and video media framework, including video synthesis, stream mixing, and format conversion
  • An efficient video/image acceleration engine, including image pre-processing, scaling, rotation, layer overlay, and dynamic watermarking
  • A complete deep-learning development environment supporting mainstream frameworks such as Caffe, TensorFlow, and PyTorch
  • A deep-learning inference engine supporting mainstream deep neural networks such as ResNet, Inception, MobileNet, YOLO, and SSD
  • Hundreds of pre-trained models and application templates, enabling diverse video AI applications (e.g., narrowband HD streaming, multi-party conference calls, public security, smart logistics, and smart retail) to be launched within a very short time frame
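The SDK’s own API is not shown in this article, so the sketch below uses OpenCV and ONNX Runtime as generic stand-ins to show the shape of such a pipeline: decode a stream, pre-process frames, and run a detection network. The stream URL, model file, and input size are assumptions; on Aupera’s hardware the inference step would be handled by the FPGA-accelerated engine rather than a CPU call.

```python
import cv2                   # pip install opencv-python
import numpy as np
import onnxruntime as ort    # pip install onnxruntime

STREAM_URL = "rtsp://example.local/camera-001"   # hypothetical camera stream
MODEL_PATH = "detector.onnx"                     # hypothetical pre-trained model

session = ort.InferenceSession(MODEL_PATH)
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Pre-processing: resize and normalise to the network's expected input.
    blob = cv2.resize(frame, (640, 640)).astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> NCHW
    # Inference: on Aupera's platform this step runs on the FPGA fabric.
    outputs = session.run(None, {input_name: blob})
    print("raw output shapes:", [o.shape for o in outputs])
cap.release()
```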

With the arrival of 5G, Aupera’s highly efficient video AI solutions are set to expand greatly across both cloud and edge AI computing in the era of the Internet of Things (IoT).
