Distributed Multi-Layered AI Cluster Architecture

AdHive.tv

Feb 16, 2018
The market today is plagued by issues regarding the correct identification and placement of content. The main stumbling block at the moment is the difficulty of ensuring video and photo file recognition from a full spectrum of internet sources — without losing data components in the process.

The best fix for this headache is to improve the performance and fault tolerance of the recognition functions. That’s why, at AdHive, we’ve solved the problem by developing a distributed multi-layered AI cluster architecture that brings high-quality content to the market without the kinks. The AI cluster ensures the performance of the recognition functions and guarantees their fault tolerance.

The diagram below illustrates the structure and components of the architecture. Let’s go through the components of the AI cluster architecture and briefly examine the functions each block is responsible for.

RabbitMQ

RabbitMQ, otherwise known as a message queue, receives commands for subsequent execution (recognition, downloading of content, etc.) and stores them until they are handed off to the next service.
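As a concrete sketch of this handoff, a recognition command might travel through the queue as a small JSON envelope. The field names below are illustrative assumptions, not AdHive’s actual schema, and the broker is stood in for by Python’s in-process `queue.Queue` so the store-and-forward semantics are easy to see:

```python
import json
import queue

# Stand-in for the RabbitMQ broker: an in-process FIFO queue.
command_queue = queue.Queue()

def publish_command(task_type, payload):
    """Serialize a command and enqueue it for the next service."""
    envelope = json.dumps({"task": task_type, "payload": payload})
    command_queue.put(envelope)

def consume_command():
    """Dequeue and deserialize the next stored command."""
    return json.loads(command_queue.get())

publish_command("recognize_video", {"url": "https://example.com/clip.mp4"})
cmd = consume_command()
print(cmd["task"])  # recognize_video
```

With a real broker the same two functions would wrap `basic_publish` and `basic_consume` calls, but the envelope format and queue discipline stay the same.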

Compframework = Computation Framework

The Compframework module implements the Computation Framework concept. More details on the Computation Framework can be found here: http://scorch.ai/Technology/computation-framework/

This architecture component is responsible for preparing and downloading data for the other recognition levels, controlling the interaction and transmission of data between levels, integrating with input-output sources, and converting data into output formats. Here, the module integrates with RabbitMQ.
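The prepare-convert-hand-off responsibility can be sketched as two small functions. The function names and the choice of base64 as the “conversion” are assumptions for illustration; a real Compframework would transcode video or audio:

```python
import base64

def prepare_for_recognition(raw_bytes, source_format):
    """Convert downloaded content into the format the next level expects.
    Here 'conversion' is just base64-encoding for demonstration."""
    return {
        "format": "base64/" + source_format,
        "data": base64.b64encode(raw_bytes).decode("ascii"),
    }

def hand_off(prepared, next_level):
    """Transfer the prepared data to the next processing level."""
    return next_level(prepared)

# Hypothetical next level that just reports what it received.
result = hand_off(prepare_for_recognition(b"\x00\x01", "mp4"),
                  lambda item: item["format"])
print(result)  # base64/mp4
```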

Smart-Sender

This module guarantees the following processes:

  • Delivering tasks from the Compframework to a Smart module
  • Receiving a response from a Smart module and transferring it back to the Compframework

Smart-Sender interacts with Redis to save an optimized state. In addition, this module uses RabbitMQ as an intermediary between it and the Compframework.

The Smart module and the Compframework module communicate via HTTP. The Compframework is installed locally on the same device as Smart, with one Smart-Sender module per Smart module.
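The “save an optimized state” step above can be sketched as follows. A plain dict stands in for Redis here, and the key layout and status values are assumptions; a real Smart-Sender would use a Redis client with the same get/set shape:

```python
# Stand-in for Redis: a plain dict keyed like Redis keys.
state_store = {}

def save_task_state(task_id, status):
    """Persist the sender's view of a task so it can be recovered later."""
    state_store[f"smart-sender:task:{task_id}"] = status

def load_task_state(task_id):
    """Look up a task's last known status; 'unknown' if never recorded."""
    return state_store.get(f"smart-sender:task:{task_id}", "unknown")

save_task_state("42", "sent-to-smart")
print(load_task_state("42"))  # sent-to-smart
print(load_task_state("99"))  # unknown
```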

Smart

Smart is the lowest-level block of this system. It combines various algorithms for recognizing video and audio objects using convolution and recurrent neural networks.

The Smart block consists of the main control program and modules executed as dynamically loaded libraries.

Module = algorithm

Each module has its own configuration file and exposes a standard interface, so altering a module or creating a new one requires no changes to Smart itself. Modules are registered directly in the management file.
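A minimal sketch of such a standard interface is shown below. The class and method names (`RecognitionModule`, `configure`, `process`) are illustrative assumptions; the point is that Smart only ever sees the base interface, so new algorithms plug in without changes to Smart itself:

```python
class RecognitionModule:
    """Base interface every module implements: Smart can drive any
    module through configure() and process() without knowing its internals."""
    media_type = None  # "video", "audio", or "text"

    def configure(self, config):
        self.config = config

    def process(self, data):
        raise NotImplementedError

class AudioTagger(RecognitionModule):
    """Hypothetical audio-recognition module."""
    media_type = "audio"

    def process(self, data):
        return {"tags": ["speech"], "bytes": len(data)}

module = AudioTagger()
module.configure({"model": "rnn-v1"})
print(module.process(b"pcm-samples"))
```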

Modules in Smart are algorithms for processing video, audio or text data. Each separate module is a dynamically loaded library with a standard interface.

AI cluster — brief workflow description

  • A command to process video or photos is sent to the message queue from an external service.
  • Next, the command enters the Computation Framework. The data is downloaded and converted to the required format, and transferred to the next level of processing.
  • Finally, the command arrives at the Smart module through the MQ where images and sounds are recognized.
  • If at any stage of the Smart or Computation Framework the modules don’t accept the command for processing, the command is transferred to a neighboring server by means of load balancing.
  • The received responses to the processing instructions arrive at the computation framework where they are packaged and sent back to the external service that requested the processing.
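The fallback step above (hand the command to a neighboring server when the local modules refuse it) can be sketched as a simple first-acceptor dispatch loop. The server-as-callable protocol, returning `None` to refuse a command, is an assumption for illustration:

```python
def try_dispatch(command, servers):
    """Offer the command to each server in turn; the first one whose
    modules accept it wins. A server returns a result, or None when
    none of its modules can take the command right now."""
    for server in servers:
        result = server(command)
        if result is not None:
            return result
    raise RuntimeError("no server accepted the command")

busy = lambda cmd: None               # local server: all modules busy
neighbor = lambda cmd: f"done:{cmd}"  # neighboring server accepts

print(try_dispatch("recognize:clip.mp4", [busy, neighbor]))
# done:recognize:clip.mp4
```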

How Smart works:

In the current architecture, the module recognizes photo, video, or audio data using various algorithms selected via the module configurator.

- Smart contains an asynchronous HTTP server that receives data and sends it to the controller system.

- The controller system receives a command, selects the appropriate handler and starts execution.

- Smart is the central module; it manages the input/output system in turn, and each module has its own individual input/output system.

- After receiving the command for processing, Smart adds the request to the queue of the desired module.

- Next, integration is carried out with the external service, all under the control of load balancing. It is then determined which of the Smart modules is ready to process the command at that particular moment.
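The queue-per-module pattern from the steps above can be sketched with one request queue and one worker thread per module. The module name and request format are illustrative assumptions:

```python
import queue
import threading

# One request queue and one worker thread per module.
results = []
module_queues = {"audio-tagger": queue.Queue()}

def module_worker(name, q):
    """Drain the module's queue; None is a shutdown sentinel."""
    while True:
        request = q.get()
        if request is None:
            break
        results.append((name, request))
        q.task_done()

for name, q in module_queues.items():
    threading.Thread(target=module_worker, args=(name, q), daemon=True).start()

# Smart adds the command to the desired module's queue...
module_queues["audio-tagger"].put("transcribe:clip.wav")
# ...and join() blocks until the module has processed it.
module_queues["audio-tagger"].join()
print(results)  # [('audio-tagger', 'transcribe:clip.wav')]
```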

Smart runs in three main threads:

  • HTTP server
  • Input-output thread
  • Smart core

Plus a separate thread for each module.

The input/output thread is a set of methods for sending HTTP requests to read video, audio, and text files, which are then added to the processing queue of a specific module.

How is the module selected? The principle behind the choice is simple — data of a certain kind is added to all modules that process that particular type of data. For example, audio files are sent to all modules that process audio, and so on.
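That fan-out-by-type rule can be written in a few lines. The module names below are hypothetical:

```python
# Registry mapping each media type to every module that handles it.
modules_by_type = {
    "audio": ["speech-to-text", "music-genre"],
    "video": ["object-detector"],
}

def fan_out(media_type, data):
    """Add the data to the queue of every module for that media type."""
    return [(name, data) for name in modules_by_type.get(media_type, [])]

print(fan_out("audio", "podcast.mp3"))
# [('speech-to-text', 'podcast.mp3'), ('music-genre', 'podcast.mp3')]
```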

Module = Neural network. The module is equivalent to a neural network packaged as a loadable library with a standard interface. This structure makes updating the neural network convenient and efficient.

The process of adding a module is as follows: the module is loaded into the required directory on the server and its parameters are added to the configuration file. Smart then reboots and loads the updated neural network.
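The drop-in-a-directory, list-in-the-config, reload cycle can be sketched with Python’s `importlib`. Here the “library” is a Python file loaded by path (a stand-in for a compiled shared library), and the config layout and module name are assumptions:

```python
import importlib.util
import pathlib
import tempfile

# Simulate dropping a new module file into the modules directory.
module_dir = pathlib.Path(tempfile.mkdtemp())
(module_dir / "face_detector.py").write_text(
    "def process(data):\n    return 'faces:' + data\n"
)

# The module's parameters are registered in the configuration.
config = {"modules": [{"name": "face_detector", "file": "face_detector.py"}]}

def load_modules(directory, config):
    """On (re)start, load every module listed in the configuration."""
    loaded = {}
    for entry in config["modules"]:
        spec = importlib.util.spec_from_file_location(
            entry["name"], directory / entry["file"])
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        loaded[entry["name"]] = mod
    return loaded

modules = load_modules(module_dir, config)
print(modules["face_detector"].process("frame-1"))  # faces:frame-1
```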

The first AI-controlled influencer marketing platform on Blockchain. Launching massive advertising campaigns has never been so simple.