Machine Learning Player v1.0 is out!

Viktor Shwartskop
Deelvin Machine Learning
5 min read · Aug 4, 2020

Hello! Today we are pleased to announce the release of our new product — ML Player. ML Player can be used for visualizing the output of machine learning models.

For example, using our Player, you can:

  • Overlay a mask produced by your model on a frame
  • Display the predicted class as a label on a frame, or simply write it to the log
  • Compare frames before and after processing using the splitter

All you need to do is add your model to ML Player through its open and completely free API.

Here are the main features of our new product:

  • Video playback
  • Camera playback
  • Screenshots
  • Timestamp bookmarks that can be added to a particular media file
  • Comparison between the original frame and the output of models
  • Dynamic loading of models via so-called ‘plugins’
  • Parsing a model’s color settings from a JSON file
  • CPU & GPU usage info

Let’s have a closer look at ML Player.

Fig 1. The main window of ML Player

There are four views on the side panels:

  • Files: currently open files
  • Models: available models
  • Logs: logs of ML Player
  • Bookmarks: user-created bookmarks for the current media file

Here is the toolbar of ML Player.

Fig 2. The toolbar of ML Player

The toolbar contains the following specific buttons and controls:

  • Step: go to the next frame
  • Screenshot: save the current frame (the save location is the Screenshots folder next to ML Player’s executable file)
  • Bookmark: create a bookmark
  • Demo Mode: separate the display screen from the main window
  • Hide Mode: hide the side panels
  • Go to Frame (button and spinbox): go to the frame specified by the number in the spinbox

Now, let’s talk about how to work with ML Player.

ML Player can work in two modes:

  • Video mode — normal playback of video files; decoded frames from the file are fed to the model
  • Camera mode — the model receives and processes frames from the webcam
Fig 3. Video playback

To open a video, click on the Open button and select the desired video. To use the Camera, simply click on the Camera button. To start frame processing, select the desired model(s) in the Models view.

So, how can a model be connected to ML Player? As mentioned in the feature list above, ML Player supports dynamic loading of models via so-called ‘plugins’.

Let’s take a closer look at plugins.

A plugin connects your model and ML Player. To create a plugin, first, create a JSON file that describes your model. This file should be placed next to the executable file of ML Player. Here is an example:

{
    "model_name": "name",
    "library_path": "path/lib",
    "return_type": "int",
    "output_as": [
        "label",
        "log",
        "slider"
    ],
    "colors": {
        "0": "#0000ff",
        "1": "#ff0000"
    },
    "label_pos": "bottom_left",
    "names": {
        "0": "A",
        "1": "B"
    },
    "const_labels": {
        "C": "top_left",
        "D": "top_right"
    }
}

These are required fields:

  • model_name — The name of the model.
  • library_path — The path to the library (omit the file’s suffix).
  • return_type — The type of the return value. Supported types: ‘int’, ‘float’, ‘frame_image’, ‘frame_images’.
  • output_as — How to output the return value. ‘label’ and ‘log’ are available for ‘int’ and ‘float’ (‘slider’ is available for ‘int’ only); ‘overlay’ and ‘image’ are available for ‘frame_image’; ‘2 images’ is available for ‘frame_images’.

These are optional fields:

  • colors — If the return type is ‘int’, sets the color for each possible return value (these colors are used by the ‘slider’); if the return type is ‘frame_image’, sets the color of the ‘overlay’ (if specified), e.g. “overlay”: “#ff00ff”.
  • label_pos — The position of the ‘label’ (if specified). Supported values: ‘top_left’, ‘top_right’, ‘bottom_left’, ‘bottom_center’, ‘center’.
  • names — If the return type is ‘int’: sets the label’s text for each possible return value.
  • const_labels — Permanent labels specified by the pair `text: position`.
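For instance, a minimal descriptor for a hypothetical model that returns a float and only writes it to the log could use just the required fields (the model name and library path below are placeholders, not real models shipped with ML Player):

```json
{
    "model_name": "Blur Estimator",
    "library_path": "models/libblur",
    "return_type": "float",
    "output_as": [
        "log"
    ]
}
```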

After that, you need to create a library implementing the following interface:

struct FrameImage
{
    uint8_t *data;
    uint64_t frameNumber;
    int width;
    int height;
    int bytesPerLine;
    int format;
};

extern "C"
{
    PLUGIN_EXPORT void *processFrame(FrameImage *frame); /// frame processing
    PLUGIN_EXPORT void release(); /// deallocate resources here
}

You can find more information and examples in the ML Player documentation.

Let’s cover some more features of ML Player.

  • Click on the Analyze button to generate a thumbnail for every frame of the video. Then, if you create a bookmark, its thumbnail will appear when you hover over the slider.
Fig 4. Bookmarks in action
  • Click on the Demo Mode button to detach the display screen from the main window of ML Player. This can be useful if you have two screens.
Fig 5. Demo mode

Before we end, we would like to show an example of a real model at work: the Person Segmentation model. You can see it here.

Fig 6. Person Segmentation

If you check the Person Segmentation box in the Models view, the model starts working and the splitter appears. Drag the splitter left or right with the right mouse button to compare images: the original frame is on the left, and the frame processed by the model is on the right.

Here is the JSON file that describes the model:

{
    "model_name": "Person Segmentation",
    "library_path": "models/libperseg",
    "return_type": "frame_image",
    "output_as": [
        "overlay"
    ],
    "colors": {
        "overlay": "#960000ff"
    }
}

If you have a similar model, you can use this JSON file as a template, changing only the ‘colors/overlay’ field to any color you like.

That’s all for today, we hope you enjoyed this tour of our brand new product! Contact us if you have any questions.

You can download ML Player here.
