— New Insights to Help you Understand your Visual Content

The article is also published at Ximilar AI Blog.

This update is a big one! Even though the summer in Europe is really hot, we have been working hard to make Vize a tool that helps you understand and improve your results more than any other similar platform.

As we promised in the previous update of Vize, here are new features that will save you time and significantly lower your stress levels.

More Tools for Developers

Vize users keep asking for tools to help them inspect and debug their classifiers. As machine learning experts, we know how hard it can be to build a reliable model; it is a tough challenge even for professionals. Our goal is to make Vize as simple as possible, yet we also stay focused on those of you who use our API and develop tools on your side.

You can now examine your Tasks and Models in much greater depth. We have added the Model Screen, where you will find four tools that help you improve your classifiers. We have actually used these tools ourselves in the custom machine learning solutions we build for our customers, and we believe you will love them as much as we do. To see the new features, click the Detail button in the list of models on the Task Screen.

Insights can be accessed via the Detail button

Confusion Matrix

The Confusion Matrix is a well-known visualisation technique that helps you understand which labels are commonly mixed up (confused). Imagine that we want to build an Animal Classifier with four labels: cat, dog, parrot and bird (various other kinds). It is highly probable that cat will be confused with dog, and that parrot will be mixed up with bird. This is a very simple example, but in a more complex scenario this chart helps you pinpoint exactly which labels interfere with each other the most.

Confusion Matrix with 4 Images

The value of each square represents the percentage of pictures that belong to the ground-truth label (rows) and were classified as the predicted label (columns). The higher the percentage, the darker the colour.

An ideal Confusion Matrix has all the diagonal squares from top left to bottom right dark (high percentage) and all other squares light (low percentage). Vize computes the Confusion Matrix on a testing set, which is approximately 20 % of all images in the Task.
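To make the rows-vs-columns convention concrete, here is a minimal sketch of a row-normalised confusion matrix like the one Vize displays. The label names and example predictions are illustrative, not actual Vize output.

```python
# Sketch of a row-normalised confusion matrix: each row (ground truth)
# sums to 100 %, matching the percentages described above.
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Return matrix[truth][pred] = percentage of the ground-truth
    row that was classified as the predicted column."""
    counts = Counter(zip(y_true, y_pred))
    matrix = {}
    for t in labels:
        row_total = sum(counts[(t, p)] for p in labels)
        matrix[t] = {
            p: 100.0 * counts[(t, p)] / row_total if row_total else 0.0
            for p in labels
        }
    return matrix

labels = ["cat", "dog", "parrot", "bird"]
y_true = ["cat", "cat", "dog", "parrot", "bird", "bird"]
y_pred = ["cat", "dog", "dog", "bird", "bird", "parrot"]
m = confusion_matrix(y_true, y_pred, labels)
# m["cat"]["cat"] is 50 %: one of the two cats was confused with dog.
```

In an ideal classifier, `m[label][label]` would be 100 % for every label, i.e. all the weight sits on the diagonal.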

Failed Images

Another feature that helps you understand what is happening inside your Task is knowing which images failed. Vize shows you some of the images that your classifier misclassified after training. With such an overview you can clearly see that some images are quite hard, and that your Task may need more similar images added to some of the labels.

Vize computes misclassified images after the final training on all of your data (training & test set). That is also why the Confusion Matrix and the Failed Images can differ. Thanks to Failed Images your classifier becomes more transparent, and you will know better how to tweak the Task to get better results.
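If you keep your own prediction records, you can reproduce this kind of overview yourself. The sketch below collects misclassified images and sorts them by the model's confidence, so the most confidently wrong ones surface first; the record fields and file names are assumptions for illustration, not the Vize API.

```python
# Hypothetical sketch of collecting failed (misclassified) images
# from prediction records; field names are made up for illustration.
def failed_images(records):
    """Return misclassified records, most confident mistakes first."""
    failed = [r for r in records if r["predicted"] != r["truth"]]
    return sorted(failed, key=lambda r: r["confidence"], reverse=True)

records = [
    {"image": "bird_01.jpg", "truth": "bird", "predicted": "parrot", "confidence": 0.91},
    {"image": "cat_07.jpg",  "truth": "cat",  "predicted": "cat",    "confidence": 0.88},
    {"image": "dog_03.jpg",  "truth": "dog",  "predicted": "cat",    "confidence": 0.55},
]
worst = failed_images(records)
# worst[0] is the bird misclassified as parrot with 0.91 confidence.
```

Confidently wrong images are usually the most informative ones to review, because they point at systematic confusion rather than borderline cases.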

Dialog with information for each image

In this example picture, our Animal Classifier has failed on some of the images. The first picture should be classified as bird, but our model predicts parrot. We can see that the first picture is indeed a little more colourful, which is probably why our model made an error and classified it as a parrot. Another possible explanation is that some of the birds share many features with parrots. We have clearly overfitted our small Task, and we need to add a few more similar images to our dataset.

More Charts

Quite commonly, user Tasks have several labels, and it can be difficult to see that the data is imbalanced. That means the number of images for some Labels is much higher than for the others, or one of the Labels holds only a small percentage of the images while all the other Labels are quite well covered.

Anything has a few more images than Trailer

Our optimisation algorithm handles this quite well, but we still recommend balancing your data to get the best results.

Uploading more images to only some of the Labels can decrease the overall accuracy of the classifier; however, with more images overall, your classifier will give more stable results across all Labels. We also recommend always keeping some data aside that is never uploaded to Vize, so you can test your models on it. Check out the Images per Label section with its pie chart, where you can see how well balanced your dataset is.
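A quick way to spot imbalance like the pie chart shows is to compute each label's share of the dataset before uploading. The sketch below does exactly that; the label names and counts are made up for illustration.

```python
# Check label balance before uploading: compute each label's share
# of the dataset as a percentage, like the Images per Label pie chart.
from collections import Counter

def label_shares(labels):
    """Return {label: percentage of the whole dataset}."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

dataset = ["trailer"] * 120 + ["anything"] * 180
shares = label_shares(dataset)
# "anything" holds 60 % of the images and "trailer" only 40 %.
```

If one label's share is far below the others, collecting more images for it (or holding back images from the dominant labels) is usually the simplest fix.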

Follow us on Facebook & Twitter to get more insights.




Official blog of We write articles about image recognition, deep learning and artificial intelligence. At Vize we help businesses to extract actionable value from their images.

Víťa Válka

User interface designer who convinced his family to switch from a house to a travel trailer. #digitalnomad
