Automated quality control, opening up the AI black box, and more — Hasty.ai

Alex
Published in Hasty.ai
6 min read · Aug 2, 2021

AI-automation for quality control

State-of-the-art research available at the click of a button

After more than six months of development, we are releasing our AI quality control feature, based on state-of-the-art research. By using specific AI models, we can:

  • Find wrong classes for labels
  • Find “extra” labels (labels that our model thinks should not be there)
  • Find missing labels (labels that our model thinks are missing from completed images)
  • Find inaccurate annotations (annotations that our model predicts differently than the current state)

We do this by running your data through a specific model that looks at all the labels you want to QA. We then compare the model's output with the original annotations and see how they align. For example, say we are annotating a football game (soccer for our American readers) and want to check that all classes are correct before training our model. In that case, we use Confident Learning techniques to find wrongly assigned labels in your data.

To make your life a bit easier, we take the results and sort them from the likeliest error (i.e., most significant gap between model and human) to the least likely error.
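If you are curious about the underlying idea, here is a minimal sketch in plain NumPy (not our actual implementation) of confident-learning-style checking: compare the model's predicted class probabilities with the assigned labels and rank the biggest disagreements first. All arrays below are toy data for illustration.

```python
import numpy as np

# Toy stand-ins: pred_probs is an (n_labels, n_classes) array of model class
# probabilities, given_labels is the class index each annotator assigned.
rng = np.random.default_rng(0)
pred_probs = rng.dirichlet(np.ones(3), size=8)   # 8 labels, 3 classes
given_labels = rng.integers(0, 3, size=8)

# Confidence the model puts on the annotator's class vs. its own best guess.
given_conf = pred_probs[np.arange(len(given_labels)), given_labels]
predicted = pred_probs.argmax(axis=1)
best_conf = pred_probs.max(axis=1)

# Flag disagreements and sort by the size of the gap, so the likeliest
# errors come up for review first.
gap = best_conf - given_conf
for idx in np.argsort(-gap):
    if predicted[idx] != given_labels[idx]:
        print(f"label {idx}: annotated as class {given_labels[idx]}, "
              f"model suggests {predicted[idx]} (gap {gap[idx]:.2f})")
```

The full Confident Learning machinery goes further (per-class confidence thresholds and a joint estimate of label noise), but the compare-and-rank intuition is the same.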

In this example, we see some of the more obvious errors found in a PCB project

You can then decide what to change and what to keep as-is with one click.

Just click accept or reject to quickly QC your dataset

We see this new human-machine workflow as something of a "quiet revolution" in the vision AI space, as you will no longer have to spend time finding errors in your data, only fixing them. This new workflow can save organizations working on vision AI projects an enormous amount of time and keep budgets in line.

To give you a baseline number, we compared Error Finder with today's gold-standard technique, consensus scoring. What we found was that our approach could save smaller-scale projects up to 15x and larger-scale projects up to 33x in time spent on QA.

It’s exciting. It’s available for all. Learn more about what it is and how you can use it by going here.

AI assistants status

Some of the most common questions we get concern the status and training of our AI assistant models. We’re the first to admit this has been a bit of a black box, with users asking us questions like:

  • When will my model train next?
  • What are the ML metrics for my model?
  • How is my model improving?

Now, you can answer these questions yourself. With our new AI assistants status page, you can see how models improve over time to get an idea of how you are progressing towards annotation automation.

Here, you can see the status of all models available in Hasty

You can also see what is needed to train the next model and the current status of your model(s).

Now you can see how models change over time, and if they are improving

Next up is adding the same functionality to custom models created in Model Playground so that you can see how more data helps your models.

For more information, feel free to check out our docs.

Model Playground grows up

First, a big thank you to all the beta testers we’ve had for Model Playground. With your feedback, we’ve been able to push what we offer in terms of model building and testing and are getting closer to releasing our model building and experimentation functionality to the rest of the world.

In our latest update, we’ve added a host of new visualizations and plots to see how your new model experiment is performing and how it compares with other experiments. We also added many, many new solvers and augmentations.

New visualizations and plots

Best performing table and metric overview

See which of your models perform best in terms of your primary metric(s) and inference speed with our "Best performing" widget, and get an overview of how different experiments compare with our new "Metric overview" table.

Running time and GPU consumption

Get a better understanding of hardware consumption with our running time and GPU consumption widgets.

Hyperparameter comparison

Check what differences exist between experiments using our hyperparameter comparison so that you can quickly figure out which parameters are essential for you.

Confusion matrix and classification prediction visualization

Using these, we figured out that the model struggled with diode, resistor, and capacitor classes as they were all fairly similar

Visually still a bit raw, these two widgets were created at the request of one of our customers so that they could get a better idea of how their classification model was progressing.

First, you have the confusion matrix, which will tell you which classes the annotators struggled with and what the model thinks they should be instead. This can be very helpful, as you can see which classes the model needs more data for and where it struggles.

Secondly, we have the classification prediction visualization, which can be used for a further deep dive into your data to figure out what the model is seeing. In tandem, they can help you find issues with your data and figure out how to improve your model.
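If you want to reproduce a similar view outside of Hasty, for instance on exported predictions, a basic confusion matrix is only a few lines of scikit-learn. The class names and predictions below are made up for illustration.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# Made-up exported results: the annotated class vs. the model's prediction
# for a handful of PCB components.
classes = ["diode", "resistor", "capacitor"]
annotated = ["diode", "resistor", "capacitor", "diode", "resistor", "capacitor"]
predicted = ["diode", "capacitor", "capacitor", "resistor", "resistor", "diode"]

# Rows are the annotated class, columns the model's prediction; off-diagonal
# counts show which classes get mixed up and may need more data.
cm = confusion_matrix(annotated, predicted, labels=classes)
print(cm)

# Or plot it as a heatmap.
ConfusionMatrixDisplay(cm, display_labels=classes).plot()
plt.show()
```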

We are also working on adding the same widgets for other types of annotation in the near future.

Additionally, we’ve added:

  • Augmentations
  • Solvers

Inference monitoring

With users successfully training models in Hasty, more and more of them are using our Inference engine API (link). Although it's still early days, it's encouraging to see users getting models developed in Hasty into production with a minimum amount of coding. However, one successful feature leads to additional feature requests. We are currently building out the monitoring tools for our inference engine to give you insight into what the model sees on production data.
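To give a rough idea of the workflow, here is a hypothetical sketch of sending an image to an inference endpoint and logging what comes back; the URL, payload, and response fields are placeholders rather than our documented API, so check the docs for the real calls. Monitoring aggregates exactly this kind of signal over production traffic.

```python
import requests

# Hypothetical sketch only: the endpoint, auth header, and response fields
# are placeholders, not Hasty's documented API.
API_URL = "https://example.com/v1/inference"
API_KEY = "YOUR_API_KEY"

with open("board.jpg", "rb") as f:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )
resp.raise_for_status()

# Logging what the model returns on production images is the kind of signal
# an inference-monitoring view aggregates over time.
for pred in resp.json().get("predictions", []):
    print(pred.get("class_name"), pred.get("score"))
```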

It’s still early days, but we have built a first interface so that you can see what our model sees.

Using inference monitoring, here we can quickly see that setting max predictions to 10 was a mistake

We will continue to extend our monitoring functionality and flesh it out in the coming weeks and months, but to do so correctly, we need your help. If you are interested in using Hasty's inference engine and monitoring, and are willing to be a beta user and give us feedback on what you like, what you would change, and what you find missing (both with the engine itself and with our monitoring solution), email me at alex(at)hasty.ai.

Shameless plug time

Finally, our CEO, Tristan, has been hammering me on Slack about the need to promote the positioning whitepaper we’ve just finished up. I know, I know. I say whitepaper. You think of corporate documents with little to no actual information. However, we believe we managed to write something reasonably readable on vision AI today and the problems we see with it. Among other things, you can learn about:

  • Vision AI flywheels and how you can build self-improving systems
  • How to be Agile in machine learning
  • How Bayer Agriculture got 40% faster using Hasty

Like all whitepapers, we might be guilty of marketing ourselves and what we can do, but if you can stomach that, we hope it is an exciting read. It is also great for explaining Hasty to non-technical people, our CEO says while pointing at me with his golden cane. Perfect for sending to your boss to explain why you should use (and pay for!) Hasty.

Get it here.

Originally published at https://hasty.ai.
