Looking For A Needle In A Haystack: 3 Good Practices for AI Quality Control

German Suvorov, PhD
Product AI
Published Feb 18, 2021 · 4 min read

Are there any shortcuts to implementing computer vision for quality control? Check out 3 hints from AI practitioners that can help you save time and budget.


Manufacturers always strive to detect quality issues early. Not a single product should hit the market with a defect. Six Sigma principles help to eliminate the majority of defects. Now, QA (quality assurance) professionals are facing a new challenge: finding those few defective products out of hundreds of thousands of good ones.

This seems to be the right job for AI. However, what exactly should be done in order to build the technology capable of detecting such rare defects? Let’s take a computer vision quality control system as an example.

The traditional machine learning approach involves preparing a large dataset of good and bad items and training deep learning models to classify them. Data scientists experiment with different approaches and parameters until they arrive at a set of models that can confidently tell the good from the bad.

Get a good shot

Data scientists love data — the more the better. And it’s true: the more images you have, the better the model is able to train. Even more important is the quality of the dataset. The best data has a high signal-to-noise ratio, which means it contains the features of defects distinguishable enough to be separated from other features that are not related to defects.

It’s good to catch defective products early. The sweet spot on the production line is where the defects may already be present, but not too far down the line. This way, a defective product is isolated as early as possible. At this location, the items may not be in the best condition for automated evaluation — they could be covered with water drops, for example. Here is the catch: the human eye can easily tell a dent on a surface from a water drop at a glance, but under certain circumstances, it’s hard to train a model to do the same.

The right imaging technology means a lot here. The sensors used for human QA (say, a camera that operates in the visible range) may not be the best choice for AI. Sometimes, good data comes from signals that are hard for humans to comprehend. The data might look noisy to people, yet it may contain patterns that deep learning models can build on.

Good practice:

Research imaging and sensing technologies, and look beyond the technology you currently use. Experiment with different settings and equipment.

Consult a person with a comprehensive hands-on understanding of your manufacturing technology for feature engineering (selecting the right measurements and data). It’s especially beneficial to have such an expert on your project team.

Is it good enough?

“Good” has always been a subjective term. Telling good from bad can become a problem, especially when defects are rare. Different QA experts shown the same product may disagree on whether a defect is present. Sometimes, even the same person will give inconsistent evaluations when analyzing for defects at different times of the day. If people cannot tell the difference, it’s hard to teach machines.

Good practice:

Get the defects defined early on. Gather QA and manufacturing experts, show them the samples and ask them not only where they see the defects, but also how they distinguish them. This can help to formalize the features of defects and choose the right sensors, lights, and post-processing filters.
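One way to make that disagreement measurable — a sketch, not something from the original article — is to have two experts label the same samples and compute Cohen’s kappa. The labels below are hypothetical; a kappa near 1 means strong agreement, while a kappa near 0 means the labels are little better than chance and probably too noisy to train on yet.

```python
# Quantify how consistently two QA experts label the same samples
# using Cohen's kappa: agreement corrected for chance.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if both experts labeled at random, each with
    # their own marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels for 10 products: 1 = defect, 0 = good
expert_1 = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
expert_2 = [0, 0, 1, 0, 0, 0, 0, 1, 0, 1]
print(round(cohens_kappa(expert_1, expert_2), 2))  # → 0.52
```

Here the experts agree on 8 of 10 items, yet kappa is only about 0.52 once chance agreement on the majority “good” class is discounted — a useful sanity check before labeling thousands of images.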

Scaling is the key

If you have several lines or production sites, you may want to install the same kind of system in all your facilities. The risk here is that slight differences in images that people would ignore may prove to be a big deal for AI. Varying view angles and distances, fluctuations in ambient lighting, different times of the day, or local nuances in the manufacturing process and products can make models trained at one location perform poorly at another, even when the hardware setup appears identical.

In our example, the water drops can be of different sizes in different humidity and temperature conditions. Also, moisture can build up or water droplets may collect in clusters if the line is stopped for a couple of minutes, changing the image of the surface.

Good practice:

Engineer the hardware setup to minimize deviations, and look for potential differences. Dedicate more time to inspecting the production process: watch and analyze it during different shifts and times of the day. If you run different products on the same line, make sure to test your imaging systems with varying items. Consider using transfer learning with fine-tuning on site.
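The transfer-learning idea can be sketched with a toy stand-in: a frozen “backbone” (here just a fixed random projection playing the role of a pretrained network’s feature extractor) plus a lightweight classifier head retrained on a small labeled sample from the new site. The data, dimensions, and site-to-site offset below are all synthetic assumptions for illustration.

```python
# Toy sketch of on-site fine-tuning: keep the backbone frozen and
# retrain only a small classifier head on data from the new site.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Frozen "backbone": a fixed projection from 64 raw dims to 16 features,
# standing in for a pretrained network's feature extractor.
W_backbone = rng.normal(size=(64, 16))

def features(x):
    return x @ W_backbone

def make_site_data(n, offset):
    """Synthetic site data; `offset` models lighting/angle differences."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(offset, 1.0, size=(n, 64)) + y[:, None]  # defects shifted
    return x, y

Xa, ya = make_site_data(2000, offset=0.0)       # large labeled set, site A
Xb_tr, yb_tr = make_site_data(100, offset=0.8)  # small labeled set, site B
Xb_te, yb_te = make_site_data(500, offset=0.8)  # held-out site-B test set

# Head trained on site A only, evaluated on site B...
head_a = LogisticRegression(max_iter=1000).fit(features(Xa), ya)
acc_before = head_a.score(features(Xb_te), yb_te)

# ...versus a head fine-tuned on the small site-B sample.
head_b = LogisticRegression(max_iter=1000).fit(features(Xb_tr), yb_tr)
acc_after = head_b.score(features(Xb_te), yb_te)
print(f"site B accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```

In a real deployment the backbone would be a CNN pretrained on the original site’s images, and the small on-site sample is far cheaper to collect than a full new training set.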


German Suvorov, PhD
Product AI

Industrial AI solution architect and engineer. German’s background is in automotive manufacturing, manufacturing automation, supply chain management, and AI.