Hands-On Lab: How to Perform Automated Defect Detection Using Anomalib

Published in OpenVINO™ toolkit · 9 min read · Mar 2, 2023

About the Authors

Paula Ramos, Intel AI Evangelist, America
Zhuo Wu, Intel AI Evangelist, China
Samet Akcay, Intel AI Research Engineer/Scientist

In a previous post, we introduced Anomalib, a deep learning library for anomaly detection. To recap, Anomalib is a great tool when you want to perform automated defect detection but have an unbalanced dataset.

Hopefully you’ve had the chance to access and try out the open-source project for yourself through the getting started notebook. If not, don’t worry, because in this post we will teach you how to use Anomalib with your own datasets.

For this example, we will walk through an exciting industrial use case featuring the Dobot, a robot arm used for educational, industrial, and intelligent applications. If you don’t have a Dobot available, you can simply modify the notebook, commenting out or changing the robot-specific code so it works for you.

Figure 1: Defect detection with Anomalib using an educational robot.

Let’s Get Started

To understand how Anomalib works, we will navigate through a production line that inspects color cubes (Figure 1). Some of these cubes will have holes or defects and will need to be taken off the conveyor belt. Since these defects are uncommon on the production line, we will capture only a few images of them for our AI model.

Installation:

To install Anomalib, follow these steps:

  1. Create an environment to run Anomalib and the Dobot DLL, using Python 3.8.
  • For Windows, use the following:
python -m venv anomalib_env 

anomalib_env\Scripts\activate
  • For Ubuntu:
python3 -m venv anomalib_env 

source anomalib_env/bin/activate

2. Install Anomalib using pip install:

pip install anomalib[full]

3. Install Jupyter Lab or Jupyter Notebook through: https://jupyter.org/install

pip install notebook 

pip install ipywidgets

4. Then connect your USB camera and verify it works using a simple camera application. Once it is verified, close the application.

Optional: If you have access to the Dobot, follow these steps:

  1. Install Dobot requirements (refer to the Dobot documentation for more information).
  2. Check all connections to the Dobot and verify it is working using Dobot Studio.
  3. Install the vent accessory on the Dobot and verify it is working using Dobot Studio.
  4. In Dobot Studio (Figure 2), hit the “Home” button, and locate the:
  • Calibration coordinates: the initial position at the upper-left corner of the cube array.
  • Place coordinates: the position where the arm should place the cube on the conveyor belt.
  • Anomaly coordinates: the position where you want to release the abnormal cubes.
  • You can find instructions for calculating these coordinates in the Dobot documentation.

5. For running the notebooks using the robot, download the Dobot API and driver files from here and add them to notebooks/500_use_cases/dobot in the Anomalib folder of this repo.

Figure 2: Dobot Studio interface.

Note: If you don’t have the robot, you can jump to another notebook and train your model with your own data. Here is the link to the 501a notebook; download the dataset from this link and try the training and inferencing there.

Data Acquisition and Inferencing of the Notebook

Next, we need to create a folder with a normal dataset. For this example, we created a dataset of color cubes and added the abnormality with a black circle sticker, which simulates a hole or defect on the cube (Figure 3). For data acquisition and inferencing, we will use the 501b notebook.

Figure 3: Dataset used for the first round of training.

For acquiring data, be sure to set up this notebook with the acquisition variable set to True, and define the normal FOLDER for data without abnormalities and the abnormal FOLDER for anomalous images. The dataset will be created directly in the cloned Anomalib folder, so we will see an anomalib/datasets/cubes folder there.

In case you don’t have the robot, you can modify the code to save images or use the downloaded dataset for training.
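Here is a minimal sketch of what that acquisition loop could look like without the robot, using OpenCV to grab frames from the USB camera. The folder name, camera index, and image count are assumptions you should adapt:

import os
import cv2

FOLDER = "normal"  # or "abnormal", depending on which class you are capturing
SAVE_DIR = os.path.join("anomalib", "datasets", "cubes", FOLDER)
os.makedirs(SAVE_DIR, exist_ok=True)

acquisition = True  # True: save images for training; False: run inference instead
cap = cv2.VideoCapture(0)  # first USB camera

for i in range(20):  # number of images to capture; adjust as needed
    ret, frame = cap.read()
    if not ret:
        break
    if acquisition:
        cv2.imwrite(os.path.join(SAVE_DIR, f"cube_{i:03d}.png"), frame)

cap.release()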

Inferencing:

For inferencing, the acquisition variable should be False. In that case, we won’t save any images; we will read the frame, run the inference using OpenVINO, and decide where to place the cube: on the conveyor belt for normal pieces, and off the conveyor belt for abnormal cubes.

To recap, the acquisition flag is True for acquisition mode and False for inferencing mode. In acquisition mode, be aware of which normal or abnormal FOLDER you want to create; the notebook will save every image to anomalib/datasets/cubes/{FOLDER} for further training. In inferencing mode, the notebook won’t save images; it will run the inference and show the results.
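In inferencing mode, the same loop instead hands each frame to the OpenVINO inferencer (created later in this post) and uses the prediction score to route the cube. A rough sketch, assuming an existing inferencer and an illustrative 0.5 threshold:

ret, frame = cap.read()  # same OpenCV capture as above
predictions = inferencer.predict(image=frame)

if predictions.pred_score > 0.5:  # assumed threshold; tune it for your dataset
    pass  # abnormal cube: move it to the anomaly coordinates, off the belt
else:
    pass  # normal cube: place it at the place coordinates on the belt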

Training:

For training, we will use the 501a notebook. In this notebook, we will use PyTorch Lightning and the “Padim” model. This model has several advantages: we don’t need a GPU, so the training process can run on just the CPU, and training is fast.

Now let’s deep dive into the training notebook! 🤿

Imports

In this section, we explain the packages we are using for this example and import what we need from the Anomalib library.
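As an illustration, the core imports look roughly like this; exact module paths vary between Anomalib releases, so treat this as a sketch rather than the notebook’s exact cell:

from pytorch_lightning import Trainer

from anomalib.data import Folder                # datamodule for custom image folders
from anomalib.models import Padim               # the anomaly detection model we train
from anomalib.deploy import OpenVINOInferencer  # OpenVINO inference helper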

Configuration:

There are two ways to configure Anomalib modules: one using the config file, and another using the API. The simplest way to see the functionality of the library is through the API. If you want to implement Anomalib in your production system, use the configuration file (a YAML file), which covers the core training and testing process, including dataset, model, experiment, and callback management (Figure 4). You can find that file in the repository at src/anomalib/models/padim/config.yaml; if you are using a different model, find its name in the model list to select the proper config file. For this specific example, you can also use this config file.

In the next sections, we will describe how to configure your training using the API.

Figure 4: The modules for training and validation.

Dataset Manager:

Via the API, we can modify the dataset module. We prepare the dataset path, format, image size, batch size, and task type, and then load the data into the pipeline using:

i, data = next(enumerate(datamodule.val_dataloader()))
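Here, datamodule is the Folder datamodule built from our cubes dataset. A minimal sketch of that setup, assuming the folder names from the acquisition step; argument names may differ slightly between Anomalib releases:

from anomalib.data import Folder

datamodule = Folder(
    root="anomalib/datasets/cubes",  # dataset path
    normal_dir="normal",             # folder with defect-free images
    abnormal_dir="abnormal",         # folder with defective images
    image_size=(256, 256),
    train_batch_size=32,
    eval_batch_size=32,
    task="classification",           # no pixel-level masks in this dataset
)
datamodule.setup()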

Model Manager:

For our anomaly detection model, we are using Padim, but there are other Anomalib models you can also use, such as CFA, CFlow, DFKDE, DFM, DRAEM, FastFlow, GANomaly, Patchcore, Reverse Distillation, and STFPM. We also set up the model manager using the API; Padim is imported from anomalib.models.
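Setting up the model through the API is then a one-liner. In the sketch below, the backbone and layer choices are common defaults rather than values mandated by the notebook:

from anomalib.models import Padim

model = Padim(
    input_size=(256, 256),
    backbone="resnet18",                    # pretrained feature extractor
    layers=["layer1", "layer2", "layer3"],  # layers used to build the patch embeddings
)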

Callbacks Manager:

To train the model properly, we also have to add some other “nonessential” logic, such as saving the weights, early stopping, normalizing the anomaly scores, and visualizing the input/output images. To achieve this, we use callbacks. Anomalib has its own callbacks and supports PyTorch Lightning’s native callbacks. In the notebook, we create the list of callbacks we want to execute during the training.
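As an example, the PyTorch Lightning side of such a callback list could look like the sketch below, assuming Anomalib logs an image_AUROC metric; Anomalib’s own callbacks (normalization, visualization, export) would be appended to the same list:

from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    ModelCheckpoint(monitor="image_AUROC", mode="max"),            # keep the best weights
    EarlyStopping(monitor="image_AUROC", mode="max", patience=3),  # stop once the metric plateaus
]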

Training:

Now that we have set up the datamodule, the model, and the callbacks, we can train the model. The final component is the pytorch_lightning Trainer object, which handles the train, test, and predict pipeline. An example of the Trainer object can be found in the notebook.
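A minimal sketch of that Trainer setup, reusing the model, datamodule, and callbacks from above (a single epoch is enough because Padim fits feature statistics rather than training with gradient descent):

from pytorch_lightning import Trainer

trainer = Trainer(
    callbacks=callbacks,
    accelerator="auto",  # CPU is enough for Padim
    devices=1,
    max_epochs=1,
)
trainer.fit(model=model, datamodule=datamodule)
trainer.test(model=model, datamodule=datamodule)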

OpenVINO Inferencer:

For validation, we use the OpenVINO inferencer. Previously, in the imports section, we imported OpenVINOInferencer from the anomalib.deploy module. Now we will use it to run the inference and check the results. First, we need to check that the OpenVINO model is in the results folder.
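Creating the inferencer might look like the sketch below. The result paths are hypothetical, and the argument names (path, metadata, device) vary slightly between Anomalib releases:

from anomalib.deploy import OpenVINOInferencer

inferencer = OpenVINOInferencer(
    path="results/padim/cubes/openvino/model.bin",          # exported OpenVINO model (hypothetical path)
    metadata="results/padim/cubes/openvino/metadata.json",  # thresholds and normalization stats
    device="CPU",
)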

Prediction Result:

To perform the inference, we call the predict method of the OpenVINOInferencer, for which we specify where the OpenVINO model and its metadata are located, and which device we want to use:

predictions = inferencer.predict(image=image)

The prediction contains a variety of information related to the result: the original image, prediction score, anomaly map, heat map image, prediction mask, and segmentation result (Figure 5). Which of these is most useful depends on the task type you select.
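Those results are exposed as attributes on the returned object. The attribute names below follow Anomalib’s ImageResult class and may differ slightly between releases:

print(predictions.pred_score)  # image-level anomaly score
print(predictions.pred_label)  # label derived from the score threshold

anomaly_map = predictions.anomaly_map      # per-pixel anomaly scores
heat_map = predictions.heat_map            # anomaly map overlaid on the input image
pred_mask = predictions.pred_mask          # binary mask after thresholding
segmentations = predictions.segmentations  # mask contours drawn on the image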

Figure 5: Prediction result

At the end, our defect detection use case featuring the Dobot robot looks something like this (Figure 6):

Figure 6: Education robot running the inference of the Anomalib model.

Tricks and Tips for Using Your Own Dataset

Dataset transformation:

If you want to improve the accuracy of your model, you can apply data transformations to your training pipeline. Provide the path to the augmentations config file in the dataset.transform_config section of config.yaml. That means you need one config.yaml file for the Anomalib setup and a separate albumentations_config.yaml file that is referenced by the Anomalib config file.
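For example, you could define the pipeline with Albumentations and serialize it to the YAML file that config.yaml points to; the two transforms below are arbitrary placeholders:

import albumentations as A

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Write the pipeline to the file referenced by dataset.transform_config in config.yaml
A.save(transform, "albumentations_config.yaml", data_format="yaml")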

In this discussion thread, you can learn how to add data transformations to your actual training pipeline.

Robust models:

Anomaly detection libraries aren’t magic and can fail on challenging datasets. But we have good news: you can try 13 different models and benchmark the results of each experiment. For this, use the benchmarking entry-point script together with a benchmarking config file. This will help you select the best model for your actual use case.
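Running a benchmark then comes down to one command. The script and config paths below match the Anomalib repository layout at the time of writing and are worth double-checking against your release:

python tools/benchmarking/benchmark.py --config tools/benchmarking/benchmark_params.yaml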

For more guides, please check these “How to Guides.”

What’s next?

Check out our edge AI reference kits, where you can jump-start your AI solutions with how-to videos, step-by-step tutorials, and more. For defect detection using Anomalib, we provide a solutions overview, a technical how-to video, and a step-by-step tutorial to help you get started. And when you’re ready, you can clone the GitHub repository as a stand-alone project and begin building your very own defect detection solution.

If you are using the Dobot and want to see more on working with it through this notebook, please add your comments or questions to this post. If you come across any issues or errors with the Anomalib installation process, please submit your issue in our GitHub repository.

We look forward to seeing what else you come up with using the Anomalib library.

Have fun and share your results in our discussion channel! 😊

Notices & Disclaimers

Intel technologies may require enabled hardware, software, or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

About us:

Paula Ramos has been developing novel integrated engineering technologies, mainly in computer vision, robotics, and machine learning applied to agriculture, since the early 2000s in Colombia. During her PhD and postgraduate research, she deployed multiple low-cost, smart edge and IoT computing technologies that can be operated by people without expertise in computer vision systems, such as farmers. Her inventions run in rugged and critical conditions, such as farming and outdoor environments without lighting control, with high full-sun radiation, and even in extreme high-temperature conditions. Currently, she is an AI Evangelist at Intel, developing intelligent systems/machines that can understand and re-create the visual world around us to solve real-world needs.

Samet Akcay is an AI Research Engineer/Scientist. His primary research interests are real-time image classification, detection, anomaly detection, and unsupervised feature learning via deep/machine learning algorithms. He recently co-authored and open-sourced Anomalib, one of the largest anomaly detection libraries in the field. Samet holds a PhD from the Department of Computer Science at Durham University, UK, and received his MSc degree from the Robust Machine Intelligence Lab at the Department of Electrical Engineering at Penn State University, USA. He has more than 30 academic papers published in top-tier computer vision and machine/deep learning conferences and journals.

Zhuo Wu is an AI Evangelist at Intel focusing on the OpenVINO™ toolkit. Her work ranges from deep learning technologies to 5G wireless communication technologies. She has made contributions in computer vision, machine learning, edge computing, IoT systems, and wireless communication physical layer algorithms. She has delivered end-to-end machine learning and deep learning solutions to business customers in different industries, such as automotive, banking, and insurance. She has also carried out extensive research in 4G-LTE and 5G wireless communication systems and filed multiple patents while working as a research scientist at Bell Labs in China. She led several research projects as principal investigator while she was an associate professor at Shanghai University.
