Create a Cosmetic Anomaly Detection Model using Visual Inspection AI

Juwono (ジュウオノ)
9 min read · Jul 15, 2024


Overview

Visual Inspection AI Cosmetic Inspection inspects products to detect and recognize defects such as dents, scratches, cracks, deformations, foreign materials, etc. on any kind of surface such as those shown in the following image.

In this lab, you will upload a collection of training images and then annotate these training images with a set of sample defect instances to facilitate Cosmetic Inspection solution training. You will then use the UI to prepare a Visual Inspection AI Cosmetic Inspection model for training.

Model training can take a long time, so this lab is paired with Deploy and Test a Visual Inspection AI Cosmetic Anomaly Detection Solution, where you deploy the Visual Inspection solution artifact created in this lab and then use that artifact to generate inferences about sample images.

Task 1. Create a dataset

In this task, you will enable the Visual Inspection AI API and create a new Visual Inspection AI Cosmetic Inspection training dataset.

  1. In the Navigation Menu, click Visual Inspection AI to open the Visual Inspection console.
  2. Click the Enable Visual Inspection AI API button.
  3. Click Create a Dataset.
  4. In the Create Dataset page:
  • For Dataset name, enter cosmetic
  • For Objective select Cosmetic Inspection.
  • For Annotation type select Polygon.
  • For Region, select us-central1.

5. Click Create.

Dataset creation will take a minute or two to complete.
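
If you prefer the command line, the API can also be enabled from Cloud Shell instead of using the console button. This is an optional sketch, not a lab step, and the service name below is an assumption based on the product name, so verify it before relying on it:

# Optional alternative to clicking the Enable Visual Inspection AI API button.
# The service name is an assumption; confirm it first with:
#   gcloud services list --available | grep -i inspection
gcloud services enable visualinspection.googleapis.com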

Task 2. Import training images into the dataset

In this task, you will import the training images into the dataset. You will use a Google Visual Inspection demo dataset to follow along with the instructions below. This dataset consists of a set of sample product images, some of which include sample defects that you will locate and identify to prepare the dataset for training. To upload the images, you must provide a CSV file that lists the Cloud Storage paths of the sample images to be included in the Visual Inspection AI Cosmetic Inspection model training.

  1. In Cloud Shell, run the commands below to copy images to your Cloud Storage bucket:
export PROJECT_ID=$(gcloud config get-value core/project)
gsutil mb gs://${PROJECT_ID}
gsutil -m cp gs://cloud-training/gsp897/cosmetic-test-data/*.png \
gs://${PROJECT_ID}/cosmetic-test-data/

Copying will take a few minutes.
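
Before building the import file, you can optionally confirm that the images landed in your bucket; this check is not part of the lab steps:

# Expect a list of .png object paths under cosmetic-test-data/.
gsutil ls gs://${PROJECT_ID}/cosmetic-test-data/ | head -5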

2. Create the CSV import file using the following commands:

gsutil ls gs://${PROJECT_ID}/cosmetic-test-data/*.png > /tmp/demo_cosmetic_images.csv
gsutil cp /tmp/demo_cosmetic_images.csv gs://${PROJECT_ID}/demo_cosmetic_images.csv
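
The import file is simply a newline-separated list of Cloud Storage URIs, one image per line. You can peek at it to confirm the format; the image name in the sample output below is illustrative:

head -3 /tmp/demo_cosmetic_images.csv
# Expected shape of each line (actual names will differ):
# gs://your-project-id/cosmetic-test-data/<image-name>.png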

3. Back in the Visual Inspection AI console, on the Import tab, under Select an import method, select Select an import file from Cloud Storage.

4. In Import File Path, click Browse.

5. Expand the bucket with a name that matches the lab project ID.

6. Select the demo_cosmetic_images.csv import file.

7. Click Select.

8. Click Continue.

A status bar will appear to indicate Import in progress.

The import will take a few minutes to complete. Once the import is completed, you should see the imported images displayed in the Visual Inspection AI user interface as shown below:

Now that you can see the list of imported images, you can browse through them and click any image for a close-up view as you explore your dataset.

Task 3. Provide annotation for defect instances in training images

In this task, you will annotate sample defect instances using a polygon shape. The Visual Inspection AI Cosmetic Inspection solution learns to detect and localize small and subtle defect instances, such as dents, scratches, and foreign materials visible in the training images, by training a dedicated defect instance localization model.

  1. Click Defects on the Visual Inspection console.

Depending on the Cosmetic Inspection problem, you will need to define defect instance types, which are subsequently used to associate the polygon shapes you annotate in sample images with specific defect types.

2. Click Add New Defect Type.

3. For Defect type, enter dent, then select Done and click Create.

4. Click Add New Defect Type.

5. For Defect type, enter scratch, then select Done and click Create.

After defining the defect types, you can browse each imported image and provide detailed annotations for every visible defect instance that falls into one of the predefined defect types.

6. Click an image to open the close-up view in which you can annotate defect instances.

7. Select an image with a defect.

8. Click the Add Simple Polygon icon in the close-up view of the image to start annotating images with defect instances.

9. Locate a concrete defect instance on the image and then provide polygon vertices to annotate the instance.

Note: When locating defects, be sure to close your polygon by returning to the first vertex to complete each polygon-shaped defect instance annotation.

10. Select a defect type from your previously defined defect type list, either dent or scratch, that matches the defect you have just annotated.

11. Click Save.

A fully labelled annotation of an image is shown below, where two defect instances were identified in the image: one dent and one scratch, each with its corresponding polygon-shaped location.

12. Select an image without any defects.

All imported images are unlabelled by default. If an image does not contain any defect instances, it represents a non-defective (normal) image, and you should explicitly confirm in the UI that the image is a clean, non-defective image.

13. Click the drop-down at the top of the UI tab to explicitly set the image label as No defect.

14. Click Confirm to set image label as No defect.

If you were continuing the process to train a model, you would now annotate and label the visible defect instances in all of the remaining images in the dataset, and label all of the defect-free images with No defect. Since you are not proceeding to train the model, you have now completed all of the tasks in the lab.

In general, the more images in the dataset with fully labelled annotations the better; however, you are not required to annotate every single image in the dataset.

This training process takes approximately 24 hours for this sample dataset, so rather than waiting, the remainder of this lab provides an overview of the steps involved in evaluating the trained model and creating the solution artifact.

If you were training your own model, you would click the button to start training at this point, but you should not do that for this lab. In the lab Deploy and Test a Visual Inspection AI Cosmetic Anomaly Detection Solution, you learn how to deploy and use this solution artifact to analyse images.

Note: The remaining sections of this lab are for information only; they are not steps that you should carry out in this lab session.

Overview 1. Evaluating a trained Cosmetic Inspection model

This section demonstrates how to access and interpret the model evaluation user interface for a trained model.

Once training is completed, the Go to the evaluation page button will appear in the right panel. This button opens the Model Evaluation review page.

Visual Inspection AI Cosmetic Inspection reports solution-specific metrics related to the localization accuracy of the model, that is, the accuracy with which the defect instance locations predicted by the model match the ground-truth defect locations.

The detailed model evaluation page shows the IoU, Precision, and Recall evaluation metrics for the trained Cosmetic Inspection model, where Precision and Recall refer to pixel-level Precision and Recall.
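
As a reminder of the standard definitions (not specific to this lab), with TP, FP, and FN counted per pixel at a given confidence threshold:

\text{Precision} = \frac{TP}{TP + FP}, \quad
\text{Recall} = \frac{TP}{TP + FN}, \quad
\text{IoU} = \frac{TP}{TP + FP + FN}

Raising the confidence threshold typically trades Recall for Precision, which is exactly what the slider described below lets you explore.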

The page also shows the confusion matrix calculated based on the model’s classification of each label. This matrix shows how often the model classified each label correctly (in blue), and which labels were most often confused for that label (in gray).

The Confidence threshold slider at the top of the Model Evaluation page can be used to see how the precision/recall evaluation metrics change with the confidence threshold.

Overview 2. Creating a trained Cosmetic Inspection solution artifact

This section provides an overview of the process of creating a trained Cosmetic Inspection solution artifact.

After evaluating the results of the trained model, a trained solution artifact can be created as a Docker container image and exported to a Container Registry location.

The steps to create a trained solution artifact are as follows:

  1. In the Test & Use tab on the Models page for the cosmetic model, click Create Solution Artifact to create a Cosmetic Inspection solution artifact for a trained model.

2. When creating the solution artifact you must specify the Solution artifact name, the Output gcr path (a Container Registry location; see the example path below), and the Solution type.

Note: You can select either a GPU or CPU container here. A GPU container can only be deployed to a container platform that supports the appropriate type of GPU. For the purposes of the next lab you can only test a CPU container, so CPU is selected here.

3. Clicking Create triggers the solution artifact container image creation process.

It usually only takes a couple of minutes to create the solution artifact.
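
For reference, the Output gcr path field expects a Container Registry image path inside your lab project; the artifact name below is illustrative rather than prescribed by the lab:

gcr.io/${PROJECT_ID}/cosmetic-solution-artifact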

At this stage the solution artifact is ready and can be tested in the UI. This lets you check the quality of the trained solution by running a batch prediction over a handful of test images.

Overview 3. Performing batch predictions using a Cosmetic Inspection solution artifact

This section provides an overview of how to make batch predictions using the Cosmetic Inspection solution artifact created in the previous section.

This process can be used to check the quality of the trained solution with a handful of test images prior to deploying it to an on-premises environment.

The steps to create a batch prediction job and check out the prediction results are as follows:

  1. On the Test & Use tab in the Models page, click Create Batch Prediction in the Test your model section to start a cloud batch prediction job using your solution container.

2. You must provide details to start the batch processing task, including a name for the batch prediction job, the Cloud Storage location of the input images, and a Cloud Storage destination path for the results.

3. Click Create to start the batch processing task.

4. Click the Storage link of the completed batch prediction job to show the batch prediction results and details.

On the results preview page, you can use the dropdown to switch between images and view each image's prediction results, and adjust the Confidence Threshold slider to visualize the model's prediction masks at different threshold levels.

The batch prediction data is contained in the JSON output file stored in the Cloud Storage bucket. An example of the output is shown below. You can see the data for the annotated defect that has been detected and classified as a dent as well as information about the source image.


"predictionResult": {
"annotationsGroups": [
{
"annotationSet": {
"name": "projects/624839602356/locations/us-central1/datasets/1923855077538267136/annotationSets/2435347886779662336",
"displayName": "Predicted Masks Regions",
"mask": {},
"createTime": "2021-08-23T12:54:20.313205Z",
"updateTime": "2021-08-23T12:54:20.313205Z"
},
"annotations": [
{
"name": "localAnnotations/1",
"annotationSetId": "2435347886779662336",
"mask": {
"confidenceMask": {
"imageBytes": "iVBORw0KGgoAAAANSUhEUgAAAaAAAAGUCAAAAABWtqk+AAAD8ElEQVR4Xu3czWtcVRQA8MmkSaPSagRBBSUUBYNLdSHFjeDGTfFvEFcu3QhuXAnButAiQqvQVaFSKUIUCS21UPGjalqtLaVCGtMqJqY1mZj5yLyJk2aazJy6dd6F9/ut5p1zdof73n333TulEgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA/TAQA0UyvVpZXnolRknF2ZWVdoNOxzCJmFzKNtRmP4gZknBtrXVLs3IupkjAyepmf9rWrk/ELLk7s92gVvPvkzGdinIMFMZw1wS2vOu5xU+2L0nBdH17BG2oTceKJBR4BMXrJ5OcKxS3QXON9d7AjvGfegNJKG6DLq+0QmRo/Pd3Q4gcnVvNep9C7dnc4kexKm+DMVAczz88Eu8fAyNjY5+FGLn5cKZ6xxhqXI1V5Oj0jc5yz5as9mksIkdv/7wcWpRdizXk6v2rtd77XPWtWJKr+JgsnFfHLqxk3YFWWvOmwjeoVHr62NVKc/ulNbEG7YiBAnq5NPHMntG7y7dWT9eztBrEbT9UmhtzhGzxQMyQhuN/1Ou1Wi2xBTm3uC0vlb65a2R4+LsYz1dq264OPtto1OvTr8d4vxx4dF8M5SutBk3s3XNv+zHQ/PXLN2KKBBy/tFhtNqrLC5MxU1wpPYP2771nqNx+MVtv/BVTxZVSg/bdVx4YaL86l8vzMUUCTnX2QWWNmzFVYOmMoKmnOts4Wsvf9mZIwS//dBaVs0qymwiLbHbt9qL/2kzMFVoit7i5h7aW1ZuJrbVQKr0z1+gMn/YA+jFmydvmOvKmxm8xS86OznftCsi8AQW5P4O+Hx/pWg9sfrX9mxSc7dmykV2JefJ1qfcMSCXmyXXTyMH5x4a6r1vnu6/I2xdh523TGlxSJm8v7nRkN4/EEnJ0Jpz+yJaPxRJyNBOOiLZWT8QS8nPoRjj4kTVMEBJy6M94pqB6MdaQn6k7juXUL8Qa8nPxjoNtzeuxho7+v6h+PP/4zrgbb81H1GQcvR4OTLWt+UaXjMPz/9GfhVjFlj5/btj/wujmMZwuraWpECE3l+PraXuGvfJ5rGJbnycJ5ytZ+IecUv3siyFCjr5eqm1tsNrQrKf5L2DJ6PeBzEceGGgrdc69rLfqC0+ECnrEJ/b/78ieB3cPDw7ealO2OnvitVhAt/43qO29+0d379o5PLTjyqk3Yw4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIrhX21+bxRVNpTBAAAAAElFTkSuQmCC"
              },
              "categoryMask": {
"imageBytes": "iVBORw0KGgoAAAANSUhEUgAAAaAAAAGUCAIAAAD8v2G1AAADpUlEQVR4Xu3bwW2DMBiGYVp1jq7UgXruQF2pk/QQKa0IMcbY4B8/zy0ckXj1mSTTBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPG9zC/A1X29f///+Pnz8f8jVyJwjGVWtxuNuyqBYyCLdbuTuet5nV+AUaXzR0QWHKPI75cpdxkWHMzlp5DOWXCMoiBbplx0Fhw8VdBEuiJwkKJxoQkcrNC4uLyDYxT7O+WVXDgCx0A0bjSOqLDB/kRyJIFjIPbXaBxRGdHOISaUUVhwjEihBmHBMbTiKSeRIQgclGRO4EJwRAW1uiwLDv7kTzlNDEHgYFk6dgIXgsDBU88ap25RCByseMycwEUhcJE8PmmTh+1At/vvhgcicDEspu3OIweLBC4AdYMyfgfXu3TdgASBAy7LEbVrq/PN+RQSLLh+rdYNSLPgOpVTN/MN0gSuR+oGVTiidienbkAOC64j+Wkz3yCHwPVC3aA6gTtfftomdYMtvIM72aa6AZtYcGfaWjfzDTYRuNOoG7T2Nr9Ae1vTBpSx4I5WVjfzDQr4kuFQZXUDylhwxymum/kGZQTuIOoGxxO45orTNqkb7CNwbakbnEjgGlI3OJfAtaJucDo/9K1vT9qAiiy4yvbXzXyDWgSuJnWDrvgnQzX76wbUJXAdMd+gLoHrhbpBdd7B1VR8SlU3aMGCO5+6QSMWXH1bd5zAQSMC19Bq6aQNAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADr3Cwk0jvR5tBeBAAAAAElFTkSuQmCC"
              },
              "annotationSpecColors": [
                {
                  "annotationSpecId": "4043103266936979456",
                  "color": {},
                  "annotationSpecDisplayName": "none"
                },
                {
                  "annotationSpecId": "687921544545959936",
                  "color": {
                    "red": 0.521568656,
                    "green": 0.117647059,
                    "blue": 0.694117665
                  },
                  "annotationSpecDisplayName": "dent"
                }
              ]
            },
            "source": {
              "type": "MACHINE_PRODUCED",
              "sourceModel": "projects/624839602356/locations/us-central1/solutions/6673419854088765440/modules/9138120303283535872/models/424534633623846912"
            }
          }
        ]
      },
      {
        "annotationSet": {
          "name": "projects/624839602356/locations/us-central1/datasets/1923855077538267136/annotationSets/4948356478852399104",
          "displayName": "Predicted Classification Labels",
          "classificationLabel": {},
          "createTime": "2021-08-23T12:54:20.416215Z",
          "updateTime": "2021-08-23T12:54:20.416215Z"
        },
        "annotations": [
          {
            "name": "localAnnotations/0",
            "annotationSpecId": "2516382993258381312",
            "annotationSetId": "4948356478852399104",
            "classificationLabel": {
              "confidenceScore": 0.817249537
            },
            "source": {
              "type": "MACHINE_PRODUCED",
              "sourceModel": "projects/624839602356/locations/us-central1/solutions/6673419854088765440/modules/9138120303283535872/models/424534633623846912"
            }
          }
        ]
      }
    ]
  },
  "predictionLatency": "1.248653110s"
}
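
If you want to pull values out of a result file from Cloud Shell rather than the results preview page, a sketch like the following works; the object path is hypothetical (use the Storage link from your own job), and jq is preinstalled in Cloud Shell:

# Copy a batch prediction result locally; the object path is hypothetical.
gsutil cp gs://${PROJECT_ID}/batch-prediction-output/result.json /tmp/result.json

# Overall prediction latency reported by the solution container:
jq -r '.predictionLatency' /tmp/result.json

# Confidence score of the predicted classification label:
jq '.predictionResult.annotationsGroups[].annotations[]
    | select(.classificationLabel != null)
    | .classificationLabel.confidenceScore' /tmp/result.json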

Source: https://www.cloudskillsboost.google/course_templates/644/labs/462780
