AI in Agriculture — Detecting defects in Apples
A tutorial on training an AI model to count apples and detect damaged ones
Agriculture is one of the oldest and most important fields of work in human history. As the global population rises every year, the demand for agricultural food and produce increases as well. At the same time, rapid urbanization across the world continues to drive massive rural-to-urban migration, coupled with a shift in the nature of work in which most people prefer non-agricultural jobs. As a result, there is enormous pressure on the agricultural sector to meet the food and industrial needs of modern living.
Technology has been the key factor in achieving optimal yield and minimal wastage in agriculture over the past decades, through the use of heavy machinery on farms as well as digital computing. With the advent of Big Data and Artificial Intelligence, the agricultural sector has received an incredible boost in solving many of its challenges and in ensuring maximum quality of produce. For example, Israel is a global leader in exporting fresh agricultural produce despite having a geography not conducive to agriculture, largely because of its extensive use of digital computing and intelligent systems.
Artificial Intelligence has been used to improve agricultural production, storage and analytics since the rise of machine learning and deep learning, helping to effectively:
- obtain accurate health data of crops
- detect pest and plant diseases
- automate harvesting and crop sorting
- obtain real-time data on soil conditions
- aid the implementation of precision irrigation
In this tutorial, we show you how to use Artificial Intelligence, through computer vision and deep learning, to automate apple detection and sorting, and to identify damaged apples.
Please note: You can apply the same process in this tutorial to any fruit or crop, or to related tasks like pest control and disease detection.
Let’s get started by following the 3 steps detailed below.
Step 1 — Getting Training Data
To use machine learning for a computer vision task, we need to provide sufficient sample images (a dataset) of the object(s) we need the AI to detect and identify. In this case, we will be training a deep learning algorithm to detect and count apples, as well as identify damaged apples, in images and videos.
For the purpose of this tutorial, we have collected and prepared 713 pictures containing apples, both healthy and damaged ones. The dataset has also been annotated for object detection training, which is the process we will use to train our AI model to detect and identify apples. You can download the dataset via the link below.
https://github.com/OlafenwaMoses/AppleDetection/releases/download/v1/apple_detection_dataset.zip
It is divided into:
- 563 images for training the AI model
- 150 images for testing the trained AI model
To prepare a dataset for your own custom crop or item, you can follow the tutorial linked below.
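If you unzip the dataset, you will see that the images and their annotation files are organized the way ImageAI expects for custom detection training: separate train and validation folders, each with images and Pascal VOC XML annotations. A sketch of the layout:

```
apple_dataset/
├── train/
│   ├── images/        # 563 training images
│   └── annotations/   # one Pascal VOC XML file per image
└── validation/
    ├── images/        # 150 test images
    └── annotations/   # one Pascal VOC XML file per image
```

Your own custom dataset should follow the same structure.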
Step 2 — Train your AI model
To generate our AI model, we will be training a YOLOv3 model using ImageAI.
ImageAI is a Python library that allows you to use and train AI models for detecting objects in images and videos with just a few lines of code.
Because the training process is compute-intensive, you can use Google Colab to perform the training. See the link below.
https://colab.research.google.com
To train, follow the instructions below:
- Install Tensorflow GPU version via PIP using the code below
pip3 install tensorflow-gpu==1.13.1
- Install Keras
pip3 install keras
- Install OpenCV
pip3 install opencv-python
- Install ImageAI
pip3 install imageai --upgrade
- Once the installation of the above is complete, download the sample dataset provided in Step 1 and unzip it.
unzip apple_detection_dataset.zip
- Download a pre-trained YOLOv3 model which will be used to facilitate the training process, via the link below. Once downloaded, move the file to the same folder as the unzipped dataset
https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/pretrained-yolov3.h5
- Then create a new Python file in the same folder where you unzipped the dataset and write the code below into it.
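The training script can be sketched as follows, using ImageAI's DetectionModelTrainer class. The object names, batch size and number of experiments below match the training log shown later in this section; the dataset folder name (apple_dataset) is assumed to be the name of the unzipped dataset folder:

```python
# A sketch of the 6-line training script, using ImageAI's
# DetectionModelTrainer API. "apple_dataset" is the unzipped dataset
# folder; "pretrained-yolov3.h5" is the pre-trained model downloaded above.
from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="apple_dataset")
trainer.setTrainConfig(object_names_array=["apple", "damaged_apple"], batch_size=8, num_experiments=100, train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()
```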
SIMPLE! The six lines of code above are all you need to initiate training on our apple dataset. Now let's break the code down into its parts:
- In the first line, we import the “DetectionModelTrainer” class from ImageAI
- In the second and third lines, we create an instance of the class and set our model type to YOLOv3
- In the fourth line, we set the path to our apple dataset
- In the fifth line, we specify the following parameters:
— object_names_array: This is an array of the names of all the objects in your dataset.
— batch_size: This is the batch size for the training. The batch size can take values such as 8, 16 and so on.
— num_experiments: This is the number of times we want the training code to iterate on our apple dataset.
— train_from_pretrained_model: This is used to leverage transfer learning using the pretrained YOLOv3 model we downloaded earlier.
Once the training starts,
- ImageAI will generate a detection_config.json file in the apple_dataset/json folder. This JSON file will be used during the detection of objects in images and videos
- ImageAI will create an apple_dataset/models folder, which is where all generated models will be saved
- You will see a training log like the sample below.
Using TensorFlow backend.
Generating anchor boxes for training images and annotation...
Average IOU for 9 anchors: 0.78
Anchor Boxes generated.
Detection configuration saved in apple_dataset/json/detection_config.json
Training on: ['apple', 'damaged_apple']
Training with Batch Size: 8
Number of Experiments: 100
Training with transfer learning from pretrained Model
Epoch 1/100
- 598s - loss: 42.5869 - yolo_layer_1_loss: 5.8333 - yolo_layer_2_loss: 11.9026 - yolo_layer_3_loss: 24.8509 - val_loss: 21.9279 - val_yolo_layer_1_loss: 3.6049 - val_yolo_layer_2_loss: 6.5100 - val_yolo_layer_3_loss: 11.8130
Epoch 2/100
- 560s - loss: 20.3933 - yolo_layer_1_loss: 3.2060 - yolo_layer_2_loss: 5.9345 - yolo_layer_3_loss: 11.2528 - val_loss: 19.1719 - val_yolo_layer_1_loss: 2.9118 - val_yolo_layer_2_loss: 5.6962 - val_yolo_layer_3_loss: 10.5639
Epoch 3/100
- 562s - loss: 17.8154 - yolo_layer_1_loss: 2.8638 - yolo_layer_2_loss: 5.3614 - yolo_layer_3_loss: 9.5903 - val_loss: 18.0630 - val_yolo_layer_1_loss: 2.6872 - val_yolo_layer_2_loss: 5.6299 - val_yolo_layer_3_loss: 9.7458
Epoch 4/100
- 548s - loss: 16.4284 - yolo_layer_1_loss: 2.4571 - yolo_layer_2_loss: 4.9751 - yolo_layer_3_loss: 8.9962 - val_loss: 18.3783 - val_yolo_layer_1_loss: 2.9664 - val_yolo_layer_2_loss: 5.3752 - val_yolo_layer_3_loss: 10.0367
Epoch 5/100
- 534s - loss: 15.4298 - yolo_layer_1_loss: 2.2201 - yolo_layer_2_loss: 4.6647 - yolo_layer_3_loss: 8.5451 - val_loss: 17.9699 - val_yolo_layer_1_loss: 2.4477 - val_yolo_layer_2_loss: 5.2248 - val_yolo_layer_3_loss: 10.2974
Epoch 6/100
Step 3 — Start using your trained model
- Once your training is done, go to the apple_dataset/models folder and pick the model with the lowest loss value in its filename. For the purpose of this tutorial, we have provided a sample trained apple detection model. Download it via the link below.
https://github.com/OlafenwaMoses/AppleDetection/releases/download/v1/detection_model-ex-028--loss-8.723.h5
- Then download the sample image below and save it in the same folder.
- Then create a new Python file and write the code below into it to apply the trained model to detect the apples in the image.
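The detection script can be sketched as follows, using ImageAI's CustomObjectDetection class. The model filename matches the sample model linked above; "apple_image.jpg" is an assumed filename for the sample image, and detection_config.json is the file generated during training:

```python
# A sketch of the detection script, using ImageAI's CustomObjectDetection
# class. "apple_image.jpg" is an assumed name for the sample image saved
# in the same folder as the model and the detection_config.json file.
from imageai.Detection.Custom import CustomObjectDetection

detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("detection_model-ex-028--loss-8.723.h5")
detector.setJsonPath("detection_config.json")
detector.loadModel()

# Run detection and save a copy of the image with boxes drawn on it
detections = detector.detectObjectsFromImage(input_image="apple_image.jpg", output_image_path="apple_detected.jpg")
for detection in detections:
    print(detection["name"], " : ", detection["percentage_probability"], " : ", detection["box_points"])
```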
When you run the above code, you will see the results below.
damaged_apple : 93.93747448921204 : [164, 117, 447, 334]
apple : 93.14679503440857 : [0, 2, 185, 61]
apple : 92.1971321105957 : [197, 3, 365, 73]
apple : 95.81946730613708 : [378, 8, 572, 65]
apple : 97.50433564186096 : [491, 29, 630, 160]
apple : 92.4841821193695 : [33, 12, 243, 189]
apple : 95.06285786628723 : [275, 17, 471, 187]
apple : 93.58476400375366 : [1, 109, 158, 314]
apple : 96.47991061210632 : [458, 147, 621, 315]
apple : 95.83896398544312 : [589, 3, 631, 64]
As you can see, the trained apple detection model is able to detect all the apples in the image, as well as identify the apple with a defect (the one with a spot).
You can use this trained model to:
- count apples
- detect apples with defects
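As a sketch of how such counting could work, the list returned by detectObjectsFromImage can simply be tallied by class name. The detections list below is illustrative sample data shaped like the model's output, not real results:

```python
# Tally detections by class name to count healthy and damaged apples.
# The "detections" list is illustrative sample data shaped like the
# dicts returned by ImageAI's detectObjectsFromImage.
from collections import Counter

detections = [
    {"name": "apple", "percentage_probability": 95.8, "box_points": [378, 8, 572, 65]},
    {"name": "apple", "percentage_probability": 93.1, "box_points": [0, 2, 185, 61]},
    {"name": "damaged_apple", "percentage_probability": 93.9, "box_points": [164, 117, 447, 334]},
]

counts = Counter(d["name"] for d in detections)
total_apples = sum(counts.values())   # every detection, healthy or damaged
damaged = counts["damaged_apple"]     # defective apples only

print("Total apples:", total_apples)     # Total apples: 3
print("Damaged apples:", damaged)        # Damaged apples: 1
```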
As you can observe, we have just created a new AI model that can be used in apple farming, production and packaging. Our sample dataset was prepared for detecting apples and identifying defects. By collecting more image samples, we can make our models perform other tasks, such as:
- detect ripe and unripe apples
- detect apples of different sizes
- detect apples of different types
For more on creating AI models for your own custom detection tasks, visit the tutorial links provided below.