Artificial Intelligence in the Food Industry — Using AI to Enhance Singapore’s Food Security

Muhammad Nazar
Published in DSAID GovTech
10 min read · Mar 19, 2021

Engineers: Tan Kai Wei, Muhammad Nazar, Muthiah Nachiappan, Elyn See (Govtech-intern)
UI/UX Designer: Nicholas Gwee
Project Management: Xu Qun Ying (SFA), Jiang Junhui (SFA), Ng Yong Kiat, Ruth Cheng
Data Collection and Annotation: Zen Lee Cai Neng (SFA-intern), Jordan Chia Chaprong (SFA-intern), Adam William Petrie (SFA-intern), Tan Cai Ling (SFA-intern)

The COVID-19 pandemic has shown that disruptions to global food supply chains can occur at any time, underscoring the importance of a stable food supply for Singapore. In this article, we share how GovTech and the Singapore Food Agency (SFA) have collaborated to enhance our food security by using AI technology to automate a laborious process.

Rotifers are a type of zooplankton: they are the first feed for fish larvae and play a crucial part in large-scale fish hatchery production. Rotifers make an ideal feed as they are the optimal size for fish larvae and, being filter feeders, can be enriched with essential fatty acids and other nutrients. However, growing and sustaining a rotifer culture is laborious. In a typical hatchery production run of 250,000 fish fry, a total of 3–4 billion rotifers is required to feed the larvae during the first 2 weeks of culture.

In any fish hatchery, consistent rotifer production is critical: larvae need to be fed every few hours, so a shortage of rotifers for even a single day can lead to a significant loss of marine fish larvae. Restocking with newly hatched larvae is not immediate, as the brooders spawn only once or twice a month. If the rotifer culture crashes due to contamination, an additional 3 to 4 weeks is needed to inoculate a new batch of rotifers, affecting the fish harvest downstream.

Before we look at how we can make use of technology to ensure that the rotifer culture remains healthy, let’s take a look at how rotifers appear under a microscope.

Rotifers under a microscope

Checking the health of rotifer culture is a tedious process

Every day, hatchery technicians, including Singapore Food Agency (SFA) officers based at the Marine Aquaculture Centre (MAC) on St. John’s Island, have to manually count the number of rotifers in a sample taken from a rotifer tank, assess their swimming activity, and check for contaminants such as ciliates. Based on these observations, operational actions are then taken to keep the rotifer cultures stable. Counting the rotifers and assessing their quality requires well-trained staff who can carry out timely mitigating measures if the culture is declining.

The water sample with rotifers is first placed in a petri dish, and iodine is then added to stain the rotifers so that they can be counted under a microscope.

SFA officer counting rotifers

Each water sample usually contains more than 1,000 rotifers, which makes manual counting a laborious task. On average, one staff member spends 40 minutes daily examining 5 to 10 samples under the microscope for the parameters listed in the following table.

Different types of rotifers in a water sample

This is where Artificial Intelligence (AI) can take over the laborious task of manually counting rotifers. We decided to experiment with object detection models to see if they could identify and count the different types of rotifers.

Determining the appropriate image-capturing device

Together with SFA, we did market research to explore whether existing products could automate the process. None was available, so we decided to develop a solution that could be adopted by SFA and by hatcheries both locally and regionally.

The image-capturing approach had to be cost-effective. In total, SFA considered four options: a laboratory microscope, a basic student microscope, a table magnifier, and a camera phone capturing the image directly. Eventually, the team chose the camera phone, based on user feedback and because the photo resolution of smartphones on the market was sufficient for this project.

Image Capturing Devices Review

Developing an Object Detection AI model to automate rotifer counting

Before conducting model training and inference, we had to design an object detection pipeline for the rotifer counting use case. The figure below illustrates the object detection pipeline.

Object Detection Pipeline

Identifying the different classes of rotifers

Beyond distinguishing healthy from unhealthy rotifers, the AI model is trained to identify 6 different types of rotifers, sub-classified as healthy or unhealthy. The table below illustrates the 6 classes.

Breakdown of the 6 rotifer classes

From the illustration, we can see that a one-egg carrier resembles the number “8”, while a healthy rotifer looks like a thick dot. A dead rotifer looks like a broken eggshell, and clumps are irregular in shape. Ciliates are the toughest to differentiate, as they lack features that visibly distinguish them from healthy rotifers.

Data Collection and Annotation

Data collection is done simply by taking pictures of the water samples in a petri dish using commonly available smartphone models. Throughout this project, we used the same set of smartphone models to ensure consistent image capture and quality. For practical reasons, we did not want users to have to purchase a microscope to use this solution. To ensure that the objects’ features were salient, images were captured under good lighting at a high-resolution setting, with the smartphone placed 10 cm above the petri dish.

So how small does a rotifer appear in an image? In a 1,200 × 1,200-pixel image, an average rotifer measures roughly 15 × 15 to 18 × 18 pixels, meaning a single rotifer occupies only about 0.016% to 0.0225% of the image area! This tiny footprint is the fundamental challenge the object detection model must overcome to recognise features well.
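As a back-of-the-envelope check of those figures (a quick calculation of our own, not part of the original pipeline):

```python
# Fraction of a 1200 x 1200 image covered by one rotifer,
# for the smallest and largest typical rotifer sizes.
image_area = 1200 * 1200  # pixels

for side in (15, 18):
    pct = 100 * (side * side) / image_area
    print(f"{side} x {side} px -> {pct:.4f}% of the image")
# 15 x 15 px -> 0.0156% of the image
# 18 x 18 px -> 0.0225% of the image
```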

Different classes of Rotifers in a water sample

Data annotation is a crucial process that requires accurate labelling, as it affects the outcome of model training. As such, the annotation was carried out by SFA officers, who have the domain knowledge to identify the 6 types of rotifers. Labelling each image takes about 40 minutes: there are 6 classes to annotate, and each image contains at least 100 rotifer objects. SFA assisted with the annotation of all the training and test images. The diagram below shows what an annotated image looks like.

Rotifer Classes labelled on the water sample

When we reviewed the data distribution, we realised that two-egg carriers rarely appear in the collected dataset and so contribute little to the training data; thus, we excluded them from the dataset. For the remaining classes, we augmented the data by adding salt-and-pepper noise (to simulate poor image quality from lower-grade cameras), adjusting the image brightness, and rotating the images. The figure below illustrates the distribution of the 5 classes. Ciliates, Dead and Clumps objects represent only 6% of the total counts.

Data distribution of the 5 classes of rotifers

We tried to handle the class imbalance with further augmentation to artificially increase the training samples of Ciliates, Clumps and Dead rotifers. However, inflating just these classes was not feasible without degrading the quality and integrity of the original training images, which would in turn impair training and inference accuracy. Since our focus was to correctly detect and count rotifers and one-egg carriers, which are not minority classes, we proceeded to train a rotifer detection model on the annotated dataset to establish a baseline and ascertain its performance.
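To illustrate one of the augmentations mentioned above, here is a minimal pure-Python sketch of salt-and-pepper noise on a grayscale image represented as a list of rows. The actual pipeline would more likely use OpenCV or a dedicated augmentation library, so treat the function name and parameters here as illustrative:

```python
import random

def salt_and_pepper(image, amount=0.02, seed=None):
    """Return a copy of a grayscale image (list of lists, values 0-255)
    with roughly a fraction `amount` of pixels flipped to black or white."""
    rng = random.Random(seed)
    noisy = [row[:] for row in image]          # copy so the original is untouched
    h, w = len(image), len(image[0])
    for _ in range(int(amount * h * w)):
        y, x = rng.randrange(h), rng.randrange(w)
        noisy[y][x] = rng.choice((0, 255))     # pepper (0) or salt (255)
    return noisy

# Usage: a flat grey 10 x 10 image with 5% noise
img = [[128] * 10 for _ in range(10)]
noisy = salt_and_pepper(img, amount=0.05, seed=42)
```

Fixing the seed keeps the augmentation reproducible across runs, which matters when regenerating a training set.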

Automating Data Preprocessing

Some of the images in the dataset contained background noise that could affect model performance in training and evaluation. Thus, we decided to minimise the non-uniform background area while ensuring the annotation coordinates remained aligned.

Noise in images

To reduce background noise, we cropped each image to the iodine blob, as seen in the image below. The cropping was done manually for around 60 training images. For model inference, users would also be required to crop each image to the desired region before submitting it. However, this manual process was time-consuming, so we experimented with ways to automate it.

Cropping to the desired region

To automate the laborious cropping process, we experimented with colour spaces and OpenCV tools to auto-crop the desired region. This was achieved by converting the image from RGB (Red, Green and Blue) to HSV (Hue, Saturation and Value) and thresholding the saturation channel. This isolates the iodine patch as a contour, from which we crop the region of interest. The diagram below explains the steps.

Cropping Automation to Desired Region
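In practice this would be done with OpenCV (e.g. `cv2.cvtColor`, a threshold on the S channel, then `cv2.findContours` and `cv2.boundingRect`). The idea can be sketched in pure Python with the standard-library `colorsys` module; the function name and threshold below are illustrative, not the production code:

```python
import colorsys

def crop_box_by_saturation(pixels, sat_threshold=0.3):
    """Return the bounding box (x0, y0, x1, y1) of pixels whose HSV
    saturation exceeds the threshold -- a stand-in for the iodine blob.
    `pixels` is a list of rows of (r, g, b) tuples with values 0-255."""
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            _h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            if s > sat_threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no saturated region found
    return min(xs), min(ys), max(xs) + 1, max(ys) + 1

# Usage: grey background with a brown "iodine" square at x, y in [2, 6)
grey, brown = (200, 200, 200), (150, 90, 30)
img = [[brown if 2 <= x < 6 and 2 <= y < 6 else grey for x in range(8)]
       for y in range(8)]
print(crop_box_by_saturation(img))  # (2, 2, 6, 6)
```

Grey pixels have near-zero saturation while the iodine stain is strongly saturated, which is why thresholding the S channel separates the blob cleanly.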

Model Training & Evaluation

We prepared 47 training images, assigning 10% of the objects of each type to the validation dataset and another 10% to the testing dataset; the remaining 80% went to the training dataset. The object allocation for each set can be seen in the following table.

With the training data, we trained a YOLOv3 detector and benchmarked it on the test set. The table below shows the test results.

[1] AP@0.5 means average precision at 0.5 Intersection over Union (IoU)
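IoU measures the overlap between a predicted box and its ground-truth box: the area of their intersection divided by the area of their union. At AP@0.5, a detection counts as correct only when this ratio is at least 0.5. A minimal computation, with boxes as (x0, y0, x1, y1) corner tuples:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)  # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction covering the top half of a 10 x 10 ground-truth box:
print(iou((0, 0, 10, 10), (0, 0, 10, 5)))  # 0.5 -- right on the AP@0.5 cutoff
```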

From the table, we can see that the model does well at detecting and identifying healthy rotifers, healthy one-egg carriers and Clumps. However, it has a low AP score for ciliates and dead rotifers because of the very few training samples for these classes. Nevertheless, with the decent AP scores for one-egg carriers and rotifers, we decided to roll the model out to end-users for beta testing and, at the same time, collect more data for model refinement. The image below shows the inference outcomes from the trained model, detecting the different rotifer objects.

AI Model Inference Results

Final Product

SFA approved the AI model for internal use, as it identifies the healthy classes, namely rotifers and one-egg carriers, well. To support the workflow, we developed a mobile web application that lets end-users upload images for the AI model to count the rotifers instantly.

Web Application for Rotifers Counting Results

With this solution, the time taken to count rotifers has dropped drastically, from 40 minutes to a mere 1 minute for a batch of 5 to 10 water samples. Only one hatchery technician is needed to prepare the water samples, capture their images, and pass the images to the AI solution for automatic counting. We think this is a great example of how an AI-enabled solution can improve productivity! In addition, AI establishes a consistent and objective way of counting rotifer objects: the counts no longer depend on subjective human judgement that can differ from technician to technician. This has enabled users to improve their Standard Operating Procedures (SOPs) for rotifer culture, and SFA officers now have more time to focus on other important tasks.

Challenges Faced

The use case was particularly challenging because the objects of interest (the different classes of rotifers) were minute, and we could not identify the ground truths easily: we certainly did not have the experience or knowledge to differentiate the rotifer types ourselves.

In addition, we saw the importance of building strong relationships with our stakeholders during the development of the AI model. We had to manage their expectations so that they understood the limitations of an AI model, while at the same time assuring them of its usefulness. We worked in an agile manner with quick iterations and prototyping, starting with only 10 training images, to demonstrate the viability of the object detection model to the users. It was through this iterative approach that our stakeholders understood how the labelling process directly affects model training. Consequently, they mustered their internal resources to increase the number of good-quality labelled images, which helped improve the model significantly.

Future Work

We have made this AI solution available to SFA officers. However, more could be done to improve it, for example:

  • Improve accuracy of unhealthy rotifer types by isolating objects from images and up-sampling them
  • Improve robustness to handle varying image conditions
  • Improve the UI design of the solution

Having seen how this project has helped enhance food security, DSAID looks forward to working with SFA on other use cases involving object detection, counting and classification, such as Artemia (small crustaceans).

The project was awarded the Ministry of Sustainability and the Environment (MSE) Innovation Award in 2020. We would like to express our gratitude to the MSE award committee for recognising the team’s hard work.
