Semantic Segmentation Boosts Kiwifruit-Harvesting Robot Performance

Synced | Published in SyncedReview | Jul 2, 2020

Facing a shortage of seasonal workers and rising labour costs, kiwifruit growers may get some relief from robots equipped with a new AI-powered fruit detection system.

Manual kiwifruit picking is a tedious and repetitive job. It’s also unhealthy — kiwifruit pickers must carry heavy picking bags, which can cause back strain and lead to more serious musculoskeletal problems.

These issues have increased interest in advanced autonomous harvesting robots, which can reduce labour costs while also increasing harvested fruit quality. Accurate and reliable kiwifruit detection is one of the biggest challenges faced by orchard fruit-harvesting robots, whose computer vision systems must deal with dynamic lighting conditions, fruit occlusion and similar obstacles.

A team of researchers from the University of Auckland’s Centre for Automation and Robotic Engineering Science recently introduced a semantic segmentation method to meet these challenges. The system employs two novel image simulation techniques aimed at detecting kiwifruit and is shown to work efficiently under the changing and often harsh lighting conditions found in orchard canopies.

The researchers integrated a preprocessing method to improve system performance under different lighting conditions. It applies histogram equalization (HE) to overexposed and glare-affected images to mitigate the effects of dynamic lighting. Because intensity in glare images changes dynamically across the frame, HE is applied to each sub-image separately; for overexposed images, HE is applied to the entire frame.
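The two modes described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `tile` size, the glare flag, and the NumPy-based equalization routine are all assumptions for the sake of the example.

```python
import numpy as np

def equalize(channel):
    """Histogram-equalize one 8-bit grayscale image (or sub-image)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Remap intensities so the cumulative distribution becomes ~uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-9) * 255)
    return lut.astype(np.uint8)[channel]

def preprocess(img, glare, tile=64):
    """Whole-frame HE for overexposure; per-tile HE for glare.

    `glare` and `tile` are hypothetical parameters: the paper applies HE
    to sub-images for glare and to the full frame for overexposure, but
    does not publish this exact interface.
    """
    if not glare:
        return equalize(img)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = equalize(img[y:y + tile, x:x + tile])
    return out
```

Applying HE per tile lets regions washed out by glare be stretched independently of well-lit regions, which a single global remapping cannot do.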

Examples of (a) a glare image, (b) its blue channel, (c) its green channel, and (d) its red channel
Examples of a calyx occluded by (a) branch, (b) leaf, (c) wire, (d) fruit, (e) post, and (f) support beam
Overall performance of the detection method on the kiwifruit occlusion dataset
Detection method performance on occluded and non-occluded kiwifruit

The performance of the University of Auckland method was evaluated on a 3D real-world kiwifruit image set covering a variety of lighting conditions and fruit occlusion scenarios, using F1 score and processing time as metrics.

The semantic segmentation method alone obtained an F1 score of 0.82 on a typical lighting image set, but struggled under harsh lighting with an F1 score of just 0.13. With the application of the proposed preprocessing method, visual system performance under harsh lighting improved to an F1 score of 0.42. In the case of fruit occlusion, the method was able to detect 87.0 percent of uncovered kiwifruit and 30.0 percent of covered kiwifruit under all lighting conditions.
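For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall over detections. A quick sketch (the counts below are hypothetical, chosen only to produce a score matching the one reported for typical lighting):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.

    tp: correctly detected fruit; fp: spurious detections;
    fn: fruit the detector missed.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With 82 hits, 18 false alarms, and 18 misses,
# precision = recall = 0.82, so F1 = 0.82.
print(round(f1_score(82, 18, 18), 2))  # → 0.82
```

Because F1 penalizes both false alarms and misses, it is a stricter summary of detector quality than raw detection rate alone.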

The paper Kiwifruit Detection in Challenging Conditions is on arXiv.

Author: Xuehan Wang | Editor: Michael Sarazen & Fangyu Cai

