Food Waste Deceleration with AI Acceleration

Published in OpenVINO™ toolkit · 5 min read · Dec 12, 2022

Author: Anisha Udayakumar

Did you know that about $230 billion worth of food is wasted every year, and a whopping fifth of that comes from edible produce? We all experience some food waste in our own homes, but the largest share happens at the grocery store, in the fresh food and produce section.

That amount of food waste has been a concern of mine for a long time! Before joining Intel, I was an innovation consultant, working with multiple global retail customers to improve operations and customer satisfaction. That’s when I first noticed how much fresh food was being wasted in stores. Unlike prepackaged foods, fresh produce does not come with an expiration date, which makes it much harder to predict when it’s about to go bad.

Making sustainability mainstream with technologies like AI has long been a dream of mine, so I knew there had to be a better way for retailers to address this problem and reduce fresh-produce waste. I built a computer vision AI model that could determine the freshness of produce, like a tomato or a banana. But making the model work in real time was a challenge, and I realized that improving its performance would be critical for deployment and, ultimately, for scaling.

In this post, I will show you how I optimized the model and accelerated AI inference time using the Intel® Distribution of OpenVINO™ Toolkit.

Building the Model

I used cameras to monitor grocery store shelves and scanned their images with object detection and recognition techniques, so the AI model could accurately recognize each individual item in the produce section.

With the labeled images, the object detection and recognition algorithm was able to determine whether the produce was still fresh, damaged, or about to spoil. With this information readily available, we can set up automated alerts that notify store managers and retailers if produce needs to be replaced or prices need to be marked down — reducing the amount of produce that needs to be discarded.
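To make the alerting idea concrete, here is a minimal sketch of that logic, assuming the OD/OR model returns a class label and a confidence score per detected item; the label names, confidence threshold, and alert hook are hypothetical and would depend on how your model and store systems are set up.

# Minimal sketch of shelf-alert logic; labels, threshold, and the alert hook are hypothetical.
SPOILAGE_LABELS = {"damaged", "about_to_spoil"}

def notify_manager(flagged_items):
    # Placeholder alert hook: in practice this could be an e-mail,
    # a dashboard update, or a markdown-pricing trigger.
    print(f"Replace or mark down {len(flagged_items)} item(s):", flagged_items)

def check_shelf(detections, min_confidence=0.6):
    # detections: list of (label, confidence) pairs from the OD/OR model.
    flagged = [(label, conf) for label, conf in detections
               if label in SPOILAGE_LABELS and conf >= min_confidence]
    if flagged:
        notify_manager(flagged)
    return flagged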

OpenVINO™ to the Rescue

My freshness recognition model can analyze images of single items of fresh produce, like the one shown above. To be useful in real-world scenarios, it had to be able to distinguish and label every piece of produce on a large shelf quite quickly.

To build the model, I used SSDLite MobileNetV2 from Open Model Zoo, closely following the object detection and recognition (OD/OR) procedure described in this Jupyter notebook. While I used the notebook for food waste, it is not limited to that use case; developers can easily follow the detailed source code to apply it to their own OD/OR scenarios.

As I mentioned, while my initial OD/OR model was accurate, it was too slow at detecting produce in the shelf images. At first I tried to fine-tune the model, and even switched to a different model, but that didn’t improve performance much. That’s when I turned to OpenVINO and was able to reduce the inference time from 10 seconds to 1.5 seconds.
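If you want to measure that kind of before-and-after difference on your own model, a simple wall-clock timing loop is enough for a rough comparison (OpenVINO also ships a benchmark_app tool for more rigorous measurements). The sketch below is framework-agnostic: pass it any zero-argument callable that performs one inference.

import time

def average_latency(run_once, n_runs=20):
    # run_once: a zero-argument callable that performs a single inference,
    # e.g. lambda: compiled_model([input_img]) once the model is compiled.
    run_once()  # warm-up run, excluded from the measurement
    start = time.perf_counter()
    for _ in range(n_runs):
        run_once()
    return (time.perf_counter() - start) / n_runs  # seconds per inference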

Another benefit of using OpenVINO was that both of my models became much more portable. My freshness model was built on TensorFlow, but for both it and the OD/OR model I could just as easily have used other frameworks and formats such as PyTorch, Caffe, or ONNX, and run the models on a variety of hardware.
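As a rough sketch of what that portability looks like in practice (file paths are illustrative), a model exported to ONNX can be read by the OpenVINO runtime directly, while a TensorFlow model is first converted to OpenVINO’s IR format (an .xml/.bin pair) with the Model Optimizer and then loaded the same way:

from openvino.runtime import Core

core = Core()

# A PyTorch or TensorFlow model exported to ONNX can be read directly by the runtime.
onnx_model = core.read_model(model="freshness_model.onnx")

# A TensorFlow model converted to OpenVINO IR with the Model Optimizer
# is loaded exactly the same way.
ir_model = core.read_model(model="freshness_model.xml")

# From here on, compilation and inference are framework-agnostic.
compiled_model = core.compile_model(model=ir_model, device_name="CPU")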

Here are two tips if you’re trying this on your own:

  1. First, as the code snippet below shows, OpenVINO needs only a few lines of code to initialize the runtime, read and compile your model for the desired hardware, grab a handle to its output layer, and then pass it an image to get the result!
  2. The other thing you really want to keep in mind is the "device_name" argument. OpenVINO gives you the flexibility to select, among many possible devices (CPUs, GPUs, Vision Processing Units, or even FPGAs), the one that best matches your design targets, be they throughput or latency (see the short device-selection sketch after the code snippet below).
from openvino.runtime import Core

input_img = load_img()  # placeholder helper: assumed to return an image preprocessed to the model's input shape
core = Core()  # initialize the OpenVINO runtime
model = core.read_model(model="model.xml")  # read the model (IR file; path is illustrative)
compiled_model = core.compile_model(model=model, device_name="CPU")  # compile the model for the target device
output_layer = compiled_model.output(0)  # get a handle to the model's output layer
results = compiled_model([input_img])[output_layer]  # run inference on the image and extract the result
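For example, here is a minimal device-selection sketch (the model path is illustrative): available_devices lists the devices OpenVINO can see on your machine, and passing "AUTO" as the device name lets the runtime pick one for you.

from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on your machine

model = core.read_model(model="model.xml")

# Compile the same model for a different target just by changing device_name.
compiled_on_gpu = core.compile_model(model=model, device_name="GPU")

# "AUTO" lets OpenVINO choose the best available device for you.
compiled_auto = core.compile_model(model=model, device_name="AUTO")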

What’s Next?

As you can see, building AI applications doesn’t have to be hard. The real trick is getting them to work in real time. OpenVINO improved my AI model’s performance and made my food waste reduction application a reality.

To summarize the two most important lessons from this project:

  1. Because OpenVINO works with the most common frameworks, you can optimize your existing model with just a few lines of code.
  2. The approach I walked through in this post goes beyond reducing food waste; it can be applied to many similar problems that AI developers face every day.

To learn more about how you can start solving real-world problems with AI and OpenVINO, check out Intel AI Dev Team Adventures for even more walkthroughs and tutorials, and visit Open Model Zoo to take advantage of more pre-trained and optimized models.

I can’t wait to see what other problems you solve with OpenVINO!

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
