Seeing The World Project

AI For Mankind
Dec 9, 2019



AI For Mankind’s Seeing the World project aims to make the physical world more accessible and navigable for people who are blind or have low vision.

If autonomous vehicles can drive themselves, we strongly believe we can enable people who are blind or have low vision to navigate the world with ease.

Imagine a world where people who are blind or have low vision do not need a white cane to walk around, and where they run errands like grocery shopping just like everyone else. Isn’t that wonderful? Is it possible? I believe so. It will certainly not be easy, but it is entirely possible.

With the significant advances in computer vision driven by breakthroughs in deep learning, it is exciting to see companies like Microsoft and Google create mobile applications such as Microsoft Seeing AI and Google Lookout to help people who are blind or have low vision gain information about the world around them. Each app can read text, scan product barcodes, identify currency, and more. In Microsoft Seeing AI, each object recognition capability is called a channel. You can learn more about these exciting features via the links below.

Microsoft Seeing AI

Google Lookout

We want to leverage the power of the community to crowdsource images and scale up the creation of new channels in Microsoft Seeing AI, Google Lookout, or any other mobile application that provides object recognition for people who are blind or have low vision. That is why we launched the Seeing the World project.

See the demo of Microsoft Seeing AI to learn about the available channels:

Anne Taylor from the Microsoft accessibility team shows CEO Satya Nadella how the Seeing AI app enables her to turn the visual world into an audible experience

As a baby step, we kicked off the Seeing the World project last year by focusing on crowdsourcing and curating fruit and vegetable images. We want to help create a fruit/vegetable channel for the Microsoft Seeing AI mobile app that can identify fruits and vegetables in grocery stores and farmers’ markets around the world. Grocery shopping is such an important chore in our daily lives.

Data (images) is the main building block in creating an object recognizer. We need enough curated training images to build a model that can recognize objects reliably.
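To make this concrete, here is a minimal sketch of how curated images could be turned into a recognizer. It assumes the images are organized into one folder per class (the paths and folder names below, such as data/train/okra, are hypothetical) and fine-tunes a pretrained ResNet with PyTorch/torchvision. This is an illustration only, not the pipeline used by Seeing AI or Lookout.

```python
# A minimal sketch (not the project's actual pipeline): fine-tune a
# pretrained image classifier on curated, folder-organized images.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Curated images organized one folder per class (hypothetical paths),
# e.g. data/train/okra/*.jpg, data/train/bittermelon/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a ResNet pretrained on ImageNet and replace the final
# layer so it predicts our fruit/vegetable classes instead.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few passes are enough for a rough baseline
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The more curated images each class has, and the more varied their sources, the better such a model generalizes, which is exactly why crowdsourcing matters.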

Crowdsourcing from around the world is important if the recognizer is intended to be used everywhere, because objects vary by region. Incorporating images from different parts of the world makes the recognizer more robust to these regional variations. In the case of fruits and vegetables, for example, Southeast Asia has exotic fruits like durian and mangosteen.

All the crowdsourced fruit/vegetable images will remain open data, available to everyone in the world.

Here are some examples of curated fruit/vegetable images.

A pile of okras at the Hayward Farmers Market.
A pile of bittermelons at the Hayward Farmers Market.

If you go grocery shopping this weekend, please help us by taking a few fruit and vegetable pictures with your phone and sharing them with us (ai.for.mankind@gmail.com) via Google Photos.

You can also scan this QR code to share fruit/vegetable pics with us.

Note: The Seeing the World project is proud to have been selected as one of the Microsoft AI for Accessibility grantees.


AI For Mankind

AI for Mankind’s mission is to use data and AI to create positive impact for all mankind. Join us: https://www.meetup.com/AI-for-Mankind/