Air Cognizer: Predicting Air Quality with TensorFlow Lite
Large cities like Delhi can suffer from air pollution, especially in winter, and we’ve seen headlines like “Cold Morning In Delhi, Air Quality Continues To Be Severe” appear on the front page of newspapers. Poor air quality in the winter months can lead to smog, which can restrict outdoor activities and cause health concerns.
As engineering students, we strive to use technology for social good. A crucial first step in solving the air pollution problem is to enable citizens to gauge the quality of air they breathe.
This could be done with pollution sensors, although they can be expensive to deploy at scale. Our goal was to design a reliable and inexpensive air quality estimation solution, accessible to everyone with a smartphone.
Research like Particle Pollution Estimation Based on Image Analysis has shown that machine learning can be effectively used to estimate air quality using camera images, although previous work was usually limited to images from a few static cameras.
Our goal was to develop an Android-based mobile application to provide local, real-time air quality estimation using smartphone camera images. The Celestini Project India from the Marconi Society inspired us, and provided an internship opportunity at IIT Delhi and the resources to develop our project.
We decided to focus on predicting air quality in terms of “PM 2.5”, or particles that have a diameter less than 2.5 micrometers. To visualize results, we predict the PM 2.5 values and map them on a color gradient Air Quality Index (AQI) scale. This is a standard scale set by each country’s government. Warnings are then displayed according to the AQI values.
Using TensorFlow Lite to Predict Air Quality
The application we developed collects images from the camera on the mobile phone, and processes them on-device using TensorFlow Lite to provide an AQI estimate. Before developing our app, we trained an AQI estimation model in the cloud. The model is automatically downloaded using Firebase ML Kit in the Android application.
We describe the system in detail below.
- The Mobile Application. This is used to capture images and predict AQI levels. The application processes images on-device.
- TensorFlow Lite. This powers on-device inference for the trained machine learning model, with a small binary size (which is important for download speed when bandwidth is limited).
- Firebase. Parameters extracted from the images (described below) are sent to Firebase. Whenever a new user uses the app, a unique ID is created for them. This can be used later to customize the machine-learning model for different geo-locations.
- Amazon EC2. We train our models here, using these parameters and the PM values from the geo-location.
- ML Kit. Trained models are hosted on ML Kit, automatically downloaded on to the device, and then run with TensorFlow Lite.
Here are more details about how we analyze images to predict AQI. We train two image-based machine learning models to build the application: the first model predicts the AQI using the features of the user-uploaded photo, and the second model filters out images where no skyline is present.
We predict the AQI from user photos using the following features. These features are extracted by traditional image processing techniques and combined by a linear model. Our second model (discussed later) works with images directly, as is common in deep learning.
Transmission: This describes scene attenuation and the amount of light entering the phone camera after being reflected by air particles. It is described by the haze imaging equation:

I(x) = J(x)·t(x) + A·(1 − t(x))

where I is the observed hazy image, t is the transmission from the scene to the camera, J is the scene radiance, and A is the airlight color vector.
Transmission for a single hazy image was found using the concept of the dark channel, which assumes that in outdoor images some pixels have zero or very low intensity in at least one color channel. For a haze-free image J, the dark channel is:

J_dark(x) = min_{c ∈ {r,g,b}} ( min_{y ∈ Ω(x)} J_c(y) )

where J_c is one of the color channels of J, and Ω(x) is a local patch centered at x. The airlight can be estimated from the sky or the brightest region, so the transmission can be obtained by:

t(x) = 1 − min_{y ∈ Ω(x)} ( min_c I_c(y)/A_c )

where I_c(y)/A_c is the hazy image normalized by the airlight A, and the second term on the right is the dark channel of the normalized hazy image.
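The dark channel and transmission computation above can be sketched in a few lines of NumPy. This is a simplified illustration, not the app's exact implementation: the patch size, the top-0.1% airlight estimate, and the ω haze-retention factor (from the dark channel prior literature) are our own illustrative choices.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over color channels, then a local minimum filter."""
    min_rgb = img.min(axis=2)                       # min over the three color channels
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def estimate_airlight(img, dark):
    """Airlight A: mean color of the brightest 0.1% of pixels in the dark channel."""
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    return img[idx].mean(axis=0)

def transmission(img, A, patch=15, omega=0.95):
    """t(x) = 1 - omega * dark_channel(I / A); omega keeps a little haze for realism."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

For a completely haze-free scene the dark channel is near zero, so the estimated transmission approaches one; for a uniformly bright (hazy) scene it drops toward zero.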
Blue color of the Sky: This feature is similar to how we perceive a polluted day: if the sky is gray, we perceive it as polluted. The blue component was estimated by splitting the image into its RGB channels.
Gradient of Sky: The sky could also appear gray due to cloud cover, so we incorporated this feature to account for that possibility. The gradient was calculated by masking the sky region and then computing the Laplacian over it.
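These two features can be sketched as follows. The exact blueness metric and sky mask are assumptions for illustration; here blueness is the mean share of blue in each pixel's RGB sum, and the Laplacian uses a simple 5-point stencil.

```python
import numpy as np

def blueness(img):
    """Mean fraction of blue in each pixel's RGB sum (illustrative metric)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b + 1e-8          # avoid division by zero on black pixels
    return float((b / total).mean())

def sky_gradient(gray, sky_mask):
    """Mean absolute Laplacian over the sky region (5-point stencil).

    np.roll wraps at the borders, which is acceptable for this sketch since
    the sky mask normally covers interior regions."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return float(np.abs(lap[sky_mask]).mean())
```

A clear blue sky gives a high blueness score and, being smooth, a near-zero gradient; clouds raise the gradient even when the sky looks gray.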
Entropy, RMS Contrast: These features tell us about the amount of detail in an image; on a polluted day, an image loses detail. RMS contrast is defined as the standard deviation of the image pixel intensities:

C_RMS = √( (1/(M·N)) Σ_i Σ_j (I_ij − avg(I))² )

where I_ij is the intensity at pixel (i, j) of an image of size M by N, and avg(I) is the average intensity of all pixels. Contrast therefore has an inverse relation with PM 2.5. Entropy was estimated with:

E = − Σ_i p_i · log(p_i)

where p_i is the probability that a pixel's intensity equals i, summed up to the maximum intensity M of the image. As the PM concentration increases, the image increasingly loses detail and its entropy decreases, so entropy also follows an inverse relation with PM 2.5.
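Both detail features follow directly from the definitions above. A minimal sketch for 8-bit grayscale images (base-2 logarithm assumed for the entropy, giving bits):

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of the pixel intensities."""
    return float(gray.std())

def entropy(gray_uint8):
    """Shannon entropy of the intensity histogram, in bits."""
    hist = np.bincount(gray_uint8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()             # probability of each intensity level
    p = p[p > 0]                      # skip empty bins (log of 0 is undefined)
    return float(-(p * np.log2(p)).sum())
```

A perfectly uniform image has zero contrast and zero entropy; a hazy photo sits between that and a crisp, detail-rich scene.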
Humidity: Through our research we concluded that pollution levels rise on humid days, as PM 2.5 particles absorb moisture and lower visibility.
When the app was first released, people were curious whether they could use it to predict AQI inside their houses as well as outside. Using a binary classifier, our model predicts whether an image contains at least 50% skyline, and accepts only skyline images.
We used transfer learning to create this classifier, re-training a model on our labeled dataset using TensorFlow Hub. The dataset consisted of two classes: 500 images with at least 50% skyline, and 540 images with no skyline (or less than 50% skyline), including rooms, offices, gardens, and other outdoor scenes. We used the MobileNet 0.50 architecture, and achieved an accuracy of 95% when testing on 100 unseen samples. The TensorFlow for Poets codelab is helpful for image retraining.
The confusion matrix for the retrained Model is as follows:
Custom models for each user
We realized that each user needs a custom ML model, because each smartphone has different camera specifications. To train such a model, we collect images from each user.
We decided to combine the results from two models: an image-based model and a temporal model that uses meteorological parameters. The temporal model achieves higher inference accuracy and can give the user results while the image-based model is still being trained. The image-based model, in turn, lets us customize predictions for the specific user, improving inference accuracy by reducing the estimation error.
To create a small training dataset for each user, 7 images are taken, from which features are extracted and used for training. The images must be taken on 7 consecutive days, with the sky covering at least half of each frame and no direct light source, for instance the sun, in view. After the features are extracted, they are used to train a regression model. The model is linear, as all the image features are roughly linearly related to the PM 2.5 values.
After the training dataset and the model are created, a second set of images is collected for testing. Once this dataset has image features from 7 different days, testing starts. If the RMSE over those 7 days is less than 5, the model is frozen and uploaded to ML Kit, from where the application downloads it. If the RMSE is not less than 5, more training data is collected.
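The per-user training loop can be sketched with an ordinary least-squares fit and the RMSE gate described above. The feature matrix shape and the threshold of 5 come from the text; everything else (function name, intercept handling) is an illustrative assumption.

```python
import numpy as np

def train_daily_model(X, y, rmse_threshold=5.0):
    """Fit a linear model on per-day image features and apply the RMSE gate.

    X: (n_days, n_features) matrix of extracted image features.
    y: (n_days,) reference PM 2.5 values.
    Returns (weights, rmse, accepted): accepted is True when the model
    is good enough to freeze and upload to ML Kit."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)      # least-squares fit
    pred = Xb @ w
    rmse = float(np.sqrt(((pred - y) ** 2).mean()))
    return w, rmse, rmse < rmse_threshold
```

If `accepted` is False, the app would simply keep collecting daily images and refit until the gate passes.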
We also use a temporal model that predicts AQI from historical AQI data for the nearest location; it supplements the image-based model to improve inference accuracy. We collected meteorological datasets for Delhi from a government website, covering 2015 to 2017, and performed Ridge Regression with LASSO optimization to select the key parameters affecting PM 2.5 levels. The key parameters selected were the previous hour's PM 2.5 concentration, the concentrations of gases such as NO2, SO2 and O3, and the dew point. The data was then split for training and testing: we trained on data from Jan 2015 to Jan 2017, and tested on data from Jan 2017 to June 2017. We achieved an accuracy of 90% on our dataset.
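The ridge step has a simple closed form, sketched below in NumPy (in practice scikit-learn's `Lasso` and `Ridge` would be the natural tools; here the LASSO feature-selection step is assumed to have already chosen the columns of X, and the intercept is left unpenalized by centering the data):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (Xc'Xc + alpha*I)^-1 Xc'yc.

    X: (n_samples, n_features) selected meteorological features.
    y: (n_samples,) PM 2.5 targets. Returns (weights, intercept)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean                 # center so intercept is unpenalized
    n_feat = X.shape[1]
    w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(n_feat), Xc.T @ yc)
    b = y_mean - x_mean @ w
    return w, b
```

The regularization strength `alpha` trades off fit against weight shrinkage; with noisy hourly pollution data, a moderate `alpha` keeps the model from overfitting any single season.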
This codelab helped us with TFLite on Android. The next challenge was hosting the adaptive image-based model for each user. Firebase ML Kit offered an interesting solution: it allows custom, adaptive ML models to be hosted both in the cloud and on-device. We followed this documentation.
The road ahead
We intend to make the following improvements to the application in the future:
- Generate results on photos taken at night.
- Extend reachability to other major cities.
- Make the model robust in various weather conditions.
We started this project with the aim of spreading awareness about the toxic levels of pollution. We hope that over time we will all be motivated to take steps to curb activities that degrade air quality.
Lastly, we would like to thank Dr. Aakanksha Chowdhery (Google) and Prof. Brejesh Lall (IIT Delhi) for mentoring us throughout our journey. We acknowledge the financial and learning support provided by the Marconi Society via the Celestini Program India, which gave us a wonderful platform to prototype this application. We could not have completed this project without their mentoring and support.