Mask On — A Push for Social Media Change

Rudy Wang
Published in The Startup
Sep 23, 2020 · 7 min read

Tell me a more dangerous concoction than college kids, parties, and COVID-19. Ok, maybe this one.

We can continue to debate endlessly over what the worst part of 2020 is, but a single fact remains unchanged: as grade schools and universities continue to reopen in the U.S., younger people are testing positive for coronavirus in larger numbers than before.

The saddest part about it? We all saw this coming months before schools announced they would reopen, yet so many of us leaned on the “kids will be kids” adage and never took the extra step of finding other ways to help younger people protect themselves.

The question to ask now is: before schools reopened, could we have done more to instill socially conscious habits in young people, starting with wearing masks in public?

The Initiative

I believe the simple answer is a resounding YES.

If social media can build algorithms to recommend content to users and effectively form subtle habits in daily life, I see an opportunity to use social media for the greater good rather than simply push out the next viral dance — by promoting the wearing of masks.

The goal of my project was to build a mask detection classifier that can be integrated with Instagram to encourage mask-wearing habits and trends.

I will break down my process of collecting the datasets and creating a neural network to classify the images. Later on, I will show a demo of a potential integration idea with Instagram using my model classifier.

All code used in this project can be found on my GitHub.

The Datasets

To train a neural network to recognize differences between a face wearing a mask versus not wearing one, I needed two different sets of images for my deep learning model.

Dataset Quantity

For images with no masks and only faces, I used the Labeled Faces in the Wild (LFW) dataset.

For images with masks on faces, I used both the Real World Masked Face (RMF) and Kaggle Face Mask Detection datasets. Since the RMF images were heavily skewed toward Asian faces, I supplemented them with the Kaggle images, which capture a wider range of ethnicities.

For the LFW and Kaggle Face Mask Detection datasets, I used a PyTorch implementation of MTCNN, a neural network trained to detect faces, to crop the faces out of each larger image and save them as separate photos, so my model would train on faces alone.
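For anyone curious, the cropping step might look roughly like the sketch below. It assumes the facenet-pytorch implementation of MTCNN; the folder names, image size, and margin are placeholders rather than my exact settings.

    # A minimal sketch of the face-cropping step, assuming the facenet-pytorch
    # implementation of MTCNN. Folder names, image size, and margin are placeholders.
    from pathlib import Path
    from PIL import Image
    from facenet_pytorch import MTCNN

    mtcnn = MTCNN(keep_all=True, image_size=160, margin=20)  # detect every face in a photo

    raw_dir = Path("raw_images")      # original LFW / Kaggle photos
    out_dir = Path("cropped_faces")   # one saved file per detected face
    out_dir.mkdir(exist_ok=True)

    for img_path in raw_dir.glob("*.jpg"):
        img = Image.open(img_path).convert("RGB")
        # Detect and crop faces; multiple faces in one photo are saved
        # with an index suffix automatically.
        mtcnn(img, save_path=str(out_dir / f"{img_path.stem}.jpg"))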

Convolutional Neural Network

Using Cyberduck, I uploaded my images to a Google Cloud instance so I could train my neural network faster and more efficiently (training on images can take far longer than you might expect).

Our training, validation, and testing splits will be organized manually into three separate folders. Within each folder, we will assign images to their respective class subfolders, “Face” (no masks) vs. “Mask”. Arranging the images in this hierarchy lets us take full advantage of a Keras method, flow_from_directory, when we load the images into our Jupyter Notebook below.
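To make the layout concrete, the folder hierarchy looks roughly like this (the root folder name is arbitrary):

    data/
    ├── train/
    │   ├── Face/        (images with no masks)
    │   └── Mask/        (images with masks)
    ├── validation/
    │   ├── Face/
    │   └── Mask/
    └── test/
        ├── Face/
        └── Mask/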

Split Images into Defined Folders

Within Keras’ image data preprocessing utilities, I took advantage of the ImageDataGenerator class to manipulate the images and spawn augmented variants of each one with varying properties. More images means more data for my neural network to train and learn on, which should yield more accurate classification results.

Training/Validation ImageDataGenerator Classes

Using the defined generators in conjunction with the flow_from_directory method, I could pass the images from each directory into the ImageDataGenerator pipeline, which automatically assigns every image to its class based on its subfolder. As seen in the screenshot below, I seamlessly imported my training dataset of 10,941 images belonging to 2 classes (praise Keras).
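For reference, a minimal sketch of the generator setup and the flow_from_directory call is below. The augmentation values and target size are illustrative assumptions, not my exact settings; see the Notebook on GitHub for the real ones.

    # A minimal sketch of the data pipeline, assuming tf.keras' ImageDataGenerator.
    # Augmentation values and target size are assumptions.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train_datagen = ImageDataGenerator(
        rescale=1.0 / 255,         # scale pixel values to [0, 1]
        rotation_range=20,         # random rotations
        width_shift_range=0.1,     # random horizontal shifts
        height_shift_range=0.1,    # random vertical shifts
        zoom_range=0.1,            # random zooms
        horizontal_flip=True,      # mirror images left/right
    )
    val_datagen = ImageDataGenerator(rescale=1.0 / 255)   # no augmentation for validation

    train_generator = train_datagen.flow_from_directory(
        "data/train",              # Face/ and Mask/ subfolders define the two classes
        target_size=(128, 128),
        batch_size=32,
        class_mode="binary",
    )
    val_generator = val_datagen.flow_from_directory(
        "data/validation",
        target_size=(128, 128),
        batch_size=32,
        class_mode="binary",
    )
    # Prints e.g. "Found 10941 images belonging to 2 classes."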

After getting my images in order, I constructed a convolutional neural network (CNN) model with Keras and used the training and validation splits to identify the epoch at its highest validation accuracy score.

To construct the best neural network model, I used a combination of A/B testing and domain knowledge to make improvements, with validation accuracy as my guidepost. The graph below compares the training and validation accuracy scores at each epoch. After the 1st epoch, validation accuracy remained higher than training accuracy, largely because dropout is applied only during training; the validation set also contains far fewer images.

I ultimately decided to save the weights from Epoch 8, where the validation score fell slightly below the training score but still remained very high.

Accuracy vs Epoch Graph

My final neural network model, sketched in code just below this list, consisted of:

  • Convolutions: 4
  • Optimizer: Adam
  • Dropout: 0.85
  • Class Weights: 4 to 1 (Mask to Face)
  • Epochs: 8
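A minimal Keras sketch with those hyperparameters follows. The filter counts, kernel sizes, dense layer width, and input size are assumptions; the exact architecture lives in Neural_Net.ipynb on my GitHub. Since flow_from_directory assigns class indices alphabetically, Face maps to 0 and Mask to 1, which is why the 4-to-1 weighting appears as {0: 1.0, 1: 4.0}.

    # A rough sketch of the CNN, assuming tf.keras. Layer sizes are assumptions;
    # train_generator / val_generator come from the flow_from_directory sketch above.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
        MaxPooling2D(2, 2),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D(2, 2),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D(2, 2),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D(2, 2),
        Flatten(),
        Dropout(0.85),                      # heavy dropout to fight overfitting
        Dense(256, activation="relu"),
        Dense(1, activation="sigmoid"),     # binary output: Face (0) vs Mask (1)
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    history = model.fit(
        train_generator,
        validation_data=val_generator,
        epochs=8,
        class_weight={0: 1.0, 1: 4.0},      # 4-to-1 weighting of Mask over Face
    )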

To visualize my model, please see the image below as well as my uploaded Notebook on GitHub (Neural_Net.ipynb).

Neural Network Visualized (creds to http://alexlenail.me/NN-SVG/LeNet.html)

With a validation accuracy score of 97%, I felt confident running my test dataset through the model to make strong predictions.

Confusion Matrix with Scores

As seen in the image above, I had darn great results: many more true positives and true negatives than their bad halves, the false positives and false negatives. Combined with my high precision, F1, and test accuracy scores, I was sitting pretty on a mask detection model that worked great!
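If you want to reproduce this kind of scoring, a rough sketch using scikit-learn’s metrics is below; the 0.5 decision threshold and the reuse of the validation generator settings for the test split are assumptions.

    # A rough sketch of scoring the test split, assuming scikit-learn's metrics.
    # val_datagen and model come from the earlier sketches.
    from sklearn.metrics import confusion_matrix, precision_score, f1_score, accuracy_score

    test_generator = val_datagen.flow_from_directory(
        "data/test",
        target_size=(128, 128),
        batch_size=32,
        class_mode="binary",
        shuffle=False,              # keep predictions aligned with the true labels
    )

    probs = model.predict(test_generator)
    preds = (probs.ravel() > 0.5).astype(int)   # 0.5 cutoff is an assumption
    truth = test_generator.classes

    print(confusion_matrix(truth, preds))
    print("Precision:", precision_score(truth, preds))
    print("F1:", f1_score(truth, preds))
    print("Accuracy:", accuracy_score(truth, preds))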

Mask Detection Function

For my final steps, I created a function that lets a user point the application at any directory of images; the model with the optimal weights then predicts a label and draws the result on top of each original image.

I used a function, draw_image_with_boxes (thanks to Jason Brownlee at Machine Learning Mastery), that identifies faces within a picture and draws a rectangular box over each one. From there, I added my own code inside the function to have my model predict the labels for each image.

Predicts Labels and Writes the Text
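The adapted drawing step looks roughly like the sketch below. The box format assumes facenet-pytorch’s MTCNN.detect output ([x1, y1, x2, y2]); the exact code is in my GitHub repo.

    # A rough sketch of the adapted drawing function. Box coordinates are assumed
    # to be [x1, y1, x2, y2] as returned by facenet-pytorch's MTCNN.detect.
    from matplotlib import pyplot
    from matplotlib.patches import Rectangle

    def draw_image_with_boxes(filename, boxes, labels):
        """Draw a red box around each detected face and write its predicted label."""
        data = pyplot.imread(filename)
        pyplot.imshow(data)
        ax = pyplot.gca()
        for (x1, y1, x2, y2), label in zip(boxes, labels):
            rect = Rectangle((x1, y1), x2 - x1, y2 - y1, fill=False, color="red")
            ax.add_patch(rect)
            ax.text(x1, y1 - 5, label, color="red")   # "Mask" or "Face"
        pyplot.show()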

Lastly, my mask detector function takes in any Path string and predicts labels for each picture within that directory.

Mask Detector Function

It uses MTCNN to crop each face out of the original image and passes the cropped faces into the draw_image_with_boxes function, which draws a rectangular box around each face along with its predicted label. A visual demonstration of the final output can be seen on the right side of the image below.
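Put together, the detector loop might look like the sketch below, assuming the cropping and drawing helpers above; the resize dimensions and 0.5 threshold are assumptions.

    # A minimal sketch of the detector loop. Relies on mtcnn, model, and
    # draw_image_with_boxes from the earlier sketches; sizes and threshold are assumptions.
    from pathlib import Path
    import numpy as np
    from PIL import Image
    from facenet_pytorch import MTCNN

    mtcnn = MTCNN(keep_all=True)

    def mask_detector(path_str):
        """Predict a Mask/Face label for every face in every image under a directory."""
        for img_path in Path(path_str).glob("*.jpg"):
            img = Image.open(img_path).convert("RGB")
            boxes, _ = mtcnn.detect(img)                   # face bounding boxes
            if boxes is None:
                continue
            labels = []
            for box in boxes:
                x1, y1, x2, y2 = [int(v) for v in box]
                face = img.crop((x1, y1, x2, y2)).resize((128, 128))
                prob = model.predict(np.expand_dims(np.array(face) / 255.0, 0))[0][0]
                labels.append("Mask" if prob > 0.5 else "Face")
            draw_image_with_boxes(str(img_path), boxes, labels)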

Mask Detection App

Integration with Instagram

Being able to classify whether or not people in an image are wearing masks opens up many opportunities for Instagram.

Video Demo of Instagram Integration

Using Sketch and an Instagram UI Kit, I created a short demo of a few of the many creative ways Instagram could incentivize users to post pictures with masks:

  1. Create a new “verified badge” (blue check-mark) for users who participate in a mask-on program.
  2. Match donations of masks to uploads with pictures of users wearing masks.
  3. Give a higher promotion rate on Instagram newsfeeds to users who post pictures of themselves wearing masks.

These are just a few examples of the endless opportunities for Instagram to use its popular social platform to make a positive change in people’s habits.

Conclusion

Schools are reopening, flu season is upon us, and a pandemic with no vaccine continues to threaten our daily lives.

A social initiative to encourage mask-wearing aimed at younger people on Instagram could potentially save many lives as well as rebrand social media in a better light (ahem, Social Dilemma, ahem).

So let’s bring social media back to its roots with a positive twist: rekindling relationships with old friends, connecting with new people, and now, taking a stance against the biggest virus threat to humanity.
