Pneumonia Detection, All On Your Phone

Darren Su
7 min read · Apr 18, 2023


Imagine it’s the middle of the night. You’re fast asleep, dreaming of candy land, when all of a sudden you hear your grandma yelling.

You wake up and see a pool of blood, with more still dripping.

Horror slowly dawns on you as you realize the potential implications: cancer? Tuberculosis? Some rare, untreatable disease?

You rush her to the hospital and wait, nerve-racked, for the doctor to give you the news. After what seems like an eternity, he finally tells you: your sweet old grandma has viral pneumonia, a life-or-death disease.

I think we can all relate to this. Whether it was a false alarm or a serious disease, we all know the feeling of impending doom when a loved one is in danger, and that's exactly what happened to my grandma last week.

Thankfully, it was not a serious, life-threatening case of viral pneumonia, and we caught the symptoms early on; however, pneumonia still kills millions of people worldwide each year.

2.5 million people died from pneumonia in 2019. Almost a third of all victims were children younger than 5 years, it is the leading cause of death for children under 5.
- https://ourworldindata.org/

While we had the luck and advantage of living in an area with accessible healthcare, knowing the symptoms of pneumonia, and being well off enough to afford treatment, hundreds of millions of people don’t.

Don’t get me wrong, we aren’t Jeff Bezos and don’t have state-of-the-art treatment, but we didn’t need that. Pneumonia has been an issue since mankind started building compact housing; we’ve studied it for centuries and have been developing effective treatments for decades now.

If pneumonia is treated early on, the death rate becomes almost non-existent.

The sad thing is that we have the cure, but many people don’t have the education, financial resources, or access needed to get treatment.

What we need is accessible treatment for all: an easy way to detect pneumonia and then direct people to hospitals.

Machine learning can do just that.

Remember how GPT-4 saved that dog’s life? We’re doing exactly that, but with an X-ray.

Here’s how this would work:

Phone → Chest X-Ray → Convolution Neural Network →
Prediction % → Direction to Nearby Hospital If Over 60%

We can use a phone-based X-ray app, or just get an X-ray at a nearby clinic.
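Here is a tiny preview sketch of that last decision step. The function name, the messages, and the 60% threshold are just illustrative assumptions; the actual model that produces the prediction comes later in this post.

def recommend_action(pneumonia_probability, threshold=0.60):
    """Turn the CNN's prediction percentage into a simple next step."""
    if pneumonia_probability >= threshold:
        return "High pneumonia risk: directing you to the nearest hospital."
    return "Low pneumonia risk: keep monitoring your symptoms."

print(recommend_action(0.83))  # over 60%, so the app points to a hospital
print(recommend_action(0.20))  # under 60%, so the app just advises monitoring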

Let’s quickly get down to the technical part: the coding.

Excited?

Here are the steps:

1. Data Preprocessing

2. Building/Stacking the Layers

3. Running the model

4. Model Evaluation w/ Graphs

1. Data Preprocessing ✍

First, we need to process the data from this dataset into a form the machine learning algorithm can understand.

It’s around 5,000 images: enough for our model, but not large enough to push accuracy to 99%.

from google.colab import drive
drive.mount('/content/gdrive', force_remount = True)
!unzip gdrive/MyDrive/Colab/archive.zip

Let’s then import the libraries.

import numpy as np
import matplotlib.pyplot as plt
import os
from PIL import Image
from tensorflow import keras
import tensorflow as tf

from keras.models import Sequential
# Sequential groups the layers into one model and provides training and inference features
from keras import layers

from keras.preprocessing.image import ImageDataGenerator
from keras.utils import load_img
# ImageDataGenerator generates batches of (augmented) image data; load_img loads a single image

import seaborn as sns

import cv2
import pandas as pd

from sklearn.metrics import confusion_matrix  # plot_confusion_matrix was removed from recent scikit-learn releases
from sklearn.model_selection import train_test_split

Set some labels and variables.

print(os.listdir(r'chest_xray'))
labels = ['PNEUMONIA', 'NORMAL']
img_size = 150

Extract the data and format it to arrays.

def get_training_data(data_dir):
    data = []
    for label in labels:
        path = os.path.join(data_dir, label)
        class_num = labels.index(label)
        for img in os.listdir(path):
            try:
                img_arr = cv2.imread(os.path.join(path, img), cv2.IMREAD_GRAYSCALE)
                resized_arr = cv2.resize(img_arr, (img_size, img_size))
                data.append([resized_arr, class_num])
            except Exception as e:
                print(e)
    # dtype=object because each element pairs a 150x150 image with an integer label
    return np.array(data, dtype=object)

Set the paths.

train = get_training_data(r'chest_xray/train')

val = get_training_data(r'chest_xray/val')
# val lessens the chance of overfitting (the network gets too cozy with the training data and is surprised by the test set)

test = get_training_data(r'chest_xray/test')

Let’s take a look at the images on a Matplotlib plot:

l = []
for i in train:
    # class index 0 is PNEUMONIA (it comes first in the labels list)
    if i[1] == 0:
        l.append("Pneumonia")
    else:
        l.append("Normal")
sns.set_style('darkgrid')
sns.countplot(x=l)

plt.figure(figsize = (5,5))
plt.imshow(train[0][0], cmap='gray')
plt.title(labels[train[0][1]])

plt.figure(figsize = (5,5))
plt.imshow(train[-1][0], cmap='gray')
plt.title(labels[train[-1][1]])

Make arrays for X and y, separating the features from the labels for the training, testing, and validation sets.

x_train = []
y_train = []

x_val = []
y_val = []

x_test = []
y_test = []

for feature, label in train:
    x_train.append(feature)
    y_train.append(label)

for feature, label in test:
    x_test.append(feature)
    y_test.append(label)

for feature, label in val:
    x_val.append(feature)
    y_val.append(label)

Normalize the pixel values to the 0–1 range (so training is smoother).

x_train = np.array(x_train) / 255
x_val = np.array(x_val) / 255
x_test = np.array(x_test) / 255
x_train = x_train.reshape(-1, img_size, img_size, 1)
y_train = np.array(y_train)

x_val = x_val.reshape(-1, img_size, img_size, 1)
y_val = np.array(y_val)

x_test = x_test.reshape(-1, img_size, img_size, 1)
y_test = np.array(y_test)

Next, build the data generator for data augmentation, which creates modified copies of the training images for a more robust dataset. (Every parameter in the call below is shown at its default value, so no augmentation is actually applied yet; the sketch after the generator shows how to turn some on.)

data_generator = tf.keras.preprocessing.image.ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
zca_epsilon=1e-06,
rotation_range=0,
width_shift_range=0.0,
height_shift_range=0.0,
brightness_range=None,
shear_range=0.0,
zoom_range=0.0,
channel_shift_range=0.0,
fill_mode='nearest',
cval=0.0,
horizontal_flip=False,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=None,
validation_split=0.0,
interpolation_order=1,
dtype=None
)

data_generator.fit(x_train)
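As a side note, here is a small sketch (not the configuration used above, and with assumed, untuned values) of what enabling a few light augmentations could look like:

# Sketch only: gentle augmentations that are generally reasonable for chest X-rays.
augmented_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,        # small random rotations
    zoom_range=0.1,           # slight random zoom in/out
    width_shift_range=0.1,    # small horizontal shifts
    height_shift_range=0.1,   # small vertical shifts
    horizontal_flip=False     # usually left off for X-rays, since anatomy is not symmetric
)
augmented_generator.fit(x_train)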

2. Stacking Up The Layers 🥞

This is where the magic happens: the layers. We can use TensorFlow to build and stack the layers of the neural network, as well as set the hyperparameters for these layers.

cnn = keras.Sequential([
layers.Conv2D(32, kernel_size=(3, 3), strides=1, activation='relu', padding='same', input_shape=(150, 150, 1)),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'),
  • Conv2D — The base of the model; it scans across the image (now processed into a matrix) and transforms it into feature maps through its convolution (matrix multiplication) operation
  • BatchNormalization — Normalizes the activations for more stable training
  • MaxPooling2D — Downsizes the feature map by taking the largest value in each window (in this case a 2×2 square); a tiny worked example follows the full model definition below
  • Dropout — Randomly “drops out” units during training to reduce overfitting

We can now repeat this pattern three more times:

layers.Conv2D(64, kernel_size=(3, 3), strides=1, activation='relu', padding='same'),
layers.Dropout(0.3),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'),

layers.Conv2D(128, kernel_size=3, activation='relu', padding='same'),
layers.Dropout(0.4),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'),

layers.Conv2D(256, kernel_size=3, activation='relu', padding='same'),
layers.Dropout(0.5),
layers.BatchNormalization(),
layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'),

Finally, we can flatten the feature maps and dense the output (as well as slide in one more dropout layer) down to two possible outputs, positive and negative.

layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.Dropout(0.6),
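# (note: softmax is the more conventional activation for two mutually
#  exclusive classes; sigmoid still trains here, so we keep the original choice)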
layers.Dense(2, activation='sigmoid'),
])
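Before moving on, here is a tiny worked example of what the MaxPooling2D layers described above actually do, using plain NumPy on a made-up 4×4 “image”. This is just an illustration, not part of the model.

example = np.array([
    [1, 3, 2, 1],
    [4, 6, 1, 2],
    [7, 2, 9, 5],
    [3, 1, 4, 8],
])
# split the 4x4 grid into 2x2 windows, then keep only the largest value in each window
pooled = example.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 2]
#  [7 9]]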

Now we can take a look at the model:

cnn.summary()

Finally, set the optimizer and the cross-entropy loss used for backpropagation:

cnn.compile(optimizer='adam', 
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)

3. Training Phase 💪

Since we want this model to train as efficiently as possible, we can set a higher learning rate (how fast the model’s weights change) at the start, then scale it back so the model is more accurate during its fine-tuning phase.

def scheduler(epoch, learning_rate):
    if epoch < 3:
        return learning_rate
    else:
        return learning_rate * tf.math.exp(-0.1)

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)

Finally, we can train the model

history = cnn.fit(
    data_generator.flow(x_train, y_train, batch_size=32),
    epochs=10,
    validation_data=data_generator.flow(x_val, y_val),
    callbacks=[callback]
)

4. Evaluation Phase 🤖

After all this hard work, let’s see how our model did!

loss, accuracy = cnn.evaluate(x_test, y_test)
print('The loss of the cnn is ' + str(loss))
print('The accuracy of the cnn is ' + str(accuracy * 100))

Once you run this, you should see the accuracy of your model. It should land somewhere around 70 to 85%, depending on your luck with the random initialization: not bad for this model and dataset.
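To connect this back to the phone pipeline from earlier, here is a small sketch (assuming training has finished) of scoring a single X-ray, which is the “Prediction %” step:

single_image = x_test[0].reshape(1, img_size, img_size, 1)  # a batch of one image
probabilities = cnn.predict(single_image)[0]
# labels = ['PNEUMONIA', 'NORMAL'], so index 0 is the pneumonia score;
# with the sigmoid head the two scores are independent and may not sum to exactly 1
print(f"Pneumonia score: {probabilities[0]:.2%}")
print(f"Normal score:    {probabilities[1]:.2%}")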

Now we can plot the training history using Matplotlib, which will tell us if the model is having any particular issues, such as underfitting or overfitting:

plt.figure(figsize=(8,6))
plt.title('Accuracy scores')
plt.plot(history.history['accuracy'],'go-')
plt.plot(history.history['val_accuracy'],'ro-')
plt.legend(['accuracy', 'val_accuracy'])
plt.show()
plt.figure(figsize=(8,6))
plt.title('Loss value')
plt.plot(history.history['loss'],'go-')
plt.plot(history.history['val_loss'],'ro-')
plt.legend(['loss', 'val_loss'])
plt.show()
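If you want to dig a bit deeper, here is a small optional sketch (not one of the original steps) that puts the confusion_matrix import from earlier to use, so you can see how many pneumonia cases the model misses:

predictions = np.argmax(cnn.predict(x_test), axis=1)  # predicted class index per image
cm = confusion_matrix(y_test, predictions)
plt.figure(figsize=(5, 5))
sns.heatmap(cm, annot=True, fmt='d', xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()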

And... that’s it!

Once the building phase is done, the rest should be smooth sailing. You can use TensorFlow Lite to actually deploy the model on a device, and give advice based on the result and how confident the model is in its prediction.
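For example, a minimal conversion sketch (the file name is just a placeholder) could look like this:

converter = tf.lite.TFLiteConverter.from_keras_model(cnn)
tflite_model = converter.convert()

# save the converted model so it can be bundled into a mobile app
with open('pneumonia_detector.tflite', 'wb') as f:
    f.write(tflite_model)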

With this, people can detect pneumonia early on, easily understand what to do, and the death rate can be decreased even further.

As Simple As Taking A Selfie

The intersection of AI and medicine is exciting, and it can bring a world where detecting cancer is as simple as taking a selfie.
