Remote Heart Rate Detection using Webcam and 50 Lines of Code

Dmitrii Eliuseev
Sep 2, 2020 · 5 min read

Once I came across a description of an Android application that measured the heart rate remotely using the smartphone’s camera. The camera did not touch the skin, and the skin was not illuminated by the LED flash. Interestingly, the Google Play reviewers did not believe such a measurement was possible, and the application was rejected. I don’t know how that story ended, but it made me curious to check whether this is possible. There is no need to make an Android application; it is much easier to test the idea in Python.

Let’s get started.

Getting the Camera Stream

First, we need to get a stream from the webcam, for which I will use OpenCV. The code is cross-platform and can run on both Windows and Linux/OSX.

import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)

while True:
    ret, frame = cap.read()
    # Convert the frame to grayscale
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the frame
    cv2.imshow('Frame', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The idea of determining the heart rate is that the skin tone changes slightly with the blood flow in the vessels. So we need a crop of the picture that contains only a fragment of the skin:

x, y, w, h = 800, 500, 100, 100
crop_img = img[y:y + h, x:x + w]
cv2.imshow('Crop', crop_img)

If everything was done correctly, we should see the camera image (blurred here for privacy reasons) and its crop:

Image Processing

Once we have the camera stream, it’s pretty simple. For the selected image fragment, we get the average brightness value and add it to the array along with the measurement timestamp.

heartbeat_count = 128
heartbeat_values = [0] * heartbeat_count
heartbeat_times = [time.time()] * heartbeat_count

while True:
    ...
    # Update the data and timestamps
    heartbeat_values = heartbeat_values[1:] + [np.average(crop_img)]
    heartbeat_times = heartbeat_times[1:] + [time.time()]

The numpy.average function computes the mean of all elements of the two-dimensional array, so at the output we get a single number: the average brightness of our 100x100 pixel crop.
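To illustrate, with a toy array standing in for the real crop, np.average on a two-dimensional array returns the mean of all its elements:

```python
import numpy as np

# A tiny stand-in for the 100x100 grayscale crop: a 2x3 array of brightness values.
crop = np.array([[10, 20, 30],
                 [40, 50, 60]], dtype=np.uint8)

# np.average flattens the array and returns the mean of all elements.
mean_brightness = np.average(crop)
print(mean_brightness)  # 35.0
```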

We can display the graph in real-time using the matplotlib library:

fig = plt.figure()
ax = fig.add_subplot(111)

while True:
    ...
    ax.plot(heartbeat_times, heartbeat_values)
    fig.canvas.draw()
    plot_img_np = np.frombuffer(fig.canvas.tostring_rgb(),
                                dtype=np.uint8)
    plot_img_np = plot_img_np.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.cla()

    cv2.imshow('Graph', plot_img_np)

There is a little hack here: OpenCV works with images as numpy arrays, so the matplotlib plot has to be rendered and converted to a numpy array, for which I use the numpy.frombuffer function (numpy.fromstring also works but is deprecated for this use).

That’s all.

I run the program, position myself so that only a skin fragment is in the camera crop area, and take The Thinker pose with my head resting on my hand; the image should be as motionless as possible. And voila, it really works!

It is worth repeating that the camera does not touch the skin; we are simply analyzing the overall picture of the person. And it is amazing that even at this distance, the change in skin tone is reliably captured by the camera! As the graph shows, the real difference in brightness is less than 0.5% and is, of course, invisible to the naked eye, but it is clearly distinguishable on the graph. The approximate pulse turned out to be about 75 bpm. For comparison, here is the BPM result from a pulse oximeter:

To verify that we get a real heart rate and not some spurious signal, like the flickering of a lightbulb, it is also interesting to check whether the heart rate changes with physical exercise. Yes, it does change; I can see it if I put both graphs in the same image: the first graph was taken before a workout, the second after.

It is easy to see that in the second case the heart rate is higher.
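The bpm values above were read off the plots by eye. As a rough sketch (not part of the code above, and assuming the samples are evenly spaced at the camera frame rate), the dominant frequency of the brightness series could also be extracted numerically with an FFT:

```python
import numpy as np

def estimate_bpm(values, fps):
    """Estimate heart rate from a brightness time series via the dominant FFT peak.

    Assumes evenly spaced samples at `fps` frames per second and a pulse
    somewhere in a plausible 0.7-3.0 Hz (42-180 bpm) band.
    """
    values = np.asarray(values, dtype=float)
    values = values - values.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(values))
    freqs = np.fft.rfftfreq(len(values), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)   # restrict to plausible pulse rates
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic check: a 1.25 Hz brightness oscillation (75 bpm) sampled at 30 fps for 8 s.
t = np.arange(240) / 30.0
signal = 100 + 0.5 * np.sin(2 * np.pi * 1.25 * t)
print(round(estimate_bpm(signal, fps=30)))  # 75
```

With a real, noisy signal the spectrum will be much messier, so in practice some smoothing or windowing would likely be needed on top of this.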

Conclusion

Oddly enough, it really works. To be honest, I wasn’t sure about the result. Of course, for real use we would first need to find a face in the image, but OpenCV already has built-in face detection. And some math is needed to extract the heart rate value from the noisy data. For a smartphone camera it can be a challenge to get accurate data because the image is shaky when the phone is held in hand; a webcam is generally more stable.

And since we are analyzing a video stream, a separate question arises: does this work with compressed video? Is it possible to see the heart rate of a movie actor or someone on TV? I do not know the answer; those who wish can try it on their own. To do this, it is enough to replace the line cap = cv2.VideoCapture(0) in the code with cap = cv2.VideoCapture("video.mp4"); the rest of the code remains the same. It can be a challenge to find a video where the person does not move at all for several seconds, though. Alternatively, a motion stabilisation algorithm could be applied before processing, but that is another complex task.

For those wishing to make more tests, the source code is attached below.

import numpy as np
from matplotlib import pyplot as plt
import cv2
import time

# Camera stream
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 30)
# Video stream (optional, not tested)
# cap = cv2.VideoCapture("video.mp4")

# Image crop: adjust the coordinates to a skin area in your frame
x, y, w, h = 950, 300, 100, 100

heartbeat_count = 128
heartbeat_values = [0] * heartbeat_count
heartbeat_times = [time.time()] * heartbeat_count

# Matplotlib graph surface
fig = plt.figure()
ax = fig.add_subplot(111)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    crop_img = img[y:y + h, x:x + w]

    # Update the data and timestamps
    heartbeat_values = heartbeat_values[1:] + [np.average(crop_img)]
    heartbeat_times = heartbeat_times[1:] + [time.time()]

    # Draw matplotlib graph to numpy array
    ax.plot(heartbeat_times, heartbeat_values)
    fig.canvas.draw()
    plot_img_np = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    plot_img_np = plot_img_np.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.cla()

    # Display the frames
    cv2.imshow('Crop', crop_img)
    cv2.imshow('Graph', plot_img_np)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Dev Genius

Coding, Tutorials, News, UX, UI and much more related to development


Written by Dmitrii Eliuseev

Python and IoT Developer, science and ham radio enthusiast
