Published in MLearning.ai

Live Webcam with Streamlit

A web demo application for hand tracking and other real-time vision tasks

Streamlit is a great tool for creating web demos. Today we will look at streamlit-webrtc, a library that provides a webcam component that works from the browser (also on mobile devices; I tested it on Android with Chrome).

Hand tracking demo

Deep learning enables some interesting real-time vision tasks; this article focuses on hand tracking and object detection.

Hand Tracking

I chose MediaPipe as the library that implements real-time hand tracking, and I created a Streamlit demo with the following code (full repo and demo), which I will explain step by step.

First, we define our imports:

import cv2
import numpy as np
import av
import mediapipe as mp
from streamlit_webrtc import webrtc_streamer, WebRtcMode, RTCConfiguration

We can proceed by creating a dummy working Streamlit app that uses the webcam:

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(
    model_complexity=0,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

RTC_CONFIGURATION = RTCConfiguration(
    {"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}
)

webrtc_ctx = webrtc_streamer(
    key="TEST",
    mode=WebRtcMode.SENDRECV,
    rtc_configuration=RTC_CONFIGURATION,
    media_stream_constraints={"video": True, "audio": False},
    async_processing=True,
)

Strictly speaking, only the webrtc_streamer call is needed; but for the app to also work when deployed on a remote server, we need to specify an iceServers entry (see this issue).
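A STUN server alone is not always enough (e.g., behind restrictive NATs); the iceServers list can also carry a TURN entry. A hedged sketch, where the TURN URL and credentials are placeholders and not a real service:

```python
# ICE configuration with a TURN fallback (sketch).
# "turn.example.com", "user", and "secret" are placeholders.
ICE_SERVERS = {
    "iceServers": [
        {"urls": ["stun:stun.l.google.com:19302"]},
        {
            "urls": ["turn:turn.example.com:3478"],
            "username": "user",      # placeholder
            "credential": "secret",  # placeholder
        },
    ]
}
print(len(ICE_SERVERS["iceServers"]))  # → 2
```

This dictionary would be passed to RTCConfiguration in place of the STUN-only one above.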

Now we can define a dummy image processing class (VideoProcessor) that returns the image unprocessed:

class VideoProcessor:
    def recv(self, frame):
        img = frame.to_ndarray(format="bgr24")
        # img = process(img)
        return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_ctx = webrtc_streamer(
    key="WYH",
    mode=WebRtcMode.SENDRECV,
    rtc_configuration=RTC_CONFIGURATION,
    media_stream_constraints={"video": True, "audio": False},
    video_processor_factory=VideoProcessor,
    async_processing=True,
)

Now we can implement the process function to obtain the full code (app.py):

import cv2
import numpy as np
import av
import mediapipe as mp
from streamlit_webrtc import webrtc_streamer, WebRtcMode, RTCConfiguration

mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(
    model_complexity=0,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

def process(image):
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = hands.process(image)
    # Draw the hand annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_drawing.draw_landmarks(
                image,
                hand_landmarks,
                mp_hands.HAND_CONNECTIONS,
                mp_drawing_styles.get_default_hand_landmarks_style(),
                mp_drawing_styles.get_default_hand_connections_style())
    return cv2.flip(image, 1)

RTC_CONFIGURATION = RTCConfiguration(
    {"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}
)

class VideoProcessor:
    def recv(self, frame):
        img = frame.to_ndarray(format="bgr24")
        img = process(img)
        return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_ctx = webrtc_streamer(
    key="WYH",
    mode=WebRtcMode.SENDRECV,
    rtc_configuration=RTC_CONFIGURATION,
    media_stream_constraints={"video": True, "audio": False},
    video_processor_factory=VideoProcessor,
    async_processing=True,
)
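Note that the process function is the only piece tied to MediaPipe: any function that takes a BGR ndarray and returns one of the same dtype can be swapped into VideoProcessor.recv. A minimal sketch with a hypothetical color-inversion filter (pure NumPy, no MediaPipe needed):

```python
import numpy as np

def process_invert(image: np.ndarray) -> np.ndarray:
    # Invert BGR colors; shape and dtype (uint8) are preserved,
    # which is all the recv() round-trip requires.
    return 255 - image

frame = np.zeros((2, 2, 3), dtype=np.uint8)  # a tiny black frame
out = process_invert(frame)
print(out[0, 0].tolist())  # → [255, 255, 255]
```

Replacing `img = process(img)` with `img = process_invert(img)` in recv would stream the inverted video back to the browser.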

Deploy on Streamlit

The web demo can be used at this link. I use share.streamlit.io to deploy it, which is free for public GitHub projects.

As a first step, commit the code on GitHub, including a requirements.txt file with the list of needed Python libraries:

streamlit
streamlit_webrtc
opencv-python
mediapipe==0.8.9.1

and, if needed, a packages.txt file with the list of system libraries (installed on Ubuntu via apt) that we need:

python3-opencv

Log in on share.streamlit.io, create a new app, and follow the steps; remember to select the correct repo and the correct name of your Streamlit app.

The hand tracking demo is online!

Object Detection

For object detection I followed the same steps: build a Streamlit script with an object detection library (YOLOv5), then create requirements.txt and packages.txt. However, to keep the deployed app real-time I had to scale the images down a lot: it can run in real time on bigger images, but only with GPUs, which share.streamlit.io does not offer for free.

Object Detection demo

The full code can be found here and the streamlit demo here.
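The downscaling step mentioned above can be sketched with a naive stride-based resize; this is an assumption about one possible approach, not the exact code of the demo (cv2.resize with interpolation would give better quality):

```python
import numpy as np

def downscale(image: np.ndarray, factor: int = 2) -> np.ndarray:
    # Keep every `factor`-th pixel along height and width.
    # Crude but fast, and enough to cut inference cost quadratically.
    return image[::factor, ::factor]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a VGA-sized frame
small = downscale(frame, 4)
print(small.shape)  # → (120, 160, 3)
```

The smaller frame would then be fed to the detector inside recv, trading detection quality for frame rate on CPU-only hosting.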

Conclusions

Streamlit is growing day by day, and more libraries keep arriving to make it more powerful and easy to use. It is not a good choice for a fully customized app, but it makes it possible to create good demo apps in a few lines of code, and to put them online for free.

All this is possible thanks to open source! So use it to create your beautiful and innovative demos, and share them!

Nicola Landro

Linux user and open source fan. Deep learning PhD student, full stack web developer, mobile developer, musician.