Image Processing using Streamlit
Understanding Streamlit with OpenCV
In this article, we will see how to use Streamlit with image processing techniques and object detection algorithms. I assume you already have Streamlit installed and working on your system. If you do not know how to get started with Streamlit, you can refer to my previous article, where I explained it briefly.
I have also deployed this Streamlit app on Streamlit Sharing. You can have a look at it before reading further.
Here is the link: Image Processing using Streamlit
Let’s dive into the code now.
First, import the necessary libraries:
import streamlit as st
from PIL import Image
import cv2
import numpy as np

# load_image() is used in later functions but was missing from the
# article; a minimal definition that reads an image from disk:
def load_image(filename):
    image = cv2.imread(filename)
    return image
I have created the main function, which is the starting point of the code. In this function, you will see the sidebar I have created. Streamlit provides the st.sidebar.selectbox function to easily create one. In the sidebar, I have added some values, and each value has a function associated with it. When the user selects one of them, the corresponding function is triggered. By default, the first value, the Welcome string, is selected, and the if statement calls the welcome() function.
def main():
    selected_box = st.sidebar.selectbox(
        'Choose one of the following',
        ('Welcome', 'Image Processing', 'Video', 'Face Detection', 'Feature Detection', 'Object Detection')
    )

    if selected_box == 'Welcome':
        welcome()
    if selected_box == 'Image Processing':
        photo()
    if selected_box == 'Video':
        video()
    if selected_box == 'Face Detection':
        face_detection()
    if selected_box == 'Feature Detection':
        feature_detection()
    if selected_box == 'Object Detection':
        object_detection()

if __name__ == "__main__":
    main()
The image below shows the welcome page. Let's look at the welcome function as well.
def welcome():
    st.title('Image Processing using Streamlit')

    st.subheader('A simple app that shows different image processing algorithms. You can choose the options'
                 + ' from the left. I have implemented only a few to show how it works on Streamlit. '
                 + 'You are free to add stuff to this app.')

    st.image('hackershrine.jpg', use_column_width=True)
With st.title you can create a bold title, and with st.subheader you get bold text in a smaller font size. With st.image you can display any image in your Streamlit app. Make sure you set use_column_width to True so the image fits properly.
Next, we have the image processing part in the sidebar, where we will see thresholding, edge detection, and contours. I have used a slider so the user can change the threshold value conveniently; to make an interactive slider, you simply call st.slider. With OpenCV, we convert the image to grayscale and then apply OpenCV's threshold function, passing in the slider's value. So when the slider moves, the value changes and the result is stored in thresh1. Then we use st.image to display the thresh1 image. Make sure you set clamp to True.
Next, we have a bar chart of that image's histogram, created with Streamlit's bar chart function. The values passed in are computed with cv2.calcHist. You can use different histograms and plots to analyze your images.
Then we have the Canny edge detection technique. I have made a button; when the user clicks it, the algorithm runs and the output is displayed. You can create a button simply by writing st.button inside an if statement, and inside the if block you write your edge detection code.
For contours, I have again used a slider to change the threshold so that the contours change. OpenCV provides the findContours and drawContours functions, to which you pass the image.
def photo():
    st.header("Thresholding, Edge Detection and Contours")

    if st.button('See Original Image of Tom'):
        original = Image.open('tom.jpg')
        st.image(original, use_column_width=True)

    image = cv2.imread('tom.jpg')
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    x = st.slider('Change Threshold value', min_value=50, max_value=255)
    ret, thresh1 = cv2.threshold(image, x, 255, cv2.THRESH_BINARY)
    thresh1 = thresh1.astype(np.float64)
    st.image(thresh1, use_column_width=True, clamp=True)

    st.text("Bar Chart of the image")
    histr = cv2.calcHist([image], [0], None, [256], [0, 256])
    st.bar_chart(histr)

    st.text("Press the button below to view Canny Edge Detection Technique")
    if st.button('Canny Edge Detector'):
        image = load_image("jerry.jpg")
        edges = cv2.Canny(image, 50, 300)
        cv2.imwrite('edges.jpg', edges)
        st.image(edges, use_column_width=True, clamp=True)

    y = st.slider('Change Value to increase or decrease contours', min_value=50, max_value=255)
    if st.button('Contours'):
        im = load_image("jerry1.jpg")
        imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        ret, thresh = cv2.threshold(imgray, y, 255, 0)
        # OpenCV 4.x returns (contours, hierarchy); 3.x returned three values
        contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        img = cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
        st.image(thresh, use_column_width=True, clamp=True)
        st.image(img, use_column_width=True, clamp=True)
The next part is face detection. For face detection, I am using a Haar cascade file. Using cv2.CascadeClassifier, we load the XML file. We then call the detectMultiScale function, passing in the image, to find the faces in it. For each face found, we draw a rectangle around it. If you wish to save the result, you can use cv2.imwrite. Finally, we display the image with st.image.
def face_detection():
    st.header("Face Detection using haarcascade")

    if st.button('See Original Image'):
        original = Image.open('friends.jpeg')
        st.image(original, use_column_width=True)

    image2 = cv2.imread("friends.jpeg")
    face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(image2)
    print(f"{len(faces)} faces detected in the image.")
    for x, y, width, height in faces:
        cv2.rectangle(image2, (x, y), (x + width, y + height), color=(255, 0, 0), thickness=2)
    cv2.imwrite("faces.jpg", image2)
    st.image(image2, use_column_width=True, clamp=True)
The last part I want to show is object detection. I have used wall clock and eye Haar cascade files. Similar to face detection, we load the XML files and use the detectMultiScale function. If the respective objects are found, we draw rectangles around them. The images below show the output.
def object_detection():
    st.header('Object Detection')
    st.subheader("Object Detection is done using different haarcascade files.")

    img = load_image("clock.jpg")
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    clock = cv2.CascadeClassifier('haarcascade_wallclock.xml')
    found = clock.detectMultiScale(img_gray, minSize=(20, 20))
    amount_found = len(found)
    st.text("Detecting a clock from an image")
    if amount_found != 0:
        for (x, y, width, height) in found:
            # width extends along x, height along y
            cv2.rectangle(img_rgb, (x, y), (x + width, y + height), (0, 255, 0), 5)
    st.image(img_rgb, use_column_width=True, clamp=True)

    st.text("Detecting eyes from an image")
    image = load_image("eyes.jpg")
    img_gray_ = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    img_rgb_ = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    eye = cv2.CascadeClassifier('haarcascade_eye.xml')
    found = eye.detectMultiScale(img_gray_, minSize=(20, 20))
    amount_found_ = len(found)
    if amount_found_ != 0:
        for (x, y, width, height) in found:
            cv2.rectangle(img_rgb_, (x, y), (x + width, y + height), (0, 255, 0), 5)
    st.image(img_rgb_, use_column_width=True, clamp=True)
You can find the code on my GitHub. I have also made a video about it.
For further reading on Image processing and Machine learning, you can refer to this informative article.
https://neptune.ai/blog/what-image-processing-techniques-are-actually-used-in-the-ml-industry
Peace!