MORSE CODE CONVERTER USING MEDIAPIPE AND OPENCV THROUGH PYTHON

Madha Siddiqui
13 min read · May 12, 2023


Hello Everyone!

This Morse Code Converter, controlled through hand gestures, is our end-of-semester project, envisioned and executed by Muhammad Hamza Waheed and me (Madha Siddiqui), undergraduate students of Mechatronics and Control Engineering at the University of Engineering and Technology, Lahore, Pakistan. Hand gestures are translated into Morse code and then into the required text, the text is converted into speech through the pyttsx3 module, and the visualization is provided by a GUI together with MediaPipe and OpenCV.

Let’s begin

Morse code is a method used in telecommunication to encode text characters as standardized sequences of two different signal durations, called dots and dashes, or dits and dahs. Morse code is named after Samuel Morse, one of the inventors of the telegraph.

Morse Code (ITU)
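To make the encoding concrete, here is a minimal sketch of text-to-Morse conversion; the table is a small illustrative subset of the full ITU alphabet, not the project's dictionary:

```python
# Minimal sketch: encode text into International Morse Code.
# Only a subset of the alphabet is shown, for illustration.
MORSE = {
    'S': '...', 'O': '---', 'E': '.', 'T': '-', 'A': '.-', 'N': '-.',
}

def encode(text):
    """Encode a string, separating letters with spaces."""
    return ' '.join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(encode("SOS"))   # ... --- ...
```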

The central idea of our project is human activity monitoring using Google's MediaPipe framework and machine vision (OpenCV). The results are displayed through a Graphical User Interface (GUI) built with appropriate GUI widgets. Let's briefly discuss what MediaPipe, machine vision, and a GUI really are.

MEDIAPIPE

MediaPipe is Google's open-source, cross-platform framework of Artificial Intelligence and Machine Learning solutions that can be customized to application requirements with little effort, and it works on live camera streams as well as recorded video. The solutions relevant to our project are Hand Landmark detection and Human Pose Detection and Tracking; our main concern is the hand tracking solution for Morse code decoding.

!pip install -q mediapipe==0.10.0

Hand Landmark is a packaged pipeline of two models: a palm detection model and a hand landmark detection model that infers 21 3D hand-knuckle coordinates from a single frame. It is fundamental to sign language conversion and performs equally well in gesture control systems, displaying digital content, etc.

Hand Landmarks
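The 21 landmark indices follow a fixed layout (0 = wrist, then four points per finger). The small helper table below, written by us from the MediaPipe hand model's documented ordering, makes the indices used later in this post easier to read:

```python
# Landmark indices of the MediaPipe hand model (0 = wrist).
# Each finger contributes four points; the names below cover only
# the joints this project actually reads (DIP/IP and TIP).
LANDMARK = {
    "WRIST": 0,
    "THUMB_IP": 3,    "THUMB_TIP": 4,
    "INDEX_DIP": 7,   "INDEX_TIP": 8,
    "MIDDLE_DIP": 11, "MIDDLE_TIP": 12,
    "RING_DIP": 15,   "RING_TIP": 16,
    "PINKY_DIP": 19,  "PINKY_TIP": 20,
}

# Fingertip indices, as used by the detector class below.
TIP_IDS = [4, 8, 12, 16, 20]

print(LANDMARK["INDEX_TIP"])  # 8
```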

MACHINE VISION

Machine vision is implemented here with OpenCV, a real-time computer vision library providing a common infrastructure that accelerates machine perception in commercial products.

To install OpenCV, use the package manager pip:

pip install --trusted-host=pypi.org --trusted-host=files.pythonhosted.org opencv-python

The hand tracking module is provided below; it wraps the MediaPipe solution with OpenCV, using Object-Oriented Programming (OOP) for efficient reuse.

import cv2
import mediapipe as mp


class HandDetector:
    """
    Finds hands using the mediapipe library. Exports the landmarks
    in pixel format. Adds extra functionalities like finding how
    many fingers are up or the distance between two fingers. Also
    provides bounding box info of the hand found.
    """

    def __init__(self, mode=False, maxHands=2, detectionCon=0.5, minTrackCon=0.5):
        """
        :param mode: In static mode, detection is done on each image: slower
        :param maxHands: Maximum number of hands to detect
        :param detectionCon: Minimum Detection Confidence Threshold
        :param minTrackCon: Minimum Tracking Confidence Threshold
        """
        self.mode = mode
        self.maxHands = maxHands
        self.detectionCon = detectionCon
        self.minTrackCon = minTrackCon

        self.mpHands = mp.solutions.hands
        self.hands = self.mpHands.Hands(static_image_mode=self.mode,
                                        max_num_hands=self.maxHands,
                                        min_detection_confidence=self.detectionCon,
                                        min_tracking_confidence=self.minTrackCon)
        self.mpDraw = mp.solutions.drawing_utils
        self.tipIds = [4, 8, 12, 16, 20]
        self.fingers = []
        self.lmList = []

    def findHands(self, img, draw=True, flipType=True):
        """
        Finds hands in a BGR image.
        :param img: Image to find the hands in.
        :param draw: Flag to draw the output on the image.
        :param flipType: Swap handedness labels (for mirrored images).
        :return: List of detected hands and the image (annotated if draw is True)
        """
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.results = self.hands.process(imgRGB)
        allHands = []
        h, w, c = img.shape
        if self.results.multi_hand_landmarks:
            for handType, handLms in zip(self.results.multi_handedness,
                                         self.results.multi_hand_landmarks):
                myHand = {}
                # landmark list in pixel coordinates
                mylmList = []
                myPointslist = []
                xList = []
                yList = []
                for id, lm in enumerate(handLms.landmark):
                    px, py, pz = int(lm.x * w), int(lm.y * h), int(lm.z * w)
                    mylmList.append([px, py, pz])
                    myPointslist.append((px, py))
                    xList.append(px)
                    yList.append(py)

                # bounding box
                xmin, xmax = min(xList), max(xList)
                ymin, ymax = min(yList), max(yList)
                boxW, boxH = xmax - xmin, ymax - ymin
                bbox = xmin, ymin, boxW, boxH
                cx, cy = bbox[0] + (bbox[2] // 2), bbox[1] + (bbox[3] // 2)

                myHand["lmList"] = mylmList
                myHand["bbox"] = bbox
                myHand["center"] = (cx, cy)

                if flipType:
                    if handType.classification[0].label == "Right":
                        myHand["type"] = "Left"
                    else:
                        myHand["type"] = "Right"
                else:
                    myHand["type"] = handType.classification[0].label
                allHands.append(myHand)

                # draw each finger in its own colour
                if draw:
                    # self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS)
                    for i in range(21):
                        if 0 <= i <= 3:  # thumb
                            img = cv2.line(img, myPointslist[i], myPointslist[i + 1], (0, 200, 200), 3)
                            img = cv2.circle(img, myPointslist[i], 3, (0, 200, 200), 3)
                            img = cv2.circle(img, myPointslist[4], 3, (0, 200, 200), 3)
                        if 5 <= i <= 7:  # index finger
                            img = cv2.line(img, myPointslist[i], myPointslist[i + 1], (0, 200, 0), 3)
                            img = cv2.line(img, myPointslist[0], myPointslist[5], (0, 200, 0), 3)
                            img = cv2.circle(img, myPointslist[i], 3, (0, 200, 0), 3)
                            img = cv2.circle(img, myPointslist[8], 3, (0, 200, 0), 3)
                        if 9 <= i <= 11:  # middle finger
                            img = cv2.line(img, myPointslist[i], myPointslist[i + 1], (0, 0, 200), 3)
                            img = cv2.line(img, myPointslist[0], myPointslist[9], (0, 0, 200), 3)
                            img = cv2.circle(img, myPointslist[i], 3, (0, 0, 200), 3)
                            img = cv2.circle(img, myPointslist[12], 3, (0, 0, 200), 3)
                        if 13 <= i <= 15:  # ring finger
                            img = cv2.line(img, myPointslist[i], myPointslist[i + 1], (200, 0, 0), 3)
                            img = cv2.line(img, myPointslist[0], myPointslist[13], (200, 0, 0), 3)
                            img = cv2.circle(img, myPointslist[i], 3, (200, 0, 0), 3)
                            img = cv2.circle(img, myPointslist[16], 3, (200, 0, 0), 3)
                        if 17 <= i <= 19:  # little finger
                            img = cv2.line(img, myPointslist[i], myPointslist[i + 1], (200, 0, 200), 3)
                            img = cv2.line(img, myPointslist[0], myPointslist[17], (200, 0, 200), 3)
                            img = cv2.circle(img, myPointslist[i], 3, (200, 0, 200), 3)
                            img = cv2.circle(img, myPointslist[20], 3, (200, 0, 200), 3)

                    cv2.rectangle(img, (bbox[0] - 20, bbox[1] - 20),
                                  (bbox[0] + bbox[2] + 20, bbox[1] + bbox[3] + 20),
                                  (255, 0, 255), 2)
                    cv2.putText(img, myHand["type"], (bbox[0] - 30, bbox[1] - 30),
                                cv2.FONT_HERSHEY_PLAIN, 2, (255, 0, 255), 2)
        # Always return both values so callers can unpack them
        # regardless of the draw flag.
        return allHands, img

GUI (TKINTER MODULE)

A Graphical User Interface (GUI) is a digital interface where users interact with graphical components such as icons, buttons, and menus. The visuals displayed in the interface convey information relevant to the user and guide the actions they can take. tkinter is Python's interface to the Tk GUI toolkit, adding a fair amount of logic for a more Pythonic experience.

The features used in the project of Morse Code are described below:

label2= Label(frame_1, text="Welcome to PoseVerter By Mechatronics Students", bg="#808080", font=('Times', '14', 'bold') ).place(x=0,y=2)
label3= Label(frame_1, text="CODED TEXT : ", bg="#808080").place(x=0,y=28)
label4= Label(frame_1, text="Developed by: Muhammad Hamza Waheed & Madha Siddiqui", bg="#808080", font=('Times', '10', 'bold')).place(x=300,y=570)
textlabel= Label(frame_1, textvariable=Words, width=92, height=5, relief=RIDGE)
textlabel.place(x=10, y=50)
b1 = Button(frame_1,text='Recorded Videos', height=1, width=20, relief=RAISED, cursor="hand2", command=ButtonSelect)
b1.place(x=buttonX, y=buttonY)
b2 = Button(frame_1,text='Live Video', height=1, width=20, relief=RAISED, cursor="hand2", command=Live)
b2.place(x=buttonX, y=(buttonY+40))
b3 = Button(frame_1,text='Delete Video', height=1, width=20, relief=RAISED, cursor="hand2", command=DelVid)
b3.place(x=buttonX, y=(buttonY+80))
b4 = Button(frame_1,text='Play', height=1, width=20, relief=RAISED, cursor="hand2", command=Play)
b4.place(x=10, y=220)
b5 = Button(frame_1,text='Help', height=1, width=20, relief=RAISED, cursor="hand2", command=Help)
b5.place(x=10, y=260)
b6 = Button(frame_1,text='Delete Char', height=1, width=20, relief=RAISED, cursor="hand2", command=delChar)
b6.place(x=10, y=300)
select_img()
win.mainloop()

MORSE CODE CONVERTER

Morse Code Working

Libraries:

  • OpenCV library for computer vision: pip install opencv-python
  • MediaPipe library for machine learning solutions: pip install mediapipe
  • tkinter for the GUI toolkit (bundled with the standard Python installer; no pip install needed)
  • pyttsx3 for text-to-speech conversion: pip install pyttsx3
  • PIL (Pillow) for processing images: pip install Pillow
  • os module to create and remove directories (part of the Python standard library; no pip install needed)

GUI Framework:

In the code below, framework features like the frame, buttons, labels, and colors are added using Python's tkinter module for user interfacing.

import cv2
from tkinter import *
from PIL import Image, ImageTk
from HandTrackingModule import HandDetector
from tkinter import filedialog
import os
import pyttsx3

Morse = {'.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D',
         '.': 'E', '..-.': 'F', '--.': 'G', '....': 'H',
         '..': 'I', '.---': 'J', '-.-': 'K', '.-..': 'L',
         '--': 'M', '-.': 'N', '---': 'O', '.--.': 'P',
         '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
         '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X',
         '-.--': 'Y', '--..': 'Z', '': ' '}

win = Tk()
win.title("PoseVerter")
win.geometry("670x600+200+30")

frame_1 = Frame(win, width=670, height=700, bg="#969696").place(x=0, y=0)
cap = cv2.VideoCapture(0)

buttonX=10
buttonY=450
w = 470
h = 300
label1 = Label(frame_1, width=w, height=h)
label1.place(x=180, y=135)
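Given the dictionary above, decoding a space-separated dot-dash string back to text is a direct lookup. A quick sketch (using only a subset of the table for brevity):

```python
# Decode a space-separated Morse string with the same code-to-letter
# dictionary shape as the main program (subset shown here).
Morse = {'...': 'S', '---': 'O', '.': 'E', '-': 'T', '.-': 'A', '': ' '}

def decode(code):
    """Translate each dot-dash group; unknown groups become '?'."""
    return ''.join(Morse.get(sym, '?') for sym in code.split(' '))

print(decode("... --- ..."))  # SOS
```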

Designed Buttons:

Following are the designed buttons, with their Python code, as used in the GUI framework:

  • Play Button (to play the audio for the displayed text)
def Play():
    n = Words.get()
    n_list = list(n)
    n2_string = " ".join(n_list)
    engine.say(n2_string)   # spell the text letter by letter
    engine.say(n)           # then say the whole word
    engine.runAndWait()
  • Delete Char Button (to delete the latest character from the string)
def delChar():
    global Words
    n = str(Words.get())
    m = ''
    if len(n) > 0:
        for i in range(len(n) - 1):   # copy all but the last character
            m += n[i]
    Words.set(m)
  • Help Button (to open a Morse code text file in case the user forgets an alphabet's Morse code)
def Help():
    os.startfile("Help.txt")   # opens the Morse chart with the default app
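One caveat worth noting: `os.startfile` exists only on Windows. A hedged sketch of a cross-platform opener (the helper names are ours, not from the project):

```python
import os
import subprocess
import sys

def opener_command(platform):
    """Command that opens a file with the default application, or
    None when os.startfile should be used directly (Windows)."""
    if platform.startswith("win"):
        return None
    if platform == "darwin":
        return "open"         # macOS
    return "xdg-open"         # most Linux desktops

def open_file(path):
    cmd = opener_command(sys.platform)
    if cmd is None:
        os.startfile(path)    # Windows-only API
    else:
        subprocess.run([cmd, path])
```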

The Help text file is provided below:

***********************************************************
     User Guide Manual to use Hand gesture for Morse Code
***********************************************************
Computer Programming II Project

Mechatronics & Control Engineering Department UET Lahore
Title: --- Morse Code Converter ---
***********************************************************
Morse Code for the English Alphabets are as follow:
***********************************************************
A -> .-
B -> -...
C -> -.-.
D -> -..
E -> .
F -> ..-.
G -> --.
H -> ....
I -> ..
J -> .---
K -> -.-
L -> .-..
M -> --
N -> -.
O -> ---
P -> .--.
Q -> --.-
R -> .-.
S -> ...
T -> -
U -> ..-
V -> ...-
W -> .--
X -> -..-
Y -> -.--
Z -> --..

***********************************************************
Developed By:
Muhammad Hamza Waheed & Madha Siddiqui
***********************************************************
-----Instructions-----

>> Always keep the "HandTrackingModule.py" in the same folder as the main program.

>> Keep the video in a folder having the same name as the video name and keep it inside the working folder.
  • Recorded Videos Button (to add pre-recorded videos; this button in turn creates another button named “Add Videos”)
def Rec(num):
    # The five branches of the original were identical except for the
    # StringVar used, so they are collapsed into a single lookup here.
    global cap, detector
    vids = [Vid1, Vid2, Vid3, Vid4, Vid5]
    vid = vids[num - 1]
    if ".mp4" not in vid.get():
        # A folder was selected; by convention the video inside it
        # carries the same name, so append "/<name>.mp4".
        m = vid.get()
        name = m.rsplit('/', 1)[-1]
        vid.set(m + "/" + name + ".mp4")
    cap = cv2.VideoCapture(vid.get())
    detector = HandDetector(detectionCon=0.5, maxHands=2)
    Words.set("")
  • Add Videos Button (opens a dialog box on click, from which the folder containing a video can be selected)
def AddVid():
    global Vid1, Vid2, Vid3, Vid4, Vid5, bVid1, bVid2, bVid3, bVid4, bVid5
    global maxVids
    filename = filedialog.askdirectory()
    if filename != "":
        if maxVids < 5:
            if Vid1.get() == "":
                Vid1.set(filename)
                filename = ""
                bVid1 = Button(frame_1, text='Video 1', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(1))
                bVid1.place(x=(buttonX + 400), y=buttonY)
            elif Vid2.get() == "":
                Vid2.set(filename)
                filename = ""
                bVid2 = Button(frame_1, text='Video 2', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(2))
                bVid2.place(x=(buttonX + 500), y=buttonY)
            elif Vid3.get() == "":
                Vid3.set(filename)
                filename = ""
                bVid3 = Button(frame_1, text='Video 3', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(3))
                bVid3.place(x=(buttonX + 400), y=(buttonY + 40))
            elif Vid4.get() == "":
                Vid4.set(filename)
                filename = ""
                bVid4 = Button(frame_1, text='Video 4', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(4))
                bVid4.place(x=(buttonX + 500), y=(buttonY + 40))
            elif Vid5.get() == "":
                Vid5.set(filename)
                filename = ""
                bVid5 = Button(frame_1, text='Video 5', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(5))
                bVid5.place(x=(buttonX + 450), y=(buttonY + 80))
        else:
            maxLim()
        if maxVids <= 4:
            maxVids += 1

def ButtonSelect():
    bAdd = Button(frame_1, text='Add Videos', height=1, width=15, relief=RAISED, cursor="hand2", command=AddVid)
    bAdd.place(x=(buttonX + 200), y=buttonY)
    label1 = Label(frame_1, text="Max 5 videos", width=10, height=1, bg="#808080")
    label1.place(x=(buttonX + 220), y=(buttonY + 30))
Adding Recorded Video Procedure
  • Delete Video Button (to delete any added recorded video in order to make room for more videos, as only five videos can be added at a time)
def DelVid():
    global maxVids, bVid1, bVid2, bVid3, bVid4, bVid5, Vid1, Vid2, Vid3, Vid4, Vid5
    if isinstance(bVid5, Button):
        bVid5.place_forget()
        bVid5 = ''
        Vid5.set("")
        maxVids -= 1
    elif isinstance(bVid4, Button):
        bVid4.place_forget()
        bVid4 = ''
        Vid4.set("")
        maxVids -= 1
    elif isinstance(bVid3, Button):
        bVid3.place_forget()
        bVid3 = ''
        Vid3.set("")
        maxVids -= 1
    elif isinstance(bVid2, Button):
        bVid2.place_forget()
        bVid2 = ''
        Vid2.set("")
        maxVids -= 1
    elif isinstance(bVid1, Button):
        bVid1.place_forget()
        bVid1 = ''
        Vid1.set("")
        maxVids -= 1

Hand Landmarks:

For Morse code detection, the hand landmarks of the thumb, index finger, and middle finger are necessary. The amount of flexion between the thumb and the two fingers is read from the 21 3D hand coordinates to interpret the required text. Since Morse code is a combination of dots and dashes, the hand gestures are interpreted as follows:

Dash/Hyphen

The y-coordinates of the index finger are checked to represent a hyphen. The DIP (point 7) and tip (point 8) of the index finger are compared to determine whether the finger is bent. If the y-coordinate of the tip is greater than that of the DIP, the finger is bent downwards and a check variable is set to True; when the tip's y-coordinate later becomes less than the DIP's again, the finger is back in its original position and a '-' is appended to a string named 'words'.
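This bend-and-release check is a falling/rising edge detector on one coordinate stream. A minimal sketch on synthetic y-values (larger y means lower on screen; the function name and data are ours):

```python
def count_flicks(tip_ys, dip_ys):
    """Count bend-and-release gestures: a symbol is registered when
    the tip drops below the DIP (tip y > dip y) and then returns."""
    bent = False
    flicks = 0
    for tip, dip in zip(tip_ys, dip_ys):
        if tip > dip:          # finger bent downwards
            bent = True
        elif bent:             # finger back up after a bend
            flicks += 1
            bent = False
    return flicks

# Two bends on a synthetic stream; the DIP is held at y=100.
print(count_flicks([90, 120, 90, 130, 85], [100, 100, 100, 100, 100]))  # 2
```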

Dot/Period

The y-coordinates of the middle finger are checked to represent a period. The DIP (point 11) and tip (point 12) of the middle finger are compared to determine whether the finger is bent. If the y-coordinate of the tip is greater than that of the DIP, the finger is bent downwards and a check variable is set to True; when the tip's y-coordinate later becomes less than the DIP's again, the finger is back in its original position and a '.' is appended to the string 'words'.

Alphabet

For the thumb, x-coordinates rather than y-coordinates are compared, in order to display the letter for the accumulated Morse symbols. When the thumb is flicked in the x-direction, the string of dots and dashes in 'words' is looked up in the Morse dictionary defined above and the matching letter is displayed.
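Putting the three checks together, the per-frame logic amounts to a small state machine: index flicks append '-', middle flicks append '.', and a thumb flick commits the accumulated symbols as a letter. A self-contained sketch on synthetic frames (the function, the frame encoding as booleans, and the dictionary subset are ours, not the project's):

```python
Morse = {'.-': 'A', '...': 'S', '---': 'O', '': ' '}  # subset for illustration

def decode_frames(frames):
    """frames: iterable of (index_bent, middle_bent, thumb_bent)
    flags per video frame. A dash or dot is recorded on finger
    release; a thumb release commits the letter."""
    symbols, text = '', ''
    prev = (False, False, False)
    for frame in frames:
        idx, mid, thumb = frame
        if prev[0] and not idx:     # index released -> dash
            symbols += '-'
        if prev[1] and not mid:     # middle released -> dot
            symbols += '.'
        if prev[2] and not thumb:   # thumb released -> commit letter
            text += Morse.get(symbols, '')
            symbols = ''
        prev = frame
    return text

# middle flick (dot), index flick (dash), thumb flick -> ".-" -> "A"
frames = [(0, 1, 0), (0, 0, 0), (1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 0, 0)]
print(decode_frames(frames))  # A
```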

def select_img():
    global checkThumb, checkx, checky, Words, words, cap, detector
    success, img = cap.read()
    if not success:
        cap = cv2.VideoCapture(0)   # video ended: fall back to the webcam
        select_img()
        return                      # do not process the failed frame below
    img = cv2.flip(img, 1)
    hands, img = detector.findHands(img, flipType=False)
    img = cv2.resize(img, (w, h))
    if hands:
        hand1 = hands[0]
        lmList1 = hand1["lmList"]     # list of 21 landmark points
        Index_tipY = lmList1[8][1]    # 8: tip of index finger, 1: y coordinate
        Index_dipY = lmList1[7][1]    # 7: DIP of index finger
        Middle_tipY = lmList1[12][1]  # 12: tip of middle finger
        Middle_dipY = lmList1[11][1]  # 11: DIP of middle finger
        Thumb_tipX = lmList1[4][0]    # 4: tip of thumb, 0: x coordinate
        Thumb_ipX = lmList1[3][0]     # 3: IP of thumb
        if Index_tipY > Index_dipY:                 # index finger bent
            checky = True
        if Index_tipY < Index_dipY and checky:      # index released -> dash
            words += '-'
            checky = False
        if Middle_tipY > Middle_dipY:               # middle finger bent
            checkx = True
        if Middle_tipY <= Middle_dipY and checkx:   # middle released -> dot
            words += '.'
            checkx = False
        if Thumb_tipX > Thumb_ipX:                  # thumb flicked out
            checkThumb = True
        if Thumb_tipX <= Thumb_ipX and checkThumb:  # thumb back -> commit letter
            p = str(Words.get())
            n = p + str(Morse.get(words, ''))
            Words.set(n)
            words = ''
            checkThumb = False
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    image = Image.fromarray(imgRGB)
    finalImage = ImageTk.PhotoImage(image)
    label1.configure(image=finalImage)
    label1.image = finalImage
    win.after(1, select_img)

Main Program

The complete code is provided below:

import cv2
from tkinter import *
from PIL import Image, ImageTk
from HandTrackingModule import HandDetector
from tkinter import filedialog
import os
import pyttsx3

Morse = {'.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D',
         '.': 'E', '..-.': 'F', '--.': 'G', '....': 'H',
         '..': 'I', '.---': 'J', '-.-': 'K', '.-..': 'L',
         '--': 'M', '-.': 'N', '---': 'O', '.--.': 'P',
         '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
         '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X',
         '-.--': 'Y', '--..': 'Z', '': ' '}

win = Tk()
win.title("PoseVerter")
win.geometry("670x600+200+30")

frame_1 = Frame(win, width=670, height=700, bg="#969696").place(x=0, y=0)
cap = cv2.VideoCapture(0)

buttonX=10
buttonY=450
w = 470
h = 300
label1 = Label(frame_1, width=w, height=h)
label1.place(x=180, y=135)

words=""
checky=False
checkx=False
checkThumb=False

Vid1=StringVar()
Vid2=StringVar()
Vid3=StringVar()
Vid4=StringVar()
Vid5=StringVar()
Vid1.set("")
Vid2.set("")
Vid3.set("")
Vid4.set("")
Vid5.set("")

bVid1=''
bVid2=''
bVid3=''
bVid4=''
bVid5=''

engine=pyttsx3.init()

Words=StringVar()
Words.set("")

TTSsave='PoseverterFile'

maxVids=0

def delChar():
    global Words
    n = str(Words.get())
    m = ''
    if len(n) > 0:
        for i in range(len(n) - 1):   # copy all but the last character
            m += n[i]
    Words.set(m)

def Help():
    os.startfile("Help.txt")   # opens the Morse chart with the default app

def Play():
    n = Words.get()
    n_list = list(n)
    n2_string = " ".join(n_list)
    engine.say(n2_string)   # spell the text letter by letter
    engine.say(n)           # then say the whole word
    engine.runAndWait()

def maxLim():
    n = "Maximum Limit Reached"
    engine.say(n)
    engine.runAndWait()

def DelVid():
    global maxVids, bVid1, bVid2, bVid3, bVid4, bVid5, Vid1, Vid2, Vid3, Vid4, Vid5
    if isinstance(bVid5, Button):
        bVid5.place_forget()
        bVid5 = ''
        Vid5.set("")
        maxVids -= 1
    elif isinstance(bVid4, Button):
        bVid4.place_forget()
        bVid4 = ''
        Vid4.set("")
        maxVids -= 1
    elif isinstance(bVid3, Button):
        bVid3.place_forget()
        bVid3 = ''
        Vid3.set("")
        maxVids -= 1
    elif isinstance(bVid2, Button):
        bVid2.place_forget()
        bVid2 = ''
        Vid2.set("")
        maxVids -= 1
    elif isinstance(bVid1, Button):
        bVid1.place_forget()
        bVid1 = ''
        Vid1.set("")
        maxVids -= 1

def Live():
    global cap, detector
    cap = cv2.VideoCapture(0)
    detector = HandDetector(detectionCon=0.5, maxHands=2)
    Words.set("")

def Rec(num):
    # The five branches of the original were identical except for the
    # StringVar used, so they are collapsed into a single lookup here.
    global cap, detector
    vids = [Vid1, Vid2, Vid3, Vid4, Vid5]
    vid = vids[num - 1]
    if ".mp4" not in vid.get():
        # A folder was selected; by convention the video inside it
        # carries the same name, so append "/<name>.mp4".
        m = vid.get()
        name = m.rsplit('/', 1)[-1]
        vid.set(m + "/" + name + ".mp4")
    cap = cv2.VideoCapture(vid.get())
    detector = HandDetector(detectionCon=0.5, maxHands=2)
    Words.set("")

def AddVid():
    global Vid1, Vid2, Vid3, Vid4, Vid5, bVid1, bVid2, bVid3, bVid4, bVid5
    global maxVids
    filename = filedialog.askdirectory()
    if filename != "":
        if maxVids < 5:
            if Vid1.get() == "":
                Vid1.set(filename)
                filename = ""
                bVid1 = Button(frame_1, text='Video 1', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(1))
                bVid1.place(x=(buttonX + 400), y=buttonY)
            elif Vid2.get() == "":
                Vid2.set(filename)
                filename = ""
                bVid2 = Button(frame_1, text='Video 2', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(2))
                bVid2.place(x=(buttonX + 500), y=buttonY)
            elif Vid3.get() == "":
                Vid3.set(filename)
                filename = ""
                bVid3 = Button(frame_1, text='Video 3', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(3))
                bVid3.place(x=(buttonX + 400), y=(buttonY + 40))
            elif Vid4.get() == "":
                Vid4.set(filename)
                filename = ""
                bVid4 = Button(frame_1, text='Video 4', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(4))
                bVid4.place(x=(buttonX + 500), y=(buttonY + 40))
            elif Vid5.get() == "":
                Vid5.set(filename)
                filename = ""
                bVid5 = Button(frame_1, text='Video 5', height=1, width=10, relief=RAISED, cursor="hand2", command=lambda: Rec(5))
                bVid5.place(x=(buttonX + 450), y=(buttonY + 80))
        else:
            maxLim()
        if maxVids <= 4:
            maxVids += 1

def ButtonSelect():
    bAdd = Button(frame_1, text='Add Videos', height=1, width=15, relief=RAISED, cursor="hand2", command=AddVid)
    bAdd.place(x=(buttonX + 200), y=buttonY)
    label1 = Label(frame_1, text="Max 5 videos", width=10, height=1, bg="#808080")
    label1.place(x=(buttonX + 220), y=(buttonY + 30))

def select_img():
    global checkThumb, checkx, checky, Words, words, cap, detector
    success, img = cap.read()
    if not success:
        cap = cv2.VideoCapture(0)   # video ended: fall back to the webcam
        select_img()
        return                      # do not process the failed frame below
    img = cv2.flip(img, 1)
    hands, img = detector.findHands(img, flipType=False)
    img = cv2.resize(img, (w, h))
    if hands:
        hand1 = hands[0]
        lmList1 = hand1["lmList"]     # list of 21 landmark points
        Index_tipY = lmList1[8][1]    # 8: tip of index finger, 1: y coordinate
        Index_dipY = lmList1[7][1]    # 7: DIP of index finger
        Middle_tipY = lmList1[12][1]  # 12: tip of middle finger
        Middle_dipY = lmList1[11][1]  # 11: DIP of middle finger
        Thumb_tipX = lmList1[4][0]    # 4: tip of thumb, 0: x coordinate
        Thumb_ipX = lmList1[3][0]     # 3: IP of thumb
        if Index_tipY > Index_dipY:                 # index finger bent
            checky = True
        if Index_tipY < Index_dipY and checky:      # index released -> dash
            words += '-'
            checky = False
        if Middle_tipY > Middle_dipY:               # middle finger bent
            checkx = True
        if Middle_tipY <= Middle_dipY and checkx:   # middle released -> dot
            words += '.'
            checkx = False
        if Thumb_tipX > Thumb_ipX:                  # thumb flicked out
            checkThumb = True
        if Thumb_tipX <= Thumb_ipX and checkThumb:  # thumb back -> commit letter
            p = str(Words.get())
            n = p + str(Morse.get(words, ''))
            Words.set(n)
            words = ''
            checkThumb = False
    imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    image = Image.fromarray(imgRGB)
    finalImage = ImageTk.PhotoImage(image)
    label1.configure(image=finalImage)
    label1.image = finalImage
    win.after(1, select_img)

detector = HandDetector(detectionCon=0.5, maxHands=2)

label2= Label(frame_1, text="Welcome to PoseVerter By Mechatronics Students", bg="#808080", font=('Times', '14', 'bold') ).place(x=0,y=2)
label3= Label(frame_1, text="CODED TEXT : ", bg="#808080").place(x=0,y=28)
label4= Label(frame_1, text="Developed by: Muhammad Hamza Waheed & Madha Siddiqui", bg="#808080", font=('Times', '10', 'bold')).place(x=300,y=570)
textlabel= Label(frame_1, textvariable=Words, width=92, height=5, relief=RIDGE)
textlabel.place(x=10, y=50)
b1 = Button(frame_1,text='Recorded Videos', height=1, width=20, relief=RAISED, cursor="hand2", command=ButtonSelect)
b1.place(x=buttonX, y=buttonY)
b2 = Button(frame_1,text='Live Video', height=1, width=20, relief=RAISED, cursor="hand2", command=Live)
b2.place(x=buttonX, y=(buttonY+40))
b3 = Button(frame_1,text='Delete Video', height=1, width=20, relief=RAISED, cursor="hand2", command=DelVid)
b3.place(x=buttonX, y=(buttonY+80))
b4 = Button(frame_1,text='Play', height=1, width=20, relief=RAISED, cursor="hand2", command=Play)
b4.place(x=10, y=220)
b5 = Button(frame_1,text='Help', height=1, width=20, relief=RAISED, cursor="hand2", command=Help)
b5.place(x=10, y=260)
b6 = Button(frame_1,text='Delete Char', height=1, width=20, relief=RAISED, cursor="hand2", command=delChar)
b6.place(x=10, y=300)
select_img()
win.mainloop()

The full code is also available on GitHub.

Any feedback is highly appreciated.

Thanks!
