Building face_recognition (DLib) and ALPR

With any luck, the pip3 install face_recognition method will just work. If not, we will have to build DLib and face_recognition from source.

# I am installing things system-wide since I am using an LXC anyway, so I am using sudo. If you are using a venv, make sure it is activated and do not use sudo to install Python packages.
sudo -H pip3 install face_recognition
# This will take a while as it builds DLib
# SUCCESS! Rerun the default objectconfig.ini that has DLib face detection enabled to test.
es.debug.objdet 15
# Logs cherry picked ->
10/20/21 03:39:19.705719 zmesdetect_m1[10882] DBG2 detect_sequence:324->[ml: loading 'face' sequence 'DLIB based face recognition' (2 of 2) with options-> {'name': 'DLIB based face recognition', 'enabled': 'yes', 'save_unknown_faces': 'yes', 'save_unknown_faces_leeway_pixels': 100, 'face_detection_framework': 'dlib', 'known_images_path': '/var/lib/zmeventnotification/known_faces', 'unknown_images_path': '/var/lib/zmeventnotification/unknown_faces', 'face_model': 'cnn', 'face_train_model': 'cnn', 'face_recog_dist_threshold': '0.6', 'face_num_jitters': '1', 'face_upsample_times': '1', 'gpu_max_processes': 2, 'gpu_max_lock_wait': 100, 'cpu_max_processes': 3, 'cpu_max_lock_wait': 100, 'max_size': 800, 'disable_locks': 'no'}]
10/20/21 03:39:23.890758 zmesdetect_m1[10882] DBG1 face_dlib:35->[perf:dlib:face:gpu: importing Face Recognition library took: 4059.27 ms ]
10/20/21 03:39:23.89188 zmesdetect_m1[10882] DBG1 face_dlib:42->[dlib:face: initializing face_recognition with DNN model: 'cnn' upsample_times: 1, num_jitters (distort): 1]
10/20/21 03:39:23.89295 zmesdetect_m1[10882] DBG2 face_dlib:63->[dlib:face:portalock: [max: 2] [name: pyzm_uid33_GPU_lock] [timeout: 100]]
10/20/21 03:39:23.893284 zmesdetect_m1[10882] ERR face_dlib:98->[dlib:face: error loading KNN model from faces.dat -> [Errno 2] No such file or directory: '/var/lib/zmeventnotification/known_faces/faces.dat']
# This is expected, as we have not trained any faces for it to load yet (this error should not stop the detection; it will be fixed)
# The important part is that the library imported without any errors!
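If you want to sanity check the library outside of ZMES, a minimal Python sketch like the following works; the image file name is just a placeholder, and the model/upsample values mirror the objectconfig.ini settings shown in the log above.

import face_recognition

# Load a test image and run the CNN (GPU) face detector, matching the
# face_model and face_upsample_times settings from objectconfig.ini.
image = face_recognition.load_image_file("test_face.jpg")
locations = face_recognition.face_locations(image, number_of_times_to_upsample=1, model="cnn")
encodings = face_recognition.face_encodings(image, known_face_locations=locations)

print(f"found {len(locations)} face(s)")

If this runs without import errors and finds a face, the pip install (or source build) is good to go.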

Train Faces

Relevant docs from the original author of ZMES -> Here

# Get the headshots into their respective folders. For example, if you have someone named Jeff who has 6 photos:
sudo -u www-data mkdir /var/lib/zmeventnotification/known_faces/jeff
# Copy all of Jeff's photos into the jeff folder
cd /var/lib/zmeventnotification/bin/
sudo -u www-data ./zm_train_faces.py -c /etc/zm/objectconfig.ini -d
# At the end there should be some output like ->
10/21/21 00:28:33.047656 zmes_face_train[14925] DBG1 face_train_dlib:160->[mlapi:face-train: wrote encoding file: /var/lib/zmeventnotification/known_faces/faces.dat]
10/21/21 00:28:33.048328 zmes_face_train[14925] DBG1 face_train_dlib:162->[perf: Face Recognition training took: 7468.00 ms]
# You can delete faces.dat and rebuild if you want to add more photos or more people.
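Conceptually, the training script walks the known_faces/<name>/ folders, encodes each photo with DLib, and saves the encodings plus names to faces.dat. Here is a rough, hedged sketch of that idea; this is not the actual zm_train_faces.py code, and the real faces.dat layout is defined by pyzm, not this snippet.

import os
import pickle
import face_recognition

KNOWN_DIR = "/var/lib/zmeventnotification/known_faces"
names, encodings = [], []

for person in os.listdir(KNOWN_DIR):
    person_dir = os.path.join(KNOWN_DIR, person)
    if not os.path.isdir(person_dir):
        continue
    for photo in os.listdir(person_dir):
        image = face_recognition.load_image_file(os.path.join(person_dir, photo))
        # Detect with the CNN model, matching face_train_model in objectconfig.ini.
        locs = face_recognition.face_locations(image, model="cnn")
        found = face_recognition.face_encodings(image, known_face_locations=locs, num_jitters=1)
        if found:
            names.append(person)
            encodings.append(found[0])

# Illustrative only: the real encoding file format may differ.
with open(os.path.join(KNOWN_DIR, "faces.dat"), "wb") as fp:
    pickle.dump({"names": names, "encodings": encodings}, fp)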

Build ALPR Binary for local ALPR detections

This isn't necessary, but I thought I would cover it anyway. Platerecognizer offers 2,500 free plate lookups per month via their cloud API, and it is really good. The local ALPR binary lets you experiment, though: there are some cool things you can do, like warping the image to get a license plate clean enough to run OCR on. All the configuration available with the local ALPR binary is super cool, so let's build it!
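For reference, a cloud lookup with Platerecognizer looks roughly like the sketch below; the endpoint and token header follow their public snapshot API, and the API key is a placeholder you would get from your Platerecognizer dashboard.

import requests

# Placeholder token; substitute your own.
API_TOKEN = "YOUR_PLATERECOGNIZER_TOKEN"

with open("lp.jpg", "rb") as fp:
    resp = requests.post(
        "https://api.platerecognizer.com/v1/plate-reader/",
        files={"upload": fp},
        headers={"Authorization": f"Token {API_TOKEN}"},
    )

# Each result contains the plate string and a confidence score.
for result in resp.json().get("results", []):
    print(result["plate"], result["score"])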

First, clone the openalpr repo and follow the build instructions. There are some dependencies, but Ubuntu makes it fairly easy!

cd ~/git
git clone https://github.com/openalpr/openalpr.git
cd openalpr/src
mkdir build && cd build
sudo apt-get install libopencv-dev libtesseract-dev libleptonica-dev
sudo apt-get install liblog4cplus-dev libcurl3-dev
# If using the daemon, install beanstalkd
sudo apt-get install beanstalkd
# There are some Java components to alpr that you do not need to install. If you want them, install openjdk-8 and openjdk-8-dev
# Set up the compile environment with CUDA/GPU support
cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr -DCMAKE_INSTALL_SYSCONFDIR:PATH=/etc -DCOMPILE_GPU=1 -D WITH_GPU_DETECTOR=ON ..
# compile the library
make -j 19
# Install the binaries/libraries to your local system (prefix is /usr)
sudo make install -j 19
# Test the library (this doesn't use CUDA/GPU just yet)
wget http://plates.openalpr.com/h786poj.jpg -O lp.jpg
alpr lp.jpg
# copy the openalpr default config file to the override directory
sudo cp /usr/share/openalpr/config/openalpr.defaults.conf /etc/openalpr/openalpr.conf
# Edit the openalpr.conf file to enable gpu
sudo nano /etc/openalpr/openalpr.conf
# Find the line
detector = lbpcpu
# And change it to
detector = lbpgpu
# Save and exit then rerun the alpr test.
baudneo@ZMES-test:~/git/openalpr/src/build$ alpr lp.jpg
--(!)Loaded CUDA classifier
plate0: 10 results
    - 786P0    confidence: 88.3243
    - 7860     confidence: 78.1484
    - 786PQ    confidence: 76.4509
    - 786PO    confidence: 76.4182
    - 786PD    confidence: 75.1378
    - 7B6P0    confidence: 74.4098
    - 786PU    confidence: 70.6696
    - 786PB    confidence: 70.0247
    - 786PG    confidence: 68.0718
    - 786Q     confidence: 66.2749
# If you use the --debug flag you may see this error ->
Unable to init server: Could not connect: Connection refused
Caught exception in OpenALPR recognize: OpenCV(4.5.4) /home/baudneo/opencv-4.5.4/modules/highgui/src/window_gtk.cpp:635: error: (-2:Unspecified error) Can't initialize GTK backend in function 'cvInitSystem'
# This is just alpr trying to create some windows to show you what it is doing during detection. If you want to see the windows, install xorg, xorg-dev, and xorg-server-source, log out of the SSH session, and reconnect with ssh -X <user>@<ZMES host> to enable X forwarding over SSH. Running alpr with --debug should now pop up windows.
The ALPR binary has now been built with CUDA/GPU support. Play away!
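If you would rather drive the binary from Python than from the shell, a minimal sketch looks something like the following; it assumes the alpr binary is on your PATH and uses its -j flag for JSON output (the country code is illustrative, pick the one for your region).

import json
import subprocess

# Run the locally built alpr binary against an image and ask for JSON output.
proc = subprocess.run(
    ["alpr", "-c", "us", "-j", "lp.jpg"],
    capture_output=True, text=True, check=True,
)

data = json.loads(proc.stdout)
for plate in data.get("results", []):
    # Each result has a best candidate plus a list of alternate candidates.
    print(f"{plate['plate']} ({plate['confidence']:.2f}%)")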

THERE IS A WEIRD BUG WHEN USING ALPR BINARY

Here is a post where I try to describe this bug; it may present itself to you, it may not.

Example scenario -> 3 cameras watching the outside and 1 camera for testing face/ALPR models. Everything works as expected as long as I DO NOT run alpr_cmdline. Currently I run the ALPR command line for testing (I may add it as a pre-existing label for ALPR cloud calls). Here is the scenario: ZM is on its own host and uses the CPU for detections as a 'local_fallback'; mlapi is on its own host and has 1 GPU and 1 USB TPU for detections. openALPR was compiled with CUDA support and works as expected from the CLI when I run it on its own. So mlapi is freshly started (no models loaded into memory or queued at all), and I run a detection on the test monitor that has object, face, and ALPR models configured. The sequence is: Object -> tpu::mobiledet_ssd_lite, gpu::yolov4; Face -> tpu::mobilenetv2_ssd_face, gpu::D-Lib; and finally ALPR -> openALPR command line. Everything works as expected: a person is detected along with a face that DLib correctly recognizes, and ALPR does not return any hits. This is where the problems begin: from this detection onward, every single detection sent to yolo returns the very first detection that yolo returned.

Example: the first run returned a person from yolo @ 98%, a face from the TPU, and 'my_name' from DLib. Every single run going forward will always return person @ 98% from yolo on every frame until I restart mlapi. This only happens when using yolo and the local ALPR binary; I have tried changing the openALPR processor to CPU with the same results. I thought maybe the image was somehow being left in GPU memory or something wild like that, so I used cv2.imwrite to save every image right before yolo turns it into a blob to run detections on, and every frame was the correct frame. I do not know how to debug this further; it may be a one-off issue! I used PDB and stepped through line by line checking variables and the images being fed, and I could find no irregularities.
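For anyone who wants to reproduce that check, here is a rough sketch of the kind of debug hook described above; the function name, output path, and blob parameters are illustrative, not the actual pyzm/mlapi code.

import time
import cv2

def debug_dump_and_blob(frame, size=(416, 416)):
    # Write the exact frame that is about to be fed to yolo so it can be
    # inspected later; if the saved frames differ but yolo keeps returning
    # the same detection, the problem is not the input image.
    cv2.imwrite(f"/tmp/yolo_input_{time.time():.3f}.jpg", frame)

    # Typical yolo-style preprocessing: scale pixel values and resize.
    return cv2.dnn.blobFromImage(frame, 1 / 255.0, size, swapRB=True, crop=False)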
