Standard surveillance cameras can send alerts when they detect movement, but deep learning can greatly improve detection and reduce false positives.
Camera configuration
I have Chinese H.264 960p PoE video surveillance cameras that can detect movement, but the alarms are often raised by moving branches, cloud shadows or other spurious detections.
Node-RED
To receive requests from the alarm server, I installed Node-RED on my Raspberry Pi (the installation is outside the scope of this article, but a Docker-based install works well). We create a TCP in node that listens on the port configured earlier, in my case 10000.
Image grab
Now that we’ve detected when an alarm is raised, we just need to retrieve the camera image. An FTP solution is possible, but it wasn’t effective in my case. I therefore used FFmpeg to grab an image from the camera’s video stream.
To retrieve an image, install FFmpeg and run the following command:
user@raspberrypi:~ $ sudo apt install ffmpeg
user@raspberrypi:~ $ ffmpeg -i rtsp://{CAMERA_IP}:554/user=admin_password=tlJwpbo6_channel=0_stream=0.sdp?real_stream -vframes 1 -r 1 -y output.jpeg
So I wrote a minimalistic Flask server to run this command. It is probably possible to run the command directly from Node-RED, but I want to use it from other applications, and a web request is a more open way to do that.
from flask import Flask, send_file
import subprocess
import time

app = Flask(__name__)

@app.route('/')
def index():
    return 'Camera image grabber'

@app.route('/ffmpeg/<int:ip>')
def ffmpeg(ip):
    # Build the RTSP URL for the camera at 192.168.1.<ip>
    url = 'rtsp://192.168.1.%d:554/user=admin_password=tlJwpbo6_channel=0_stream=0.sdp?real_stream' % ip
    # Timestamped output path, one directory per camera
    output = '/home/user/dvr/{}/{}.jpeg'.format(ip, time.strftime("%Y%m%d-%H%M%S"))
    # Grab a single frame from the stream
    ret = subprocess.check_call(['/usr/local/bin/ffmpeg', '-i', url, '-vframes', '1', '-r', '1', '-y', output])
    return send_file(output, mimetype='image/jpeg')

if __name__ == '__main__':
    app.run(host='192.168.1.130', debug=True)
It’s also important to keep the images you acquire, so that you can use them for the training we’ll do in the next stage.
AI Deep learning detection
To make deep learning as simple as possible, there’s nothing better than the https://teachablemachine.withgoogle.com application. Create two classes: a detection class and an OK class. Use the upload button to send images; starting with about 50 images per class already gives very good results. The model input is 224x224 in colour, so resize your images before uploading.
Please note that it’s best to have images that are as varied as possible: day, night, rain, snow… This is difficult to achieve in a few days, but you can retrain the model later.
After training the model, click on Export Model to download it, and select the TensorFlow Lite Quantized option.
Inference on Raspberry Pi
To analyse the images from the camera, we’re going to use a Python program on our Raspberry Pi, although any other Linux machine would also be suitable.
First, install the dependencies for TensorFlow Lite:
user@raspberrypi:~ $ python3 -m pip install tflite-runtime opencv-python flask
Let’s write a Python class (inference.py):
import numpy as np
import tflite_runtime.interpreter as tflite

class Inference(object):
    def __init__(self, model_path):
        # Load the TFLite model and allocate its tensors once at startup
        self.interpreter = tflite.Interpreter(model_path=model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def invoke(self, im):
        """
        Run inference on an image; the model input is 224x224x3.
        """
        # Add the batch dimension expected by the model
        input_im = np.expand_dims(im, axis=0)
        self.interpreter.set_tensor(self.input_details[0]['index'], input_im)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(self.output_details[0]['index'])
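The quantized model returns raw uint8 scores in the 0–255 range, one per class. To turn them into a prediction, take the argmax as the label and divide the winning score by 255 to get a 0–1 confidence. A small numpy sketch of that decoding step:

```python
import numpy as np

def decode_quantized_output(scores):
    """Turn the raw uint8 score vector from a quantized model into (label, confidence)."""
    label = int(np.argmax(scores))
    confidence = float(scores[label]) / 255.0  # uint8 scores span 0..255
    return label, confidence

# With two classes, a score vector of [12, 243] means class 1 wins
# with confidence 243/255, about 0.95.
label, confidence = decode_quantized_output(np.array([12, 243], dtype=np.uint8))
```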
And a minimal Flask server (app.py) that uses it:
#!/usr/bin/python3
from flask import Flask, request, jsonify
from inference import Inference
import cv2
import numpy as np
import json

app = Flask(__name__)
inf = Inference(model_path='model.tflite')

@app.route("/analyze", methods=['GET', 'POST'])
def analyze():
    if request.method == 'POST':
        # Retrieve the raw JPEG bytes from the request body
        arr = np.frombuffer(request.data, np.uint8)
        # Decode the JPEG image
        im = cv2.imdecode(arr, cv2.IMREAD_COLOR)
        # Resize to the model input size
        im = cv2.resize(im, (224, 224))
        # Invoke the deep learning model
        r = inf.invoke(im)
        # Select the best class
        label = int(np.argmax(r))
        # Convert the uint8 score to a 0..1 confidence
        precision = float(r[0][label]) / 255.0
        # Return a JSON message to be parsed in Node-RED
        return json.dumps({'label': label, 'precision': precision})
    return 'POST a JPEG image to this endpoint'

@app.route("/")
def index():
    return "<p>AI Deeplearning detection!</p>"

if __name__ == '__main__':
    app.run(debug=True, port=8090)
The file model.tflite is in the zip archive downloaded from the Teachable Machine website. Install gunicorn3 and nginx, then run the server as a Linux service, using this file as the service configuration:
user@raspberrypi:~/grab $ cat /etc/systemd/system/ai.service
[Unit]
Description=Gunicorn daemon for AI
Before=nginx.service
After=network.target
[Service]
WorkingDirectory=/home/user/deep
ExecStart=/usr/bin/gunicorn3 --bind $YOUR_IP:8090 app:app
Restart=always
SyslogIdentifier=gunicorn3
User=user
Group=user

[Install]
WantedBy=multi-user.target
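Once the unit file is in place, reload systemd and enable the service so it starts on boot:

```shell
# Reload unit files, then enable and start the service in one step
sudo systemctl daemon-reload
sudo systemctl enable --now ai.service
# Check that gunicorn is running
sudo systemctl status ai.service
```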
Send detection on Telegram
I couldn’t find an easier way than the Telegram messaging application to send me the detections with the corresponding image. To do this, install Telegram on your smartphone, create a bot and a group, then install the node-red-contrib-telegrambot extension in Node-RED. Configure the node to match your bot, and test that sending messages works.
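The Node-RED node handles the Bot API for you, but if you ever want to send a detection from outside Node-RED, the raw API is simple. A sketch using the official sendPhoto endpoint, where Telegram fetches the photo from a URL you provide (the token and chat_id below are placeholders, not real values):

```python
import json
import urllib.parse
import urllib.request

API_BASE = 'https://api.telegram.org'

def send_photo_request(token, chat_id, photo_url, caption=''):
    """Build the Bot API sendPhoto URL; Telegram fetches the photo from photo_url itself."""
    params = urllib.parse.urlencode(
        {'chat_id': chat_id, 'photo': photo_url, 'caption': caption})
    return '%s/bot%s/sendPhoto?%s' % (API_BASE, token, params)

def send_photo(token, chat_id, photo_url, caption=''):
    """Call the Bot API and return its JSON response."""
    with urllib.request.urlopen(send_photo_request(token, chat_id, photo_url, caption)) as resp:
        return json.loads(resp.read())

# Example (hypothetical token/chat_id; the photo URL could point at the grabber service):
# send_photo('123456:ABC-DEF', '-1001234567890',
#            'http://192.168.1.130:5000/ffmpeg/64', 'Motion detected')
```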
Conclusion
Et voilà! You can download my Node-RED flow at https://gist.github.com/mdauphin/60e21447106eb6a61854f77238373508. Here are the versions I’m using:
- Node-RED v1.2.2 on docker
- Raspberry Pi 3 Model B Rev 1.2 with Raspbian 10 Buster
- China NVR, with Techege h264 960p POE cameras