No-code AI-detection automation with DeepStack and Node-RED

memudu alimatou sadia · Published in DeepQuestAI · Nov 13, 2021

Step-by-step tutorial on how to implement your AI automation process with no code using DeepStack and Node-RED

Node-red and DeepStack

Industrial and home AI automation can be a very tedious process, especially when working with different tools that use different interfaces and protocols. Most of the time, you need to write a lot of code to get fully working end-to-end workflows, which can be hard to maintain and debug when issues arise.

Stress no more: in this tutorial, we will walk you through a painless and highly modular process to set up a robust AI automation workflow you can deploy in your home or in an industrial setting, without writing a single line of code.

We will cover the following:

  • No-code and low-code platforms (introduction)
  • Node-RED (a no-code/low-code platform)
  • DeepStack (a free, open-source AI API server)
  • Automating object detection on a static image
  • Automating object detection on an IP camera live feed
  • Automating face recognition

No-code and low-code platforms

No-code platforms are development platforms where application software or workflows are created without writing any code, or with very little code (a.k.a. low-code). These platforms are primarily visual software development environments that allow enterprise developers and citizen developers to drag and drop application components and connect/chain them together to create mobile, web and console applications.

These platforms use visual interfaces with simple logic and drag-and-drop features instead of extensive programming. They allow non-technical individuals to build robust software with no coding skills.

In this article, we will introduce you to a no-code/low-code platform called Node-RED.

Introduction to Node-RED

Node-RED is a programming tool that provides a browser-based flow editor for visual programming. It was originally developed by IBM for wiring together hardware devices, APIs and online services as part of the Internet of Things (IoT), and flows can be deployed to its runtime in a single click. Node-RED is a web-based platform built on Node.js that lets you represent the flow of an application by dragging nodes onto a canvas and connecting them. To create any application workflow in Node-RED, the nodes must be wired together and the flow deployed.

Node-red Installation

Node-RED can be installed locally on a device such as a laptop or a Raspberry Pi, or on a cloud machine.

Node-RED Local Installation

To install Node-RED locally you will need a supported version of Node.js.
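
If you are not sure whether a suitable Node.js is already installed, a quick check from a terminal will tell you (any currently supported LTS release should do; see the Node-RED documentation for the exact versions):

node --version
npm --version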

Option 1: Installation with NPM
Then, use the npm command that comes with Node.js to install Node-RED:

  • On Windows
npm install -g --unsafe-perm node-red
  • On Linux
sudo npm install -g node-red

This command will install Node-RED as a global module along with its dependencies. Once installed, start Node-RED by running the command below:

node-red

After running the above command, Node-RED will be accessible at http://localhost:1880, as shown below.

Node-red UI
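
By default, Node-RED listens on port 1880. If that port is already taken on your machine, you can start the editor on another port using the standard --port option (1881 below is just an example):

node-red --port 1881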

Option 2: Installation with Docker
To use with Docker, run the command below:

docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red

For more information visit its documentation page.
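
If you went with the Docker option, the usual Docker CLI commands can be used to follow the editor's logs or to stop and restart the container created above (mynodered is the name given in the run command, and the flows live in the node_red_data volume):

# follow the Node-RED container logs
docker logs -f mynodered
# stop and start the container; flows are preserved in the node_red_data volume
docker stop mynodered
docker start mynodered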

Node-red Interface

The default view of Node-RED is a three-column layout, with the nodes palette on the left, the flows workspace in the middle and a third column on the right.

The third column, or output pane, has four to five tabs (info, debug, config, context and dashboard).

Its UI is composed of three areas:
1. Nodes palette: All the draggable nodes used to create a flow are available in this section, grouped into categories such as common, function, network, sequence, parser and storage nodes.
2. Workspace: This is where the nodes are linked together to create a flow.
3. Information sidebar: This section provides information about the components of the flow workspace, including an outline view of all flows and nodes, as well as details of the current selection.

Introduction to DeepStack

DeepStack is an open-source AI API server that empowers developers and IoT experts to easily deploy AI systems both on premise and in the cloud. DeepStack is device- and programming-language-agnostic, and is available on Docker for multiple operating systems such as Windows, macOS and Linux, as well as on Raspberry Pi (and other ARM devices) and NVIDIA Jetson devices, with CPU and GPU acceleration. It is mostly used for face detection and recognition, object detection, and scene recognition.

For detailed information about its features and installation, check the DeepStack documentation: https://docs.deepstack.cc

DeepStack Installation

DeepStack is installable on Docker CPU, Docker GPU, Windows OS, NVIDIA Jetson, Raspberry Pi and other ARM devices. Check this link for the installation guide.

DeepStack provides different installation options depending on the version installed and the task you want to perform, whether face recognition or object detection. In this article, we will show how to set up face recognition and object detection on Docker (CPU) and Windows OS. Follow the installation instructions detailed below:

  • Install Docker on your machine with the version that corresponds to your operating system.
  • Install DeepStack on your machine that corresponds to your hardware and operating system via the link below.

https://docs.deepstack.cc/index.html#installation
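
If you plan to use the Docker (CPU) version, you can pull the DeepStack image ahead of time so that the first run does not have to download it:

docker pull deepquestai/deepstack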

DeepStack Face Recognition and Object Detection Installation

Run the command below that corresponds to the version of DeepStack you installed to start the Detection and Face APIs:

  • Docker CPU
sudo docker run -e VISION-FACE=True -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
  • Windows OS
deepstack --VISION-FACE True --VISION-DETECTION True --PORT 80

Now visit http://localhost:80 or http://127.0.0.1:80 in your web browser.

DeepStack
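
Before wiring anything up in Node-RED, you can confirm that the Detection API is responding with a quick command-line test. This is a minimal sketch assuming DeepStack is running on port 80 as above and that you have a local test image named test.jpg:

# send a test image to DeepStack's object detection endpoint
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/detection

If everything is working, the response is a JSON object with a success flag and a list of predictions.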

No-code AI automation Tutorial

In this tutorial, we will demonstrate how to perform object detection, face registration and face recognition with DeepStack and a Node-RED package called node-red-contrib-deepstack.

What is Node-red-contrib-DeepStack?

Node-red-contrib-deepstack is a simple but powerful set of Node-RED nodes developed and maintained by Joakim Lundin for interacting with the DeepStack API. It allows you to easily build object detection, face registration and face recognition flows using three drag-and-drop DeepStack nodes: deepstack-object-detection, deepstack-face-registration and deepstack-face-recognition.

Node-red-contrib-DeepStack Installation

There are two main ways to install node-red-contrib-deepstack:

Option 1

To install node-red-contrib-deepstack from the command line, run the command below in your Node-RED user directory (typically ~/.node-red):

npm install node-red-contrib-deepstack

Option 2

Installing node-red-contrib-deepstack through the Node-RED web interface.

For a simple guide, do this:

Hamburger Menu -> Manage Palette -> Install -> search “node-red-contrib-deepstack” -> Install

Node-red-contrib-deepstack is now successfully installed; its nodes are displayed in the nodes palette section.

node-red-contrib-deepstack

Node-red-contrib-Deepstack Nodes

To perform object detection, face recognition and face registration in Node-RED, we make use of four nodes:

  • inject: triggers the flow when its button is clicked; it can also be configured to trigger the flow at a fixed time or regular interval.
  • File In: reads the contents of a file from a path as either a string or a binary buffer, on which further action can be performed.
  • deepstack node: a node to query the DeepStack API.
  • debug: displays detection results in the Debug sidebar.

Double-click each node to view and change its settings.

Deepstack-object-detection

Object detection is implemented using DeepStack through the deepstack-object-detection node, which allows you to identify common objects in an image. It has four properties that can be viewed by double-clicking the node (a sketch of the underlying API call follows the list):

  • Server: specifies the URL of the DeepStack endpoint.
  • Confidence: a number from 0 to 100 that sets the minimum confidence threshold, in percent, for each object prediction.
  • Outline Color: specifies the color of the object bounding boxes.
  • Filter Output: filters the output to specific objects such as Dog, Bag, etc.
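
Under the hood, the node posts the image to DeepStack's detection endpoint. A roughly equivalent direct call is sketched below (test.jpg is a placeholder image; note that the HTTP API takes min_confidence as a fraction between 0 and 1, while the node's Confidence field is a percentage):

# detect objects in test.jpg, keeping only predictions above 60% confidence
curl -X POST -F image=@test.jpg -F min_confidence=0.6 http://localhost:80/v1/vision/detection

Each prediction in the JSON response carries a label, a confidence value and bounding-box coordinates.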

We will perform object detection on the image below

object detection sample image

Detection automation Demo (sample image)

This object detection workflow will scan the above image for objects, convert that information into a useful form and display the result in the Debug sidebar every 5 seconds.

Detection automation Demo (IP camera live feed)

This demo will show you how to perform live object detection on the camera feed from an IP camera. For the purpose of this tutorial, we will be using an Android app that converts your smartphone into an IP camera. The automation workflow works as follows:

  • triggers the flow every 5 seconds using the timestamp node
  • captures a frame from the Android IP camera using the http request node
  • sends the captured image frame to DeepStack’s detection API using the deepstack-object-detection node
  • DeepStack detects the objects in the image and sends the result to the debug node for viewing

Follow the steps below:

  • Install the IP Camera app on your Android Phone
  • Connect your Laptop and the Android phone to the same Wifi network.
  • Open the app, scroll to the bottom of the screen and click on the Start Server button.
  • Open your web browser and visit http://<ip_shown_on_app>:<port_shown_on_app>/video
    E.g. http://192.168.1.145:8080/video
    When you visit this URL in the browser, you will see the live camera feed from your phone's camera.
  • To view a static image frame from the current point of the IP camera feed, visit http://<ip_shown_on_app>:<port_shown_on_app>/photo.jpg (see the command sketch after this list)
    E.g. http://192.168.1.145:8080/photo.jpg
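
You can also confirm the snapshot endpoint from a terminal. The command below uses the example address from this tutorial, so substitute the IP and port shown in your app; it saves the current frame to a local file:

# grab a single frame from the IP camera app and save it locally
curl -s http://192.168.1.145:8080/photo.jpg -o frame.jpg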

Now, we will update our workflow as seen in the demo below:

  • Remove the connection from the timestamp node to the file node, and remove the file node's connection to the Object Detection node.
  • Add a new http request node, set its URL to http://<ip_shown_on_app>:<port_shown_on_app>/photo.jpg and set the return type to a binary buffer.
  • Connect the timestamp node to the http request node, and the http request node to the Object Detection node.
  • Deploy the new flow and view the results of the detection triggered every 5 seconds.
  • Each detection processes an image frame from the IP camera's live feed (a shell sketch of this loop follows the list).
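
For reference, the flow above does the command-line equivalent of the loop sketched below every 5 seconds (a rough shell analogue using the example camera address and the DeepStack instance on port 80, not part of the Node-RED setup itself):

# fetch a frame from the IP camera and run object detection on it every 5 seconds
while true; do
  curl -s http://192.168.1.145:8080/photo.jpg -o frame.jpg
  curl -s -X POST -F image=@frame.jpg http://localhost:80/v1/vision/detection
  sleep 5
done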

There is so much more we can do to extend this automation flow, such as:

  • connecting a single timestamp node to 2, 3 or even 10 http request nodes pointing at as many IP cameras, while all the http request nodes feed into the same or multiple Object Detection nodes.
  • adding more nodes to send the results to a remote server for further processing, trigger phone/email notifications, start a machine, open an IoT-powered door, switch a bulb on/off, etc.

Deepstack-face-registration

To register a face using DeepStack, we make use of the deepstack-face-registration node, which allows you to register a face with a specific ID that you can use for recognition in the future. It has two properties that can be viewed by double-clicking the node (a sketch of the underlying API call follows the list):

  • Server: specifies the URL of the DeepStack endpoint.
  • UserId: specifies the name or ID of the person to register, e.g. Anna, Moses, etc.
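
Behind the scenes, this node posts the image and the UserId to DeepStack's face registration endpoint. A minimal equivalent call, assuming a local image named messi.jpg and the DeepStack instance started earlier, looks like this:

# register the face in messi.jpg under the user ID "Messi"
curl -X POST -F image=@messi.jpg -F userid=Messi http://localhost:80/v1/vision/face/register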

We will perform face registration on images of two popular footballers, Lionel Messi and Cristiano Ronaldo, but will only demonstrate how to register Messi's face.

Face Registration Sample Demonstration

This face registration workflow will scan the above image for a face, store the facial features in the database and display the result in the Debug sidebar at a specific time.

Deepstack-face-recognition

To recognize a face using DeepStack, we make use of the deepstack-face-recognition node, which allows you to perform face recognition on faces previously registered. It has four properties that can be viewed by double-clicking the node (a sketch of the underlying API call follows the list):

  • Server: specifies the URL of the DeepStack endpoint.
  • Confidence: a number from 0 to 100 that sets the minimum confidence threshold, in percent, for each face prediction.
  • Outline Color: specifies the color of the face bounding boxes.
  • Filter Output: filters the output to specific user IDs.
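
As with the other nodes, the underlying request is a simple POST to DeepStack's face recognition endpoint. A minimal sketch with a placeholder image named group.jpg:

# recognize previously registered faces in group.jpg
curl -X POST -F image=@group.jpg http://localhost:80/v1/vision/face/recognize

Each prediction in the response carries the matched user ID (or "unknown" for unregistered faces) together with a confidence value.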

We will perform face recognition on the image below

face recognition sample image

Face Recognition Sample Demonstration

The face recognition flow will scan the above image for faces, detect the known facial features and display the result in the Debug sidebar at a specific time. In the sample image, DeepStack detected a total of six faces but identified the user IDs of only the registered faces.
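
If you ever need to check which user IDs are currently registered in your DeepStack instance, the face API also exposes a listing endpoint you can call directly (a quick sketch against the same local instance):

# list the user IDs of all registered faces
curl -X POST http://localhost:80/v1/vision/face/list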

Did you enjoy this article? Give it claps and share it with your network.

To read more on DeepStack AI Server, visit our blog.
