Updated: Mar 5 — Many changes have been implemented since this post that greatly improve object detection along with face recognition. The rest of this post is largely incorrect now given all the new changes. I’ll create a new post soon(…ish)
I’ll soon be releasing an updated version of the event server with more focus on the “hook” part (the machine learning object detection hooks). Key changes:
- Fixes for some key bugs that caused detection text not to be written to the DB
- Support for picture notifications in iOS (Needs an updated iOS zmNinja app, currently in iOS beta)
- You now have a dedicated config file that controls various aspects of the hooks (much better than stuffing scores of variables into the scripts)
- Specify multiple polygons per monitor ID in the config file and object detection will only be done within those polygons
I’d like to describe the polygon part in some more detail:
Consider this image:
This is a mocked-up image from my driveway. The part in red is my “zone” as defined in ZoneMinder. It’s a small area of my driveway used for motion detection. If it triggers motion, earlier versions of the hook scripts send the entire image for object detection.
Obviously, I don’t want to restrict object detection to just the “red zone” because, as you can see above, the “objects of interest” don’t fall inside it and detection will fail. However, passing the full image for object detection is also problematic. Suppose ZM detects “motion” in the red zone. As we all know, ZM’s motion detection triggers a lot of false positives — maybe a strong wind or a strong shadow. Now let’s suppose there are people walking on the sidewalk at that time. The object detection code will trigger a positive. However, in this case, I am not interested in that detection because it’s not in my driveway.
The solution, obviously, is to be able to pass a polygon to the detection script that asks it to detect only within a specific area. That is the green area in the image above. It is not the same as a ZM zone, as explained above.
And that is what ES 3.0 allows. In the new config, you can specify as many “polygons” as you need per monitor. What then happens is before the object detection code is invoked, the code will mask out all the parts of the image that do not fall into the polygons.
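I haven’t described the actual implementation here (it most likely uses OpenCV to build the mask), but the idea is simple enough to sketch in pure Python. The function names below are illustrative, not the real ES code: test each pixel against the polygons with a standard ray-casting check and zero out anything outside all of them.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: count how many polygon edges a horizontal
    ray from (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Only edges that straddle the ray's height can cross it
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mask_image(pixels, polygons):
    """Zero out every pixel that falls outside all polygons.
    `pixels` is a list of rows (a stand-in for an image array)."""
    for y, row in enumerate(pixels):
        for x, _ in enumerate(row):
            if not any(point_in_polygon(x, y, p) for p in polygons):
                row[x] = 0
    return pixels
```

In practice you’d do this with NumPy and OpenCV in one vectorized step rather than a per-pixel loop, but the masking logic is the same.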
For example, my driveway is monitor 11. This is my mask for that monitor in my config file:
mask1=307,315 1276,295 1279,719 0,719
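As you can see, a mask value is just a space-separated list of x,y vertices. Turning such a line into coordinate pairs is trivial; this is an illustrative sketch, not the actual ES parsing code:

```python
def parse_mask(value):
    """Split a space-separated list of "x,y" pairs into (x, y) integer tuples."""
    return [tuple(int(c) for c in pair.split(",")) for pair in value.split()]

points = parse_mask("307,315 1276,295 1279,719 0,719")
# → [(307, 315), (1276, 295), (1279, 719), (0, 719)]
```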
When applied to my driveway, the image that actually gets passed to object detection is:
Tip: You can just use the ZM Zone Editor to create your mask(s). Select the monitor in ZM, click on “Add New Zone”, and draw away. Copy the polygon coordinates (sequentially) and just don’t save the zone.
I’ve currently released 3.0 in the dev branch — feel free to experiment and give me feedback before I merge it to master. Remember to read the updated hook README — there are config and install changes.