Event Server 3.0: Crop Areas and more

Feb 3, 2019 · 3 min read

Updated (Mar 5): Many changes have been implemented since this post that greatly improve object detection along with face recognition. The rest of this post is largely out of date as a result. I'll create a new post soon(…ish)

I’ll soon be releasing an updated version of the event server with more focus on the “hook” part (the machine learning object detection hooks). Key changes:

  • Some key bug fixes, including one that resulted in detection text not being written to the DB
  • Support for picture notifications in iOS (Needs an updated iOS zmNinja app, currently in iOS beta)
  • You now have a dedicated config file that controls various aspects of the hooks (much better than stuffing scores of variables into the script itself)
  • Specify multiple polygons per monitor ID in the config file and object detection will only be done within those polygons

I’d like to describe the polygon part in some more detail:

Consider this image:

[Image: mocked-up driveway view showing the ZoneMinder zone in red and the detection area in green]

This is a mocked-up image from my driveway. The part in red is my "zone" as defined in ZoneMinder. It's a small area of my driveway that is used for motion detection. When motion is detected there, earlier versions of the hook scripts sent the entire image for object detection.

Obviously, I don't want to restrict object detection to just the "red zone" because, as you can see above, the "objects of interest" don't fall inside it and detection would fail. However, passing the full image for object detection is also problematic. Suppose ZM detects "motion" in the red zone. As we all know, ZM's motion detection triggers a lot of false positives: maybe a strong wind or a strong shadow. Now suppose there are people walking on the sidewalk at that time. The object detection code will report a positive. However, in this case, I am not interested in that detection because it's not in my driveway.

The solution, obviously, is to be able to pass a polygon to the detection script that tells it to detect only within a specific area. That is the green area in the image above. It is not the same as a ZM zone, as explained above.

And that is what ES 3.0 allows. In the new config, you can specify as many "polygons" as you need per monitor. Before the object detection code is invoked, the code masks out all the parts of the image that do not fall inside those polygons.
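To make the masking step concrete, here is a minimal, self-contained sketch of the idea: keep pixels that fall inside any polygon and black out the rest. This is pure NumPy with a ray-casting point-in-polygon test for clarity; the actual hook code may implement it differently (e.g. with OpenCV's `fillPoly`), and the function names here are my own.

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon [(x1, y1), ...]?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Only edges that straddle the horizontal ray at height y matter
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mask_outside_polygons(image, polygons):
    """Return a copy of `image` with every pixel outside all polygons set to black.

    Note: the per-pixel loop is for illustration only and is slow on real
    frames; a production version would rasterize the polygons in one pass.
    """
    h, w = image.shape[:2]
    masked = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            if any(point_in_polygon(x, y, p) for p in polygons):
                masked[y, x] = image[y, x]
    return masked
```

The masked frame, not the original, is then what gets handed to the object detection model, so anything outside your polygons simply cannot produce a detection.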

For example, my driveway is monitor . This is my mask for that monitor in my config file:
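The original config snippet isn't reproduced here, but a per-monitor polygon definition looks roughly like the sketch below. The section name, key name, and coordinates are illustrative only; check the updated hook README for the exact syntax.

```ini
; Illustrative sketch, not the exact config: see the hook README
[monitor-10]
; polygon name = space-separated x,y coordinate pairs, in drawing order
driveway_crop = 306,356 1003,341 1074,683 154,715
```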

When applied to my driveway, the image that actually gets passed to object detection is:

[Image: the masked driveway frame that actually gets passed to object detection]


Tip: You can just use the ZM Zone Editor to create your mask(s). Select the monitor in ZM, click "Add New Zone", and draw away. Copy the polygon coordinates (sequentially), and just don't save the zone.

I’ve currently released 3.0 in the branch — feel free to experiment and give me feedback before I merge it to master. Remember to read the updated hook README — there are config and install changes.

