Zm Zones
From:     https://wiki.zoneminder.com/Understanding_ZoneMinder%27s_Zoning_system_for_Dummies



Understanding ZoneMinder's Zoning system for Dummies



Draw A Zone (Re: Ubuntu - How to draw the zone box?) Post by GivingItMyBest » Tue Jun 23, 2020 3:57 pm: When you click Add New Zone, the entire box is highlighted in red, with a green dot at each corner. You can drag these dots to reshape the box, and at the bottom of the page you will see each dot's x and y coordinates, with a plus and a minus sign next to each coordinate. If you need a shape other than a rectangle, you can add points between existing points and then move those. If you hover over a green dot, it shows its point/vertex number, so you can see which two points you need to add between.
Background ZoneMinder has a flexible (albeit hard to configure) zone detection system with which you can control how sensitive, precise, and accurate your motion alarms are. The official ZM documentation does a good job of describing all the concepts here. However, you will see gobs of posts in the forum from people complaining that ZM logs all sorts of events (ahem, as did I), that ZM's detection is rubbish and in-camera detection is better (ahem, as did I), and what not. But once you get the concept, it's incredibly powerful. So instead of giving you a theoretical explanation, let's walk through a live use-case. (Credit: user kkrofft helped me a lot in getting the hang of things here. You should also read his earlier explanation here)
Real object detection People often ask if ZM supports "object detection". The core ZM engine only detects changes in pixels (motion), which is what this article is about. That being said, in case you did not know, ZoneMinder now has support for person, object, and face recognition. So while you can and should optimize your zones, if your real interest is person detection, you can do that using my event server. Note that object detection runs after ZM detects a motion change, so this article applies either way.
Some concepts Let's take a look at the area below. Let's suppose you want to trigger motion if someone tries to break into your basement. Does it make sense to monitor the full area (pillars/walls/floor)? Probably not. If someone were to break in, they'd come in through some door or some window, or maybe break in upstairs and climb down the stairs. So doesn't it make more sense to monitor those areas specifically? I think so. So the first 'common sense' step is to delete the default zone that ZM creates for each monitor (which is called All). Monitoring every part of your image may make sense if you are monitoring an outdoor lawn, for example. Not here. [Image: Nph-zms.jpeg]
Defining the zone areas So given the explanation above, how about we define zones where motion matters? Any zone you define as "active" is what ZoneMinder will analyze for motion. (Ignore the 'preclusive' type for now.) So let's look at the image below: I've defined polygons around the places that are the "entry points". [Image: With zones.jpg]
Okay, now how do I specify the sensitivity of the zones? ZoneMinder has pre-sets. We live in a world of pre-sets. I bet you want to select "Best and highly sensitive" don't you? DON'T. Not because that setting is nonsense, but because you should understand some concepts first.
Core Concepts The ZM documentation I pointed to earlier does a great job of explaining the different methods. At the cost of repeating what has already been said, it's important to note: the first image is a 20x20 grid. Let's assume this is a zone, and the black circle is some object in this grid. The second image shows the next frame, where new 'objects' have appeared, or in ZM's view 'new sets of pixel patterns'. Now let's talk about Alarmed Pixels, Filtered Pixels, and Blobs.
[Image: Reference.jpg] [Image: Reference next frame.jpg]
Alarmed Pixels The Alarmed Pixels method deals only with pixel changes. If we use it and specify a minimum of 5 changed pixels (let's forget max for now), then all the new pixels of sets A + B + C + D count as alarmed pixels, and the total alarmed pixel count is A+B+C+D.
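To make the counting concrete, here is a minimal Python sketch of the Alarmed Pixels idea. The function names and thresholds are illustrative, not ZM's actual code; a pixel counts as "alarmed" when it differs enough from the previous frame, and the zone alarms when the count reaches the minimum.

```python
# Illustrative sketch of the "Alarmed Pixels" method (not ZM source code).
# A pixel is "alarmed" when its grayscale value changed by more than
# `pixel_threshold` since the previous frame.

def alarmed_pixels(prev_frame, cur_frame, pixel_threshold=25):
    """Return the set of (row, col) pixels that changed enough to count."""
    alarmed = set()
    for r, (prev_row, cur_row) in enumerate(zip(prev_frame, cur_frame)):
        for c, (p, q) in enumerate(zip(prev_row, cur_row)):
            if abs(p - q) > pixel_threshold:
                alarmed.add((r, c))
    return alarmed

def zone_alarms(alarmed, min_alarmed=5, max_alarmed=None):
    """The zone alarms when the alarmed count is within [min, max]."""
    n = len(alarmed)
    if max_alarmed is not None and n > max_alarmed:
        return False
    return n >= min_alarmed

# Toy 5x5 grayscale frames: four pixels brighten sharply between frames.
prev = [[0] * 5 for _ in range(5)]
cur = [row[:] for row in prev]
for r, c in [(0, 0), (1, 1), (1, 2), (3, 3)]:
    cur[r][c] = 200

alarmed = alarmed_pixels(prev, cur)
print(len(alarmed), zone_alarms(alarmed, min_alarmed=4))  # 4 True
```

With a minimum of 4 the zone alarms; raise the minimum to 5 and these same four changed pixels no longer trigger it.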
Filtered Pixels Now let's assume we used Filtered Pixels and set it to 2x2 pixels. Then, in addition to computing the alarmed pixels (A+B+C+D), it also counts how many of those pixels have at least 2 alarmed pixels around them. This results in B+C+D (set A is discarded because its pixels are not surrounded by at least 2 pixels that changed color from the previous frame).
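The filtering step can be sketched the same way. This is an illustrative interpretation of the description above (not ZM's source): keep only alarmed pixels that have at least two alarmed pixels among their eight neighbors, which discards isolated specks like set A.

```python
# Illustrative sketch of the "Filtered Pixels" stage: drop alarmed pixels
# that do not have at least `min_neighbors` alarmed pixels adjacent to them.

def filtered_pixels(alarmed, min_neighbors=2):
    kept = set()
    for (r, c) in alarmed:
        neighbors = sum(
            1
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and (r + dr, c + dc) in alarmed
        )
        if neighbors >= min_neighbors:
            kept.add((r, c))
    return kept

# One isolated pixel (like set A) plus a 2x2 cluster: filtering drops
# the loner and keeps the cluster, whose pixels each have 3 neighbors.
alarmed = {(0, 0), (3, 3), (3, 4), (4, 3), (4, 4)}
print(sorted(filtered_pixels(alarmed)))  # [(3, 3), (3, 4), (4, 3), (4, 4)]
```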
Blob Pixels Now let's assume we used Blob and said a blob needs to be at least 10 pixels. Then, starting from the set computed by Filtered Pixels (B+C+D), it looks for contiguous blobs of at least 10 pixels, which leaves only D. So: in Alarmed Pixels mode, any of A, B, C, or D would raise an alarm; in Filtered Pixels mode, only B, C, or D would; in Blob mode, only D would. Okay, that was a simple explanation, and I did not cover more details on min/max, but I hope you get the core idea.
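The blob stage groups the surviving pixels into contiguous regions and checks their sizes. Here is a small flood-fill sketch of that idea (again illustrative, not ZM's implementation): only components with at least the minimum pixel count survive, as with set D in the grid example.

```python
# Illustrative sketch of the "Blob" stage: group pixels into 8-connected
# components and keep only components of at least `min_blob_pixels`.

def blobs(pixels, min_blob_pixels=10):
    """Return the connected components of `pixels` meeting the minimum size."""
    pixels = set(pixels)
    found = []
    while pixels:
        # Flood-fill outward from an arbitrary remaining pixel.
        stack = [pixels.pop()]
        blob = set(stack)
        while stack:
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in pixels:
                        pixels.remove(n)
                        blob.add(n)
                        stack.append(n)
        if len(blob) >= min_blob_pixels:
            found.append(blob)
    return found

# A 3x4 region (12 pixels, like D) and a distant 2x2 region (4 pixels):
# only the 12-pixel blob survives a 10-pixel minimum.
big = {(r, c) for r in range(3) for c in range(4)}
small = {(r, c) for r in range(8, 10) for c in range(8, 10)}
print([len(b) for b in blobs(big | small, min_blob_pixels=10)])  # [12]
```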
Got the theory. Let's get back to your basement image Okay, back to my basement and my 3 zones.
Which detection type should I use? I personally feel that for detecting humans, Blob is the best. As I described above, it combines Alarmed + Filtered, ensures that the changed pixels are contiguous, and then does an algorithmic analysis to see if they form 'blobs'.
Pixels or percents? What makes more sense to you: "Raise an alarm if 178 pixels changed" or "Raise an alarm if more than 20% of my zone changed"? To some, the latter makes much more sense. However, if you really want fine-grained control, you should use pixels. I used percent when I first started off, but then realized that pixels are more powerful when you are trying to eliminate false alarms. (Pixels are especially useful when the size difference is small; for example, it's hard to visualize the difference between 10% and 15%.)
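A percent threshold is just a pixel threshold relative to the zone's own area, so you can always translate between the two. A tiny hypothetical helper makes the 10% vs 15% point concrete:

```python
# Hypothetical helper (not part of ZM): convert a percent-of-zone
# threshold into the absolute pixel count it corresponds to.

def percent_to_pixels(zone_area_px, percent):
    """Absolute pixel threshold equivalent to `percent` of the zone area."""
    return int(zone_area_px * percent / 100)

# For a 300x200-pixel zone, 10% vs 15% is a 3,000-pixel gap --
# hard to judge by eye on screen, but easy to tune as a number.
area = 300 * 200
print(percent_to_pixels(area, 10), percent_to_pixels(area, 15))  # 6000 9000
```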
Selecting the right values
Using percents (As of 2019, I recommend pixels, but I've kept this here for those who prefer percentages.) It helps to think visually here. Let's go back to the zones I drew of my basement and try to visually place how a person and a pet would look in each zone. Here is a take: [Image: Of men and animals.jpg] [Image: Stairs foyer.png]
Using pixels (This is what I ended up using once I got comfortable with zones.) Tip: Don't try to get super precise in your first round. Start with less aggressive values, that is, smaller area values, then keep increasing and testing until you reach a good threshold between bogus detections and real detections. Pixel-based detection has its limits; you'll never get it perfect. For real object detection as an 'add-on' to motion, look at zmeventnotification, the machine-learning based alarm detection extension to ZM. [Image: Pixel area.jpg - ZM's motion detection settings for this zone]
Some other optimizations There are other optimizations you can do: