i’m using v4 cams (FW: 4.52.9.1134) on an iPhone 16, wyze app 3.2.6 (1)
using no zoom or pan
strictly monitoring for motion (no PIR features)
the grid remains fixed to the image in both portrait and landscape (vertical or horizontal)
in Detection Settings > Detection Zone
I see a grid
first: the bright areas are active zones, the dimmed areas are dead zones
MY QUESTION – if you have done any testing:
do the grid squares actually align to the sensor (i.e., the software reads motion only from the precisely selected grid squares)
or
do the grid squares merely represent a general area within the frame
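To make the question concrete, here is a minimal sketch of what a strictly "precise" grid-to-sensor mapping would mean in software. The frame resolution and grid dimensions here are assumptions for illustration only, not Wyze's actual values:

```python
# Hypothetical sketch of a "precise" detection-zone grid. Frame size and
# grid dimensions are assumptions, not Wyze's actual internals.

FRAME_W, FRAME_H = 1920, 1080   # assumed frame resolution
GRID_COLS, GRID_ROWS = 10, 6    # assumed grid dimensions in the app UI

def cell_for_pixel(x, y):
    """Map a pixel coordinate to its (col, row) grid cell."""
    col = min(x * GRID_COLS // FRAME_W, GRID_COLS - 1)
    row = min(y * GRID_ROWS // FRAME_H, GRID_ROWS - 1)
    return col, row

def pixel_is_active(x, y, active_cells):
    """Under a strictly precise mapping, a changed pixel counts toward
    motion only if it falls inside a selected (bright) cell."""
    return cell_for_pixel(x, y) in active_cells

# Example: only the bottom-right quadrant of the grid is selected.
active = {(c, r) for c in range(5, 10) for r in range(3, 6)}
pixel_is_active(1900, 1000, active)   # True  - inside a selected cell
pixel_is_active(10, 10, active)       # False - in a dimmed (dead) cell
```

The "general area" alternative would instead blur or pad these cell boundaries, so motion near (but outside) a selected square could still trigger.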
+++++
I am curious to hear from those who’ve actually done some testing and drawn a conclusion
similar to testing the spot in a light meter – the grid would be time-consuming to test, and my instinct is the grid squares are NOT the precise on/off switches the UI suggests
+++++
I don’t have any problems – just trying to understand the feature better
my start-out settings for Motion Detection Sensitivity are 15-20
(and that gives me good results)
for example: >> Some behaviors to note when using a Detection Zone:
** The Detection Zone will only be active when the camera is in the position you selected when you set the Detection Zone. If you (1) manually move the camera away from the Detection Zone OR (2) if the camera moves away from the Detection Zone following motion when Motion Tracking is turned on, after 15 seconds of inactivity (no motion detected) the camera will automatically return to the position you selected when you set the Detection Zone and will reinstate the Detection Zone.*
that reads as if I move the camera (even a bit), the selected grid squares are no longer active
Sounds like a somewhat generic description where parts apply to all cameras and parts only to some. As you know, the V4 does not have pan capability, and no Wyze cameras have Zoom.
I’ve never made any effort to precisely measure the detection zone to see how exact the lines are.
yep – no zoom, no pan, no PIR – here in basic free mode – that’s why i disclaimed them upfront
all the other posts i read on the subject sidetracked to those areas – i’m just trying to keep the discussion simple to the v4, motion detection, in wyze basic free mode
how the wyze app interacts with the detection zone grid squares
i saw the answer discussed years back on a Lorex system i was using … it would be an easier question to ask if i knew the terminology: a system that reads detection zone selections directly off the camera sensor (a precise mapping), versus one that works from a general outlined area with little reference to the actual grid selections…
As far as I know, it’s precise mapping, but the gotcha is that anything in the grid will cause a trigger - bugs, leaves, shadows, etc. That has a lot to do with why I essentially don’t use them. Most of my front yard cameras will see the shadow of the flagpole at some point in time.
K6CCC: >> anything in the grid will cause a trigger - bugs, leaves, shadows, etc.
if bugs in the scene (not flying close in front of your lens, or walking across your lens like a spider) are triggering a motion event, perhaps your sensitivity is way too high
leaves blowing in the wind, moving shadows, flashes of light, rain – you will need to work around those – it’s one of the problems with using Motion Detection on these types of systems (along with “loose” detection grids)
a flagpole shadow (without a large fluttering attachment) – i would imagine – should not be triggering motion events
Related to detection zone precision…I don’t believe they are precise in the way you are thinking. I know they definitely are not as far as the AI detections and notifications go, and I suspect it’s the same for motion event triggers too. There is a logical rationale behind why. I’ll give some illustrations to help show what I mean, but it should be mentioned that there is likely a little bit of variation between device models.
So they are precise in a way…as long as the object bounding box is fully outside the detection zone, the object is not detected. BUT if any part of the object bounding box overlaps the detection zone (even though we think the object itself never actually enters it), then it may be detected.
I know the AI has been confirmed to work this way by a Wyze employee, and I suspect any motion does as well. The REASON Wyze does it this way for AI objects is because if a person or other AI object is only HALF in a detection zone, then the AI wouldn’t recognize they are a person.
For example in the following pictures, if the AI could only analyze the legs of the person here (in the detection zone) but not the upper body/head (outside the detection zone), it wouldn’t realize it’s a person, so it has to be allowed to analyze things outside the detection zone IF part of the object is within the detection zone:
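The bounding-box rule described above can be sketched in a few lines. The function names and rectangle layout here are illustrative, not Wyze's internal API; the point is just that any overlap between the object's box and the zone counts:

```python
# Sketch of the bounding-box rule: an object counts as "in" the detection
# zone if its bounding box overlaps the zone at all, even when the object
# itself barely touches it. Illustrative only, not Wyze's actual code.

def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def object_triggers(object_bbox, zone_bboxes):
    """Any overlap between the object's bounding box and any active zone
    rectangle counts as a detection."""
    return any(boxes_overlap(object_bbox, z) for z in zone_bboxes)

# A person standing mostly outside the zone, with only their legs inside:
zone = [(0, 600, 1920, 1080)]     # bottom strip of the frame is active
person = (800, 200, 1000, 700)    # bbox spans head (y=200) to feet (y=700)
object_triggers(person, zone)     # True - the bbox dips into the zone
```

This matches the "legs in the zone, whole person analyzed" example: because the person's box overlaps the zone, the entire box (head included) is fair game for the AI.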
The problem is determining for sure what all is part of the object in question. This is a lot trickier than most people think, both with still frames and with inter-frame comparisons where some pixels of the object might change between 2 frames but not all of them at the same time depending on the speed and FPS, object size, color variation, etc. It’s a complex issue determining what all is part of an object or not in some cases.
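For contrast, a strict per-pixel interpretation of the zone would look something like the following frame-differencing sketch: count changed pixels, but only where the zone mask is active. This is pure Python on tiny toy "frames" under assumed thresholds; real camera firmware would work very differently:

```python
# Minimal frame-differencing sketch of the inter-frame comparison discussed
# above: count pixels that changed between two frames, restricted to a zone
# mask, and trigger when the count clears a threshold. Toy example only.

def motion_in_zone(prev, curr, zone_mask, pixel_delta=25, min_changed=3):
    """prev/curr are 2-D lists of grayscale values; zone_mask is a 2-D list
    of booleans marking active pixels. Returns True if enough in-zone
    pixels changed by more than pixel_delta."""
    changed = 0
    for y, row in enumerate(curr):
        for x, val in enumerate(row):
            if zone_mask[y][x] and abs(val - prev[y][x]) > pixel_delta:
                changed += 1
    return changed >= min_changed

prev = [[10] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
for x in range(4):                     # a bright object enters the top-left
    curr[0][x] = 200
mask_all = [[True] * 8 for _ in range(8)]
mask_bottom = [[y >= 4 for _ in range(8)] for y in range(8)]
motion_in_zone(prev, curr, mask_all)      # True  - change is inside the zone
motion_in_zone(prev, curr, mask_bottom)   # False - change is outside the zone
```

Under this strict scheme, motion outside the mask is simply invisible, which is exactly what makes whole-object AI recognition hard when an object straddles the zone boundary.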
So yes, Wyze could limit their motion detection to be strict lines, and I think they have started moving more in this direction for the initial motion event trigger (I think they should improve this)…but there are still some complicated considerations to take into account as well when it comes to AI detections.
i reviewed all comments on this thread, did a new forum search and reviewed relevant threads, and even reread the previously linked Wyze DETECTION SETTINGS AND ZONES white paper with a fresh open mind
i still did not find any factually supported opinions on how the Detection Zone grid selections interact with the wyze app for Motion reporting using v4 cams on the basic free subscription – i.e., how accurate the grid selections are
+++++
i will state, based on my experience (and numerous false Motion events observed outside my selected areas), that my supposedly precise zone selections do not match the actual areas where the app reported motion (motion sensitivity set around 15 out of 100, monitoring static indoor scenes)
i.e., what you select is not precisely what you get
in other words, I would treat the Wyze grid selections as a general reference, a loose starting point only
how I would translate that to actual workflow – if you are looking for a starting point:
set the clear (active) squares in the selection grid sparingly (under-select the areas you want active) – the more inactive squares the better – and
turn the motion sensitivity down to 10-20 and see how that goes
that is what has worked best for me using similar grid in several branded systems
Carverofchoice Forum Maven (I had to look Maven up in the dictionary, thank you for the new word) >> Related to detection zone precision…I don’t believe they are precise in the way you are thinking.
I believe you are correct, but I can’t explain why or how except for my casual anecdotal observations
>> I know they definitely are not as far as the AI detections and notifications go
I am not sure the v4 cam uses those types of AI features in its basic free mode, but I am not trying to use Wyze AI features; in fact, I would seek to minimize AI on my Wyze cams
Your Wyze cam AI tutorial was interesting
I’ve set detection areas off grids like the one Wyze uses, and bounding boxes placed freely within the scene frame on various branded systems – including AI-type bounding boxes on Ring doorbell systems – they all seem more of an (unreliable) set-and-pray art than the strict pixel coordinates off a grid that the Wyze interface suggests
This post may be all about not much to most, but knowing how the grids work is interesting to me and I gave it another shot….