We do usually tell people this, but it’s not actually always entirely accurate… A Wyze employee clarified to me a few years ago that, at times, they have had the AI only consider objects within the detection zone and ignore objects outside of it. But we’ve also verified that in other cases the detection zone is only used for the initial event trigger, and the AI then analyzes the entire frame, just like you said.
( @Crease you might enjoy reading that thread if you haven’t… I don’t remember where all the subsequent conversations are, but that is the one that got me especially interested in asking more questions about how the AI detections function in later AMAs, etc.)
Later on, different cameras were doing it differently. Some people found that certain models ignored AI objects well outside the detection zone (those whose bounding box had no overlap with the detection zone), while others got detection notifications regardless of the detection zone.
From what I have seen, Wyze has shifted and experimented over time. Sometimes the cloud AI gets the detection zone parameters and ignores everything fully outside the detection zone, and sometimes it doesn’t consider the detection zone at all and will consider EVERYTHING in the frame. The problem is that both approaches have been true at different periods of time and seemingly with different camera models.
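For anyone curious what the difference actually means in practice, here’s a rough Python sketch of the two behaviors (“only count objects whose bounding box overlaps the zone” vs. “analyze the whole frame”). This is purely illustrative and my own wording: the single-rectangle zone, function names, and `respect_zone` flag are assumptions for the example, not anything from Wyze’s actual code (their app uses a block/grid-style zone, for one thing):

```python
# Illustrative sketch only -- NOT Wyze's actual implementation.
# Shows "filter detections by zone overlap" vs. "analyze the whole frame".

def boxes_overlap(box_a, box_b):
    """Return True if two (x1, y1, x2, y2) boxes overlap at all."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def filter_detections(detections, detection_zone, respect_zone=True):
    """Keep AI detections whose bounding box overlaps the detection zone.

    detections:     list of (label, (x1, y1, x2, y2)) tuples
    detection_zone: (x1, y1, x2, y2) rectangle (simplified stand-in)
    respect_zone:   if False, behave like the "whole frame" cameras
                    and return every detection regardless of the zone.
    """
    if not respect_zone:
        return detections
    return [d for d in detections if boxes_overlap(d[1], detection_zone)]

# Example: a person partly inside the zone still counts (bounding box
# overlaps), but a vehicle fully outside the zone gets ignored.
zone = (100, 100, 400, 400)
dets = [("person", (350, 200, 500, 380)), ("vehicle", (600, 50, 800, 300))]
print(filter_detections(dets, zone))                      # person only
print(filter_detections(dets, zone, respect_zone=False))  # both objects
```

The whole debate above basically comes down to which of those two branches a given camera/firmware/cloud combination is effectively running at the time.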
Some cameras even have their green “Bounding Box” for “Motion Tagging” respect the detection zone, while others don’t. There is some variation.
Anyway, it’s sometimes hard to know for sure whether the AI detections are respecting the detection zone. Some cameras DEFINITELY DO NOT CARE and will ALWAYS analyze the entire frame, but that hasn’t been universally true for every camera or across time.
Still, I think you are correct in this case. I just don’t know that for sure without testing it meticulously first.