Does detection sensitivity affect the distance at which motion is captured?
I was hoping detection would happen close to the end of this fence, but it’s occurring about 10 feet away from it. I’m not sure if this is a limitation of the AI, either.
Would changing the ‘Motion Detection Sensitivity’ setting have any impact?
It has been my experience that sensitivity affects the distance, but it also affects the number of notifications you’ll get. Keep in mind that these cameras can only detect motion at around 20-25 ft at best. Unfortunately, there is no set rule, as every scenario is unique. The best way to find a sweet spot is through trial and error.
I’ve had the same experience as @habib: raising it will increase the distance, but it will also make the camera more sensitive up close. Closer will always be more sensitive, so you’re just “amplifying” all the levels of sensitivity vs. distance.
The V4 has 3,686,400 pixels (2560 × 1440).
Sensitivity decides how many of those pixels need to change in order for something to be considered ENOUGH motion to count as an event. If only 10 pixels change from black to white (or any other color), then not enough pixels have changed to reach the threshold to be considered “Motion.”
I don’t know the thresholds for Wyze, so I will have to make up numbers here, but to make it easy, let’s say the 1-100 sensitivity scale were the inverse of the percentage of pixels changing (this isn’t actually true, but I’m using the example for simplicity’s sake). In that case:
A sensitivity of 1 would require that 99% of all pixels change (only 1% of pixels can stay the same) = 3,649,536 out of 3,686,400 need to change for the camera to say it detected “motion” and thus record and notify you. Obviously a person taking up only a few thousand pixels won’t trigger an event in this case, when the threshold is MILLIONS of pixel changes.
A sensitivity of 90 in this example would mean that 90% of pixels could remain unchanged and only 10% of pixels need to change between one frame and the next to be considered motion. So now only 368,640 pixels need to change colors instead of roughly 3.6 million.
Again, the above numbers are made up; they don’t actually correspond to percentages. Wyze sets a minimum and maximum threshold somewhere, and I don’t know what that range actually is. But the concept itself is accurate.
The point is that it doesn’t care about distance exactly. The farther away something is, the fewer pixels it changes; the closer something is, the more pixels it takes up in the frame. That is the only reason distance matters. The sensitivity itself only cares how many pixels are changing. So raising the sensitivity to 100 will definitely make things farther away trigger an event more easily, but it will also make closer things like plants trigger an event more easily, since closer things use more pixels, and if fewer changed pixels are needed to count as motion, they’ll trigger motion more often.
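To make that concrete, here’s a tiny Python sketch of the idea. To be clear, the resolution is the v4’s, but the sensitivity-to-threshold formula is invented for illustration, just like my percentages above; it’s a sketch of the concept, not Wyze’s actual code:

```python
# Toy frame-differencing check, NOT Wyze's actual algorithm.
# Invented mapping, matching the example above: sensitivity 1 -> 99% of
# pixels must change, sensitivity 90 -> 10%, and so on.
import numpy as np

WIDTH, HEIGHT = 2560, 1440          # v4 frame: 3,686,400 pixels total
TOTAL_PIXELS = WIDTH * HEIGHT

def motion_detected(prev: np.ndarray, curr: np.ndarray, sensitivity: int) -> bool:
    """Flag 'motion' when enough pixels differ between two grayscale frames."""
    changed = np.count_nonzero(prev != curr)
    # Higher sensitivity -> fewer changed pixels required (floor of 1 pixel).
    threshold = max(1, TOTAL_PIXELS * (100 - sensitivity) // 100)
    return changed >= threshold

# A "person" changing a 200 x 200 patch = 40,000 pixels (~1.1% of the frame).
prev = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
curr = prev.copy()
curr[0:200, 0:200] = 255

for s in (1, 50, 90, 99):
    print(f"sensitivity {s:>2}: motion = {motion_detected(prev, curr, s)}")
# Only sensitivity 99 fires here (threshold 36,864 changed pixels); at 90
# the threshold is 368,640 changed pixels, far more than our 40,000.
```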
It can be a tough balancing act to figure out what’s best in your particular situation.
Of course other things factor in too (for example, you can block out certain areas so they’re ignored in the detection calculations…also see my next comment for another example of what likely makes a difference).
Does the amount that a pixel changes factor in, or simply the number of pixels? Either way, something farther away won’t change as many pixels and thus potentially needs a higher sensitivity to trigger.
But I was thinking possibly if a pixel changed from black to grey it might not “count” as much as one changing from black to white. But no idea if that’s a factor at all.
I have found the v4 to be very sensitive to headlights washing through the detection zone. I haven’t been able to get it to the point the OG in the same position reached (rarely triggered by headlights but always triggered by a person or animal). Even shadows seemed to affect the OG less. Both improvements came with a firmware update for the OG that specifically said it was meant to deal with lighting changes triggering motion. Maybe someday the v4 will get a similar update.
I recommend starting at 100 to see ALL that gets caught; then, once you see that baseline, slowly dial back the sensitivity until you find the sweet spot where what you want caught is still captured.
That’s a fantastic, insightful question. I can’t speak for Wyze specifically with 100% certainty since they haven’t said publicly anywhere…though maybe I’ll ask them: #AMA2ASK
I can say that, thanks to a few smart-camera software manuals, we know not all pixel changes count equally. Sometimes brightness/contrast makes a difference too, like you’re describing: black to grey may not cross a threshold when black to yellow would, for the same number of pixels.
Some of those software user manuals have discussed motion detection sensitivity factors and how brightness and contrast can make a difference. And honestly, from a computer perspective it makes sense anyway. A black pixel (RGB: 0, 0, 0) changing to grey (RGB: 128, 128, 128) has each RGB value increase by 128. Compare that to a change from black to yellow (RGB: 255, 255, 0), where the red and green values each increase by 255. That larger shift in RGB values signals a more significant color change, making it more likely to be interpreted as motion than the smaller shift to grey.

It would definitely be more accurate to consider how much a pixel’s value changed when deciding whether it really changed at all. Going from 0,0,0 to 5,5,5 shouldn’t count as a change; it’s really just a little brightness/contrast variation, and everyone would still call it black and think nothing changed from 0,0,0. It would be ridiculous from a human perspective to count that as a change, so yes, I would absolutely say the type of change matters. As for how much, it’s hard to say. Wyze has never released their algorithm, and they’ve changed the sensitivity limits at least twice that I know of.
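Here’s what that might look like in code, continuing the toy sketch from my earlier comment. The per-pixel threshold of 60 is made up; the point is just that a 0,0,0 → 5,5,5 shift falls below it while black → grey and black → yellow sail past:

```python
# Toy magnitude-weighted version of the same idea, again NOT Wyze's
# algorithm. A pixel only "counts" as changed if its total RGB shift
# crosses a made-up minimum (PER_PIXEL_DELTA).
import numpy as np

PER_PIXEL_DELTA = 60  # invented: minimum |dR| + |dG| + |dB| to count

def changed_pixels(prev: np.ndarray, curr: np.ndarray) -> int:
    """Count pixels whose summed RGB change crosses the per-pixel threshold."""
    delta = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).sum(axis=-1)
    return int(np.count_nonzero(delta >= PER_PIXEL_DELTA))

black = np.zeros((1, 1, 3), dtype=np.uint8)  # one black pixel
for name, rgb in [("grey (128,128,128)", (128, 128, 128)),
                  ("near-black (5,5,5)", (5, 5, 5)),
                  ("yellow (255,255,0)", (255, 255, 0))]:
    curr = np.full((1, 1, 3), rgb, dtype=np.uint8)
    print(f"black -> {name}: counts = {bool(changed_pixels(black, curr))}")
# black -> grey shifts by 384 and black -> yellow by 510, so both count;
# black -> (5,5,5) shifts by only 15 and is ignored as noise.
```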
I assume this means Wyze likely does the same thing, but it’s possible they don’t. I think I will ask about this and see if anyone at Wyze will tell us.
I would also like to ask them what the sensitivity thresholds or range are (how many pixels each step of the 1-100 scale represents).
Anyway, great question.
Edit note to self: Also ask more about Dave’s question in the comment below.
Can you mention to them that whatever change they made to the OG about a year ago helped a ton with false positives from headlights and other lighting changes, and that it would be great to have that in the v4 as well?
1.0.71 (November 13, 2023)
Improved motion detection algorithms to reduce detection of light, orbs, and rain at night
Huh, I just realized Wyze was aware of the Orb phenomenon before it was even a thing.
EDIT to add - I’ve also noticed the “bounding box” on the v4 is much bigger than on the OG. This may be due to the OG handling light changes better; however, this OG improvement might also be of value in the v4:
1.0.84 (October 9, 2024)
Fixed motion tagging to only draw bounding box within motion detection zone
Wyze Cam v3s are the only cameras I have, and I have found a Detection Sensitivity of 90 is my sweet spot. In the video below, the two people do not trigger motion recording until they get near the dark-colored pickup across the street.
I have low foot traffic by my house; one person will walk by every 2 or 3 hours. All motion gets recorded, but I have Notifications set to Persons only. This helps me keep a handle on the people going by my corner of the neighborhood.
It would be interesting to hear more details on how Detection Sensitivity works from the Wyze Team. I am just happy to have found my Detection Sensitivity sweet spot.