I think there might be a misunderstanding. Those articles seem to say exactly what I did. I’ll try to clarify what I am saying, and you correct me where/if I misunderstand you as well:
On all of Wyze’s wired cameras, “motion” is determined solely by pixel changes. Raising or lowering the motion sensitivity just changes the threshold for what counts as motion: lower sensitivity means more pixels need to change between the previous frame and the current frame for something to be considered “motion.” If the pixels don’t change, there is no motion. This is why shadows and light count as “motion” even though nothing physically moves: light and shadows cause pixel changes between frames, and that is the definition of motion on most smart cams (unless they add PIR or radar sensors, etc.).
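To make that concrete, here is a minimal sketch of frame-difference “motion” detection. This is NOT Wyze’s actual firmware (that’s not public); the function names, the per-pixel threshold, and the way sensitivity maps to a required fraction of changed pixels are all my own made-up illustration of the general idea:

```python
# Hypothetical frame-differencing sketch. Frames are 2D lists of
# grayscale brightness values (0-255). All thresholds are invented
# for illustration, not Wyze's real numbers.

def count_changed_pixels(prev_frame, curr_frame, pixel_threshold=25):
    """Count pixels whose brightness changed by more than pixel_threshold."""
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > pixel_threshold:
                changed += 1
    return changed

def is_motion(prev_frame, curr_frame, sensitivity=50):
    """Higher sensitivity -> fewer changed pixels needed to call it 'motion'.

    Lower sensitivity raises the bar: more of the frame has to change.
    """
    total = len(prev_frame) * len(prev_frame[0])
    # Invented mapping: sensitivity 100 needs ~0.5% of pixels to change,
    # sensitivity 1 needs ~50% of pixels to change.
    required_fraction = 0.5 * (101 - sensitivity) / 100
    return count_changed_pixels(prev_frame, curr_frame) > total * required_fraction
```

Note that a moving shadow changes brightness values exactly the way a moving person does, which is why this kind of detector can’t tell them apart.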
So, when those articles say the following, they are agreeing with what I said about pixel change detection (motion):
Smart Focus feature zooms in on motion on the Live stream, alongside the bigger picture.
When it says motion here, it means any pixel changes, like I was saying. It will follow the biggest grouping of pixel changes, so it might zoom in on a big tree branch waving around instead of the person walking around on the opposite side of the frame.
Smart Focus. Focus on what matters, while still viewing the bigger picture.
This is assuming that “what matters” is usually whatever has pixel changes between frames (motion). That is usually true: who cares about background objects that are totally stationary and have looked exactly the same for years? You only care about what CHANGED in the image. But it can’t track the left side and the right side at the same time, so it can track the wrong thing when it follows the grouping with the most pixel changes. This is why it still shows a small window with the full-picture overview.
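Here is a rough sketch of what “follow the biggest grouping of pixel changes” could look like: flood-fill the connected groups of changed pixels and zoom to the bounding box of the largest one. Again, this is my own hypothetical illustration of the general technique, not Wyze’s actual Smart Focus code:

```python
# Hypothetical "largest change region" tracker. `mask` is a 2D list of
# booleans: True where a pixel changed between frames. Everything here
# is an illustrative sketch, not a real camera implementation.

def largest_change_region(mask):
    """Flood-fill 4-connected groups of changed pixels and return the
    bounding box (top, left, bottom, right) of the biggest group --
    the region a Smart-Focus-style crop would zoom to."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best, best_size = None, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Collect this connected group with an explicit stack.
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(cells) > best_size:
                    best_size = len(cells)
                    ys = [y for y, _ in cells]
                    xs = [x for _, x in cells]
                    best = (min(ys), min(xs), max(ys), max(xs))
    return best
```

The key point: a waving branch that disturbs more pixels than a distant person will win, and the crop goes to the branch. Nothing in this logic knows what a person (or a face) is.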
Smart Focus is useful, but it’s not perfect. It also often zooms in on my chest or other parts of my body and cuts off my face. Like, I respect that it doesn’t find me attractive, but it still hurts my feelings a little. j/k…it really does cut off my face sometimes, but it doesn’t hurt my feelings. My point is just that it doesn’t specifically follow AI-detected objects like a person; it simply follows the grouping with the most pixel changes, which isn’t necessarily the most important thing.
Exactly. I have told them a few times that Smart Focus would be much more useful if it followed AI-detected objects instead of raw motion. At the very least, it should give priority to the face/head instead of cutting it off when the subject is close.