From my experience with my 10 Cam V3, Cam placement is just as important as the Cam settings.
I also have cams facing the street. They will tag people as people, but vehicles are regularly tagged as pets or people… vehicle detection is off.
To determine what is going wrong with yours when motion isn’t AI tagged, you first have to establish whether a) the cam never motion activated in the first place, or b) the object in the recorded video wasn’t tagged by the AI.
If a motion-only Event video shows up in the Events tab, then the motion activation of the cam was working properly and the video was sent to the cloud for AI interrogation, but no AI tag was applied. In this case, 1) share the video with Wyze and retag it so they can use it to “learn” the AI, and 2) consider repositioning the Cam.
If there is no Motion Event video at all, then the cam never picked up the motion in the first place and never recorded any video to the cloud. If the cam is not activating, the sensitivity settings may need to be adjusted, as well as the detection zone if one is set. Again, Cam position is important. The more image contrast moving side to side, the better the chance of the cam activating.
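The two-step triage above can be sketched as a simple decision procedure. This is purely illustrative, assuming a hypothetical event record with `has_video` and `ai_tags` fields; it is not the Wyze app’s actual data model or API.

```python
# Illustrative triage for a missed AI tag (hypothetical event record,
# not a real Wyze API).

def triage(event):
    """Return the likely cause when an expected AI tag didn't happen."""
    if event is None or not event.get("has_video"):
        # Case b-from-nothing: the cam never motion activated, so no video
        # was ever uploaded for AI interrogation.
        return "check sensitivity, detection zone, and cam position"
    if not event.get("ai_tags"):
        # Motion video exists, but the object in it wasn't tagged by the AI.
        return "share the video with Wyze and consider repositioning the cam"
    return "AI tagging worked"


# Walking the three outcomes:
print(triage(None))
print(triage({"has_video": True, "ai_tags": []}))
print(triage({"has_video": True, "ai_tags": ["person"]}))
```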
I have found that my Cams are much more accurate at motion activating and AI tagging when they are at a height about the same as what they are pointed at and with motion moving across the field of view from side to side (think profile shots). My cams are next to useless at AI tagging when they are mounted at heights looking down and when motion is coming toward or away from the cam. The slower the object, the lower the accuracy.
Okay; I think I understand. AI Detection just sucks at this time. But just to be sure I’ve set this up correctly:
I’ve enabled “Detects Motion”.
In “Smart Detection”, I’ve enabled “Person Detection”.
I’d expect to see videos of people walking past (or coming up to the door). What I see instead is a video of every passing car, with the tag “Person” in blue on some of them. But in no case is there an actual person present (well, unless you count the driver, but the Cam can’t see them due to the tinted windows).
If I disable “Detects Motion”, nothing gets recorded. My Cams are all at approx eye level. I’ve been providing many videos (either tagged as only “motion” or incorrectly tagged as “People”) back to Wyze. Hope they can do something about this total failure of AI.
At least I can keep the recording continuing and saving for a couple of weeks without having the various cams notify me with every one of the hundreds of “events”. If a neighbor sees my front door smashed in and calls me, I’ll be able to review recent videos for the perp, I guess.
Your Events tab will show you only what you want it to. If you are seeing some events that do not have a blue AI tag, you don’t have your motion-only events filtered out. Use the Funnel in the upper right to fine-tune your events filter to only what you want to see (PD only on X cam only, for example). If you check the initiated-by-motion box, ALL motion events show.
Setting your filters will hide the untagged non-AI videos.
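The filter behavior described above amounts to keeping only events that match both a cam selection and a tag selection. Here is a minimal sketch under assumed names (`filter_events`, a list of plain dicts); it illustrates the idea, not the Wyze app’s internals.

```python
# Illustrative sketch of the Events filter: keep only AI-tagged events
# from selected cams, hiding motion-only (untagged) videos.
# The data model here is hypothetical.

def filter_events(events, cams, tags):
    return [
        e for e in events
        if e["cam"] in cams and any(t in tags for t in e["tags"])
    ]


events = [
    {"cam": "Front Door", "tags": ["person"]},
    {"cam": "Front Door", "tags": []},          # motion-only: hidden
    {"cam": "Driveway",   "tags": ["vehicle"]}, # wrong cam: hidden
]

# "PD only on the Front Door cam only":
print(filter_events(events, cams={"Front Door"}, tags={"person"}))
```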
As for the Vehicles being tagged as PD… Refer back to the grade card above. In a reply from that topic post, you will find I posted this:
This is one of the weaknesses I found in the AI as well. Moving cars are being tagged as PD.
Correct. Don’t do that. Motion must get recorded in order for it to be sent to the cloud for AI interrogation. No motion = No AI
I have been told they update the AI weekly. There is an AMA event coming up to ask the AI team questions. Perhaps direct interaction with them on the issue will help.
I have noticed the same thing, the AI person detection used to be really great when it was provided by Xnor.ai. But since that contract ended back in 2020, the replacement AI detection provided in-house by Wyze is now useless.
I can’t really say without taking a look at the video and speculating as to why it didn’t AI tag you approaching. As for the bear, I’m not sure the Pet AI has much bear training. Again, the videos are the key to developing a theory.
User settings such as sensitivity and detection zone, along with cam placement, also affect the outcome. Lighting conditions can be a factor as well.