Vehicle being detected incorrectly

I’ve noticed a bug in the vehicle-detection AI for a while now.
Whenever a vehicle is in the frame and any other motion is detected, it registers the detection as vehicle motion.

For example, right now I have 2 cars parked in my driveway, and since it’s windy, the plant at the edge of the frame gets blown around now and then. Each time it triggers the motion detection, the notification that comes up says Vehicle detected.

If I watch the detection clip, you can see the detection box drawn around the plant, so it is registering the motion properly. But since it sees a car, it says there is vehicle movement, even though no pixels of the car moved in the scene.

This is actually functioning correctly with the way it is currently implemented (any motion triggers the AI to search for any recognizable objects in view, not just considering the thing that caused the motion).

Having said that, Wyze said they’re working on a new prototype to update the AI logic to only analyze objects that are in motion, and ignore all objects that are stationary even if they are in view.

To address the detection box drawn around the plant, that green motion tag is totally unrelated to the AI. It is drawn up by the camera locally and overlaid onto the video to tell you what things on the screen are making the most motion (pixel changes from one frame to the next). It isn’t even necessarily telling you what triggered the motion detection (it could’ve been something else based on your sensitivity settings), and it is not telling you that the moving plant is what was detected as a person/vehicle, etc. The AI totally ignores that green tag when it scans the video. The green tag is programmed totally separately from everything else. It’s just to help you more easily locate where the most dramatic movement currently is according to the static settings/sensitivity the programmers gave it (not any settings you chose).

We’re all anxious for the new AI prototype that will ignore stationary AI objects like parked cars. :slight_smile:


This is how it works currently. The motion-tagging green box is an overlay of detected motion. When an event is uploaded to the cloud, the cloud AI analyzes the event video, and if a recognized object is observed (pet, vehicle, person, package), whether it’s in motion or stationary, the video itself will include the tag of the recognized object in the Events tab.


Welcome to the Wyze User Community Forum @daverig! :raising_hand_man:

Many, many users have requested Wyze modify their AI Engine to only tag video clips with Smart Detection tags when the object being tagged is in motion… sometimes referred to as Motion Only Smart Detection.

Wyze has made progress toward this for the newest Wyze Cam, the Floodlight Pro; however, it has yet to be developed to the point where it can be implemented in the other cams.

New feature requests are submitted via the Wishlist. If you would like to request that Wyze add this feature, follow the link below, vote for it at the top, like :heart: some posts, and add your reply post to support the request.


I’m aware this is how it currently works, but the logic is completely wrong.
Once motion is detected, analyzing the frame to see what is in it is useless.
It assumes whatever it sees caused the motion.

With the current limitations, the correct solution would be:

  • Detect motion.
  • Analyze the frame to see what is in it.
  • Check whether the pixels that moved were inside the bounding box of any object found.
  • If so, report that object in the notification.

It would at least be a much more accurate assumption than what is happening now.
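The proposed steps can be sketched in a few lines of code. This is a hypothetical illustration only, not Wyze’s actual implementation: the `(x1, y1, x2, y2)` box format, the function names, and the sample coordinates are all my own assumptions.

```python
# Hypothetical sketch: only report objects whose bounding box actually
# contains pixels that moved. Not Wyze's real API or pipeline.

def box_contains(box, x, y):
    """True if pixel (x, y) falls inside box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def moving_objects(detections, motion_pixels):
    """Keep only detections whose box overlaps at least one motion pixel.

    detections:    list of (label, box) pairs from the object detector
    motion_pixels: (x, y) coordinates flagged by the motion detector
    """
    return [
        label
        for label, box in detections
        if any(box_contains(box, x, y) for x, y in motion_pixels)
    ]

# A stationary car and a pet in frame, plus a plant blowing around
# at the edge of the frame (outside both boxes):
detections = [("vehicle", (100, 200, 400, 350)), ("pet", (500, 300, 560, 360))]
print(moving_objects(detections, [(610, 40), (620, 45)]))  # → [] (only the plant moved)
print(moving_objects(detections, [(250, 300)]))            # → ['vehicle']
```

With this filter, the wind-blown plant would produce no object tag at all, while actual movement inside the car’s box would still be tagged as a vehicle.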


You say “Once motion is detected, analyzing the frame to see what is in it is useless,” yet your proposal is “detect motion, then analyze the frame to see what is in it.” That’s the same thing.

I think what you’re saying is that your following point, the bounding-box check at the end, is what makes it not useless? That’s all I can get from what you said.

That does have a side effect (which is entirely negative) if only a part of the view is to be checked, since it would cause excessive checks on unwanted parts of the view.

It would be good if the entire view was selected for movement.

So technically, BOTH ways are best (edit: depending on whether a selection or the entire view is used), not one or the other. Edit: and anyway, the entire-view case wouldn’t need the bounding-box check, so that check should be skipped there to minimize checks.

Edit - Time is rather important.

Edit - If time should not be important, then do it that way so that it’s long all the time.

If the entire image is already being analyzed to see if a vehicle / pet / person exists, then there is no time difference. You know the pixels that motion was detected in, and you know the bounding boxes of the objects detected.

After that is a simple index lookup to see if the motion pixels are inside the bounding box pixels.
It’s not 100% but probably close enough for most cases.
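That lookup could look something like the following. This is a minimal sketch assuming the motion detector produces a boolean mask the same size as the frame; the names, frame size, and coordinates are illustrative, not Wyze’s internals.

```python
# Sketch of the "index lookup" idea: given a per-frame boolean motion
# mask, check whether any moving pixel falls inside an object's box.
# All names and shapes here are assumptions for illustration.

FRAME_W, FRAME_H = 640, 480

# Build an all-False mask, then mark a wind-blown plant in the top-right.
motion_mask = [[False] * FRAME_W for _ in range(FRAME_H)]
for y in range(30, 60):
    for x in range(600, 630):
        motion_mask[y][x] = True

def motion_inside(box, mask):
    """True if any moving pixel lies inside box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return any(mask[y][x] for y in range(y1, y2) for x in range(x1, x2))

car_box = (100, 200, 400, 350)
print(motion_inside(car_box, motion_mask))             # → False: the parked car did not move
print(motion_inside((590, 20, 640, 70), motion_mask))  # → True: the plant's area
```

As the post says, this wouldn’t be 100% accurate, but it would suppress tags for objects whose boxes contain no changed pixels at all.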

The way it works now is:

  • there was motion
  • there is a car / person / pet in the frame
  • therefore, the motion was caused by a car / person / pet.

This has no value.

You’re missing the point. It is NOT saying it was vehicle motion, only that there was a vehicle in frame. Yes, there was some motion that caused the clip to be uploaded for AI, but tagging it with Vehicle does not in any way, shape, or form indicate that the vehicle was the motion.

And as noted, there are previous requests to make it more intelligent.


What camera(s) are you using? Knowing this, it can then be explained how the camera detects and recognizes motion.


Correction to the way it works now…

You are advocating for AI that will limit its identification to only those objects contained within the green Motion Tagging bounding box overlaid by the cam prior to upload, thereby ignoring any object, moving or stationary, that is not bound by the Motion Tagging box.

The issue I see with this is that the Motion Tagging box doesn’t bind every moving object. When there are multiple motion events within a specified frame, the box is regularly applied to only the largest or fastest moving object causing the most disruption in the adjacent pixelation motion detection algorithm used by the cam firmware. That algorithm is determined by motion thresholds set in the user defined sensitivity settings. Should there be a person in the FOV frame, but moving below the motion sensitivity threshold set by the user, no motion tagging box, Smart Detection Event, or Push Notification would result. Also, if I recall correctly from prior reading, the Green Motion Tagging box has a static detection sensitivity setting within the firmware that does not change with the user defined sensitivity settings that adjust the adjacent pixelation motion detection algorithm.


I’ve had my Wyze camera set up to monitor the front of my house, and it worked flawlessly for quite some time. However, over the past few weeks, I noticed that it started to trigger vehicle detection alerts for non-vehicle objects, such as large animals or even moving shadows. This false positive detection has become quite frequent, and it’s becoming a bit of a nuisance to sort through all the irrelevant notifications. I’ve double-checked my camera settings, and everything seems to be configured correctly. I even tried adjusting the sensitivity level to see if that would help, but unfortunately, the problem persists. Has anyone else encountered a similar issue with their Wyze cameras? Thanks.


Occasionally, my v3 thinks my mailbox is a Person. When this happens I submit those videos to Wyze with the corrected Tag. Seems to help for a while or could be just my imagination.

Welcome to the Wyze User Community Forum Janner! :raising_hand_man:

The Wyze AI Engine on the server gets updated on a regular basis. It is not uncommon to experience a cam activating and tagging on objects that it did not recognize in the past. The question is: Is there a vehicle in the frame that isn’t moving?

While you may see motion in the video and the cam may be adding the green Motion Tracking box, neither of these positively identify the motion that first initiated the upload or the object that is being tagged, which can be different. In fact, the object being tagged as Vehicle may not be moving at all. The AI Engine does not see motion. It only sees still image frames and considers anything in each frame (20 per second) that is within the Included DZ.

Which version of camera is doing this? On most cameras, the motion trigger can be anything that is moving or changing pixels (if you have motion tagging on, the green box will indicate what it’s determining as motion), but once the event video is uploaded to the cloud, the AI looks at the video and gives you the text tag indicating what recognized objects were determined to be in the frame.

Your camera could be detecting moving shadows or moving bush limbs, then uploading the event; the AI can then see a vehicle somewhere in frame and tag the video as Vehicle. The tagging is working as currently intended.

There is a Wishlist AI request to have the recognized-object tags apply only to moving objects, so a moving-vehicle event would get the Vehicle tag, whereas an event with other motion detected but only a stationary vehicle would not.

I just got a Wyze Pan Cam v3 and signed up for the 2-week free trial of their Cam+ AI smart-tagging software…

So far, it’s trash. And I’m having the same issue as you. Every night, I get 100+ “vehicle” alerts, which are all, with the exception of maybe 3-4, insects or changes in lighting.

This morning, I had a person pull into my driveway, get out of their car, and come to my door; then they left. Thank goodness this was a planned visit! Because my camera didn’t catch the car pulling in, due to it being distracted by an insect, but it did pick it up in my driveway. However, it did not pick up the PERSON WALKING TO AND FROM MY DOOR!!! And yes, they were within the frame of the camera. It was before sunrise, so I don’t know if it just didn’t “see” it or what. But it saw every bug that went by it.

I’ve been told the “AI” works by detecting motion, then labeling everything in the frame. So, if a cat triggers the motion and is gone before the event triggers, the AI will tag your car if it’s in the driveway. To me, this is pointless. It treats motion as a still picture. Instead of looking at what’s moving, it says, “Hey, something moved; here’s some stuff that may have moved.”

Welcome to the Wyze User Community Forum @245c32044017c51add9a! :raising_hand_man:

No. “Motion Triggered an Upload Video, Here are the AI Tagged Objects that were found in the FOV” would be more accurate.

Motion determines the trigger for an upload. It has no bearing whatsoever on the returned AI tags for the majority of Wyze cams, and definitely the Cam V3. The AI does not discriminate between moving and stationary objects. Once the upload begins and the AI bot receives it on the server for interrogation, motion is no longer a measured variable. The cam then decides when to stop sending that video based on there being no more motion within the frame, dependent on your sensitivity settings.

The Wyze Server AI Algorithm has been a topic for much discussion. It would certainly be great if we were able to select a setting for AI Tagging of moving objects only. But we don’t have that option yet. It has been requested many times over in the Wishlist topic Objects in motion (not stationary) notifications only.

Wyze is working toward this. They have something like this on the Floodlight Pro that will only return an AI Tag for a stationary AI object if another AI object was in motion. If a non-AI object was in motion, the stationary AI object wouldn’t be tagged. But, the Floodlight Pro has the memory onboard and the processor chip capable of doing this on the cam. The Cam V3 does not. It isn’t capable of running the logic locally.

All I can do with my 16 V3 cams is to tune them the best I can to work for me.

  • Use Detection Zones to block out my parked vehicle on one cam so it detects Vehicles entering only
  • Turn off Vehicle Detection and go full frame on the second cam so that it detects Person around my vehicle or approaching
  • Turn off the cam face LED IR Lights so bugs won’t be spotlighted at night and install IR Floodlights illuminating the FOV from a different angle
  • Keep the sensitivity moderate so that it isn’t blasting uploads 24/7 every time an ant moves.

It doesn’t work that way. The cat can’t be gone before the event triggers if it is what triggered the motion event upload.

If a cat triggers motion, the upload starts. If the Cat, the Car, a Person, and a Package are all within the included Detection Zone and you have detection turned on for these, all should be tagged in the Event Video regardless of motion at the time of AI tagging on the server and regardless of what motion pulled the trigger on the upload… provided the cam has a reasonable FOV. This is the absolute most important factor in getting good AI Tagging results. Location, location, location. These cams are not shoot from any angle in any light cams. The sweet spot is eye level with traffic crossing the FOV. Placing the cam so that it can be the most effective is critical. Distance from the cam is also important.

I think the people trying to explain how the current detection system works are missing the point about why it is broken.
It’s like burning a pizza in the oven and then explaining how electricity heats the element in the stove. Who cares? The problem is the pizza was left in the oven too long…

Yes, any movement triggers detection and the image to be uploaded.
Yes, the AI then determines what is in the shot and reports what it sees.

Who cares, because it isn’t reporting what moved. The logic behind the process is fundamentally flawed. In the lab, if you only have a person move, and it reports back that there is a person moving, it looks like it’s working, because it saw movement and found a person in the frame. The assumption is that since it found a person, that must be what moved.

This is the fundamental flaw, because in reality you can have a car parked in the driveway, a dog lying on the grass, and a person walking to the door. The person triggered the motion detection, and it reports it saw all 3; this is no better than the basic detection setting. What’s worse, none of those objects has to be moving: a tree or shadow moves and it still detects all these objects. There are so many false positives that the system is useless.

The software already knows the area that detected movement due to the green square it displays.
The AI will also know the general location of the objects it detects.

All that is needed is a simple check to see which areas overlap, and then report “I see a person” instead of listing everything in the frame. If 2 objects are close together, both may be reported as moving because the boxes overlap multiple things, but that is way better than the current solution of getting 50 notifications of a car detected because it’s a windy day, a plant keeps triggering detection, and it sees my car parked in the driveway.
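The overlap check described above could be as simple as an axis-aligned rectangle intersection between the green motion box and each detection’s box. This is a sketch under my own assumptions (the `(x1, y1, x2, y2)` box format and all names are hypothetical, not Wyze’s API):

```python
# Sketch: report only the detected objects whose box intersects the
# green motion-tagging box. Box format (x1, y1, x2, y2) is assumed.

def boxes_overlap(a, b):
    """True if axis-aligned boxes a and b intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def report(motion_box, detections):
    """Return labels of detected objects whose box overlaps the motion box."""
    return [label for label, box in detections if boxes_overlap(motion_box, box)]

detections = [
    ("vehicle", (100, 200, 400, 350)),  # parked car, no motion inside its box
    ("person", (420, 180, 470, 340)),   # person walking to the door
]
print(report((430, 200, 460, 330), detections))  # → ['person'], not the parked car
```

As noted, two objects with overlapping boxes would both be reported, but a motion box around a wind-blown plant would match neither box and produce no vehicle tag.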

Limiting detection, turning IR off, and turning down sensitivity are not solutions; what’s the point of having detection at all at that point? I just mute notifications entirely because it is so broken.

Honestly, the current solution is so poorly implemented it makes me lose a little faith in a generally good brand.

What Wyze cameras do you have? V3 cameras use Pixel Change detection and cameras like the Outdoor use PIR. This may be a factor on satisfaction with detection.

The point is, @daverig, that is the way it works. Yesterday, today, tomorrow. That is the way it has worked for a very, very long time. Nothing anyone has to say here in the forum is going to change that. It isn’t going to miraculously motivate Wyze into flipping a switch and changing it tonight. The request to change that was made a very, very long time ago (in one of the places they actually monitor) and… that is the way it still works. Yesterday, today, and tomorrow.

Educating yourself as to how the technology works will keep you from burning your pizza rather than continually doing the same thing over and over again the same way but expecting different results. There is a well known definition for that.

So, given that there is a hard reality all cam owners have to deal with in the functionality of our cams, should we choose to actually face the reality we have, do we continually armchair quarterback on coding we are neither familiar with nor have any control over? Or, do we try to get the most out of the cams now, in the state they are in for the foreseeable future, and work with what we have? Or, do we just throw up our hands and trash the cams because they aren’t what we would develop even if we knew how?

I am not missing the point, @daverig. I am trying to share with others how it works so that they can understand how to get the most out of what they have, even if what they have isn’t their perfect design ideas.

I’d direct you to vote for this following wishlist then, as I don’t see that you have voted on it to support it.

I believe it is important to know how the current AI tagging works: that objects, whether stationary or in motion, are tagged in the event clip. It’s important to know that AI tagging of stationary objects isn’t a bug; it’s how it currently works. Knowing this will help the Wishlist and support changing AI recognized-object tagging to in-motion objects only. Nowhere in the Cam Plus description does it state that only objects in motion will be recognized and tagged as such. If it did, and it wasn’t doing that, then it would be broken and it would be a bug in the AI. But because it’s working as intended and just doesn’t work how the masses want it to, we need to support the above Wishlist item (which is listed as “researching”) so that the change can be implemented sooner rather than later, and not if, but when.