I just set up a camera monitoring my side yard from behind a window. Soon after, I started getting alerts notifying me of a person detected in my side yard. I was concerned and immediately searched the stored video for confirmation of the alert. To my surprise the “person” stalking my yard was my wife’s pink rose, swaying gently in the breeze. I was glad the system was still in test mode!
Any damage to the rose could be very costly, as it might well entail time in divorce court. But, fortunately, I anticipate that I’ll be able to relocate or redirect the camera to avoid the mischievous rose.
But I wonder, are there other amusing stories of false detects? And, more practically, are there any rules of thumb for avoiding such false detects? It seems more desirable to prevent than to detect and correct errors such as this. And, I’d like to be better able to anticipate anomalies such as this.
Please clarify. This cam is assigned to a CamProtect license and the field of view is outdoors in the yard?
My cams will alert on a false positive Person Detection from time to time. It is rarely the AI tagging the motion object that initiated the detection event, or an object subsequently moving in the video. Rather, it more often tags inanimate objects with a general outline and shape that might be confused for a person by someone who can see only shapes and shadows, not detail. Since there is absolutely no way to definitively determine which object, moving or not, is actually being tagged as the AI object, it is a guessing game trying to figure that out. It takes lots of trial and error adjusting the Detection Zone (DZ) to find and block a recurring mystery person detection.
While the rose may be your phantom person, there’s a better than decent chance it wasn’t.
Have you been submitting these false positives to help teach the AI? Can you post an incorrectly tagged event in here for us to see?
I don’t know if it was here or in another community, but someone had a patio camera giving them “false” package tagged notifications. They stated there were no packages on their patio. It ended up being some motion somewhere else in frame triggering the event, but the box-shaped flower planter triggering the “package” tag. Long story short, the tagged object isn’t necessarily what’s causing the motion event; the motion can be somewhere else in frame. I see the swaying rose being the trigger, but is there something else in frame that’s causing the tag?
It’s not clear to me why you reject the rose as the feature identified as a person. The difficulties inherent in AI seem to me to argue FOR the possibility of a rose being interpreted as a person. It seems to me a priori more plausible than the spider web I’ve seen credited in that role. (While I am a retired professor of mathematics and computer science I did very little research involving AI.)
Once the AI engine pointed it out, it was obvious that the rose does resemble a person. The flower resembles a head and body. The only visible stalk resembles legs. For some reason that I can’t now recall there are two stalks so the “legs” are plural as they should be. The green bounding box that appears now and again and the fact that, apart from the rose, nothing appears in frame other than a blank brick wall pretty much nails the rose as the source of the “person”.
I’ve submitted several videos for analysis. I well understand how useful they can be. As I wrote in my other reply, there’s nothing else in frame that strikes me as plausible person fodder. But, for example, variations in coloring of the wall might be seen differently by humans versus the AI engine.
I’m not so much rejecting the possibility that the moving rose is being tagged as a person as I am introducing the possibility that something else in the field of view is being mistaken for a person. That is from experience with the cams and Wyze AI.
Is it possible… Yes. Is it likely… Maybe. Is it probable… In my experience both with my cams and helping many others here in the forum… Probably not.
The AI engine didn’t point it out. It only alerted you that (a) something in the field of view “moved” thereby producing the video; and (b) something in the field of view looks like a person thereby producing the tag. The two are not linked or interrelated. Once the motion starts, triggers the upload, and overlays the green motion tagging box (if that is on), the AI Bot on the server then takes over interrogating the uploaded video. The AI Bot has absolutely no consideration of movement whatsoever. It doesn’t matter what is moving or what is not moving. It looks at everything. The resulting Event video does not mark on the video what object was AI tagged.
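To make the two-stage behavior described above concrete, here is a minimal sketch in Python. All names, shapes, and logic here are assumptions invented for illustration, not Wyze’s actual implementation: stage 1 (on-camera) only checks whether anything is moving to start the upload; stage 2 (the cloud AI bot) never consults motion at all and considers every object in the frame.

```python
# Hypothetical sketch of the two-stage event pipeline (illustration only,
# not Wyze's real code). Each "object" in a frame is a dict with a name,
# a rough silhouette shape, and a flag for whether it is currently moving.

# Assumed stand-in for whatever silhouettes the model confuses with people.
PERSON_LIKE_SHAPES = {"upright-silhouette"}

def should_upload(frame):
    """Stage 1 (on-camera): motion ANYWHERE in the frame triggers the upload."""
    return any(obj["moving"] for obj in frame)

def tag_persons(frame):
    """Stage 2 (cloud AI bot): motion is never consulted here.
    Every object is examined, moving or not."""
    return [obj["name"] for obj in frame
            if obj["shape"] in PERSON_LIKE_SHAPES]

frame = [
    {"name": "rose",       "shape": "upright-silhouette", "moving": True},
    {"name": "brick wall", "shape": "flat",               "moving": False},
]

if should_upload(frame):          # rose is swaying, so the upload starts
    tags = tag_persons(frame)     # the AI then tags person-like shapes
    print(tags)
```

Note the decoupling: if the rose were perfectly still and a passing car supplied the motion, `tag_persons` would still return the rose, which is exactly the mailbox-and-bush scenario described later in this thread.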
Given your description of the background in the field of view, it may have been the flower that received the tag. Without that being visibly tagged in the video, there is no way to make a precise determination.
Thanks for the clarification, SlabSayer. As it happens there’s nothing but the rose visible in frame other than a brick wall and my neighbor’s wall, both of which are essentially featureless. This IS California but we had no discernible earthquakes during the time in question, so nothing but the rose was visibly moving (and it moved only due to the breeze).
What I meant by “once the AI engine pointed it out” was, expressed more explicitly, “once the AI engine pointed out a person.” Said differently, that person-detect event is what prompted me to review the video. I apologize for my over-reliance on context to make clear my meaning and my use of the pronoun “it,” which obscures my intended meaning.
If detection of a person requires movement, then the rose was the subject of the detect, with high probability. I see no alternative explanation other than a transient object that moved through the frame and disappeared. That object would have to have been small enough or fast enough to be invisible on playback.
For what it’s worth (and, thanks to your explanation of how the AI engine interacts with the video stream, I know it’s not worth much) it was the rose that was tagged with the only bounding box.
It strikes me as odd that the entire video frame is passed to the AI engine. I would have expected only the contents of the bounding box to be passed. Giving the engine less to analyze would enable the engine to be more efficient. Apparently, frame contents outside the bounding box are sometimes useful in recognizing a person.
This is what I was trying to convey. Detection of a Person does not require movement. The AI is not constrained by only objects that are moving. It interrogates every object in the frame.
The assignment of an AI tag to a video is not motion dependent. The AI looks at every object, stationary or in motion.
Motion is only required to start a video upload, and the upload will continue so long as there is motion. Once the motion trigger is pulled, what is moving and what is not moving is moot. The AI Bot has no consideration of movement.
That is because it was moving when the upload occurred. The green box is only an overlay for users to see movement. It doesn’t have any bearing on the AI.
If there is a person who somehow moves into the frame undetected because of poor lighting, low sensitivity, etc., and then a car passes by activating a motion event upload, even though that person is standing motionless in my yard, I still want to be notified that there is a person in my yard. If the AI were restricted to only moving objects, or to whatever the green motion tagging box highlights (which may be only one object even if several are moving), then there would be a high probability of missing a person of interest who simply stands still.
To give you some context, I have a cam pointing at my front yard with Person and Pet detection active. Every time a car would go by, a motion event would activate and record and upload the video. The video would be tagged as a Person and I would get a Person Event notification. But there was no person. Was the car being tagged as a person because it was the only object moving? No. What was happening was that my mailbox with a small bush in front of it looked like the silhouette of a person. Once I used the DZ to exclude those two blocks covering the mailbox, no more person detections.
Not really. “Person” tagging requires an event to be uploaded to the cloud for analysis. Motion is what triggers the event to be recorded; based on your sensitivity and detection zone settings, i.e., the amount and location of the detected motion, the event then gets uploaded to the cloud.
What is tagged may not be what was detected moving. An event where a moving vehicle triggers the motion detection and is eventually tagged as a vehicle is no different from an event where the detected motion is a swaying branch and a vehicle parked stationary in the driveway receives the vehicle tag.
Many thanks for the replies. There’s a good deal of meat here. Rather than respond prematurely I need to take some time and noodle what has been shared. Please forgive the attendant tardiness. I can say that several misconceptions have already been cleared up. Part of my problem has been that I’ve been thinking only of my immediate case, which is not representative of every possible case and possibly not representative even of the most typical cases.
Also, let me be explicit in saying that none of my comments should be understood as critical of, or reflecting negatively on, the Wyze AI. Because I understand something of the difficulties inherent in recognizing visual representations, I’m inclined to be extremely charitable toward the designers and implementers of such systems.