Detection Zones with CamPlus - Detection OUTSIDE Zones

All my Pan V3 cams do this, and I have them stationary with Scan and Track off. I have had to pad my excluded DZ w/ extra boxes to keep it from tagging outside of the DZ.

The only way to tell from your DZ screenshot is to see a screenshot of the Smart Detection AI-tagged event that shows the object being tagged and the green Motion Tagging box marking the area within the Included DZ.

That LOGIC is flawed. Period… and if that is the case, then there should be an OPTION TO DISABLE it, or I may just pull the plug on Cam Plus, as it appears this is when the false positives started.

And I definitely would call this a HUGE BUG.

I’ve REDUCED the DZs to the barest minimum to still be of use… any further reduction would be a security issue, as in useless.

The only other corrective action I can see would be to relocate the camera, but the cable for the Light Socket that this camera controls is not long enough for that. That’s one of the reasons it’s positioned where it is.

I think that “logic” needs some fine tuning, or a way to turn it off or disable it in the software someplace.

I cannot argue with that. There has been much discussion within user circles about how this is reducing the effectiveness of the features.

Unfortunately, it can’t be classified as a bug since each feature is working as it has been designed. This one is a design conflict between features and has only been uncovered recently.

Not necessarily. Because of the overlap logic, only a small fraction of the total object needs to break into the DZ to be included and tagged. No longer does the object need to be fully within the DZ to produce a positive ID. But, it does make setting the DZ accurately a very difficult task in high motion areas. I have not seen any deterioration in accurately tagged objects within the DZ, but I have experienced an increase in tagged objects on the edge of the DZ that I wanted to be ignored.

I understand that if a portion of the object passing through the DZ is detected by the AI it will record and alert.

However, I can upload numerous videos showing that cars in the street, fully excluded from the detection zone, are still triggering an AI alert and recording.

I have tilted the camera down as far as possible to keep it from even seeing the street. Nonetheless, it still detects motion there.

This is meeting the needs I have for the camera. I set it to record continuously, but only alert me when there is motion. This allows me to go back and review things that are going on but that I do not need to be alerted for.

Also, the Ring doorbell includes a look-back feature with a user-selectable length, meaning that if a detectable event occurs, the user can define a period of up to 20 seconds before it to add to the beginning of the captured event, adding context to what is going on and improving the usefulness of the video. You are also able to define the entire length of the video rather than having it stop when the detected motion stops.

So if a delivery person arrives, presses the Ring doorbell, then stands there motionless for a while waiting for someone to answer the door, it will continue to record rather than going into a cooldown period where you miss the end of the delivery.

I am a retired mechanical engineer who was a global director of hardware for a large company for decades. I will be glad to be part of a solution for these products.


Actually, not even a part of the object has to go into the DZ. In my case, there is a turnaround at the end of the road by my house. When the headlights of the cars turning intrude into the DZ, they trigger the alert, and the head rotates to capture the entire motion of the car driving back up the street.

So it appears that the AI is not separating “any motion” from detected objects. It is simply capturing based on any motion and then continues to do so if it sees a detectable object.


It sounds like you are using a Pan Cam with the Track Motion on. Since the headlights produce a change in the pixelation shading within your DZ, the cam will turn to track that motion. When it does this, your set home position, for which the DZ is set, is no longer useful as it is now blocking out the same pixels but on a moving FOV. Since the cam continues to move to track that motion, it will invariably maintain that object within the Included DZ and tag the object.

Videos aren’t necessary. What is helpful is a screenshot of the actual object that shows the green Motion Tracking box within your DZ.

That is correct. As I explained in another thread, the actual physical object being fully inside the Excluded DZ is not the factor determining if it is tagged. It is the size of the rectangle being superimposed over that object by the cam. If any part of that rectangle overlaps the Included DZ, everything within that box is considered included and will be tagged. If, on your screenshot of the object within the DZ, there is any green Motion Tracking box placed by the cam because of the object, the AI Engine is evaluating the object for tagging. The cam only allows you to see the portion of the box that is within the Included DZ. The AI Engine evaluates the entire box.
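For anyone who wants to picture that Overlap Logic, here is a rough sketch in Python of the behavior as described above. This is my own illustration, not Wyze’s actual code; the rectangles, names, and numbers are made up.

```python
# Rough sketch of the Overlap Logic described above -- my own illustration,
# not Wyze's actual code. Rectangles are (left, top, right, bottom) in pixels.

def rects_overlap(a, b):
    """True if rectangles a and b share any area at all."""
    a_l, a_t, a_r, a_b = a
    b_l, b_t, b_r, b_b = b
    return a_l < b_r and b_l < a_r and a_t < b_b and b_t < a_b

def object_is_considered(bounding_box, included_zone):
    """Any overlap between the object's bounding box and the Included DZ
    makes the entire box eligible for AI tagging, even the part the app
    never draws inside the Excluded DZ."""
    return rects_overlap(bounding_box, included_zone)

# Example: a car fully in the street (Excluded DZ) whose oversized bounding
# box still clips into the Included DZ below it -> the object gets evaluated.
included_zone = (0, 300, 1920, 1080)   # hypothetical DZ covering the lower FOV
car_box = (600, 100, 1400, 320)        # box hangs 20 px into the zone
print(object_is_considered(car_box, included_zone))   # True
```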

Even though you may have your Event Recording settings restricted to Smart Detection Events, this is not restricting the cam from uploading all motion events. The cam has no way of knowing if the event is an AI event or not. It has no onboard AI. It uploads every motion event to the server. If there is a pixelation change meeting your sensitivity threshold, it uploads regardless of what is in the FOV. The server AI then evaluates every frame. If it positively identifies a Smart Detection object and tags it, it saves the video to your Events List. If it doesn’t, it deletes it. Since you have the Track Motion enabled, the cam will follow any motion, not just AI motion, and continue to upload throughout that process.
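Here is a simplified sketch of that motion-to-cloud flow as I understand it. All of the names and pieces are stand-ins for illustration only; this is not Wyze’s actual code or API.

```python
# Simplified sketch of the motion -> cloud flow described above. All names
# are stand-ins for illustration; this is not Wyze's actual code or API.

def camera_side(frame_delta, sensitivity_threshold, clip, upload):
    """The cam has no onboard AI: any pixel change over the sensitivity
    threshold gets uploaded, regardless of what is in the FOV."""
    if frame_delta >= sensitivity_threshold:
        upload(clip)

def server_side(clip, detect_objects, events_list):
    """The server AI evaluates every uploaded clip: positively tagged clips
    are kept in the Events List, untagged clips are simply discarded."""
    tags = detect_objects(clip)              # e.g. {"vehicle"} or set()
    if tags:
        events_list.append((clip, tags))
    # else: the clip is deleted and never shows up in the Events List

# Example wiring with stand-in pieces:
events = []
camera_side(frame_delta=42, sensitivity_threshold=30, clip="clip_001",
            upload=lambda c: server_side(c, lambda _clip: {"vehicle"}, events))
print(events)   # [('clip_001', {'vehicle'})]
```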

[quote=“SlabSlayer, post:8, topic:279003”]
Unfortunately, it can’t be classified as a bug since each feature is working as it has been designed. This one is a design conflict between features and has only been uncovered recently.[/quote]

I won’t agree with that at all. It’s flawed and thus a bug… and that’s all I have to say about it that would likely stay posted.

Nope… that’s just using the flaw to work around / with the flaw and accepting the flaw as the correct action… Nope.

I will not agree to that being the correct operation mode…

One of a few things is going to happen:

  1. Wyze removes this fubar’d “logic.”

  2. I REMOVE the camera from the Cam Plus AI and let the camera do its own detecting, as that was fine until I turned on Cam Plus.

  3. I REMOVE WYZE… and that’s very very very VERY CLOSE to happening after this FORCED MANDATED FIRMWARE UPDATE! Let me tell you, PO’d doesn’t cover the words I have for that move! There are nuns and priests in Italy 6K miles away who are on life support after my tirade! :slight_smile: :wink: I don’t do automatic updates on my distros for a good reason! Nine times out of ten they include bugs which borq things up! I let others do that testing.

I ditched Ring mostly for their crap not connecting and staying connected well… and no, I have a dedicated AP for this stuff, well located (I do RF-related work for $$$), etc… Wyze plugs sitting on the ground have worked flawlessly from the day they were installed…

Considering I just dropped $150 to scarf up the stuff I want that is likely going to disappear… it may all end up in the garbage.

YES, you would be correct, I am not a happy camper… and I can tell you right now that telling my boss or our customers that the logic is great and wonderful as designed, when the feedback is 180 degrees the opposite, would get me fired from that job!

So I will review what I plan to do here very shortly… I may just keep the thermostat… I’d really thought that, with the success of the plugs last Xmas and over the last year or so, Wyze was the solution for most things: thermostat, sprinkler, add in the security stuff… I even have workarounds for their “missing” sensors… Plus, temp in area X = fan on, temp X = fan off, etc…

So we shall see… ice cracking…

Let me provide some feedback on this… I agree this info would be helpful…

BUT

IF you think I am posting that sort of stuff in a PUBLIC FORUM?!!?!? You are out of your mind! Never going to happen! Nope.

Now, if Wyze wants to provide a SECURE, PRIVATE MEANS for this information to be provided… then let’s hear about it… but till then… Nope.

Also, I don’t provide information which could locate me, and considering that I can see similar devices, it’s very possible neighbors and bad actors are present here… Again, nope.

Anyway… I can tell from your replies what the issue is… The detection box is just too large and the “magic” green box falls into the zones… and thus reducing the zones to stop this is the “solution.” Again, I will say no. That just accepts that this is not a flaw… Wyze may consider this a non-flaw, non-bug, working as designed… These posts show it’s clearly not.

So I am going to try a little, and I do MEAN A LITTLE, REDUCTION in the zone to see where this goes… I STILL THINK this is TOO MUCH of a REDUCTION in the zones… but as this is still sort of my “beta” camera wyze (pun! :slight_smile: :wink: :stuck_out_tongue: ) I will try it out as a test…

We will see…

A bug is when a particular feature is not working as it is specifically designed. This, unfortunately, is working exactly as they designed it. Is the design flawed? Yes. I believe so. And I believe it is because of the specific type of AI object anchoring they are using.

The method Wyze currently uses is a tradeoff between fine boundary constraints and speed of detection.

The current AI Wyze developed in-house appears to be a single-stage AI model that uses basic bounding boxes to anchor objects. Single-stage models are much faster than two-stage models; however, they seriously lack the ability to distinguish the fine boundaries of objects or multiple grouped objects. But, because of their speed, they are preferred for real-time applications such as Wyze is producing.

The Wyze AI uses an incredible amount of real estate when drawing that massive bounding box rectangle over the object rather than using only the pixels within which the object resides. This is the heart of the flawed design. The bounding box used to anchor the object is much too large and significantly decreases the effectiveness of using the DZ while increasing the effectiveness of the Overlap logic. This results in an increase of true AI detections in areas designated for no detection. If they were to tighten up the bounding box anchoring, it could very well improve DZ accuracy but may decrease Overlap logic accuracy.

There are multiple examples of better models to anchor objects rather than using bounding boxes which introduce non-object area into the Included DZ. But most of these require far more computational power and also more time to process. Wyze is recording video at 15 and 20 fps. The AI model applying the object anchoring logic needs to be able to keep up with that frame rate. Many of the most accurate AI models using far more sophisticated object anchoring logic only run at between 5 and 10 fps and would be far too slow to provide real time AI Object Detection. They also require far more server horsepower.
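To put rough numbers on that speed constraint, here is some back-of-the-envelope arithmetic. The 15-20 fps stream rates come from the post above; the 5-10 fps model speeds are the rough range mentioned for the heavier, more precise anchoring models.

```python
# Back-of-the-envelope arithmetic for the speed constraint above. The 15-20 fps
# stream rates come from the post; 5-10 fps is the rough range mentioned for
# the heavier, more precise anchoring models.

for camera_fps in (15, 20):
    budget_ms = 1000 / camera_fps
    print(f"{camera_fps} fps stream -> {budget_ms:.1f} ms per frame to keep up")

for model_fps in (5, 10):
    needed_ms = 1000 / model_fps
    print(f"model running at {model_fps} fps needs {needed_ms:.0f} ms per frame "
          "-> it falls behind a 15-20 fps stream unless frames are skipped")
```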

The issue being discussed here is far more complicated than “fix it, it’s broke”. There are layers upon layers of dynamic limiting factors that must be considered when even the smallest modifications are made to the logic. I certainly want the AI model to be better. But through understanding even the smallest bits about the technology, its capabilities, and its complexities, I can begin to understand why it is proving difficult for Wyze to dial in.

You have already read my response to that Wyze decision. I saw your like :heart:. Thank you.

That is what Direct Messaging is for. User to User direct. Not public. Most users in high traffic residential areas don’t have cams placed in areas that aren’t already viewed by the public on a daily basis or contain security sensitive information. All my security cams are highly visible to the public. I want it that way. Seeing the cams is a much greater deterrent than having an attack dog.

But, again, that is a personal decision. I can respect that.


This is an example of a car during the day triggering an alert. It is completely outside of the DZ. However, it is in the FOV of the camera. I thought the whole point of an excluded zone was to keep from getting false alerts.

[Screenshot: detection zone settings]

You posted an example of your detection zone. An example of a motion trigger is an event with Motion Tagging turned on so we can see what the cam identified as the trigger (green box).

Like this, or even better the event video:

Click on the link to the screenshot directly below the visible picture

Thanks! Please post the 13s event video that corresponds to 2023-11-04 09:50:05.

How do I save and upload a video clip?

Charles


I UNDERSTAND YOUR PAIN! I do!

Unfortunately, this is what Wyze’s AI brain sees…

The AI will detect the car and draw a big honking green box around it… IF ANY PART OF THIS MAGIC GREEN BOX CROSSES INTO A DZ: WOOT! RED ALERT!

See my very, very CRUDE GREEN “art” :slight_smile: :wink: It crosses the DZ border without permission and WOOT! WOOT! Alert.

I feel your pain… Same BS here. The AI logic is flawed here, they have dug their heels in and don’t see it that way… I am not sure what my plan is… yet…


As I explained above, it isn’t where the actual physical boundaries of the object are. What determines if it is “IN” the Included DZ is where the bounding box superimposed over the object is. The user is only shown the small portion of that bounding box that is within the DZ (the green Motion Tracking box). However, the AI sees the entire bounding box, even the undrawn portion in the Excluded DZ. I added a yellow approximation box to demonstrate.

Should that bounding box overlap even slightly into the Included DZ, everything within that bounding box is fair game to be tagged by the AI Engine. This is their Overlap Logic.

This is why the vehicle is being tagged.

[Screenshots: event frames annotated with the yellow approximation of the full bounding box]


And this is the point CEConti and I are trying to explain without getting through.

Basically to ACCOMMODATE this FLAW… the DZ would have to be SEVERELY, SEVERELY LIMITED.

See my new artwork, the purple box… DaVinci I ain’t, and I hope the original poster doesn’t mind my use of their screenshot…

[Screenshot: annotated with a purple box showing how far the DZ would have to be reduced]

That PURPLE BOX is ABSOLUTELY PATHETIC! :angry: NO! I should be able to set it to something closer to the road. Similar to what CEConti has…

The POINT is that CEConti and I want to DETECT POSSIBLE THREATS/ISSUES as they cross the boundary… and that boundary, as you are showing it, is way, way too close.

What is generating this MAGIC HIDDEN BOUNDING BOX??? Your yellow box. HINT: Do NOT REPLY “proprietary software.” I don’t want to hear it. :angry: Nor does anyone else… I am sure this is some proprietary AI logic… So let’s ACCEPT THE FEEDBACK: IT’S TOO BIG! Whatever generates that needs to GENERATE A SMALLER BOUNDING BOX. Period. End.

We are TRYING TO PROVIDE CONSTRUCTIVE FEEDBACK and getting HEAD-IN-THE-SAND RESPONSES…

Accept it: it is flawed, it needs attention, and let’s get it fixed!

I will post this again: I designed some software with a UI… got feedback from the USERS, and I did not like the feedback… You know what happened? THE BOSS SAID, MAKE IT LIKE THE USERS ARE TELLING YOU! Period.

That is what should happen here. Listen to this feedback, and fix it.

I honestly don’t think you fully understand. I don’t work for Wyze. I am not a Wyze Employee. I am a Wyze customer. A user of Wyze cams just like you. I don’t design the cams. I don’t design the software. I just know how it works and how to effectively work with it. And I really don’t think you are getting my point throughout any of these many exchanges.

I am not suggesting to you or to anyone else that this is better than it was. I am explaining to you the way it is. How it works now. I didn’t program it nor do I have any ability to change it.

You have a choice. Work with it the way it is or move on. Ranting at other users who are explaining how it works doesn’t help in any way but to alienate them from responding to your posts in the future.

That’s because that is a cropped and enlarged clip. It is not the entire FOV as posted above. What FOV has only 24 boxes?

Yellow Box Answer, had you read above:

True logic software answer: Who knows? Ask Wyze. It’s Proprietary Software They Developed. You have already read in my other posts that I too believe the use of a rectangular bounding box introduces too much non-object area into the Included DZ. Did you forget that post replying directly to you?

Then you are doing it in the wrong place. The definition of insanity is doing the same thing over and over again and expecting different results.

Wyze doesn’t monitor open forum threads for feedback. The forum is for user-to-user discussion, exchanging ideas and information, and helping each other deal with issues. It is moderated by user volunteers; this is primarily a user-to-user forum. There are only a very select few topics and categories that Wyze actively monitors for user feedback or participates in. Those are the only places where you can hope Wyze might read and consider your feedback for future software development. This thread, and the others you have been posting in, aren’t among them.

That is likely becoming the direction… 20 alerts a night, or at other times (Nope, I am not turning off notifications, what’s the point of that!!!) :roll_eyes: for crap that is not an alert! I don’t give a whip about the neighbors or them coming and going. That’s why the DZ is set to exclude that…

No, I understand quite well how it works… As I suggested at the start, it was my guess, and a correct one, that something in the backend was looking wider than the green box that it shows.

I was correct. This is flawed… The math which generates this box generates it too big. Either shrink that math, or look at the DZ and go: Hmm… it only trips 10% of the DZ… Nope. Or do DZ edge detection and check how far into the DZ it goes; if the majority of the MAGIC YELLOW BOX that we can’t see is OUTSIDE THE DZ, then it’s not an alert/trigger.
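Something like the overlap-fraction test being suggested here could look like the sketch below. It is purely illustrative; the 10% threshold and all of the names are mine, not anything Wyze has implemented or committed to.

```python
# Illustrative sketch of the overlap-fraction idea above: only alert when a
# meaningful share of the (hidden) bounding box actually lies inside the
# Included DZ. The 10% threshold and all names are mine, not Wyze's.

def area(rect):
    left, top, right, bottom = rect
    return max(0, right - left) * max(0, bottom - top)

def intersection(a, b):
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    return (left, top, right, bottom)

def should_alert(bounding_box, included_zone, min_fraction=0.10):
    """Alert only if at least min_fraction of the bounding box falls inside
    the Included DZ, instead of alerting on any overlap at all."""
    overlap = area(intersection(bounding_box, included_zone))
    return overlap / area(bounding_box) >= min_fraction

included_zone = (0, 300, 1920, 1080)
car_box = (600, 100, 1400, 320)   # only a thin sliver of the box is in the DZ
print(should_alert(car_box, included_zone))   # False -> no alert, no trigger
```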

As for the forums… well… if you are a “Forum Maven,” who appointed you that? Wyze. So you have some means to give them the feedback, and thus help get it fixed, from here I would surmise. And honestly, WYZE ITSELF should HAVE PAID SUPPORT STAFF WITH THEIR FINGERS AND EYEBALLS HERE! Nope… I don’t do twitdiot or dorqbook, etc… FORUMS are what I use… MLs, UseNet, dig, gopher…

I’ve tried to convey that your “work with it” suggestion is not a solution. It’s a HUGE, HUGE COMPROMISE to the purpose and use of the cameras. That’s what the point of that purple box was/is… extreme hyperbole… Basically, to work with it, the other user and I would have to so diminish the DZs that they would basically be that purple box. What’s the point of that? It would NOW MISS 80% of the stuff I WANT DETECTED!

That is not a solution.