I don’t have a vacuum, but I’m assuming the ability to virtually map out areas of your home exists. I think it would be cool to have a feature that shows the movement of a trigger on a map as it passes through each camera zone. It could push sales on pan cams.
That could be a feature for motion sensors as well I guess.
Sounds like a bit of a niche ask. You can of course post it in the “Wishlist” forum but it takes quite a few votes to even get considered.
Most robot vacuums map out your home, I think some will even show you a map in the app and let you watch where the vacuum is.
I mean, I get what you’re saying: if you had a dog and wanted to see a map of where they were all day, that would be neat. I just don’t see it selling cams or making money for Wyze. Heck, the Pan Cams don’t even really know where they’re pointing, so accuracy would be off anyway.
If I understand what you’re suggesting, then this seems like taking what they already do in Multi-Camera Timeline View and translating that to a map-like (overhead 2D) display for moving object tracking. This video (starting at 3:36) describes a feature of the Multi-Camera Timeline View:
In order to generate a map with a feature like this, I imagine a camera might need lidar or something similar (which I understand Wyze Robot Vacuum uses to map rooms), so that would involve a hardware update. If you’re thinking along those lines and want to add to a Wishlist topic as @dave27 suggests, then—since you mention cameras with a pan feature and tagged your topic with cam-pan-v3—this one might be appropriate:
Welcome to the Forum, @Izzo88!
Sounds similar to this request:
If I understand the request in the topic you linked, then it seems like Wyze essentially did this with Multi-Camera Timeline View, at least as far as the “[o]ne main feature” in that initial (and, at this point, only) post is concerned. The stipulations are that the cameras don’t actually “know” where they are with respect to some sort of virtual map and can’t predict the path of a moving object or person and communicate that to another camera. The cameras just detect motion as they “see” it, and then Wyze’s servers arrange them on the Monitoring tab based on trigger time, right?
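Tangent for anyone curious: if that understanding is right, the “arranging” is conceptually just a server-side sort by timestamp across all cameras. Here’s a rough sketch of that idea (camera names and times are made up for illustration; this is a guess at the concept, not Wyze’s actual implementation):

```python
from dataclasses import dataclass

@dataclass
class MotionEvent:
    camera: str          # hypothetical camera label
    trigger_time: float  # seconds since some shared clock's epoch

def build_timeline(events):
    # Merge per-camera events into one timeline ordered by trigger
    # time; no camera "knows" about any other, the ordering alone
    # creates the appearance of tracking movement between zones.
    return sorted(events, key=lambda e: e.trigger_time)

# Example: a person trips three cameras in sequence.
events = [
    MotionEvent("Garage", 12.0),
    MotionEvent("Front Door", 3.5),
    MotionEvent("Hallway", 7.2),
]
timeline = build_timeline(events)
print([e.camera for e in timeline])  # → ['Front Door', 'Hallway', 'Garage']
```

A map view would still need to know where each camera sits in the home, which is exactly the piece the current cameras don’t have.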
The primary difference is the onus of cohesion, i.e., who or what makes sense of the event(s) based on a trigger (or triggers). Under Multi-Cam Timeline, it’s on the user to make sense of multiple views, events, and timelines, whereas in the wishlist request the system creates single tracked events involving multiple cams. There are pros and cons either way, and I don’t really have a preference. In fact, I could live without both.