ENDED - Wyze AI Team Reddit AMA - 7/14/22 1:00PM Pacific

Questions I’d love to ask (I’ll keep updating this list as a reminder of the questions I want to ask):

  • Last summer there was a job listing for “Software Development Engineer (Edge AI)”, and that position is no longer showing as open. Presumably it was filled? Are you guys now actively developing an Edge AI solution we can look forward to, especially since Steve McIrvin recently said he’s been pushing for a shift away from cloud-only paradigms?

  • What is your hoped-for timeline or rough estimate for when the AI might be able to alert us only to moving vehicles instead of stationary ones?

  • What are your future plans for RTSP? Will future devices have it? If future cameras have more memory in them, could it be included as a toggleable option, like some other brands offer?

  • Steve McIrvin mentioned you “have specific products on the horizon that will address a lot of the value that RTSP can provide”…can you elaborate on any of this? Does it involve things like multiple-exposure high dynamic range (HDR)?

  • Steve McIrvin mentioned you plan to do something similar to MaxDrive. Will that drive also be able to process AI locally so we can get notifications faster (since cloud AI processing adds a few seconds of delay to notifications)?

  • Do you have any plans to make AI notifications arrive faster? We know part of the delay comes from sending events to the cloud AI for processing first, but I’m told that some competitors who do the same thing deliver notifications a little quicker. What can you do to improve the timing on this?

  • Will the Matter Initiative have any influence over how you handle AI or anything else related to your team?

  • How is progress coming on having the AI take detection zones into account? Right now, if motion triggers an event within the detection zone (e.g., a branch waving in the wind), the AI analyzes the entire video, including detections of people/pets/vehicles in blocked-out zones, right? Are you making any progress on having the AI ignore the blocked-out parts of the video without excluding relevant detections? (See the masking sketch after this list for what I mean.)
    More discussion on this here: Wyze AI communication - March 2021 - #50 by carverofchoice. Your response in that thread (Wyze AI communication - March 2021 - #60 by WyzeShawn) was that you were internally testing this, but that it was difficult and required lots of training, and that you would share more info once you had more concrete results. Can you share more now?

  • Setting aside traditional security cameras, is there anything else your AI team is working on to improve or expand other products? For example: a robot vacuum that can avoid objects on the ground that the lidar can’t see or distinguish (pet waste, cords, small items that would get tangled or cause clogs, etc.)?

  • What can you tell us about Webview progress? Are any additions or improvements coming soon (other than Firefox compatibility)?

  • Can you tell us any more about your progress with Wyze Anything Recognition (or “Smart Vision,” as you said in the October 2021 AMA it will now be called)? Progress made this year, ETA on launch, qualifications/limitations, improvements we might see? Will there be automation rules/triggers for it too?

  • Will Friendly Faces be able to arm and disarm the HMS in the future, or will that be limited to Cam Plus Pro?

  • Will Friendly Faces ever be expanded to allow more than 10 faces?

  • You told us that face recognition needs a face to cover 300 pixels to maintain 90% confidence, which works out to about 6 feet away. What is being done to extend this distance? Steve McIrvin mentioned you will have some 2K cameras in the future and that there is “a super innovative product coming that lets a standard 1080p sensor have a ton of more detail on the things you’re most interested in”…will those let faces be detected accurately from farther away, so we don’t need cameras within 6 feet everywhere for this to work well? What are your preliminary test results showing? (See the pixel-coverage sketch after this list for why resolution matters here.)

  • Has anyone made progress on letting us add or remove multiple faces at once? Right now we have to add or delete uncategorized face events one at a time in the Face Recognition section, which takes a long time. We’d like to select all the faces belonging to one person at once to add them to an existing face profile, or delete them all at once if it’s not a face we want saved.

  • In a previous AMA you told us that you were “working on optimizing the pipeline to recognizing all faces” in an event instead of just the first face, and that you’d be able to run AI models on the entire video instead of stopping at the first detection. How is progress on this (both the pipeline updates and analyzing the entire event)?

  • Do you have any Video Playback Scrubbing updates coming up that you can tell us about?

  • In a Fix-it-Friday announcement, we were told you are working on a solution to show us what the AI detected and tagged as an object. Can you elaborate more on this, please?

  • When can we expect the other cameras (besides V2s) to let us select AI detections and individual AI notifications separately? That is, detecting multiple object types without being forced to get notifications for all of them: I might want to detect both person and pet but only get notifications for person, not every time my cat crosses the camera, while still being able to search events for my cat later. (See the detect-vs-notify sketch after this list.) Right now only V2s allow this, and we were told months ago that V3s should be getting a firmware update for it. Can you tell us the status of this AI notification improvement?

  • What are the chances of getting an upgrade to the detection zone (for AI, motion, etc.)? The second version of the D.Z. being a grid is a HUGE improvement over the original single-box detection zone, but it is still not as precise as many competitors’ methods allow, and not as precise as many of us would prefer: we can’t draw precise lines at angles, for example. This could matter even more if you release higher-resolution cameras, as your VP mentioned in his AMA you would; in that case, each D.Z. square would potentially cover twice as many pixels or more (see the grid arithmetic after this list).
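To make the detection-zone question above concrete: here’s a minimal sketch of the kind of masking I’m asking about, where blocked-out grid cells are blacked out before the AI ever analyzes the frame. The grid size and every name here are my own made-up illustration, not Wyze’s actual pipeline.

```python
import numpy as np

# Made-up illustration, not Wyze's actual pipeline: treat the app's grid
# detection zone as a boolean matrix (True = active, False = blocked out).
GRID_ROWS, GRID_COLS = 10, 15  # hypothetical grid dimensions

def mask_blocked_zones(frame: np.ndarray, zone: np.ndarray) -> np.ndarray:
    """Black out blocked-out grid cells so the detector never sees them."""
    h, w = frame.shape[:2]
    cell_h, cell_w = h // GRID_ROWS, w // GRID_COLS
    masked = frame.copy()
    for r in range(GRID_ROWS):
        for c in range(GRID_COLS):
            if not zone[r, c]:
                masked[r * cell_h:(r + 1) * cell_h,
                       c * cell_w:(c + 1) * cell_w] = 0
    return masked
```

The catch (and probably part of why this “required lots of training”): a person straddling an active cell and a blocked cell gets half-erased, which can confuse a detector trained on whole people.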
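On the 300-pixel / 6-foot face figure: a basic pinhole-camera calculation shows how face pixel coverage falls off with distance and scales with sensor resolution. The field of view, face width, and sensor numbers below are my own guesses for illustration; the exact parameters haven’t been published.

```python
import math

def face_width_px(distance_m, image_width_px=1920, hfov_deg=110.0,
                  face_width_m=0.15):
    """Approximate horizontal pixels a face spans (pinhole-camera model).

    Default values are illustrative guesses, not published Wyze specs.
    """
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return image_width_px * face_width_m / scene_width_m

for d_ft in (3, 6, 12):
    d_m = d_ft * 0.3048
    print(f"{d_ft:>2} ft: ~{face_width_px(d_m):.0f} px at 1080p, "
          f"~{face_width_px(d_m, image_width_px=2560):.0f} px at 2K")
```

Coverage scales linearly with sensor width, so a 2560-pixel-wide 2K sensor puts roughly 33% more pixels on the same face at the same distance, which would extend the usable recognition distance proportionally.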
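On separating detections from notifications: conceptually this is just two independent sets of labels per camera, something like the hypothetical sketch below (none of these names are Wyze’s).

```python
# Hypothetical per-camera settings: which labels to tag and keep
# searchable vs. which of those should actually push an alert.
DETECT = {"person", "pet", "vehicle"}
NOTIFY = {"person"}

def handle_event(labels):
    tagged = labels & DETECT
    if tagged:
        print("tagging event with:", tagged)      # searchable later
    if tagged & NOTIFY:
        print("push notification for:", tagged & NOTIFY)

handle_event({"pet"})            # tagged for search, no notification
handle_event({"person", "pet"})  # tagged, notification for person only
```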
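And the pixels-per-grid-cell concern in the last question is easy to quantify. The grid dimensions below are made up (Wyze hasn’t published the actual grid size), but the ratio holds regardless:

```python
def pixels_per_cell(width, height, rows=10, cols=15):
    """Pixels covered by one detection-zone grid cell (grid size assumed)."""
    return (width // cols) * (height // rows)

print(pixels_per_cell(1920, 1080))  # 1080p: 128 x 108 = 13,824 px per cell
print(pixels_per_cell(2560, 1440))  # 2K:    170 x 144 = 24,480 px per cell
```

That’s roughly 1.8× the pixels per cell at 2K — close to the “twice as many” worry — and higher resolutions would push it further while the grid stays the same coarseness.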

More to come as I think up more questions :slight_smile:


Community members asked me to make sure the following get asked:

  • Another community member who couldn’t attend wanted me to ask: “Does the feedback actually help when submitting the correct feedback? I have spider webs that will be labeled as people all the time and submitting the correct feedback for the last 8 months or so has not really improved much.”

  • Are you making any progress on having motion detection ignore lighting changes?


Tagging @WyzeShawn so he and his team can get a preview of a bunch of the questions I’m about to ask them this week in the Reddit AMA and be prepared with some good answers. :smiley: There should be more than enough great questions for them to select from and keep busy during the AMA. :slight_smile:
