Wyze AI communication - March 2021

Hey all, hope you have a beautiful Tuesday. We’ve started this communication series to let you know what’s happening with Wyze AI. This is the 1st post of the series. Feel free to leave your comments and feedback below.

Summary

  • The AI model is better at distinguishing Pet from Person.
  • A new Face Recognition system is in testing right now.
  • We’re looking for more baby and pet videos.

Big things first

As you may know, Wyze is testing its in-house Face Recognition technology in a closed group right now. We received lots of feedback during Stage 1 testing and are currently developing the Stage 2 Face Recognition service. This version will contain improvements, including grouping similar faces together and a new model with better face recognition accuracy. This time, the model is trained using our patent-pending method.

If you want to take a closer look at how Wyze Face Recognition works, you can refer to our publication here: https://arxiv.org/pdf/2101.05419.pdf
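If you are curious what “grouping similar faces together” can look like under the hood, here is a minimal sketch of one common approach: compute an embedding vector for each captured face and greedily group faces whose embeddings are similar enough. This is only an illustration, not our production pipeline; the embed_face stand-in and the 0.6 threshold are placeholder assumptions.

    # Minimal sketch of grouping similar faces by embedding similarity.
    # Not our production pipeline; embed_face() and the 0.6 threshold are
    # placeholder assumptions for illustration only.
    import numpy as np

    def embed_face(face_crop):
        # Stand-in for a real face-embedding model (e.g. a CNN that maps a
        # face crop to a fixed-length vector). Here: a seeded random unit vector.
        rng = np.random.default_rng(int(face_crop.sum()) % (2**32))
        v = rng.normal(size=128)
        return v / np.linalg.norm(v)

    def group_faces(face_crops, similarity_threshold=0.6):
        # Greedy clustering: put each face into the first group whose running
        # centroid is similar enough, otherwise start a new group.
        groups, centroids = [], []
        for idx, crop in enumerate(face_crops):
            emb = embed_face(crop)
            best, best_sim = None, similarity_threshold
            for g, c in enumerate(centroids):
                sim = float(np.dot(emb, c) / np.linalg.norm(c))
                if sim >= best_sim:
                    best, best_sim = g, sim
            if best is None:
                groups.append([idx])
                centroids.append(emb.copy())
            else:
                groups[best].append(idx)
                centroids[best] += emb
        return groups

    # Example: three fake "face crops" (any arrays work for this stand-in).
    crops = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
    print(group_faces(crops))

With a real embedding model, crops of the same person land in the same group, which is the starting point for turning a group into a named face profile.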

AI Model Updates

This month, the Wyze AI model update mainly focused on improving Pet detection performance. Specifically:

  • We added tens of thousands of examples in the pet category to train the model to recognize fluffy creatures better.
  • For Person and Vehicle detection, average precision improved slightly.
  • For Package detection, we’re adding a new test set with additional scenes from user-tagged feedback videos.

AI Data Bounty Hunter

As Wyze trains new models and builds new AI features, we are looking for the example data listed below. If you can find a few videos matching these descriptions in your event list and are willing to send them to us, it will be super helpful in accelerating these improvements!

  • Videos with babies inside the frame
  • Videos with audio of a dog barking
  • Videos with audio of a baby crying
  • Videos with packages inside the frame
  • Videos with animals inside the frame, including but not limited to dogs, cats, birds, and bears (we hope you don’t have a bear in your home, but we have actually received videos of this before).

If there is anything you want Wyze AI to develop, please fill out this survey.

FAQ

These are the latest FAQs from our community:

Q: How do I activate the Cam Plus services?

A: You can check the licenses you have from services.wyze.com. If there is a license for your account, please open your Wyze app and go to Account > Services > Cam Plus to assign the license to your camera.

Q: What if I cannot see my camera on the license assignment page?

A: Please make sure your device is not already using another license.

Q: How do I enable the Package Detection?

A: Package Detection is designed to notify you when a package is delivered to your front door. By default, Package Detection is set to off to avoid a flood of notifications. You can enable this feature through Account > Services > Cam Plus > Choose device > Turn the feature on. We are also working on a new UI to make this feature easier to access.

Q: How can I resolve the issue of “Error 06” when viewing my Event videos?

A: This is an identified issue and we have been fixing the cloud and firmware components of this problem. For Wyze Cam v2 and Pan, the fix was included in today’s firmware release (4.X.6.241). This is a complex issue and we are continuing to investigate other causes.

Q: Why am I still receiving a lot of motion notifications?

A: If you are receiving regular motion notifications and don’t want to, please go to Device > Settings > Notifications > and turn off the “All Other Motion Events” toggle.

Q: Why isn’t my Person Detection as accurate as I’d expect?

A: Even though Person Detection performs well in most cases, we are also seeing a couple of things that can cause the model to make inaccurate predictions. Factors like lighting changes, time changes, uncommon objects, etc. can cause false detections. If you have encountered this problem (especially when the Person Detection worked well before but is less accurate recently), please try to adjust the angle of your camera slightly to see if the problem is resolved. In addition to the iteration of new models, we are also working on a solution that will let you know what caused the false detection.

Q: How can I enroll in the AI feature testing like with Pet detection and Face Recognition?

A: The test group is closed for now while we work on the next generation of the services. If you would like to help, you can submit videos with clear images of faces or pets inside the home to us. Thank you for your assistance!

20 Likes

I love this post, thank you for the updates and links to more interesting information.

I am curious about this. I have been able to set up and create facial groups, and it works a little bit, but it has not been very good so far and it hardly ever recognizes faces. So far I have 4 profiles, and it doesn’t sort videos with them very well or seem to do much. I hope this improves drastically and that there can be some triggers or exclusions based on identified facial profiles. I am guessing I don’t count as part of the testing group even though I’ve had some access to it, because I have not been notified of anything or how to help (though willing).

Your pet AI has become drastically better over the last couple of months, with one exception. Your AI is terrible at identifying my black cats as pets. It thinks black cats are people. I’ve submitted MANY, MANY corrections to this, but it often identifies my black cats as humans outside. It can recognize all my other cats correctly as pets, but it thinks black ones are human. Is this due to the racial bias phenomenon in AI detection that most companies struggle with? Many AIs have a strong bias and weakness when it comes to humans with darker skin, and I am wondering if any of those AI weaknesses are related to why the AI can’t figure out that a black cat is not human? (Please nobody turn this into a political or racially charged controversial issue.) I am sincerely asking why the AI has such trouble figuring out that a black cat is not human, but it can get it right for my other (orange, greyish, etc.) pets. Why does it struggle with the dark moving ones, and how can this be resolved? This is especially a big problem at night (less so during the day, but still a problem in the day sometimes too).

4 Likes

I have many boxes lying around, and since package detection is a very needed and much-used feature, I would be willing to send some videos to help its development. I know in some settings far fewer packages are delivered, so having more videos from those lighting situations could help the training.

I’ll definitely send in more pet videos too. I forget about that one quite often.

1 Like

At night, I often get a Person detected when a car drives by casting shadows on my yard. But I submit those videos, so I’m confident it will improve.

4 Likes

From the research I have done on this, it is mainly due to two things. One is a lack of training data: the data used to train models such as Amazon’s Rekognition is comprised mainly of light-skinned people, so the bias largely slides in there. And it is exacerbated by the loss of perceived depth in darker colors, both with skin and fur, which makes the smaller differences in Caucasian faces appear far more “complex” and gives more details to distinguish from, while darker skin reads as more monotone without nearly as many details to pull from to see differences. It is odd, though, that black cats are being seen as a person… it seems like the overall shape would be enough to distinguish it there. …on that one, I’ve got nothing.

2 Likes

Thanks for sharing your thoughts!

For face recognition, in addition to the v2 design, the team is also working on a couple of techniques to enhance face capturing, including deblurring, correction, etc. We are facing a couple of challenges here, like when the captured face is not a frontal face, or it is too blurry because the person is moving. Teaching the AI model to recognize an object category in general is one thing; teaching it to recognize each single individual in that category is another thing entirely. We’re working on the improvement right now.

In terms of the biases, like Bam mentioned, the training data plays a huge part in the problem. Wyze only uses feedback videos to train its model, so the bias certainly exists in the feedback video data set. To resolve that issue, on one side we’re reducing biases on the model side; on the other, we are trying to collect more diversified data to train it.
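To make that distinction concrete, here is a minimal sketch (not our actual pipeline; the profile names and the 0.65 threshold are just illustrative): detection only asks whether a face is present, while recognition compares an embedding of the captured face against enrolled profiles and accepts a match only above a similarity threshold.

    # Sketch of category detection vs. individual recognition.
    # Illustrative only; the enrolled profiles and the 0.65 threshold are
    # assumptions, not values from our service.
    import numpy as np

    def recognize_individual(face_embedding, enrolled_profiles, threshold=0.65):
        # Compare a face embedding against enrolled profile embeddings and
        # return the best-matching name, or None for an "unknown face".
        best_name, best_sim = None, threshold
        for name, profile_emb in enrolled_profiles.items():
            sim = float(np.dot(face_embedding, profile_emb) /
                        (np.linalg.norm(face_embedding) * np.linalg.norm(profile_emb)))
            if sim >= best_sim:
                best_name, best_sim = name, sim
        return best_name

    # Example: two enrolled profiles and a noisy view of one of them.
    rng = np.random.default_rng(0)
    profiles = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    query = profiles["alice"] + 0.1 * rng.normal(size=128)
    print(recognize_individual(query, profiles))  # -> "alice"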

4 Likes

Hey @WyzeShawn, just letting you know this Google form isn’t accessible.

4 Likes

Thanks for the updates, Shawn! My person and vehicle detection is working pretty well. The package detection works as well but I only have one camera that I use it on. Pet detection is hit and miss but I’m sure it will be improved over time. I’m very excited to try facial detection when it is available! I think it will be very helpful on my front doorbell. I’ll try to submit more videos through the app :slight_smile:.

1 Like

Great update, Shawn.

I’ve not run Wireshark to see what Cam Plus is doing, but I assume you do on-camera motion detection, which then triggers a stream that is analyzed with your AI.

I realize AWS has face support in their Rekognition offering (along with package, dog, rock, etc.), and you are already using Kinesis for ingestion, WebRTC, storage, etc., so is the decision to roll your own recognition based on cost?

I know the whole xnor.ai thing threw a huge monkey wrench into things. I resisted “upgrading” my firmware because of that and I prefer on-premises recognition and full encryption before storage (Wyze CAN hand over unencrypted video to authorities) so the whole HomeKit Secure Video route is much more attractive.

Mimicking that capability on a home hub (like the new Sense v2 hub) would make me a lot happier.

1 Like

Thanks for the reminder! Just changed the access

3 Likes

We’ve evaluated AWS Rekognition before. The major drawback is that Rekognition is not specifically optimized for Wyze use cases, and we cannot improve its performance.

We do value privacy and security as the No. 1 requirements for building the AI, whether it is on the cloud or on the device. It’s a matter of the time and resources we need to keep spending on it to make the solution better, but we’re on it.

1 Like

Had a chat with our scientist; he made two more points:

  1. In our data set there are lots of videos pointing downward with people’s dark hair inside. That’s another reason why black cats are recognized as a person.
  2. The problem is much harder with the night vision on the v2/Pan, as the picture is purely black and white.
5 Likes

Thanks for the follow-up, that is interesting.

After my comment yesterday, I decided I’d donate several more instances of the black cat-person situations, and while I did still find a bunch I could donate (that were originally identified as Person, but I corrected and submitted as Pet) to help give you guys some stuff to select from, I should mention that I was surprised there weren’t as many to choose from as there used to be! On the 2 cameras where it happens a lot, I was lucky if I could find 1-2 per day. It used to be like dozens per day, so your latest updates to the AI have actually made an incredibly noticeable difference!

Like I said, I still found some almost every day where it was misidentifying them, but it seems to have resolved 90% of the false-positives with the recent update. I’m impressed!

I used to have Alexa announce Person Detections on my back of house cam, but I eventually disabled it because it announced my cats a couple of times per hour. I used to submit lots of videos, but when I didn’t see a lot of changes happening for a couple of months, I slowed down on submitting corrections and just disabled the announcement temporarily (not complaining, I just knew I’d have to be patient).
Now, just a couple of weeks later I find that 90% of that has suddenly been resolved. Impressive! Well done Wyze! I guess I’ll re-enable my Alexa routine and go back to submitting more videos for you guys. I also have a baby, so I can probably help with some baby image and sound detection stuff…been using other companies that already do crying detection, but it will be good to have Wyze catch this. Also have a facial detection profile for my baby (Wyze recognized enough images to allow a profile for her face), so that was cool.

2 Likes

While Wyze will see what we all submit, I would love it if some people were willing to also post here what you submitted to Wyze as things they could consider doing.

I’ve submitted several ideas, and I’m sure Wyze won’t really do all of them, but I figured ideas can’t hurt (they might lead them to thoughts on something else they will do). I’ll share a few of my ideas with people here:

  • Keyword/phrase recognition. Maybe something like tell any cam with Cam Plus “Wyze, Activate HMS Security” and recognition of that key phrase can trigger a routine to enable the HMS security and send a push notification confirmation that it is secured (probably shouldn’t do verbal deactivation for security reasons though…just like how MyQ won’t allow Google Assistant to OPEN the garage because it’s a security risk, but does allow verbal commands to close and secure it). Wyze could expand this to some degree to have AI skills similar to Alexa, and just slowly build it out to have the AI focussed on routines and such for their ecosystem.
  • Could also make a special phrase like “Wyze, Remember this” to have the AI tag an event in a special way so you can easily find it later to download and save it, maybe something memorable someone said or something funny that happened. Then you don’t have to search through a bunch of junk to find the right event, you can easily pull up the right tag and be good to go (I save a ton of videos of funny and memorable things that happen with our family, and this would save some time finding the videos).
  • Musical instruments practice - particularly for the millions of students who are learning things like piano or in band classes, etc. They’re usually supposed to be practicing daily. My daughter has taken Piano, Trombone, and Percussion/drums, but she sometimes lies about practicing, or forgets to log when and how long (which her classes require). It would be nice to be able to quickly search the timeline for musical instruments and see if she practiced and when/how long. This could be marketed to schools to suggest to parents, or piano teachers, or tons of others. Just have it recognize the individual sound of common classical instruments (piano, drums, trombone, flute, violin, etc), especially those that are rarely used in modern radio music conglomerations…just solo sounds of the popular instruments. There are millions of kids this could work for. They say that nowadays 85% of children have played a musical instrument. If there are >74 Million kids in the US, that’s 62 Million that either do or have played a musical instrument and there will always be a new supply starting to learn every year in middle school/high school. Sounds like a smart business move to me, and something innovative competitors don’t offer yet.
  • Individual vehicle detection, similar to how facial detection profiles are done, allow me to register which vehicle belongs to my family (my car, my spouse’s car, etc). This can alert who just came home way faster than waiting for facial recognition, and give the AI enough time to process and alert me before the person is already standing in front of me.
  • Common Delivery vehicle recognition - UPS, FedEx, USPS, Amazon (company vehicles, not personal vehicles of contractors of course). Could allow us to get an alert about packages long before package recognition.

Can’t remember a couple of the others I submitted anymore…but I would be interested in others’ ideas. @davidnestico2001 care to share any ideas you were considering when you noticed the form wasn’t working?

2 Likes

This was an idea I threw into the survey as well. My thought was a step process to help make package detection more accurate, and maybe even recognize packages as they are being carried.

Step one: vehicle detection. Step two: the logo on said vehicle is for a marked and known delivery service (UPS, FedEx, etc.). Step three: look through the video for person and package detections within the surrounding time. My thought is that using those increments you might be able to identify a package that is carried to a door but set on a step or porch in such a way that it is out of view, and thus triggers no package detection alert. That type of detection could be labeled as “delivery vehicle,” letting you know there might be a package that is out of view.
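Just to make the idea concrete, here is a rough sketch of how that cascade could be wired up. The detector functions are placeholders I made up, not anything Wyze has said they use.

    # Rough sketch of the cascade described above. The detector functions are
    # placeholders I made up, not real Wyze APIs; a real version would run
    # actual models and restrict the checks to a short time window around the
    # vehicle sighting.

    def detect_vehicle(frame):        return frame.get("vehicle", False)
    def detect_delivery_logo(frame):  return frame.get("delivery_logo", False)
    def detect_person(frame):         return frame.get("person", False)
    def detect_package(frame):        return frame.get("package", False)

    def classify_event(frames):
        saw_logo = any(detect_vehicle(f) and detect_delivery_logo(f) for f in frames)
        saw_person = any(detect_person(f) for f in frames)
        saw_package = any(detect_package(f) for f in frames)

        if saw_package:
            return "package"
        if saw_logo and saw_person:
            return "delivery vehicle"  # package may have been set out of view
        return "motion"

    # Example: a van with a delivery logo, then a person, but no visible package.
    event = [{"vehicle": True, "delivery_logo": True}, {"person": True}]
    print(classify_event(event))  # -> "delivery vehicle"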

2 Likes

I like this because package detection worked REALLY well when I had a V3 watching my entire porch, but with the VDB, I was forced to choose between having it see most of my porch (but not a person’s face) or having it see people’s faces (but not see the packages on the porch)…so that is frustrating.

1 Like

I have both still :slight_smile: you never know

1 Like

So with this bounty thing, how are those of us who are unable to tag able to participate? I would love to help the AI, but I and others can’t tag and have submitted logs about it. It would help if we knew what else we could include in the logs we send. I, at least, would be glad to help.

1 Like

Perfect! Just filled it out.

1 Like