Person Detection Update: A New Experiment for Premium Features

That is definitely a valid point. To be fair though, person detection was just a small part of it. XNOR was about a wide range of things, all centered on running adaptive AI capabilities very efficiently. Local person detection wasn't worth $200 million by itself, and it's likely Apple also had other ideas in mind.

Secondly, if it were so crazy complicated that only XNOR could pull it off, there wouldn't be so many competitors offering person, pet, and vehicle detection, all free and local. It may not be super simple to do, but it's obviously not as complicated as this implies if so many others can and do actually do it locally and for free…especially since, as was already pointed out, the capability is already a feature of the SoC the cam is built on.

Still, the point is well taken. I mean, I couldn’t pretend I’d be able to pull it off…but they should be able to find someone who CAN do it.

3 Likes

Exactly. And you know what’s kind of nuts? The algorithms they built from scratch for the cloud version can’t be much different at all from what they would need to run in a local version on the cameras. (An image is an image and a human shaped blob is a human shaped blob.) Refine and/or limit the scope of that code enough and they’d have an “edge” solution again and save all this extra consumer stress…

3 Likes

All the talk of on-camera PD being crazy complicated and only XNOR having the expertise!
Is this really true?
Well, the T20 chip used in the camera has human detection built in. See the data sheet previously mentioned. When chips have specific hardware features, the manufacturer usually provides detailed information and guidance on using those features, together with example hardware schematics and code. There is nothing clever about following instructions and reading the manual.

So if you only use SD cards and don’t care about person detection then this topic doesn’t affect you?

In fairness they refined the human detection quite a bit further with the T800. Pity about the phone book bug.

2 Likes

There are a lot of messages to read through and I may have missed it somewhere, but all I've seen are per-camera price suggestions per month or year… What I want to understand is the price to check one 12-second clip for a person. Just one clip. Maybe someone has hundreds of events a day and another only has tens. So if we are offsetting an accounting issue, what's the cost per event for the server time? If this was being done on board the camera before, then even if the new code requires 10X the amount of processing, that doesn't seem like a lot. In any case, I find it hard to believe that no one has an idea of this cost. And I hope this is being run on GPUs instead of CPU software, but it sounds like AWS servers.
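For anyone wanting to sanity-check the question above, here is a back-of-envelope calculation. All of the numbers (instance price, clips-per-second throughput, events per day) are made-up assumptions for illustration, not Wyze's actual costs:

```python
# Back-of-envelope estimate of cloud inference cost per 12-second clip.
# Every number here is an assumption for illustration only.

def cost_per_clip(instance_price_per_hour: float,
                  clips_per_second: float) -> float:
    """Cost to analyze one clip on a shared inference instance."""
    clips_per_hour = clips_per_second * 3600
    return instance_price_per_hour / clips_per_hour

# Example: a GPU instance at $0.50/hr that sustains 20 clips/second
# works out to well under a hundredth of a cent per clip.
per_clip = cost_per_clip(0.50, 20.0)
print(f"${per_clip:.6f} per clip")          # $0.000007 per clip
print(f"${per_clip * 200 * 30:.4f}/month")  # 200 events/day for 30 days
```

The point of the exercise is that, under any plausible throughput assumption, the per-event cost is tiny; the real question is what Wyze's aggregate bill looks like across millions of cameras.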

1 Like

Well of course it’s a lot. It costs infinitely more, because Wyze didn’t pay a thin dime of CPU / electricity when the processing was local to the camera. But yes, several people have asked what Wyze’s costs here are.

1 Like

I believe that’s correct.

Ingenic’s data sheet speaks of its “computer vision” when it refers to uses of the chip including face and human detection and license plate reading. I believe the references are for supported use cases, not actual built in functionality.

The T20 detailed technical docs discuss building human detection algorithms but make no mention of inbuilt functionality.

2 Likes

Who can tell without detailed documentation?
I read "Specification" as the main heading before the list of computer vision items.
Whatever the case, the manufacturer usually provides the product designer with all the information: sample designs, even evaluation hardware and software.

1 Like

Hmm, even the T10 touted enough power for face recognition.

https://www.mips.com/blog/ingenic-t10-processor-mips-based-360-camera/

Meanwhile the traitorous lying scum at Xnor bragged about the great job they did for Wyze, while mentioning how easy it was to model because the T20 was a MIPS processor.

Wyze was clear from the beginning that they wanted an on-device solution on their $20 camera that did not require them to charge customers an additional cloud-service fee.

3 Likes

Exactly: they built the functionality rather than using existing functionality. Having done embedded design for years, specifically DSP design, I am very familiar with reading chip specs. And nothing in the Ingenic processors would constitute inbuilt native functionality.

The MIPS design is essentially a RISC architecture. Great for use in embedded projects.

2 Likes

Another thought: by default with motion alerts all these cameras are programmed to take a 12 second clip, upload it, and then do Jack Squat for a full 5 minute “cool down”. Do you mean to tell me that they couldn’t devise a local image processing algorithm of their own, no matter how childish and inefficient, that could manage to analyze a 12 second clip using over FIVE MINUTES of spare processing power on the camera? Maybe reduce live image resolution during that time? It just seems improbable… And yeah, 5 minutes is a long time to wait but some PD is better than none (arguably).
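To put rough numbers on the cooldown argument above, here is a quick budget sketch. The frame rate and the per-frame detector time are hypothetical, chosen only to show the arithmetic:

```python
# Sketch: could a slow on-camera detector cover a 12-second clip during
# the 5-minute cooldown? All numbers are hypothetical for illustration.

COOLDOWN_S = 5 * 60          # idle window after each upload
CLIP_S = 12                  # length of the recorded clip
CLIP_FPS = 15                # assumed recording frame rate

def frames_analyzable(seconds_per_frame: float) -> int:
    """Frames a detector needing `seconds_per_frame` per frame can
    process within the cooldown window."""
    return int(COOLDOWN_S / seconds_per_frame)

total_frames = CLIP_S * CLIP_FPS         # 180 frames in the clip
budget = frames_analyzable(2.0)          # detector takes 2 s per frame
print(total_frames, budget)              # 180 150
# Even a 2-seconds-per-frame detector covers 150 of 180 frames;
# sampling every other frame would finish with time to spare.
```

Under these assumptions, even a painfully slow detector could sample enough frames of the clip before the cooldown ends, which is the poster's point.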

2 Likes

4.10.5.111 on both cameras. V2.11.41 on the app. Wyze services lists person detection (pilot) under enabled. I don’t have a subscriptions section inside account. And yes I get notifications that say person detected and there is the toggle filter for the clips.

1 Like

What do we do if we definitely received the email back in November with the promise and yet didn’t receive this new email you mention?

1 Like

Haha. Sorry I haven’t programmed for a good 30+ years… so I was not aware cappuccino is a programming language!! LOL.

My point was that I do know AI algorithms can be condensed (hence the cappuccino comment) into trained static algorithms that bypass the heavy CPU burden of full AI, and then this code could be moved to ML coding that would have the smallest size and fastest execution on the low-end processor already used in the WyzeCam.

I believe the original WyzeCam person detection used a very simple 5 or 6 point stick man algorithm. It certainly was not as good as the current system, but it did work well if the subject covered a greater percentage of the visual frame.
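As a toy illustration of what a "stick man" style heuristic could look like, entirely hypothetical and not the original algorithm:

```python
# Toy "stick man" heuristic: call a detection a person if a handful of
# body keypoints line up roughly head-over-torso-over-legs.
# Hypothetical illustration only.

def looks_like_person(keypoints):
    """keypoints: dict of name -> (x, y); image y grows downward."""
    needed = ("head", "shoulders", "hips", "feet")
    if not all(k in keypoints for k in needed):
        return False
    ys = [keypoints[k][1] for k in needed]
    # Body parts should appear in top-to-bottom order.
    return ys == sorted(ys)

standing = {"head": (50, 10), "shoulders": (50, 30),
            "hips": (50, 60), "feet": (50, 100)}
print(looks_like_person(standing))          # True
print(looks_like_person({"head": (0, 0)}))  # False
```

A heuristic this crude would, as the poster says, only work when the subject fills a large share of the frame, since the keypoints have to be distinguishable at all.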

Anyway, the current trained AI should be exported as a static algorithm and then moved back to the hardware for on-board processing. That would resolve the entire cloud/pay issue.
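One common way a trained cloud model gets "exported as a static algorithm" for a small device is post-training quantization: float weights are mapped to 8-bit integers plus a scale factor. This pure-Python toy sketches the idea; it is not Wyze's actual pipeline:

```python
# Post-training quantization sketch: shrink float weights to int8 plus
# one scale factor. Illustrative toy, not a real deployment pipeline.

def quantize_int8(weights):
    """Map float weights to int8 values and a shared scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(w)
approx = dequantize(q, scale)
# int8 values stay within [-127, 127]; reconstruction error is bounded
# by half a quantization step (scale / 2).
assert all(-127 <= v <= 127 for v in q)
assert all(abs(a - b) <= scale / 2 for a, b in zip(w, approx))
```

Quantized like this, the model stores one byte per weight instead of four and runs with integer arithmetic, which is exactly the kind of shrinking a low-end camera SoC needs.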

Additionally, for users like myself who have over 50 active IPs on their network (6 of mine are WyzeCams), it would reduce network burden by decreasing uploads to the cloud (with 6 cameras, I have hundreds a day).

1 Like

The old firmware will work with the new app?

I’ve avoided upgrading both to keep the old person detection but I’d upgrade the app if it still works with the old firmware.

It would seem to. I have the old Android app and the new iPad app. Both are receiving the same alerts and video clips.
You can always go back on Android by downloading the old app.

I’m on CAM Plus since I was a CMC subscriber. I finally don’t feel like a second class citizen having to choose CMC over person detection. Thanks for that. I’m still getting alerts for persons detected outside of my detection zone. Wish that would be addressed. Also, the ability to create an irregular shaped detection zone would be nice.

2 Likes

You have the Pilot version of Person Detection. I assume everyone does now.