Wyze Cam v4 - Released 3/26/2024

That actually seems like a cool way for the features to work in tandem. I appreciate your explanation about this, even if it doesn’t jibe with how the Wyze Help Center tries to draw the distinction between the two (at least not the way I read it).

I don’t expect either to be a replacement for the other, and I don’t think they’re marketed as equivalents (which is one thing Wyze does get right), but it seems pretty obvious at this point that Edge AI on the Cam v4 is useless without the Cloud AI access that a Cam Plus (or better) subscription provides.

Absolutely.

Also, I need to apologize for not mentioning this sooner: I hope that you and the toddlers (and the rest of your family) are feeling better at this point and at least on the road to recovery if not fully convalesced.

1 Like

Yeah, I think publicly, including in that article, they’re defining full-fledged “Edge AI” based on this distinction, which separates it from the lesser model incorporated on all the other cams:

Edge AI is only available on Wyze Cam v3 Pro, Wyze Cam v4, and Wyze Cam Floodlight Pro, and is powered by an on-board chip.

Those cams can afford to have a dedicated chip for the Edge AI, with a model powerful enough to reach a sufficient confidence score all by itself. The others have neither a dedicated chip nor a strong enough model for a high enough confidence score. That doesn’t mean they can’t do ANYTHING to improve the info they send to the cloud for detections, but it does mean they absolutely can’t do accurate local detections by themselves. They also might not have sufficient resources to support doing so anyway.

Honestly, they shouldn’t even mention the local/edge AI in the V4 cam. They should probably just say that they found a way to help the V4 get faster AI Notifications than most previous models. That’s the only thing anyone is going to understand about it. Saying it has local/Edge AI causes a lot of confusion, because people can’t fathom why it wouldn’t be free the way it is from every other company that has local/edge AI. I can’t think of any company that advertises local/edge AI but charges a subscription to unlock it. It sounds ridiculous and makes a person wonder what the reasonable rationale could possibly be for that.

1 Like

They do, though, so that cat’s out of the bag, and I think it’s incumbent upon them to do a better job of explaining the expected customer experience as a result. Saying things like “[a] subscription to Cam Plus is required on Wyze Cam v4 and Wyze Cam Floodlight Pro to access the full feature list of Edge AI” without defining this “full feature list” is not helpful at all and just leads to more questions than answers.

I might be such a person. :wink:

1 Like

I think this is just marketing speak. The V3 Pro was probably advertised with “Edge AI” because of the new Ingenic T40 chip; Ingenic’s official website advertises it too. You can argue how accurate or useful it is, but their SoCs are all capable of some level of local AI.

  • OG/OG-T uses Realtek Ameba
  • V3 & V3 Pan use Ingenic T31
  • V3 Pro uses Ingenic T40
  • V4 uses Ingenic T41

A few Tapo cameras and doorbells use the same SoCs as above and offer AI detection for free, but I can’t confirm whether they are using local AI on the SoC to provide those features.

2 Likes

I know this is old, but I just thought about this. What if the V4 person detection logic is there, but it isn’t used for AI object notification? What is it for then? Smart Focus?

It’s still fresh on my mind, and I’m still disappointed with Wyze about this. :man_shrugging:

I think that’s a valid question, and I sort of wondered the same thing. In one of the Help Center articles I referred to above, there’s the mention of “the full feature list of Edge AI” but no actual enumeration of that list nor any further detail (that I can find) about it.

I think you’re asking good questions, but I don’t expect any direct and coherent answers from Wyze given their muddled messaging on the matter so far.

Smart Focus currently seems to be based just on pixel-change detection (motion), since there are certain cases where I have found it to follow movement other than an AI object.

I think it is basically just for making things faster for anyone who does choose to pay for Cam Plus. Then you get notifications much quicker (the local AI sends the push notification immediately), and the local AI can then tell the cloud AI what to give priority analysis to for quicker confirmation of the initial/tentative detection.

At least, that is how I have long understood the primary point of the new local model’s coordination process when some employees brought it up 1-2 years ago.
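To make that flow concrete, here’s a rough sketch of the two-stage process as I understand it. Everything in it (names like `local_model`, `cloud`, `notifier`, and the 0.5 threshold) is invented for illustration; this is not Wyze’s actual code or API:

```python
# Hypothetical sketch of the two-stage flow described above: the on-board
# model fires a fast tentative notification, then hands the cloud AI a hint
# about where to look. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "person"
    confidence: float   # local model's score, 0.0 to 1.0
    region: tuple       # (x, y, w, h) box the local model flagged

def handle_motion_event(frame, local_model, cloud, notifier):
    detection = local_model.classify(frame)   # runs on the camera's chip
    if detection.confidence >= 0.5:           # threshold is made up here
        # Push immediately instead of waiting on a cloud round trip.
        notifier.push(f"Tentative {detection.label} detected")
        # Tell the cloud AI what to prioritize when confirming.
        cloud.analyze(frame, hint_region=detection.region,
                      tentative_label=detection.label)
```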

1 Like

That doesn’t really comport with my understanding of Smart Focus. The Help Center article for Cam v4 describes it like this:

  • Smart Focus. Focus on what matters, while still viewing the bigger picture.

That doesn’t really say much, but a similar article about Cam v3 Pro provides a little more detail:

  • Smart Focus feature zooms in on motion on the Live stream, alongside the bigger picture.

That’s been my experience with Smart Focus on Cam v4, and it agrees with something @Antonius wrote:

Maybe it actually does factor into notifications for subscribers in some way, but, not being a subscriber, I can’t speak directly to that. In my limited experience, it just seems like a live-view enhancement.

That’s probably the way it is now, but think about the wasted opportunity here: you have a superior way to identify an object but still want to rely on iffy pixel-detection logic?

1 Like

I think there might be a misunderstanding. Those articles seem to say exactly what I did. :thinking: I’ll try to clarify what I am saying, and you correct me where/if I misunderstand you as well:

On all of Wyze’s wired cameras, “motion” is determined solely by pixel changes. Raising or lowering motion sensitivity just changes the threshold for what counts as motion: lower sensitivity means more pixels need to change between the previous frame and the current frame in order for something to be considered “motion”. If the pixels don’t change, then there is no motion. This is why shadows and light count as “motion” even though nothing actually moves: light and shadows cause pixel changes between frames, and that is the definition of motion on most smart cams (unless they add PIR or radar sensors, etc.).
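To make that concrete, here’s a minimal sketch of that kind of frame-differencing motion detection (Python with OpenCV). It’s purely illustrative, not Wyze’s firmware, and the mapping from the sensitivity slider to a pixel threshold is invented for the example:

```python
# Minimal sketch of pixel-change ("motion") detection via frame differencing.
# Not Wyze's actual implementation; the sensitivity mapping is made up.
import cv2

def detect_motion(prev_frame, curr_frame, sensitivity=50):
    """Return True if enough pixels changed between frames to count as motion.

    Lower sensitivity raises the bar: more pixels must change to register.
    """
    # Work in grayscale; "motion" here is purely brightness change per pixel,
    # which is why shadows and lighting shifts register as motion.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Absolute per-pixel difference between the previous and current frames.
    diff = cv2.absdiff(prev_gray, curr_gray)

    # Mark pixels whose brightness changed by more than a small amount.
    _, changed = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Map sensitivity (1-100) to the fraction of pixels that must change;
    # e.g. sensitivity 50 -> about 5% of the frame must differ.
    required_fraction = (101 - sensitivity) / 1000.0
    changed_fraction = cv2.countNonZero(changed) / changed.size

    return changed_fraction >= required_fraction
```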

So, when those articles say the following, they are agreeing with what I said about pixel change detection (motion):

Smart Focus feature zooms in on motion on the Live stream, alongside the bigger picture.

When it says motion here, it means any pixel changes like I was saying. It will follow the biggest grouping of pixel changes. So, this might be a big tree branch waving around instead of the person walking around on the opposite side.

Smart Focus. Focus on what matters, while still viewing the bigger picture.

This is assuming that “what matters” is usually whatever has pixel changes between different frames (motion). This is usually true too. I mean who cares about the background things that are totally stationary and have been the exact same for years? You only care about what CHANGED and is different in the image. But it can’t track both the left side and right side at the same time, so it is possible it might track the wrong thing when it follows the grouping with the most pixel changes. This is why it still has a small window with the full picture overview.
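If you wanted to mimic that “follow the biggest grouping of pixel changes” behavior yourself, a rough sketch might look like this (hypothetical Python/OpenCV, not Wyze’s actual Smart Focus code). Notice that nothing in it knows what a person is, which is exactly why it can lock onto a waving branch:

```python
# Hedged sketch: zoom on the bounding box of the largest blob of changed
# pixels. Purely illustrative; not how Wyze actually implements Smart Focus.
import cv2

def smart_focus_region(prev_frame, curr_frame):
    """Return (x, y, w, h) of the biggest grouping of changed pixels, or None."""
    diff = cv2.absdiff(
        cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY),
    )
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Group the changed pixels into connected blobs.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # nothing changed, so nothing to focus on

    # Follow whichever blob is biggest -- which may be a waving tree branch
    # rather than the person walking on the opposite side of the frame.
    biggest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(biggest)
```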

Smart Focus is useful, but it’s not perfect. It also often zooms in on my chest or other parts of my body and will cut off my face. Like, I respect that it doesn’t find me attractive, but it still hurts my feelings a little. :joy: j/k…I mean, it really does cut off my face sometimes, but doesn’t hurt my feelings. My point is just that it doesn’t specifically follow AI objects like a person; it simply follows the grouping with the most pixel changes, which isn’t necessarily always the most important thing.


Exactly. I have told them a few times that it would be much more useful if Smart Focus followed AI objects instead of any motion. At the very least, it should try to give priority to the face/head instead of cutting it off when close.

2 Likes

Oh, c’mon, man! When have I ever had any kind of misunderstanding or had to correct something I previously wrote?! It’s not like I’ve ever done that twice in the same topic or anything![1][2] :roll_eyes::crazy_face:

I actually agree with everything you wrote about motion detection for wired cameras, because that does comport with my understanding of things I’ve read. What didn’t make sense to me was when you seemed to connect Smart Focus to notifications for subscribers, but now that I’m re-reading that, I think maybe that’s not what you were doing at all.

I think maybe you were answering @p2788deal’s questions separately and that “it” didn’t mean “Smart Focus” in the second case, to wit:

I think the antecedent to the “it” here is “Edge AI”, but that’s not how I originally read it.

Given this context, I think we both could’ve been more precise and clear in our responses. That’s my current read of the situation, anyway.

That’s the AutoRude™ feature. I think I read somewhere that it’s still in β, but don’t quote me on that. :innocent:

It just hurts your face! Another senseless victim of cam violence. :candle::pray:

That’s a great suggestion. They should also let us “Cam Nekkid” plebes use the features that are already on-board within the camera, while they’re at it. :wink:



2 Likes

Oh yeah, simply misunderstanding then. :+1:

My response was simply to clarify that the local AI doesn’t do anything with Smart Focus at this time, though I wish it did.

2 Likes

Was the last comment by Tex2 deleted simply because it was critical? He was professional and everything he said was accurate. Deleting such a comment further validates his criticisms about Wyze.

@robberstea The comment was deleted because the same comment was cross-posted in multiple threads in the forum. This is a violation of our Community Guidelines. One copy of the post was retained, which you can find here.

3 Likes