Audio/video compression codec used by Wyze

Unfortunately, this subject is more complicated than it sounds, and we would probably need a link to the YouTube video you saw.

This is a user-to-user forum, so you are mostly talking to fellow users, but there are a few Wyze people who intermittently view some of the posts.

The choice of CODEC usually determines whether you can hear an audio stream at all.

Quality, on the other hand, usually comes down to the hardware or firmware on the device (the camera) and whether it can deliver the stream faithfully.
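
If you are curious which codecs a particular clip actually uses, a tool like ffprobe (part of FFmpeg) can report them. This is just a rough sketch, assuming you have FFmpeg installed and the clip is saved as “clip.mp4” (the filename is only an example):

```python
# Rough sketch: ask ffprobe (part of FFmpeg) which audio/video codecs a clip uses.
# Assumes FFmpeg is installed and "clip.mp4" is a clip saved from the app or SD card.
import json
import subprocess

def show_codecs(path="clip.mp4"):
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "stream=index,codec_type,codec_name",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    for stream in json.loads(result.stdout).get("streams", []):
        # codec_type is "video" or "audio"; codec_name is e.g. "h264", "aac", "pcm_alaw"
        print(f"stream {stream['index']}: {stream['codec_type']} -> {stream.get('codec_name', 'unknown')}")

if __name__ == "__main__":
    show_codecs()
```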

Wyze cams have been known for issues with the latter. In the days of the V2, the audio itself was pretty faithful, but the cost-effective processor couldn’t keep up. The result was sound delays – you would see something happen on screen, and it might be seconds before you heard the audio for that action.

With the V3, the cost-effective processor was improved, but the camera started out with several other problems. Many of those have since been fixed in firmware as the camera matured.

As far as the SD card is concerned, neither live view nor Events ever ‘stole’ footage from it as far as I know (you couldn’t depend on it being there). Nor does the camera pull anything from the cloud unless you are off-premises. The SD card is a separate system, and the cloud is not involved in anything sent to and from your own local network. Maybe they took video from the SD card when one was installed, but took it from live view otherwise? We would need to see the YouTube video you speak of to understand what they did.

Anything ‘normally’ mentioning audio CODECs on the SD card is probably referring to the fact that raw SD card recording uses the “A-Law” CODEC. So if you view the card from a PC, you need an A-Law CODEC installed to hear the audio. If you view or save a clip from the app, the audio is converted to the more common “AAC” format and is easy to hear. But that isn’t about quality – that is about whether you can hear it at all.
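
If someone wants to make an SD card clip playable on a PC without hunting down an A-Law CODEC, FFmpeg can re-encode just the audio track to AAC while copying the video untouched. A minimal sketch, assuming FFmpeg is installed and the file names are placeholders (how the audio shows up in your clip may vary by firmware):

```python
# Minimal sketch: copy the video stream as-is and re-encode the audio track to AAC,
# which most PC players handle out of the box. File names are placeholders.
import subprocess

def convert_audio_to_aac(src="sdcard_clip.mp4", dst="clip_aac.mp4"):
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "copy",   # leave the video stream untouched
         "-c:a", "aac",    # re-encode the A-Law (or other) audio to AAC
         dst],
        check=True,
    )

if __name__ == "__main__":
    convert_audio_to_aac()
```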

So a link to the YouTube video you watched would definitely help us understand what you saw. :slight_smile:
