Scale cheats in order not to appear inconsistent

When the scale takes the first measurement, it locks that in and keeps reusing it for a while (I haven’t tested how long). You keep getting the same number even after you lose some weight in the bathroom or drink some water. I imagine the idea is to make the scale appear consistent so that if you keep stepping on and off, it will always show the same number.

The problem with that behavior is that it makes me stop trusting the scale. How do I know it's anywhere near accurate if I can't take multiple measurements and compare them? When I do that, I'm typically making sure the scale is sitting on a perfectly flat spot and not wobbling. But in this case, it just feels like the scale is so inaccurate that the Wyze engineers decided to cover it up with this trick.

I have a Wyze Scale S, have had it since around when it came out, and have not experienced this. If I step on the scale while holding something, then step on it again without holding that item, the measurement changes. In my scale's history I have many instances of different measurements within a few hours, and I never feel like it's trying to "cheat".


My Wyze scale was activated April 23, 2020.

I weigh myself every Sunday, and the readings vary depending on how much ice cream I have eaten, or how sick I have been. Just got on the scale and weighed myself. Picked up my ~12 pound pup and I weighed about 11 pounds more. Put her back on the floor and weighed myself again and had the same reading without the dog.

Have you tried resetting the scale?

(https://support.wyze.com/hc/en-us/articles/4407326064027-How-do-I-factory-reset-Wyze-Scale-S-)

To factory reset the scale:

  1. Tap on the scale surface to wake it up.
  2. On the back of the scale, press and hold the reset button for ~5 seconds.
  3. When CLr displays, your scale has been reset.

Why most smart scales account for things like fluctuation ranges, average/median, range overlap, and anchoring

I can't speak for Wyze or what Wyze does, but I can tell you what I learned about why MOST, if not all, digital scales, particularly smart scales, do a small degree of consistency compensation within a small range of weight, and sometimes time, as the OP is hypothesizing here. I can't personally confirm that Wyze scales do this, but I do know that most do, and I will explain why based on the hyper-fixated research I did a few years ago when I was deciding on a smart scale (I eventually settled on the original Wyze scale after comparing tons of them).

As I said, I learned it is actually fairly common practice for the vast majority, if not all, smart scales to do this within a small range of weight shift between 2 close measurements (like when the new measurement is only a few tenths of a pound different from one taken a few minutes earlier). I also strongly dislike it, though I understand it to a degree.

From what I understand, part of the problem is that a weight measurement can change slightly depending on several factors, including where you place your feet on the scale between 2 measurements, how you distribute your weight, and how steady you are. Believe it or not, nobody ever actually "stands still"… if you pay REALLY close attention while standing still and looking downward, you will notice your breathing slightly shifting your weight, and your body automatically compensating for balance as you almost imperceptibly sway around in slight movements. These movements are even more dramatic when you are not actively paying attention to them and trying to still them as much as possible. All of this causes the weight measurement to constantly fluctuate a LITTLE bit, and the more sensitive and accurate a scale is, the more fluctuation there will be.

Ironically, in the past, manufacturers who showed truly raw digital scale readings found that people HATED the inconsistency and constant flux, and wanted precision… hence why most digital scales "lock in" a number to tell you what your weight is. Some of them might measure the fluctuation while you stand there for X seconds, remove the outliers, and give you the average over that period, or use various other ways to determine what number to show you. This appealed to people and increased satisfaction.

But it didn't necessarily resolve the inconsistency issues between measurements. A person would step on and off the scale multiple times in a row without having changed anything other than their foot placement/positioning and a slightly different distribution of weight on the sensors, and would get a slightly different weight. Then people would get very upset that the scale "sucks" because the measurement was different by fractions of a pound even though they hadn't added or removed any weight since the previous measurement… but the scale also didn't lie about the slight variance of a tenth of a pound or 2 up or down. Because people don't understand how these things work, they became upset that the scales showed a slightly different weight instead of being exactly consistent 100% of the time. And the more sensitive the sensors, the more likely there was slight variance this way instead of exact repeatability, because the scale picks up weight distribution, body sway, and everything else that creates varying pressure shifts that are indistinguishable from weight changes because of the way gravity works.

That leaves manufacturers with a dilemma: if they ACTUALLY show people exactly what the scale is detecting as precisely as possible, people won't buy their product and will believe it is a worse, cheaper product, when the problem is actually their misunderstanding of physics, not the device. They see a scale that can't make up its mind, that is constantly fluctuating while you stand there, or constantly giving inconsistent readings between nearby measurements when you have not actually added or removed any mass between them.
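To make that "lock in" idea concrete, here is a minimal Python sketch of one way a scale could collect a few seconds of fluctuating readings and reduce them to one displayed number. This is purely hypothetical on my part, not Wyze's firmware or any vendor's actual algorithm, and the trim fraction is a made-up number:

```python
# Hypothetical sketch only: one way a scale could turn a few seconds of
# fluctuating sensor readings into a single "locked in" number.

def lock_in_weight(samples, trim=0.2):
    """Drop the most extreme readings from each end, then average the rest.

    samples: raw readings in lbs collected while the user stands on the scale
    trim:    fraction of readings to discard from each end as outliers
    """
    ordered = sorted(samples)
    k = int(len(ordered) * trim)
    core = ordered[k:len(ordered) - k] if k else ordered
    return sum(core) / len(core)

# Example: readings that sway between roughly 135.35 and 135.89 lbs
readings = [135.41, 135.89, 135.35, 135.52, 135.60, 135.48, 135.55, 135.57]
print(round(lock_in_weight(readings), 1))  # ~135.5
```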

So to compensate for all of this, it is now common practice for a lot of digital scales to have a compensatory algorithm in 2 ways, instead of just 1. First, they will measure for a while and then, after a set interval, "lock in" a weight measurement that best represents the small fluctuations during that time period. Secondly, many scales will also ANCHOR subsequent "close" measurements to that recent measurement in order to maintain consistency. That reported measurement is actually just as accurate as the previous one in a way, because both readings actually had a range of measurement and the two ranges overlapped, so the scale may decide that the new average is only slightly different due to new foot placement, different weight distribution, movement/swaying, or whatever, but most likely still the same weight as the previous measurement because their ranges are almost identical (a rough code sketch of this anchoring idea follows the example below). Let's consider a fictional example on a fictional scale:

A person, let’s call them “Pumpkin” (my orange cat Pumpkin is sitting next to me, so that’s the name that came to mind) buys the fictional Smart Scale “Fex” (I was thinking of something I would like to step on, and how I have often complained about FedEx screwing up deliveries, so this made up name is inspired by that).

Pumpkin steps on the Fex scale, and while the measurements sway a lot at first, they eventually become less dramatic, and the less dramatic shifts up and down cover a weight range of 135.35 to 135.89, with the AVERAGE of all measurements in that range being 135.54, so pretty close to 135.5. (Also, I said average, not the exact median or midpoint of the range; the midpoint would be 135.62, which could be another way some scales COULD choose which weight measurement to give instead.)
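As a quick illustration of that average-versus-midpoint distinction, the sample values below are made up to roughly match Pumpkin's range; they are not from any real scale:

```python
# Made-up readings roughly spanning Pumpkin's 135.35-135.89 lb range,
# to show how the sample average can differ from the range midpoint.
readings = [135.35, 135.44, 135.50, 135.52, 135.55, 135.58, 135.62, 135.89]

average = sum(readings) / len(readings)         # pulled toward where readings cluster
midpoint = (min(readings) + max(readings)) / 2  # depends only on the two extremes

print(round(average, 2))   # ~135.56
print(round(midpoint, 2))  # 135.62
```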

Pumpkin takes a drink from his water bottle, but only swallows about 1.6 oz of water (roughly 0.1 lbs of stationary weight) before seeing his friend Aurora and calling her over to come check out the progress he's made (whether that is progress in gaining muscle weight, or losing fat, or whatever Pumpkin's goal is, we don't know). Pumpkin excitedly gets back on the Fex scale to show Aurora his progress, and in this new position, maybe even with his heart beating a little faster and his body swaying a bit more, the Fex scale measures his weight range as 135.29 to 136.08, with an average of around 135.69. You would think it should give the new "locked in" measurement as 135.7, and Pumpkin expected it would say 135.6 (just 0.1 lbs more than the first measurement, since he drank that much water), but for some reason it still says "135.5" as if Pumpkin didn't drink any water… and now he feels a little outraged and lied to, knowing he definitely drank some water and this Fex scale didn't take it into account.

The Fex scale didn't exactly lie, either. It didn't show Pumpkin the full ranges it measured; it didn't show him how his weight distribution or swaying affected the readings; it didn't explain the range, median, average, or any of that. It noticed this was almost certainly the same individual as before and that the ranges were VERY similar to the previous reading, nearly identical. So it anchored the new measurement to the previous one, since it fell within the set variance tolerance, and showed that.
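Here is the anchoring sketch I mentioned above, with Pumpkin's numbers plugged in. Again, this is my own hypothetical illustration, not Wyze's algorithm; the Measurement fields, the 0.5 lb tolerance, and the 10 minute window are made-up values:

```python
# Hypothetical anchoring sketch: if a new reading is close enough to a recent
# one and their fluctuation ranges overlap, keep showing the earlier number.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    value: float      # the locked-in weight shown to the user (lbs)
    low: float        # bottom of the observed fluctuation range
    high: float       # top of the observed fluctuation range
    timestamp: float  # seconds since some reference point

def report_weight(new: Measurement, previous: Optional[Measurement],
                  tolerance: float = 0.5, window_s: float = 600) -> float:
    """Return the number to display, anchoring to `previous` when appropriate."""
    if previous is not None:
        recent = new.timestamp - previous.timestamp <= window_s
        close = abs(new.value - previous.value) <= tolerance
        overlap = new.low <= previous.high and previous.low <= new.high
        if recent and close and overlap:
            return previous.value  # anchor: report the earlier value again
    return new.value

# Pumpkin's two weigh-ins from the story above
first = Measurement(value=135.5, low=135.35, high=135.89, timestamp=0)
second = Measurement(value=135.7, low=135.29, high=136.08, timestamp=120)
print(report_weight(second, first))  # 135.5 -- anchored, just like the Fex scale
```

A big enough change, like picking up a 12 pound dog, would fail the tolerance check in a sketch like this, and the new value would be shown instead.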

If Pumpkin measures a 3rd and 4th time, this time without adding or removing any mass from his body, the scale's average measurement might actually be lower than its second measurement. The range and average might come in as low as the first reading, and possibly lower! He would definitely be outraged if he drank some water and then got a LOWER weight shown to him… but if the scale did show him a lower weight after he drank a little water, it wouldn't be lying either; the range and average can fluctuate a good amount depending on several factors. So in order to prevent a situation where Pumpkin drinks a little water (adds mass), then weighs in and gets a range whose average is slightly LOWER than his previous measurement, the Fex scale anchors to a previous measurement that is within the set tolerance levels.

From what I read, this kind of measurement "anchoring", within small tolerance amounts and within short periods of time, does not actually significantly hurt "accuracy", and in some ways it is just as accurate. That seems ridiculous, but the argument is that the locked-in measurement is already an estimation, not an exact precise number, due to the fluctuation. If we wanted true precision, a scale would have to give us the full range of measurements over a period of seconds instead of a locked-in average or median or whatever else. The argument is that anchoring is no more inaccurate than the other methods scales use for determining weight; the algorithm has just been updated to take another range of measurements into account. It has even been suggested that some scales with enough memory can store the full range of a measurement, not just the average they showed us, and when a second measurement is taken within a few minutes, if the second range overlaps the first range enough, it may be more accurate to combine the 2 ranges and take the average of them both together. I am sure there are scales that do this. Instead of anchoring, they are merging, and the new merged measurement is arguably the most accurate. It is also more likely to be similar to the first measurement, but SOMETIMES it could change if the shift is dramatic enough. All scales will ignore previous measurements when the change between them and the second measurement is large enough, but that cutoff can vary by device.
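Here is a similarly hypothetical sketch of that merging variant, where two overlapping ranges are combined instead of anchored. The 0.3 lb minimum overlap is an arbitrary number of my own, not from any real product:

```python
# Hypothetical merging sketch: combine two overlapping fluctuation ranges and
# report the center of the combined range; otherwise just use the new range.

def merged_weight(prev_low, prev_high, new_low, new_high, min_overlap=0.3):
    overlap = min(prev_high, new_high) - max(prev_low, new_low)
    if overlap >= min_overlap:
        low, high = min(prev_low, new_low), max(prev_high, new_high)
    else:
        low, high = new_low, new_high
    return round((low + high) / 2, 1)

# Pumpkin's two ranges overlap by ~0.54 lbs, so they merge into one range
# (135.29 to 136.08) and the displayed value lands at the merged center.
print(merged_weight(135.35, 135.89, 135.29, 136.08))  # ~135.7
```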

Again, to be VERY clear, I am not speaking for Wyze. I have NO IDEA what algorithms Wyze has implemented in their scale measurements. I am simply explaining that MOST scales apparently do these things, and the ones that don't get criticized, poorly reviewed, and cast aside as terrible, inconsistent devices that nobody should buy. So most of them do it because people demand it, and companies make what people say they want.

Sorry this is so long, but I had similar concerns a few years ago, looked into this, and thought I'd share what I learned. Who knows; technology progresses and changes quickly, and companies don't give intricate details about exactly how they determine their individual measurements, so we'll probably never know for sure. As for Wyze, I was very impressed with the original scale and my tests with it. I now have a Scale X and love the extra data graphs. I never used a Scale S, but I believe all 3 scales were made by the same manufacturer, so they should all work fairly similarly, though I can't be absolutely sure. Either way, however Wyze has chosen its measurement variance solutions, it is very difficult to find a scale, particularly a smart scale, that hasn't done at least some small degree of locking in a measurement range, anchoring, considering range overlap, and other such things. At least most of the good ones come up with some kind of solution along these lines, even if implemented slightly differently… otherwise people on the other side of things get extremely upset when their mass hasn't changed but the scale measurement changes by 0.1 lbs or more every time they step on it.


I have seen this behavior on mine, but only if the fluctuation is within a pound or so. I hate it, though, because if I want to get an average, I have to pick up a glass of water or a full shampoo bottle, take a sufficiently higher measurement, then do the unmodified measurement again.

I understand why the vast majority of scales do this, but I wish it were disclosed, and I wish there were a way to disable this "feature", as they might characterize it, instead of what it really is: cheating.