Sum number error?

This is my simple function ...

var x = msg.payload;
var temp = x.toFixed(1);
var tempNumber = parseFloat(temp);
var z = (tempNumber + 0.3);
node.status(tempNumber + " - " + z);
global.set("var_temperaturaCamerina", z);
msg.payload = z;
return msg;

Look at the node status and the value of "z" ...
Why? Why isn't it simply 17.9?

I tried to reproduce it, but here your code works as you expect....

So I would guess a node or system setting.

I tried it on Windows and Linux with Node-RED 2.2.

Try it with 17.6 ....

Try

var x = msg.payload;
var tempNumber = Math.round(x*10)/10;   // round the incoming value to one decimal place
var z = (tempNumber + 0.3);
node.status(tempNumber + " - " + z);
global.set("var_temperaturaCamerina", z);
msg.payload = z;
return msg;

or

var x = msg.payload;
var z = (x + 0.3);                                   // keep the raw value for the calculation
node.status(x.toFixed(1) + " - " + z.toFixed(1));    // round to one decimal place only for display
global.set("var_temperaturaCamerina", z);
msg.payload = z;
return msg;

I see the same error with your code if the temperature is a float. The two versions above do not show the error for me.

Your 2nd example works fine for me .... :+1: (not the 1st)

The reason your original version does not do what you expect is rounding errors in the floating point arithmetic. Any time you do arithmetic there is the possibility of rounding errors. The solution, as you have seen, is to round to the required precision when you want to display it, not before.
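
A minimal sketch of both effects in plain JavaScript, assuming an incoming value of 17.6 as suggested above: rounding before the addition (as in the first suggestion) lets the error back in, while rounding only when formatting the output hides it.

var x = Math.round(17.6 * 10) / 10;   // 17.6 – rounding before the arithmetic
console.log(x + 0.3);                 // 17.900000000000002 – the sum reintroduces the error

var z = 17.6 + 0.3;                   // keep the raw result for calculations and storage
console.log(z.toFixed(1));            // "17.9" – round to one decimal place only for display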

Why don’t my numbers, like 0.1 + 0.2, add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?

Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.

When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
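
Both effects are easy to see in a JavaScript console: the sum comes out slightly off, and 0.1 on its own is already stored as a slightly different number.

console.log(0.1 + 0.2);              // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);      // false

// 0.1 is already rounded before any arithmetic happens:
console.log((0.1).toPrecision(21));  // "0.100000000000000005551"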

Why do computers use such a stupid system?

It’s not stupid, just different. Decimal numbers cannot accurately represent a number like 1/3, so you have to round to something like 0.33 - and you don’t expect 0.33 + 0.33 + 0.33 to add up to 1, either - do you?

Computers use binary numbers because they’re faster at dealing with those, and because for most calculations, a tiny error in the 17th decimal place doesn’t matter at all since the numbers you work with aren’t round (or that precise) anyway.

What can I do to avoid this problem?

That depends on what kind of calculations you’re doing.

  • If you really need your results to add up exactly, especially when you work with money: use a special decimal datatype.
  • If you just don’t want to see all those extra decimal places: simply format your result rounded to a fixed number of decimal places when displaying it.
  • If you have no decimal datatype available, an alternative is to work with integers, e.g. do money calculations entirely in cents (see the sketch after this list). But this is more work and has some drawbacks.
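
A short JavaScript sketch of the last two options, assuming two decimal places are enough for the values involved:

// Option: keep the float and round only when displaying
var total = 0.1 + 0.2;
console.log(total);                    // 0.30000000000000004
console.log(total.toFixed(2));         // "0.30"

// Option: do money arithmetic in integer cents, so the sums themselves are exact
var cents = 10 + 20;                   // 0.10 + 0.20 expressed in cents
console.log((cents / 100).toFixed(2)); // "0.30" – convert back only for display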

Why do other calculations like 0.1 + 0.4 work correctly?

In that case, the result (0.5) can be represented exactly as a floating-point number, and it’s possible for rounding errors in the input numbers to cancel each other out - but that can’t necessarily be relied upon (e.g. when those two numbers were stored in differently sized floating point representations first, the rounding errors might not offset each other).

In other cases like 0.1 + 0.3, the result actually isn’t really 0.4, but close enough that 0.4 is the shortest number that is closer to the result than to any other floating-point number. Many languages then display that number instead of converting the actual result back to the closest decimal fraction.
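
Both cases can be checked directly in JavaScript:

console.log(0.1 + 0.4);           // 0.5 – exactly representable, so the output is clean
console.log(0.1 + 0.4 === 0.5);   // true

console.log(0.1 + 0.3);           // 0.4 – internally not exactly the decimal 0.4,
                                  //       but 0.4 is the shortest decimal that maps back to it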

Ok, thank you for the explanation .....

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.