Discrepancies between single and double precision sum over time
I attempted to isolate and better understand an issue happening in a more complex model. It came down to an internal "clock" we use to count elapsed time. For a sample time of 0.05, the clock implementation is just a running sum coupled with a delay. However, we noticed a considerable cumulative error between the single- and double-precision versions. The magnitude of the differences seen in the scope is far larger than what the precision difference between single and double alone would suggest: there is a 1 s drift after merely 2100 seconds.
Something else that confuses me is why the difference seems to shift direction (at t≈1000 s and t≈4100 s). Any insights would be appreciated.
0 Comments
Accepted Answer
Jan
2021-12-16
This is the expected behaviour. Remember that a running sum is a numerically unstable operation.
d = zeros(1, 1e7);            % double-precision running sum
s = zeros(1, 1e7, 'single');  % single-precision running sum
for k = 2:1e7
    d(k) = d(k-1) + 0.05;
    s(k) = s(k-1) + single(0.05);
end
plot(d - s)                   % the accumulated difference
The rounding errors accumulate. Single precision carries about 7 significant decimal digits, so the magnitude of the rounding effects is in the expected range.
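The same experiment can be sketched in Python with NumPy, where `np.float32` plays the role of MATLAB's `single` (the step count and use of `cumsum` instead of an explicit loop are my choices, not from the thread; `cumsum` accumulates sequentially in the array's own dtype, mirroring the delay-plus-sum clock):

```python
import numpy as np

# Tick a 0.05 s clock n times in double vs single precision.
n = 2_000_000
d = np.cumsum(np.full(n, 0.05, dtype=np.float64))  # double-precision clock
s = np.cumsum(np.full(n, 0.05, dtype=np.float32))  # single-precision clock
err = d - s
print(err[-1])  # drift after n ticks -- far more than 7-digit noise
```

The final drift is several orders of magnitude larger than the representation error of a single `float32(0.05)`, because each addition rounds the running total again.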
5 Comments
Jan
2021-12-17
Edited: Jan
2021-12-17
Remember that the values have limited precision.
single(1e7) + single(0.05) - single(1e7)
This is 0, not 0.05, because in single precision the values 1e7 and 1e7+0.05 are represented by the same number.
single(1e7) - single(1e7) + single(0.05)
This returns 0.05, because the subtraction on the left yields exactly 0, so the 0.05 is no longer rounded away.
From a certain point adding 0.05 does not change the value anymore.
single(1048576) + single(0.05) == single(1048576) % TRUE
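These three MATLAB expressions translate directly to NumPy (a Python sketch, with `np.float32` standing in for `single`):

```python
import numpy as np

big = np.float32(1e7)
small = np.float32(0.05)

a = big + small - big  # 0.05 is swallowed: (1e7 + 0.05) rounds back to 1e7
b = big - big + small  # reordered: the subtraction gives exactly 0, 0.05 survives
print(a, b)            # 0.0 0.05

# The clock saturates once half a spacing (ulp) exceeds the step:
# at 2**20 = 1048576 the float32 spacing is 0.125, and 0.05 < 0.0625.
print(np.float32(1048576) + np.float32(0.05) == np.float32(1048576))  # True
```

Order of operations matters precisely because each intermediate result is rounded to the nearest representable value at its own magnitude.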
Stephen23
2021-12-17
Edited: Stephen23
2021-12-17
"The part I don't get is why the rounding error does not accumulate monotonically?"
Why should it?
The error (difference between the decimal values that you probably expect vs the actual binary values) is not constant, but depends on both addends... one of which is continuously changing. So the binary amount that you are actually adding changes, because the values that you are adding change.
If there were a simple monotonic linear relationship, then all binary floating-point error could be compensated for with a trivial offset after any calculation. But that is definitely not the case.
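This can be made concrete (my own illustration, not from the thread): once the accumulator is a multiple of its own spacing (ulp), adding `float32(0.05)` yields the same rounded increment everywhere inside one binade [2^k, 2^(k+1)), and that increment is alternately smaller or larger than 0.05 depending on k:

```python
import numpy as np

# Effective float32 increment at the start of several binades.
# The clock runs "slow" where the rounded increment is below 0.05
# and "fast" where it is above, so the error is not monotonic.
for k in (9, 10, 11, 12):
    v = np.float32(2.0**k)                   # start of the binade
    inc = float((v + np.float32(0.05)) - v)  # rounded increment there
    print(2**k, inc, "fast" if inc > 0.05 else "slow")
```

With a 0.05 s step, the clock crosses 1024 around t≈1000 s and 4096 around t≈4100 s, which lines up with where the drift in the question changes direction.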