Embedded Coder - difference in results between the original model and generated C code

Hello.
I created a simple model in Simulink (please see the attachment and the picture below). I used the ode5 (Dormand-Prince) fixed-step solver with a step size of 0.01 s.
Then I generated C code using Embedded Coder. Afterwards I used Software-in-the-Loop (SIL) to investigate the numerical equivalence between the outputs of the original Simulink model and the generated C code. In particular, I am interested in "Output 4" and the "sin" function (indicated with blue marks). From the verification stage I concluded that there are numerical differences between the two results. The obtained results are presented below (blue arrows indicate the mismatches). The model operates on the "single" numeric type, so the generated C code uses the "sinf" function. My goal is to minimize these differences. Using "double" is not an option because the code will be implemented on ARM Cortex-M4 target hardware. I would expect the generated code to give exactly the same results as the original model. Is it possible to drive these errors to 0? I am not sure whether something is wrong with the code generator settings, or whether this is the typical outcome in such a case.
Kind regards,
Mariusz

Accepted Answer

Andy Bartlett on 2023-10-2
The IEEE 754 floating-point standard RECOMMENDS that languages implement sin with correct rounding.
Correct rounding means that if the ideal infinite-precision result falls between two representable values in the floating-point type being used, then the output is one of those two representable values. Whether the output is rounded up or down depends on the rounding mode currently in effect: floor, ceiling, round-to-nearest-ties-to-even, etc. Round-to-nearest-ties-to-even is the mode most commonly in effect for floating-point units.
Implementations don't normally compute the exact value; instead they compute something close enough to make a "good" rounding decision about what to output. This may not always be "perfect".
These good-but-not-perfect implementations can vary by implementor, so it would not be shocking to find cases where one implementation rounded up while another rounded down.
For example, suppose the input to sin is
single(0.499999791)
The ideal, infinite-precision output of sin would be
0.479425355526204...
The two nearest representable values in single are
single(0.479425341)
single(0.479425371)
The nearest answer is the latter, which is what MATLAB outputs.
Now, other implementations, such as:
visual studio on windows 64 bit intel
gcc on windows 64 bit intel
gcc on linux 64 bit intel
gcc on ARM Cortex-M4 with FPU hardware
gcc on ARM Cortex-M4 without an FPU, using software emulation of floating-point
might make a slightly different rounding decision and output the former value instead.
If it is just a matter of differences in rounding, then the difference should be no more than the distance between adjacent representable values.
In MATLAB, eps(value) gives the distance from one representable value to the next representable value away from zero in that type.
So you could check this by doing
absErr = abs( output1 - output2 );
epsVec1 = eps(output1); % critical that output1 is type of interest, i.e. single
epsVec2 = eps(output2); % critical that output2 is type of interest, i.e. single
epsVec = max( epsVec1, epsVec2 ); % small chance two eps are different so use bigger of two
errEpsRatio = absErr ./ epsVec;
plot(errEpsRatio,'r.'), shg
If any of the plotted dots is higher than 1, then it's not just a minor rounding choice.
If all the red dots in the plot have value 0 or 1, then it's just a rounding difference between the implementations.
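As a self-contained illustration of this check (hypothetical data: the second implementation is simulated by nudging roughly 10% of the samples by exactly one representable step):
t       = single(0:0.01:10);
output1 = sin(t);                                   % reference results
bump    = rand(size(t)) < 0.1;                      % pick ~10% of samples to disagree
output2 = output1 + single(bump) .* eps(output1);   % off by exactly one representable step
absErr  = abs( output1 - output2 );
epsVec  = max( eps(output1), eps(output2) );
errEpsRatio = absErr ./ epsVec;
plot(errEpsRatio,'r.'), shg                         % dots at 0 and 1 (rare band crossings can give 0.5)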
In my testing of MATLAB vs Visual Studio sinf on Windows Intel 64 bit, I saw ratios of 1 but nothing higher.
A rounding difference in the implementation of these floating-point C library functions is something you may have to live with.
Keep in mind that the implementation on ARM may match MATLAB, Visual Studio, or neither.
2 Comments
Mariusz Jacewicz on 2023-10-3
Dear Sir,
Thank you kindly for this detailed explanation. These remarks helped me a lot in understanding the issue. I checked the above-mentioned problem using your code and also considered other possible cases with my simple model. For the single block with the "sin" trigonometric function, I also observed that the resulting ratio can be 0 or 1 and nothing higher (so you are correct). However, the next question is as follows: let's use output number three and compare the results between Simulink and C. Then the function is a bit more complicated:
Out3(t) = 1 * sin(t) * (sin(t) * cos(t) * 13 * sin(t)).
We have multiplications of several functions. Below I present the resulting graph (time on the horizontal axis). The first subplot is just the difference "output1 - output2". The second subplot is the errEpsRatio generated using your code.
From the second graph it can be observed that errEpsRatio takes values in {0, 1, 2, 3, 4, 5, 6}. This seems quite logical because the rounding errors are accumulated several times. Please let me know if this interpretation is wrong.
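For reference, a minimal sketch of this kind of compound comparison in MATLAB, where a double-precision evaluation of the same formula, rounded back to single, stands in for the second implementation:
t  = single(0:0.01:10);
o1 = sin(t) .* (sin(t) .* cos(t) .* 13 .* sin(t));                % evaluated entirely in single
td = double(t);
o2 = single( sin(td) .* (sin(td) .* cos(td) .* 13 .* sin(td)) );  % double reference, rounded to single
absErr      = abs( o1 - o2 );
errEpsRatio = absErr ./ max( eps(o1), eps(o2) );
subplot(2,1,1), plot(t, o1 - o2)           % raw difference
subplot(2,1,2), plot(t, errEpsRatio,'r.')  % small integer ratios, as in the graph above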
Once again thank you for your time and valuable comments.
Andy Bartlett on 2023-10-3 (edited 2023-10-3)
Hi,
Yes, the graph you showed for compound operations is to be expected.
Individual errors will add together; the net error may grow, or shrink when the errors have opposite signs.
Multiplications will amplify or attenuate individual errors depending on whether the multiplicative term is greater than one or less than one.
The eps for the final output may change too.
Eps doubles in size as you move from one power-of-two band to the next.
For example, consider the power-of-two band represented by the half-open interval
[8, 16)
The eps for all values in this interval equals eps(8) for double, eps(single(8)) for single, etc.
For the next power-of-two band
[16, 32)
the eps is twice as big: eps(16) = 2*eps(8).
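For example, in MATLAB:
eps(single(8))    % 9.5367e-07: the spacing for every single value in [8, 16)
eps(single(15))   % same value, still inside the [8, 16) band
eps(single(16))   % 1.9073e-06: twice as large, the spacing for [16, 32)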
The difference between the eps at the original individual source of error and the eps at the final output value of a compound equation can increase or decrease the errEpsRatio.
Algorithms should be designed to be robust to the accumulation of small errors.
Even simple steps, like never depending on exact equality of floating-point numbers, can go a long way toward making code robust. For example, use >= instead of ==. Set the stopping criterion for an iterative solving algorithm to provide many eps of tolerance. The tolerance band should be small enough to meet the accuracy requirements of the higher-level application, but not so tight that the iterative search never completes due to accumulation of rounding errors. Search for books on Numerical Analysis if you'd like to learn more.
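An illustrative sketch of such a tolerance check (the variable names and the 64-eps band are arbitrary; size the band to your application):
target  = single(pi);
x       = single(3.1415925);             % close to target, but not bit-identical
tolBand = 64 * eps(target);              % a many-eps tolerance band
converged = abs(x - target) <= tolBand   % robust check; never rely on x == target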
In Simulink, consider writing your tests to provide a rich variety of numeric values for inputs. You may even wish to add a noise source to your test inputs to make sure your algorithms are not overly sensitive to numerics for a few particular inputs. When you test the outputs of your algorithm, use tolerances based on the application.
For example, suppose your application is aiming an antenna and needs to achieve a pointing accuracy of +/- 0.005 radians. For a big angle like 2*pi in single precision, 0.005 radians represents over 10000 eps. For angles closer to zero, 0.005 radians is many more eps than that. So if the final algorithm result is accurate to a few dozen eps of error, then the application-level criterion of +/- 0.005 radians is easily met.
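That eps arithmetic can be checked directly in MATLAB:
epsAt2pi = eps(single(2*pi));        % ~4.7684e-07 near the angle 2*pi
ratio    = 0.005 / double(epsAt2pi)  % ~1.05e+04, i.e. over 10000 eps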
Embedded Coder and the other MathWorks coder products do try hard to keep the numerics between simulation and generated code as close as is reasonably possible, but some target-specific details of floating-point math, such as the implementation of sinf, are not always feasible to avoid. But if you have a numerically robust design for your algorithm and you've exercised that design with rich testing, then these sources of very small individual disagreements should not lead to significant surprises.
