Questions about infinity in MATLAB

49 views (last 30 days)
Hi all,
In MATLAB it seems that any number greater than about 1.8e308 is treated as Inf, but my calculations tend to generate numbers larger than that which are actually finite, not truly infinite. This causes a very distressing problem. For example, we all know that 0 * 1e310 = 0 and 0.1 * 1e310 = 1e309, but in MATLAB the results are NaN and Inf, which is very unreasonable, isn't it?
I know that 1e309 is undoubtedly a very large number, but it only appears in an intermediate step of my code; it is not the final result. For example, my final result is 1e309/1e300. Anyone who has studied elementary mathematics knows that is 1e9, which is finite, but MATLAB behaves badly and thinks it is Inf.
Anyway, this behavior makes my code often produce Inf or NaN, which makes the program crash. I don't know what to do. Does anyone have a good solution?
5 comments
ma Jack 2022-10-26
Sir, I am sorry for the trouble my code has caused you. The following two expressions have no special meaning; I just want to print out their values:
En(pp+1,Ga_2/(4*E*E))
((abs(rx)*E)^(2*pp))
These two are the core of my problem: with the parameters I set, the first term tends to 0 and the second term is a huge number (which the computer treats as Inf), eventually producing NaN.
ma Jack 2022-10-26
I also overlooked factorial(200), which has a fatal impact as well.


Accepted Answer

Jan 2022-10-25
Welcome to the world of the IEEE 754 floating-point format. It is not used only in MATLAB but in many other programming languages as well, and it is implemented in every CPU I know of that contains a floating-point unit. This format is the worldwide standard.
It uses 64 bits per element and can store numbers up to realmax, about 1.7977e308; anything larger overflows to Inf, as you have observed. Remember that for all numbers stored in double format, the number of valid digits is about 16, so 1e309 is only a rounded figure anyway.
You can work with arbitrary precision instead; see e.g. the Symbolic Math Toolbox and vpa. Java's BigInteger can also be used from MATLAB.
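As a minimal sketch of that suggestion (assuming the Symbolic Math Toolbox is installed), the 1e309/1e300 example from the question works once the huge intermediate value is kept symbolic instead of double:

```matlab
% In double precision the intermediate already overflows:
x = 1e309;                  % stored as Inf
x / 1e300                   % Inf / 1e300 is still Inf

% Symbolic arithmetic has no overflow here:
xs = sym(10)^309;           % exact integer 10^309
double(xs / sym(10)^300)    % 1e9, the mathematically expected answer
```

Symbolic arithmetic is much slower than double precision, so it is best kept for the few intermediate steps that actually overflow.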

More Answers (2)

John D'Errico 2022-10-25 (edited 2022-10-25)
What to do? Learn to work with large numbers like that. Sorry, but you just do. A common solution is to use logs. Don't compute those large numbers at all, but compute their logs!
For example, consider a factorial.
factorial(500)
ans = Inf
But the log of that factorial? There are two ways to do this. First, you can just do
sum(log(1:500))
ans = 2.6113e+03
gammaln(501)
ans = 2.6113e+03
The second approach recognizes that gamma(n) is the same as factorial(n-1), so gamma(n+1) equals factorial(n). Therefore you can compute the log of a factorial using the gammaln function, which computes the natural log of the gamma function.
Essentially, you need to learn how to work with large numbers. Similarly,
nchoosek(2000,1000)
ans = Inf
But you can easily enough compute the log of that same binomial coefficient as:
binln = @(N,K) gammaln(N+1) - gammaln(K+1) - gammaln(N-K+1);
So is it correct for small arguments? (The warning message here comes from nchoosek, not gammaln.)
[binln(100,40),log(nchoosek(100,40))]
Warning: Result may not be exact. Coefficient has a maximum relative error of 1.2434e-14, corresponding to absolute error 170927519286259.
ans = 1×2
64.7906 64.7906
As you see, they agree. But binln has no problems at all for very much larger arguments, literally as large as you wish.
binln(200000,100000)
ans = 1.3862e+05
Anyway, once you have the logs of a pair of large numbers which you know lie in the numerator and denominator of a fraction, subtract them, and only then, if necessary, exponentiate to get the result you need.
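Applied to the 1e309/1e300 case from the question, the log trick looks like this (a minimal sketch; the operands themselves are never formed):

```matlab
% 1e309 cannot be represented in double precision, but its base-10 log
% (309) is a perfectly ordinary number. Subtract logs, then exponentiate.
logNum = 309;                  % log10(1e309)
logDen = 300;                  % log10(1e300)
ratio = 10^(logNum - logDen)   % 1e9, with no overflow anywhere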
Next, a common reason why numbers get big is you are using series approximations, but trying to push the convergence of that series just too far. Sorry, but this is not at all uncommon. People push things too far, without really understanding the numerical issues. For example, the simple Taylor series for say sin(x) is globally convergent in theory. Theory is great, but often useless in practice. In practice, you cannot take enough terms in that series to get convergence for say x = 100 (radians). Just look at the terms in the series. You are raising x to extremely large powers, and if x is large, things get bad long before they ever get good.
syms x
taylor(sin(x), 'Order', 22)
ans = x^21/51090942171709440000 - x^19/121645100408832000 + x^17/355687428096000 - x^15/1307674368000 + x^13/6227020800 - x^11/39916800 + x^9/362880 - x^7/5040 + x^5/120 - x^3/6 + x
You will see massive subtractive cancellation, and you simply will not be able to compute the values of that series for large absolute values of x. It would be a complete waste of time to even try that in double precision arithmetic. Instead, you can use range-reduction methods, which let you compute a solution for at least reasonable values of x. For example:
N = 1:10;
SinApprox = @(x) sum((-1).^(N-1).*x.^(2*N-1)./factorial(2*N-1));
X = 100;
Xhat = 100 - 32*pi
Xhat = -0.5310
format long g
[SinApprox(X),SinApprox(Xhat),sin(100)]
ans = 1×3
1.0e+00 * -7.94697857233433e+20 -0.506365641109755 -0.506365641109759
So while I would avoid trying to evaluate SinApprox(100), SinApprox(-0.5310) is quite well behaved. And that was just a simple range reduction trick.
Another idea is to scale your variables. Too often we see people assuming that whatever units they happen to work in are fine. Of course that is just wrong. Would you really want to measure the distance to the nearest star in millimeters? In nanometers? Of course not! Use appropriate units so your numbers are well scaled and well behaved. Pick units that make everything near 1 in magnitude, and you will often be happier. So the distances between the planets in the solar system are often measured as multiples of the distance from the earth to the sun, thus an astronomical unit. The distances between nearby stars are measured in light years, not feet or meters. But wavelengths of light are often measured in nanometers. The masses of other stars are measured in relative solar masses, so multiples of the mass of our sun. The use of appropriate units often makes the associated computations much better behaved.
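One concrete illustration of the scaling idea (this hypotenuse example is mine, not from the answer above): even a simple Euclidean norm overflows when the components are huge, but dividing by the largest magnitude first keeps every intermediate near 1.

```matlab
x = 1e200; y = 1e200;
sqrt(x^2 + y^2)                 % Inf: x^2 overflows long before the sqrt

s = max(abs(x), abs(y));        % scale factor
s * sqrt((x/s)^2 + (y/s)^2)     % 1.4142e+200, well behaved
```

This is essentially what MATLAB's hypot function does for you.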
In the end, what you need to do is simply just learn the techniques to work with such problems. You may learn some of them in a numerical analysis course, though I doubt most numerical analysis courses really cover this sort of thing explicitly or in any real depth. As well, in almost all cases, going to higher precision arithmetic is not needed, though at times, it can be a fix. I usually call such things a crutch, and say they are best avoided and IMHO only used as a very last resort. (Despite the fact that I have written several versions of high/variable precision arithmetic myself for use by those who really want them.)

Steven Lord 2022-10-25
In matlab it seems that as long as a number is greater than 1e309 it will be considered inf,
Not just in MATLAB. This is standard IEEE double precision behavior.
but my calculations tend to generate numbers larger than 1e309, but they are actually finite and not really infinite.
You may consider them to be finite, but they are too large to be stored as a finite value in double precision.
This caused me a very distressing problem, for example, we all know that 0 * (1e310) = 0, or 0.1 * (1e310) = 1e309,
In variable precision arithmetic or in higher than double precision, yes. In double precision a number greater than realmax overflows to inf and by the definition of multiplication in double precision 0 times inf results in NaN.
but in matlab inside their results are NAN and inf, which is very unreasonable, is not it?
No, it is not unreasonable. It is standard double precision behavior.
I know that 1e309 is undoubtedly a very large number, but this number is only the number generated by the intermediate process of my code, it is not the final result, for example, my final result is 1e309/1e300, no doubt anyone who has studied elementary mathematics knows it is 1e9, it is finite, but matlab behaves badly, it thinks it is inf.
In exact arithmetic or higher than double precision "anyone ... knows it is 1e9". In double precision it is infinity.
One solution is to avoid computing such large intermediate values. For example, if you were to compute binomial coefficients using the factorial formula, you could compute factorial(n), factorial(k), and factorial(n-k), but for large n and k you'd lose precision or overflow to Inf.
n = 200;
k = 10;
fn = factorial(n)
fn = Inf
fk = factorial(k)
fk = 3628800
fnk = factorial(n-k)
fnk = Inf
nCk1 = fn/(fk*fnk)
nCk1 = NaN
Instead you could recognize that almost all the factors of factorial(n) and factorial(n-k) will cancel out, and avoid creating such large numerators and denominators.
numerator = prod((n-k+1):n)
numerator = 8.1470e+22
denominator = prod(1:k)
denominator = 3628800
nCk2 = numerator ./ denominator
nCk2 = 2.2451e+16
This is roughly what nchoosek does.
nCk3 = nchoosek(n, k)
Warning: Result may not be exact. Coefficient has a maximum relative error of 2.2204e-16, corresponding to absolute error 5.
nCk3 = 2.2451e+16
Alternatively, you could work in higher than double precision via the Symbolic Math Toolbox, though that will likely be slower than double precision.
nCk4 = nchoosek(sym(n), sym(k))
nCk4 = 
22451004309013280
1 comment
Jan 2022-10-25
Just a note: even prod((n-k+1):n) and prod(1:k) can overflow, although they share many prime factors that would cancel in the quotient. There are other algorithms which are less fragile:
a = n - k;
b = a + 1;
for i = 2:k
    b = b + (b * a) / i; % Integer values only
end
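A quick sanity check of that recurrence (my wrapper, reusing the n and k from the answer above; after step i, b equals nchoosek(a+i, i), so every intermediate stays an integer and never exceeds the final result):

```matlab
n = 200; k = 10;
a = n - k;
b = a + 1;                % b = nchoosek(a+1, 1)
for i = 2:k
    b = b + (b * a) / i;  % b = b*(a+i)/i = nchoosek(a+i, i)
end
b                         % ~2.2451e+16, agrees with nchoosek(200, 10)
```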

