How to write log base e in MATLAB?

Screenshot (19).png
I have attached a picture of what I am trying to type in MATLAB. I get an error when I put loge(14-y), so I'm assuming I'm typing it wrong and MATLAB cannot understand what I am looking for. Any help would be great. Thank you

4 Comments

You don't have to define the base. Just write log(14-y).
In MATLAB, log(x) means ln(x).
In MATLAB, the general log (written without a radix) is equivalent to the natural logarithm (base e). Generally, the only "legal" radices in MATLAB are 2 and 10.
@jinhu - I'm sorry, but that is a meaningless statement. There is no "legal" radix. If you are thinking 2 is a valid radix because numbers are stored in binary form, you would be vaguely correct. But 10 simply does not apply, since MATLAB only uses base 10 to display numbers; nothing is stored as a decimal. In any case, a log computation has essentially nothing at all to do with the way the numbers are stored internally.
If you are talking about syntactic legality, there are THREE syntactically legal log bases: 2, 10, and e, since we have the functions log2, log10, and log in MATLAB. And if I had to make a bet, I would seriously bet that the log2 and log10 functions merely encode the simple identity
logb(x) = log(x)/log(b)
They might special-case certain values of x, so that when x == 10, log10(10) is exactly 1.
log10(10) == 1
ans = logical
1
Weirdly, it appears that on the x64 architecture the relevant instruction is FYL2X, which computes Y*log2(X), with the logic:
So you would pre-load Y with 1/(log2 of the base), and then the instruction would calculate log2 of the value and multiply it by Y.
There is a related instruction, FYL2XP1, which computes y*log2(x+1).
But other than that... there does not appear to be any log-base-e instruction.
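That FYL2X-style scheme can be sketched in MATLAB (purely illustrative; this is not a claim about how MATLAB's log is actually implemented):

```matlab
% FYL2X-style computation of log to base b:
% pre-load Y = 1/log2(b), then multiply by log2(x).
x = 11;              % example value
b = exp(1);          % base e, so this reproduces the natural log
Y = 1 / log2(b);     % for base e, Y = log(2)
Y * log2(x)          % agrees with log(x) to within round-off
log(x)
```

Picking any other base b (say 10) in the same sketch reproduces log10(x) the same way, which is the whole point of the instruction's preloaded multiplier.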


Accepted Answer

The log function does exactly what you want.
log(14 - y)
If you want a base 10 log, you use log10. There is also a log2 function, which gives a base 2 log. Other bases are achieved using the simple relation
log(X)/log(b)
which produces a log to the base b. You could write a function for it as:
logb = @(X,b) log(X)./log(b);
logb(9,3)
ans =
2
which is as expected.
Finally, there is the reallog function, which also does the natural log, but it produces an error when the log would have been a complex number.
log(-2)
ans =
0.693147180559945 + 3.14159265358979i
reallog(-2)
Error using reallog
Reallog produced complex result.
But for the more normal case, reallog does the same thing as log.
log(2)
ans =
0.693147180559945
reallog(2)
ans =
0.693147180559945

11 Comments

Yes, but I still cannot put log base e. That is what I am trying to type in. I can put in log base any number, but it won't work with e.
"Yes, but I still cannot put log base e. That is what I am trying to type in."
So far two people have told you to use log, in order to "write log base e in MATLAB" as you asked in your question.
Let's take a look at why they might tell you to use log: its documentation is entitled "Natural logarithm", and the first sentence on that page explains "Y = log(X) returns the natural logarithm ln(x) of each element in array X."
Let's look up the term natural logarithm: Wikipedia defines it as "The natural logarithm of a number is its logarithm to the base of the mathematical constant e...",
whilst Wolfram MathWorld defines it as "The natural logarithm is the logarithm having base e..."
"I can put in log base any number but it wont work with e"
Actually MATLAB does not have a "universal" logarithm function that works with any arbitrary base. You seem to be under the impression that you can simply write logXXX for some number XXX and MATLAB will use base XXX. It might be nice, but such thing does not exist (nor is it likely to).
If for some reason you really need it spelt out:
Log = @(x,base) log(x) ./ log(base);
%then
Loge = @(x) Log(x, exp(1));
%which works out the same as
%Loge = @(x) log(x)
I find it mildly interesting that
isequal(log(exp(1)),1)
ans = logical
1
because exp(1) is not exactly e and I'm sure that log(x) is not exactly ln(x). But the algorithm used by log, which is not discussed on the doc page, returns exactly 1 for the floating point number exp(1). Unless the input exp(1) is treated as a special case, that seems like a very clever implementation of log. Maybe I'm easily impressed.
loge = log(sym(exp(1),'e'))
loge = 
vpa(loge)
ans = 
0.99999999999999994681762293394109
double(ans)
ans = 1
ans - 1
ans = 0
double(loge-1)
ans = -5.3182e-17
which is to say that if you take the exact binary fraction that is double precision exp(1), then log() of it is within eps(1) and therefore it is proper for log(exp(1)) to come out as exactly 0 in double precision.
I think you meant that log(exp(1)) is exactly 1 in double precision.
log(exp(1))
ans = 1
Not sure why the stress on "is." I certainly wasn't saying that it is not proper.
It seems like you're suggesting that numerical log just does what it does, and its algorithm is such that the floating point representation of the result of applying that algorithm to exp(1) is exactly 1. Correct?
If exp(1) is not exactly e and therefore log(exp(1)) should not be exactly 1 then log(exp(1)) would have to be some other number. The implication of suggesting that perhaps exp(1) is treated as a special case for log(), is that you believe that the "correct" log for the binary double precision value exp(1) should be at least 1 representable number more or one representable number less than 1.
format hex
expone = exp(1)
expone =
4005bf0a8b145769
exponeminus = typecast(typecast(expone, 'uint64') - uint64(1), 'double')
exponeminus =
4005bf0a8b145768
exponeplus = typecast(typecast(expone, 'uint64') + uint64(1), 'double')
exponeplus =
4005bf0a8b14576a
s_expone = sym(expone, 'e') %convert to exact binary fraction
s_expone = 
s_exponeminus = sym(exponeminus, 'e')
s_exponeminus = 
s_exponeplus = sym(exponeplus, 'e')
s_exponeplus = 
log_s_expone = log(s_expone)
log_s_expone = 
log_s_exponeplus = log(s_exponeplus)
log_s_exponeplus = 
log_s_exponeminus = log(s_exponeminus)
log_s_exponeminus = 
d_log_s_expone = double(log_s_expone)
d_log_s_expone =
3ff0000000000000
d_log_s_exponeplus = double(log_s_exponeplus)
d_log_s_exponeplus =
3ff0000000000000
d_log_s_exponeminus = double(log_s_exponeminus)
d_log_s_exponeminus =
3feffffffffffffe
format long g
double(log_s_expone - 1), ans/eps(1)
ans =
-5.31823770660589e-17
ans =
-0.23951213353738
double(log_s_exponeplus - 1), ans/eps(1)
ans =
1.1018891328385e-16
ans =
0.496246748805505
double(log_s_exponeminus - 1), ans/eps(1)
ans =
-2.16553667415967e-16
ans =
-0.975271015880265
so the "actual" log of double precision exp(1) is -eps/4 different than exact 1.0, and the "actual" log of the next representable number beyond double precision exp(1) is +eps/2, and the "actual" log of the previous representable number before double precision exp(1) is -eps. Therefore double precision exp(1) is the most accurate choice for
format hex
one = 1
one =
3ff0000000000000
oneplus = typecast(typecast(one, 'uint64') + uint64(1), 'double')
oneplus =
3ff0000000000001
oneminus = typecast(typecast(one, 'uint64') - uint64(1), 'double')
oneminus =
3fefffffffffffff
format long g
s_one = sym(one, 'f')
s_one = 
1
s_oneplus = sym(oneplus, 'f')
s_oneplus = 
s_oneminus = sym(oneminus, 'f')
s_oneminus = 
exp_s_one = exp(s_one)
exp_s_one = 
e
exp_s_oneplus = exp(s_oneplus)
exp_s_oneplus = 
exp_s_oneminus = exp(s_oneminus)
exp_s_oneminus = 
exp_s_oneplus - exp_s_one, double(ans), ans/eps
ans = 
ans =
6.0357981467508e-16
ans =
2.71828182845905
exp_s_oneminus - exp_s_one, double(ans), ans/eps
ans = 
ans =
-3.0178990733754e-16
ans =
-1.35914091422952
exp(1) - exp_s_one, double(ans), ans/eps
ans = 
ans =
-1.44564689172925e-16
ans =
-0.651061480290117
This says that if you take the next representable number after 1, then exp() of it is about 2 3/4 eps higher than the true value; if you take the previous representable number before 1, then exp() of it is about 1 1/3 eps below the true value; but binary exp(1) is only about 2/3 eps off of the true value of e.
Therefore, if you take the true mathematical e, the closest double precision representation of its log is exactly 1, not the next number after 1 and not the previous number before 1.
So... the algorithm for log() does not need any special treatment for binary exp(1): log() of binary exp(1) is naturally exactly 1 to within round-off, in that binary 1 is the closest representable number to the true log of binary exp(1).
That is, if you believe that log() is treating binary double precision exp(1) specially to get exactly 1.0, then it follows that there would have to be some other binary double precision number different from exactly 1, call it L, such that exp(L) is closer to binary double precision exp(1) than exp(1) gets. Or perhaps the claim is that exp(L), with some L not exactly equal to 1, is closer to mathematical e than binary double precision exp(1) gets.
... what exactly is your notion that log() is having to treat binary double precision exp(1) specially to get exactly double precision 1?
I never said "that log() is having to treat binary double precision exp(1) specially to get exactly double precision 1" and I'm not saying it now.
syms x real
G=simplify(taylor(log2(x), x, 1.5, 'order', 31),'steps',50)
G = 
double((subs(G,x,1) - log2(sym(1)))/eps)
ans = 0.5013
double((subs(G,x,2) - log2(sym(2)))/eps)
ans = -0.2565
So if we Taylor-expand log2(x) over the range [1, 2), the maximum error is less than eps over that range.
Now, for any given binary double precision number, break the representation up into exponent and mantissa. Substitute the exponent field 0x3ff for the actual exponent, which is equivalent to scaling the number by a power of 2 until it is in the range [1, 2). Apply the Taylor'd log2 formula; the result will be less than eps() from what it should be, and will lie between 0 and 1 (to within eps). Now add to that the integer difference between the actual binary exponent and 0x3ff: the result is the log2 of the original number. You can then divide that log2 by log2(exp(1)) (equivalently, multiply it by log(2)) to get the natural log of the number.
We took advantage of range restriction, and of the fact that log2() of the exponent scaling factor contributes an integer, to get a finite number of terms (at most 31) that accurately calculate the natural log of a normalized floating point number.
No "clever" algorithm is required.
This is almost certainly not how natural log is calculated. Except possibly for special cases such as NaN or inf or denormalized numbers, MATLAB is very likely just going to call into MKL or similar, such as https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2023-1/v-ln.html
There is a paper about the IA-64 ("Itanium"); see https://www.cl.cam.ac.uk/~jrh13/papers/itj.pdf .... it does argument reduction and log2 much as I outlined, but there are apparently some additional optimizations.
