Why does MATLAB even have single precision variables?

If you introduce even one "single precision" variable into your calculations, every product and sum involving that variable will also be single precision. It almost seems that once you start using a single variable, all of your downstream outputs can end up single as well.
It sort of reminds me of the following imaginary scenario: a person gathering data has two sizes of paper on which to write numbers observed--one is ten times the size of the other. It takes more effort (time and energy) to haul around the larger sheet of paper, so you would only use it if you had very precise measurements to record.
For any who doubt that calculations with single precision variables are faster, try the following set of commands:
m1 = rand(999,999);
m2 = rand(size(m1));
tic
m3 = m1 .* m2;      % double-precision elementwise multiply
toc
m1 = single(m1);
m2 = single(m2);
tic
m4 = m1 .* m2;      % single-precision elementwise multiply
toc
My results indicate a decrease in calculation time of at least one order of magnitude, if not two or more.
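For what it's worth, a single tic/toc run can be skewed by JIT warm-up and timer resolution. A minimal sketch of the same comparison using timeit (available since R2013b), which runs the expression repeatedly and reports a median time (variable names here are arbitrary):

```matlab
% Compare elementwise-multiply timing for double vs. single operands.
m1d = rand(999);           % 999-by-999 double matrices
m2d = rand(999);
m1s = single(m1d);         % same values, stored in single precision
m2s = single(m2d);

td = timeit(@() m1d .* m2d);   % median time, double multiply
ts = timeit(@() m1s .* m2s);   % median time, single multiply
fprintf('double %.4g s, single %.4g s, ratio %.2f\n', td, ts, td/ts)
```

The ratio you get will depend on your hardware and MATLAB version, as the comments below discuss.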

6 comments

You seem to have answered your own question. Isn't it natural that one might wish to trade precision for speed or to reduce memory consumption?
Which version, which OS, what hardware?
At the command line I get roughly 2:1, but inside a function it's only about a 35% reduction on a 32-bit machine.
I'll admit I'm somewhat surprised; I thought that with the Intel FPU the cost was essentially the same for either, but a quick search for modern-architecture timing documentation to double-check wasn't very successful.
In any case, at least be sure you're using a functional form for the test.
I am using MATLAB R2014a on Windows 7 Enterprise, on a PC with a dual-core Intel Celeron E3400. Did I use a functional form for the test? If not, please show me how, and/or explain why that is preferable to what I have above...?
I recast the test snippet you gave as...
function dt=singdub
m1 = rand(999,999);
m2 = rand(size(m1));
dt=zeros(1,2);
tic
m3 = m1 .* m2;
dt(1)=toc;
m1 = single(m1);
m2 = single(m2);
tic
m4 = m1 .* m2;
dt(2)=toc;
>> dt=zeros(20,2);for i=1:20,dt(i,:)=singdub;end,[min(dt);max(dt);mean(dt)]
ans =
0.0104 0.0078
0.0122 0.0087
0.0110 0.0081
>> mn=mean(dt);(mn(1)/mn(2)-1)*100
ans =
34.5783
In all likelihood, the relative performance of single vs. double will depend on the processor, the size of the data being operated on, how the function doing the processing was compiled (is it implemented using MMX, SSE, SSE2, SSE3, or plain x87 floating point?), etc. There's only one way to know for sure: profile your specific code. Do not rely on profiling done by somebody else on a completely different piece of code.
I saved the function as the attached .m file. If the median processing time with double-precision quantities were, say, five times the median single-precision time, the following script would produce 400, would it not?
dt=zeros(20,2);for i=1:20,dt(i,:)=singdub;end,md=median(dt);(md(1)/md(2)-1)*100
So what is the point of calculating 400? To me, the more interesting quantity is the 5:1 ratio itself...


Accepted Answer

I'm not quite sure what you are asking really.
I am starting to use single more and more often nowadays because we work with very large datasets which, while stored in files as integers, have to be converted to floating point in MATLAB to do the maths; since a double takes twice the memory of a single, this quite often wastes memory.
The extra precision that a double gives is rarely relevant, especially since at the end of whatever calculation I am doing the final result will be converted back to integer anyway, with the requisite loss of accuracy.
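As a quick illustration of the memory point (a sketch; the variable names are arbitrary), whos shows the factor-of-two difference directly:

```matlab
X  = rand(1000);        % double: 8 bytes per element
Xs = single(X);         % single: 4 bytes per element
w  = whos('X', 'Xs');
[w.bytes]               % 8000000 vs. 4000000 bytes
```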
Propagation of a data type from the inputs through to the results is not exactly uncommon, though. In C++ you generally have to cast to a different data type (or accept compiler warnings) if you want the result in a type different from the original variable's.
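That propagation is easy to see at the MATLAB command line (a minimal sketch):

```matlab
x = single(2.5);
y = x * pi;           % pi is a double scalar, but single "wins"
class(y)              % 'single'
z = double(x) * pi;   % cast explicitly if you want a double result
class(z)              % 'double'
```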

2 comments

I was asking the question because some people hold the opinion that single precision is useless. I have seen encouragement to ignore it entirely as standard practice.
That encouragement probably comes from people who have been bitten by bad typecasting. If everyone used double, those problems wouldn't exist; on the other hand, conformity to a norm takes away flexibility.
It depends on your problem. If precision is an issue, then maybe even double is not enough. If you have a data intensive application then using single may be the way to go.


More Answers (1)

For most purposes, double precision is good because calculations are less likely to be affected by rounding errors (though it's still important to keep them in mind). However, reducing memory use is sometimes more important (for example if handling large images) and then single precision can be a better choice. This is a well-understood trade-off.
The more interesting question with respect to MATLAB is its type-conversion rules. In C, for example, if a binary operator has a float (i.e. single) operand and a double operand, the float is converted to double before the operation is done. In MATLAB, on the other hand, the double is converted to a single. More generally, C works by "promotion" of integer to floating point and of lower to higher precision, while MATLAB "demotes" double scalars to match the other operand's type.
I suspect that this is because MATLAB has a very strong sense of double as the normal, default, standard type. The assumption is therefore that if you are using some other type you have a good reason to do so, and so you want that type to propagate throughout the computation, rather than finding you have an accidental promotion to double. This seems sensible.
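A small sketch of the demotion rule described above, including the integer case:

```matlab
class(single(1) + 2)    % 'single' -- the double scalar 2 is demoted
class(int32(5) * 2)     % 'int32'  -- likewise for integer types
% Mixing two different non-double classes is an error, e.g.:
% single(1) + int32(1)  % errors: int32 values can only combine with
%                       % int32 values or double scalars
```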

4 comments

The reason C-based languages will implicitly convert a single to a double in a mixed-type operation is that the single-to-double conversion results in no loss of information whatsoever. The inverse is not true, which is why you have to tell the compiler explicitly that you are willing to lose some information.
To me that sounds like the sensible way of doing things, and I find MATLAB's behaviour frustrating (probably also because the C behaviour is so ingrained in my brain).
Say you have a large array X of type single, and you want to double every element. You almost certainly want the result to be single too, because if memory wasn't an issue you'd just have let X be double. You can write 2*X and MATLAB will do what you want. If you had to write single(2)*X how often would you forget, and how cumbersome would your code become?
I understand this. It also makes sense because if you multiply two quantities with differing numbers of significant digits together, the answer has the lesser number of significant digits--and MATLAB sort of mimics that behavior with its double/single multiplication (and addition) behavior, correct?
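The "significant digits" analogy can be made concrete with eps, the spacing between adjacent representable numbers near 1:

```matlab
eps('single')                       % about 1.19e-07, ~7 decimal digits
eps('double')                       % about 2.22e-16, ~16 decimal digits
single(1) + eps('single')/2 == 1    % true: half an ulp rounds away
```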
Yes, for a loose definition of "sorta'"... :) It maintains the same working precision of result...just as with a double, depending on the computations involved, what the actual precision of the result is may not be anything approaching the number of digits in the result. See the Matlab writeup and some of Cleve's other writings starting at
You'll likely find the couple of sections on accuracy and avoiding problems of particular interest and apropos to the question and the discussion.
I'll hypothesize that the reason MATLAB began with, and still uses, double as the default goes back to the basic idea of being a "MATrix LABoratory" intended to be usable first pedagogically and only later for real applications. For the former, not having to worry needlessly about precision means one can, for the most part, write code in MATLAB that mimics the actual matrix operations and expect reasonable results.
It also matters for real applications: many people solve systems of stiff ODEs or otherwise ill-conditioned systems of many equations, where precision is everything or the result is likely nothing. We still regularly get postings from people whose problems are beyond even what double can do in a straightforward application; just a week or so ago a query came up from an individual whose problem matrix had a condition number on the order of 1e17 even after regularization. There are hard problems out there.
As for propagating singles through a computation without requiring explicit casting, that's a (relatively) recent feature... as recently as R12:
>> x=single(pi);
>> y=2*x;
??? Error using ==> *
Function '*' not defined for variables of class 'single'.
>>
I don't know which release actually introduced single as a full-fledged numeric class; I have no versions between R12 and R2012b, a span of roughly ten years or so...

