"All depends..." :) On what you're compiling and how you're comparing to specific Matlab code.
At its heart, Matlab uses optimized BLAS and similar libraries for matrix operations, so if the Matlab code is written to take advantage of them and that computation is the bottleneck, you may indeed not find much in the way of speed improvement from compiling.
OTOH, if you write code that does a lot of dynamic memory reallocation (as Matlab does transparently behind the scenes when an array grows inside a loop), then convert it to static memory use and compile, you may well see powers-of-ten improvements.
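A quick sketch of the difference (the loop body and sizes here are just illustrative, and actual timings are machine- and version-dependent):

```matlab
N = 1e6;

% Dynamic growth: Matlab reallocates and copies x on (nearly) every pass
tic
x = [];
for k = 1:N
    x(k) = sqrt(k);    %#ok<SAGROW>  array grows each iteration
end
tDyn = toc;

% Preallocation: one allocation up front, then in-place writes
tic
y = zeros(1, N);
for k = 1:N
    y(k) = sqrt(k);
end
tPre = toc;

fprintf('dynamic: %.3f s   preallocated: %.3f s\n', tDyn, tPre)
```

The preallocated loop is exactly what a compiled, statically-allocated version gets you for free, which is why the gap can be so large.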
It's quite common (and you'll find many postings attesting to it) that "mex-ing" compute-intensive functions yields a significant performance boost, and that is simply compiling those functions.
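For the flavor of it, here is a minimal mex gateway (the file name `sumsq.c` and the computation are made up for illustration); a hot inner loop moves into C while the rest of the program stays in Matlab:

```c
/* sumsq.c -- illustrative mex-file: sum of squares of a double vector */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[],
                 int nrhs, const mxArray *prhs[])
{
    double *x = mxGetPr(prhs[0]);              /* input data pointer   */
    mwSize  n = mxGetNumberOfElements(prhs[0]);
    double  s = 0.0;
    mwSize  i;

    for (i = 0; i < n; i++)                    /* tight compiled loop  */
        s += x[i] * x[i];

    plhs[0] = mxCreateDoubleScalar(s);         /* return scalar result */
}
```

Built from the Matlab prompt with `mex sumsq.c` and then called like any function, e.g. `sumsq(v)`.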
Matlab has, over time, improved its computation engine greatly with the JIT compiler; OTOH, much of that performance gain has been "eaten up" by ever more complex functions and the introduction of higher-level data abstractions.
All in all, the Matlab approach should be to first implement the algorithm in high-level, easily readable code, using the features of Matlab (primarily vectorization and preallocation) appropriately. Once you have a functional program, if performance is an issue, then use the profiler and begin to look at how to optimize the bottleneck(s). Generally one finds a small area is the prime culprit, and seeing it can often lead to alternate ideas for the specific solution.

Of course, one should never forget that far more often the real gains in performance come not from tweaking a few machine cycles here and there in an algorithm, but from a whole new and more compute-efficient algorithm itself.
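The workflow above, sketched concretely (the function name `myAlgorithm` and the data are placeholders, not a real API):

```matlab
x = linspace(0, 2*pi, 1e5);

% Loop form -- readable, but element-by-element
d = zeros(size(x));                 % preallocate first
for k = 1:numel(x)
    d(k) = sin(x(k)) + cos(x(k));
end

% Vectorized form -- same result, dispatched to optimized library code
d = sin(x) + cos(x);

% Only if it's still too slow: profile and attack the hot spots
profile on
result = myAlgorithm(x);            % placeholder for your real program
profile viewer                      % report shows where the time goes
```

The profiler report is what tells you whether tweaking, mex-ing, or rethinking the algorithm is the right next step.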