This is really just a question about understanding basic optimization and nonlinear regression methods. In fact, you can find a very good description of Levenberg-Marquardt in doc lsqnonlin. A careful read of that will give you much information. (Look under "Least-Squares (Model Fitting) Algorithms" in the docs for lsqnonlin.)
Some basic comments:
- No, lsqnonlin does not search "only" near the start point. It is an optimization scheme, based on a low-order local approximation to the nonlinear function you are trying to fit. This is applied iteratively until the search converges to a solution.
- There is no limit on how far the search can proceed. Iterations continue as long as useful improvement in the objective is seen, that is, as long as the norm of the residual vector decreases. When no further improvement in the residuals is possible, the search has arrived at (at least) a local minimizer of the error metric. Again, this is basic optimization.
- Must such a search always converge to a solution? Of course not. Divergence can occur, where some or all of the parameters wander out towards infinity. Again, you would benefit greatly from reading up on the theory behind basic optimization methodologies.
- Will the search find the globally best solution? Again, no. Any such search is no better than the quality of the starting values you chose to provide. If you start in the wrong place, or too far away, the optimization should usually converge to SOME local minimizer, but there is no assurance it will be the one you want to see.
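To make those last points concrete, here is a toy sketch of the Levenberg-Marquardt idea, written in plain Python rather than MATLAB so it is self-contained. This is NOT lsqnonlin's code, just the basic damped Gauss-Newton loop for a hypothetical one-parameter model y = sin(b*x). Note how the answer you get depends entirely on where you start: a frequency-fitting problem like this has many local minima.

```python
import math

def lm_fit_freq(xs, ys, b0, iters=200):
    """Toy Levenberg-Marquardt for the one-parameter model y = sin(b*x).
    A minimal sketch of the idea behind LM, not lsqnonlin itself."""
    def sse(b):  # sum of squared residuals: the error metric being minimized
        return sum((math.sin(b * x) - y) ** 2 for x, y in zip(xs, ys))
    b, lam, f = b0, 1e-3, sse(b0)
    for _ in range(iters):
        # Linearize locally: residual r_i = sin(b*x_i) - y_i,
        # Jacobian J_i = d(r_i)/db = x_i * cos(b*x_i)
        JtJ = Jtr = 0.0
        for x, y in zip(xs, ys):
            J = x * math.cos(b * x)
            JtJ += J * J
            Jtr += J * (math.sin(b * x) - y)
        step = -Jtr / (JtJ + lam)   # damped Gauss-Newton step
        f_new = sse(b + step)
        if f_new < f:               # improvement: accept, reduce damping
            b, f, lam = b + step, f_new, lam / 10
        else:                       # no improvement: reject, increase damping
            lam *= 10
        if abs(step) < 1e-12:       # no useful step remains: a local minimizer
            break
    return b

xs = [i / 10 for i in range(1, 50)]
ys = [math.sin(2.0 * x) for x in xs]   # noiseless data generated with b = 2
good = lm_fit_freq(xs, ys, 1.8)        # start near the truth: converges to b near 2
bad = lm_fit_freq(xs, ys, 7.0)         # start far away: stops at some other
                                       # stationary point, with no error raised
```

Both calls "converge" in the sense that the residual norm stops improving, but only the well-started one recovers the generating parameter. lsqnonlin behaves the same way: it cannot tell a local minimizer from the one you wanted.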
So I would strongly recommend you do some outside reading on the subject of optimization in general, and perhaps focus on a basic method used by lsqnonlin: Levenberg-Marquardt. (Reading about the other methods in lsqnonlin will probably get you in deeper than you really need to go at this point, without giving you much more usable information.)
So you might read this, but there are many very good texts on optimization to be found, and on nonlinear regression.
