From: Gordon Hughes <G.A.Hughes@ed.ac.uk>
To: statalist@hsphsun2.harvard.edu
Subject: st: Mata performance monitoring
Date: Mon, 07 Feb 2011 17:15:59 +0000
However, I would like to find out where the pay-off from improving the code and reducing memory usage would be greatest. Some systems offer tools that let the user track how much time and memory is consumed by each call to a function or sub-program, so that one can compile statistics on which operations are potential bottlenecks. In any ML program the calls to the likelihood function take the most time, but I would like to know whether significant time and/or memory is being spent in other sub-programs.
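The closest substitute I have found so far is manual instrumentation with Mata's timer suite (see [M-5] timer()): bracketing each suspect call with timer_on()/timer_off() accumulates elapsed time and a call count per timer, and typing -mata memory- reports overall memory use, though not per function. A minimal sketch of what I mean, where simfn() and profile_demo() are just hypothetical stand-ins for my own routines:

real colvector simfn(real scalar n)
{
    // hypothetical stand-in for an expensive sub-program
    return(sort(runiform(n, 1), 1))
}

void profile_demo()
{
    real scalar i

    timer_clear()                      // reset all 100 Mata timers
    for (i = 1; i <= 50; i++) {
        timer_on(1)                    // start accumulating on timer 1
        (void) simfn(10000)
        timer_off(1)                   // stop; time and count accrue to timer 1
    }
    timer()                            // report elapsed seconds and call counts
}

This still requires deciding in advance where the bottlenecks might be and placing the timers by hand, which is exactly what a real profiler would avoid.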
I am not aware of similar tools for Mata, or of ways in which such information could be compiled. Can anyone suggest how I might do this? As a corollary, is there any conventional wisdom about which of the optimisation methods is best under such circumstances? Conventional wisdom in the past would have pointed to Newton-Raphson (nr), but this assumed that one could program the analytical gradient and Hessian. Which method is most efficient for large datasets with numerical rather than analytical derivatives?
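For concreteness, the kind of head-to-head comparison I have in mind, using optimize() with a d0 evaluator so that the gradient and Hessian are computed numerically, and timing each technique in turn. Here myeval() is only a toy quadratic standing in for a real likelihood, and compare_techniques() is a hypothetical wrapper:

void myeval(real scalar todo, real rowvector b,
            real scalar crit, real rowvector g, real matrix H)
{
    // toy criterion standing in for a log likelihood
    crit = -((b[1] - 1)^2 + (b[2] + 2)^2)
}

void compare_techniques()
{
    transmorphic S
    string rowvector tech
    real scalar i

    tech = ("nr", "dfp", "bfgs")
    for (i = 1; i <= cols(tech); i++) {
        S = optimize_init()
        optimize_init_evaluator(S, &myeval())
        optimize_init_evaluatortype(S, "d0")   // no analytical derivatives supplied
        optimize_init_technique(S, tech[i])
        optimize_init_params(S, (0, 0))
        timer_clear()
        timer_on(1)
        (void) optimize(S)
        timer_off(1)
        // timer_value(i) returns (seconds, count) for timer i
        printf("%s: %8.3f seconds\n", tech[i], timer_value(1)[1])
    }
}

With a toy problem like this the differences are of course negligible; the question is how the techniques compare on a real likelihood over a large dataset.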
Gordon Hughes
g.a.hughes@ed.ac.uk