st: Trouble with the penalty of a maximum penalized likelihood estimation (MPLE)


From   [email protected]
To   [email protected]
Subject   st: Trouble with the penalty of a maximum penalized likelihood estimation (MPLE)
Date   Sun, 5 Sep 2010 13:23:13 +0200 (CEST)

Hello everybody,
I am trying to use an MPLE for smoothing. My model is based on
B-splines, equally spaced knots and difference penalties, as suggested by
Eilers and Marx (1996, 2010). The optimal smoothing parameter lambda
(which determines the influence of the difference penalty on the likelihood)
is chosen by cross-validation (CV). What should happen: starting with
a lambda of zero, there is no penalty on the ML and therefore overfitting.
As lambda increases, the estimated coefficients should move away from the
overfitted solution, because overfitting is now penalized.
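(For reference, the penalized log likelihood I have in mind is the usual
P-spline form from Eilers and Marx (1996),

    l_p(beta) = l(beta) - (lambda/2) * beta' D'D beta ,

where D is the matrix of d-th order differences of adjacent B-spline
coefficients; lambda = 0 gives the unpenalized ML fit, and a very large
lambda should force the fit towards a polynomial of degree d-1.)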
My problem is this: I start the CV with a lambda of zero, which gives the
overfitted fit, and then increase lambda step by step. The log likelihood
of the ML changes, but the estimated coefficients hardly change (more
precisely, they change only in the third or fourth decimal place), so
overfitting is still not appropriately penalized. And before the penalty
has any real influence on my coefficients/estimates, lambda (the penalty)
seems to get too large, because Stata complains that the ML cannot
converge, even when the -difficult- option is used.
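For concreteness, here is a minimal sketch of the kind of method-d0
evaluator I have in mind (simplified, with purely illustrative names, not
my actual code; the d-th order difference matrix D and the scalar lambda
are assumed to be defined before -ml model- is called):

*-----------------------------------------------------------------
* Minimal method-d0 evaluator: Gaussian model on a B-spline basis
* (no constant), with a difference penalty on the spline
* coefficients.  Assumes the Stata matrix D (d-th order difference
* matrix) and the scalar lambda have been defined beforehand.
*-----------------------------------------------------------------
program define pspline_d0
    version 11
    args todo b lnf
    tempvar xb
    mleval `xb' = `b', eq(1)              // fitted values B*beta
    tempname beta lnf0 q pen
    * unpenalized Gaussian log likelihood (sigma fixed at 1, purely
    * for illustration)
    mlsum `lnf0' = -0.5*ln(2*_pi) - 0.5*($ML_y1 - `xb')^2
    if (`lnf0' >= .) {
        scalar `lnf' = .
        exit
    }
    * difference penalty:  (lambda/2) * beta' D'D beta
    matrix `beta' = `b'
    matrix `q'    = `beta' * D' * D * `beta''
    scalar `pen'  = 0.5 * scalar(lambda) * `q'[1,1]
    scalar `lnf'  = `lnf0' - `pen'
end

The model would then be declared with something like
-ml model d0 pspline_d0 (y = bs1-bs20, nocons)- followed by
-ml maximize, difficult-, and the CV loop just redefines the scalar
lambda and refits.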
Any ideas?
If you think it would be helpful to see my do/ado-files, just tell me and
I will post them.

Thanks,
Daniel Koch

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

