The Stata listserver

RE: st: RE: Log Likelihood for Linear Regression Models


From   "FEIVESON, ALAN H. (AL) (JSC-SK) (NASA)" <[email protected]>
To   "'[email protected]'" <[email protected]>
Subject   RE: st: RE: Log Likelihood for Linear Regression Models
Date   Thu, 30 Oct 2003 08:38:47 -0600

Leecht - By definition, the likelihood function is just the joint density of
the observations evaluated at their observed values. The log likelihood is
the log of the likelihood function. For a set of independent observations of
an N(xbeta, sigma) random variable, the log likelihood is No. 1 in your
posting. That's because 1/sigma appears as a multiplier in the
normal(xbeta, sigma) density function. Look in any introductory mathematical
statistics book for a discussion of the normal density function. Without
the 1/sigma factor, it won't integrate to 1.
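
In symbols (a standard textbook identity, not a quotation from the book or
from this posting): with y_i ~ N(x_i*beta, sigma^2),

    \ln L(\beta,\sigma)
      = \sum_{i=1}^{N} \ln\!\left[\frac{1}{\sigma}\,\phi\!\left(\frac{y_i - x_i\beta}{\sigma}\right)\right]
      = \sum_{i=1}^{N} \left[\ln\phi\!\left(\frac{y_i - x_i\beta}{\sigma}\right) - \ln\sigma\right],

where \phi(\cdot) is the standard normal density. Dropping the -ln(sigma)
term leaves the maximizer in beta unchanged, but it changes the maximizer
in sigma.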

Al Feiveson
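
A minimal sketch of how equation (1) might be coded as an ml method-lf
evaluator (not the book's exact program; the auto dataset and the variable
names are only placeholders):

    program define mynormal_lf
            version 8
            args lnf xb sigma
            * per-observation log likelihood, as in equation (1):
            * ln normden((y - xb)/sigma) - ln(sigma)
            quietly replace `lnf' = ln(normden(($ML_y1-`xb')/`sigma')) - ln(`sigma')
    end

    sysuse auto, clear
    ml model lf mynormal_lf (xb: mpg = weight length) (sigma:)
    ml maximize

Omitting the - ln(`sigma') term while still asking ml to estimate sigma is
what produces the "not concave" behavior described in the quoted posting
below.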



-----Original Message-----
From: leechtcn [mailto:[email protected]]
Sent: Thursday, October 30, 2003 8:31 AM
To: [email protected]
Subject: Re: st: RE: Log Likelihood for Linear Regression Models


Dear Al Feiveson,

   Thanks for your comments, but I am still lost. Can you give me some
references? I can only find No. 2 in some textbooks!

Thanks again

Leecht


--- "FEIVESON, ALAN H. (AL) (JSC-SK) (NASA)"
<[email protected]> wrote:
> Leecht - 
> 
> No. 1 is the true log likelihood. The second is OK to use as a "log
> likelihood" for purposes of maximization with respect to beta, since
> log(sigma) is just an additive constant. But when estimating beta AND
> sigma, you need the other term.
> 
> Al Feiveson
> 
> -----Original Message-----
> From: leechtcn [mailto:[email protected]]
> Sent: Thursday, October 30, 2003 5:43 AM
> To: [email protected]
> Subject: st: Log Likelihood for Linear Regression
> Models
> 
> 
> Dear Listers,
> 
> I have asked this question before. I am posting it a
> second time in case you have not received it.
> 
> I am sorry for any inconvenience caused!
> 
> I have a question concerning William Gould and William Sribney's
> "Maximum Likelihood Estimation" (1st edition).
> 
> On page 29, the authors write the following lines:
> 
>    For instance, most people would write the log likelihood for the
> linear regression model as:
> 
>    lnL = SUM( ln(normden((yi - xi*beta)/sigma)) - ln(sigma) )     (1)
> 
> But in most econometrics textbooks, such as William Greene's, the log
> likelihood for a linear regression is only:
> 
>    lnL = SUM( ln(normden((yi - xi*beta)/sigma)) )                 (2)
> 
> that is, the last term is dropped.
> 
> I have also tried to use (2) in Stata; it gives a "not concave"
> message. In my Monte Carlo experiments, (1) always gives reasonable
> results.
> 
> Can somebody tell me why there is a difference between Stata's log
> likelihood and the one in the other textbooks?
> 
> thanks a lot
> 
> Leecht


*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/