The Stata listserver

Re: st: Log Likelihood for Linear Regression Models


From   David Greenberg <[email protected]>
To   [email protected]
Subject   Re: st: Log Likelihood for Linear Regression Models
Date   Thu, 30 Oct 2003 22:52:29 -0500

Equation 1 is correct. The reason some writers drop the second term is this: for most purposes, the likelihood function or its log is not of interest in itself. One might want to compare two likelihoods, or to find the parameter values that maximize the likelihood function. If one compares two log likelihoods by taking their difference, constant terms drop out. If one finds the maximizing parameter values by taking the first derivative and setting it equal to zero, a constant term has a first derivative of zero, no matter what the value of the constant.

In your example, sigma represents the standard deviation in the population. It is simply a number, which is assumed to be known. For the purpose of estimating the coefficients of a regression equation it is irrelevant and can be disregarded. One might as well drop it, at the potential cost of confusing some students.

David Greenberg, Sociology Department, New York University
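
Written out in the notation of the question below, with phi for the standard normal density (Stata's normden()) and N for the number of observations, the argument is:

    \[
    \ln L(\beta,\sigma)
      \;=\; \sum_{i=1}^{N}\left[\ln\phi\!\left(\frac{y_i - x_i\beta}{\sigma}\right) - \ln\sigma\right]
      \;=\; \sum_{i=1}^{N}\ln\phi\!\left(\frac{y_i - x_i\beta}{\sigma}\right) \;-\; N\ln\sigma .
    \]

When sigma is a known constant, the -N ln(sigma) term does not involve beta, so it has no effect on the values of beta that maximize ln L and can be dropped, giving equation (2). When sigma is estimated along with beta, the term depends on a parameter and has to be kept, as in equation (1).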

----- Original Message -----
From: leechtcn <[email protected]>
Date: Thursday, October 30, 2003 6:42 am
Subject: st: Log Likelihood for Linear Regression Models

> Dear Listers,
> 
> I have asked this question before. I am posting it a
> second time in case you have not received it.
> 
> I am sorry for any inconvenience caused!
> 
> I have a question concerning William Gould and William
> Sribney's "Maximum Likelihood Estimation" (1st
> edition):
> 
> 
> On page 29, the authors write the following
> lines:
> 
>   For instance, most people would write the log
> likelihood for the linear regression model as:
> 
>  lnL = SUM_i[ ln(normden((y_i - x_i*beta)/sigma)) - ln(sigma) ]    (1)
> 
> But in most econometrics textbooks, such as William
> Greene's, the log likelihood for a linear regression is
> given only as:
> 
>  lnL = SUM_i[ ln(normden((y_i - x_i*beta)/sigma)) ]                (2)
> 
> 
> that is, the last term is dropped.
> 
> I have also tried to use (2) in Stata; it gives a
> "not concave" error message. In my Monte Carlo
> experiments, (1) always gives reasonable results.
> 
> Can somebody tell me why there is a difference between
> Stata's log likelihood and the one in these
> textbooks?
> 
> thanks a lot
> 
> Leecht
> 
> 
> 
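For anyone who wants to reproduce the comparison, here is a minimal sketch of a method-lf evaluator for equation (1), along the lines of the lf-method examples in Gould and Sribney's book. The program name mynormal_lf and the variables y, x1, and x2 are placeholders, not taken from the original post:

    program define mynormal_lf
            version 8
            args lnf xb sigma
            * per-observation log likelihood, equation (1):
            * ln of the standard normal density of (y - xb)/sigma, minus ln(sigma)
            quietly replace `lnf' = ln(normden(($ML_y1 - `xb')/`sigma')) - ln(`sigma')
    end

    * y is the dependent variable, x1 and x2 the regressors (placeholders)
    ml model lf mynormal_lf (xb: y = x1 x2) /sigma
    ml maximize

Dropping the - ln(`sigma') term from the replace line turns this into equation (2). Because sigma is a parameter being estimated here rather than a known constant, that objective omits part of the log likelihood, which is consistent with the "not concave" behavior reported above.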

*
*   For searches and help try:
*   http://www.stata.com/support/faqs/res/findit.html
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


