
Re: st: OLS assumptions not met: transformation, gls, or glm as solutions?


From   Nick Cox <[email protected]>
To   [email protected]
Subject   Re: st: OLS assumptions not met: transformation, gls, or glm as solutions?
Date   Mon, 17 Dec 2012 18:46:46 +0000

GLM can mean the "general linear model" or "generalised linear
models", which are not identical sets.

Some statistical software has commands for the former -- arguably
-regress- in Stata is general enough to make one unnecessary -- but in
a Stata context -glm- (with -cmdname- notation) means the generalized
linear models command, for which a conditional normal distribution is
but one allowed flavour of error behaviour; other distribution
families are well supported. Some of those families will be
heteroscedastic (e.g. the Poisson).
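
A minimal sketch of the distinction, for illustration only (uses the
auto dataset shipped with Stata; the variable choices are arbitrary):

sysuse auto, clear

// Gaussian family with identity link matches the -regress- point estimates
glm price weight, family(gaussian) link(identity)
regress price weight

// Poisson family: variance equals the mean, heteroscedastic by construction
glm rep78 weight, family(poisson) link(log)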

I don't know that any "research" is needed here. Reading the
documentation will suffice.

Nick

On Mon, Dec 17, 2012 at 5:32 PM, Laura R. <[email protected]> wrote:
> Thank you all for your help. I am still a bit confused, because now I
> read that homoscedasticity and normality of residuals are assumptions
> that have to be met with GLM as well. But I will look further into
> that type of model to find out whether it works better in my case
> than OLS.
>
> Laura
>
>
>
> 2012/12/17 Ryan Kessler <[email protected]>:
>> The User's Guide is a great place to start. Maarten's point can also
>> be illustrated via simulation:
>>
>> capture program drop ols_sim
>> program define ols_sim, rclass
>>         version 12
>>         syntax [, NONCONstant robust]
>>         set obs 300
>>         tempvar y x
>>         // three equal-sized groups of the regressor
>>         gen `x' = 1 in 1/100
>>         replace `x' = 2 in 101/200
>>         replace `x' = 3 in 201/300
>>
>>         // with -nonconstant-, the error sd grows with x (heteroskedastic);
>>         // otherwise it is constant at 1 (homoskedastic); true slope is 1
>>         if "`nonconstant'" != "" gen `y' = rnormal(`x', `x'^2)
>>         else gen `y' = rnormal(`x', 1)
>>
>>         reg `y' `x', `robust'
>>         return scalar beta1 = _b[`x']
>>         test `x' = 1
>>         return scalar pv = r(p)
>> end
>>
>> clear
>> local reps = 1000
>> // exact binomial CI for a nominal 5% rejection rate over `reps' tests
>> cii `reps' `=`reps'*0.05'
>> local v_lb = round(r(lb), 0.001)
>> local v_ub = round(r(ub), 0.001)
>>
>> simulate beta=r(beta1) pv=r(pv), reps(`reps'): ols_sim, nonconstant robust
>> qui count if pv <= 0.05
>> local rej_rate = r(N)/`reps'
>> di "Rejection rate = `rej_rate'     [`v_lb', `v_ub']"
>>
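>> For contrast, a sketch not in the original run: rerun the same
>> program without -robust- and the test over-rejects, landing well
>> outside the interval above.
>>
>> clear
>> simulate pv=r(pv), reps(`reps'): ols_sim, nonconstant
>> qui count if pv <= 0.05
>> di "Rejection rate without -robust- = " r(N)/`reps'
>>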
>> Hope this helps!
>>
>> Ryan
>>
>> On Mon, Dec 17, 2012 at 10:27 AM, Maarten Buis <[email protected]> wrote:
>>> On Mon, Dec 17, 2012 at 4:17 PM, Carlo Lazzaro wrote:
>>>> The main point of my example is that you cannot be sure that invoking
>>>> -robust- automatically removes heteroskedasticity. In other words,
>>>> homoskedasticity should be checked graphically even after -robust-.
>>>
>>> Robust standard errors do not change the coefficients; only the
>>> standard errors change. So the predicted values and residuals will
>>> also remain unchanged after you have specified the -vce(robust)-
>>> option. The whole point of robust standard errors is not that they
>>> "solve" heteroskedasticity in some way; they make that "assumption"
>>> irrelevant. For more, see section 20.20 of the User's Guide.
>>>
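>>> A minimal check of that point (a sketch, assuming the auto dataset
>>> shipped with Stata; any regression would do):
>>>
>>> sysuse auto, clear
>>> regress price weight                  // classical standard errors
>>> regress price weight, vce(robust)     // same coefficients, different SEs
>>> predict res, residuals                // residuals unchanged either way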