
Re: st: Comparing coefficients across sub-samples


From   "Fitzgerald, James" <[email protected]>
To   "<[email protected]>" <[email protected]>
Subject   Re: st: Comparing coefficients across sub-samples
Date   Fri, 3 Aug 2012 01:51:23 +0000

David, 

One final thing: if I use this formula, how would I reference or justify my use of it? Is it a basic formula I'd find in an econometrics text, does it have a specific author, or is it founded on a particular theorem?

Any help you can give is much appreciated.

James

On 3 Aug 2012, at 02:26, "David Hoaglin" <[email protected]> wrote:

> James,
> 
> The formula that Lisa gave is not Welch's t-test formula.  Welch's
> formula produces a t-test on the difference between the two underlying
> means, and the expression inside the square-root sign is (s1^2)/n1 +
> (s2^2)/n2, where s1^2 and s2^2 are the respective sample variances.
> (Those sample variances are also used in calculating the number of
> degrees of freedom for the t-test, which will generally not be an
> integer.)
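> 
> (Written out, with xbar1 and xbar2 denoting the two sample means, that
> statistic is t = (xbar1 - xbar2) / √((s1^2)/n1 + (s2^2)/n2).)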
> 
> In "Lisa's formula" the denominator is the square root of [se(B1)]^2 +
> [se(B2)]^2, which can be written var(b1) + var(b2).  It uses the fact
> that, since B1 and B2 come from different groups (and can be assumed
> to be independent), var(B1 - B2) = var(B1) + var(B2).  Lisa is
> replacing the true variances by their estimates from the two
> regressions (the squares of the respective standard errors) and
> referring z to the standard normal distribution.  The standard errors
> of B1 and B2 already incorporate n1 and n2, along with other features
> of the regression data.
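> 
> A minimal Stata sketch of that hand calculation (the names y, x, and
> group are placeholders; substitute your own outcome, predictor, and
> group indicator):
> 
>    * fit the same model in each sub-sample; keep the coefficient and se on x
>    regress y x if group == 1
>    scalar b1  = _b[x]
>    scalar se1 = _se[x]
>    regress y x if group == 2
>    scalar b2  = _b[x]
>    scalar se2 = _se[x]
>    * z = (B1 - B2)/sqrt(var(B1) + var(B2)), referred to the standard normal
>    scalar z = (b1 - b2)/sqrt(se1^2 + se2^2)
>    display "z = " z "   two-sided p = " 2*(1 - normal(abs(z)))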
> 
> David Hoaglin
> 
> On Thu, Aug 2, 2012 at 7:42 PM, Fitzgerald, James <[email protected]> wrote:
>> Lisa,
>> Do I need to divide the squared standard errors by n of each sample?
>> The formula you provided appears to be Welch's t-test formula, but Welch's formula would be:
>> z = (B1 - B2) / √(seB1^2/n1 + seB2^2/n2)
>> Welch, B. L. (1947). "The generalization of 'Student's' problem when several different population variances are involved." Biometrika 34 (1-2): 28-35.
>> Regards
>> James
>> 
>> 
>> ________________________________________
>> From: Lisa Marie Yarnell [[email protected]]
>> Sent: 01 August 2012 04:29
>> To: [email protected]; Fitzgerald, James
>> Subject: Re: st: Comparing coefficients across sub-samples
>> 
>> Hi James,
>> 
>> Typically the effect of a predictor in two different groups can be compared with the unstandardized beta. You can do a statistical test of the difference in the betas using the z-score formula below.  I usually just calculate the difference between unstandardized betas from two different models by hand, though Stata might have a command to do this for you.  Is that what you are looking for: the Stata command?
>> 
>>    z = (b1 - b2) / √(se(b1)^2 + se(b2)^2)
>>
>> where b1 and b2 are the unstandardized regression weights whose
>> difference you want to test, and se(b1) and se(b2) are their standard
>> errors, found next to the weights themselves in your SPSS output.
>> Remember to square the standard errors and to take the square root of
>> the entire sum in the denominator.
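>>
>> If you do want Stata to do the test for you, one route (again a
>> sketch, with y, x, and group as placeholder names) is to combine the
>> two fits with -suest- and test equality of the coefficients:
>>
>>    regress y x if group == 1
>>    estimates store m1
>>    regress y x if group == 2
>>    estimates store m2
>>    suest m1 m2
>>    test [m1_mean]x = [m2_mean]x
>>
>> (For models stored from -regress-, -suest- names the equations
>> m1_mean and m2_mean.  Because -suest- reports robust standard errors,
>> its result can differ slightly from the hand calculation.)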
>> 
>> In terms of comparing the *magnitude* of the effect in the two different subsamples, it is more correct to do this qualitatively by comparing the *standardized* beta for the variable of interest against effect size rules of thumb for small/medium/large (which sometimes differ by discipline, such as social sciences/education/engineering).  Just report the standardized beta as the effect size in each group; it would be a qualitative statement about the effect in each group.
>> 
>> Here are rules that I have:
>> Standardized regression coefficients:
>> * Keith’s (2006) rules for effects on school learning: .05 = too small
>> to be considered meaningful, above .05 = small but meaningful effect,
>> .10 = moderate effect, .25 = large effect.
>> * Cohen’s (1988) rules of thumb: .10 = small, .30 = medium, .50 or
>> greater = large.
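>>
>> In Stata, the standardized coefficients can be listed with the beta
>> option of -regress- (again a sketch with placeholder names):
>>
>>    regress y x if group == 1, beta
>>    regress y x if group == 2, beta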
>> 
>> Lisa
> 

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/

