Notice: On March 31, it was **announced** that Statalist is moving from an email list to a **forum**. The old list will shut down on April 23, and its replacement, **statalist.org**, is already up and running.


**From:** Maarten Buis <maartenlbuis@gmail.com>

**To:** statalist@hsphsun2.harvard.edu

**Subject:** st: Re: Yet another question regarding Stata Tip #87

**Date:** Mon, 24 Sep 2012 11:07:30 +0200

--- Plummer, Lawrence A. asked:

> In your tip #87, the additive results indicate "the marginal effect of
> collgrad is thus larger for white women than for black women." However, the
> difference between the two marginal effects (0.114) is not statistically
> significant based on the following test:
>
> lincom [0.black#1.collgrad - 0.black#0.collgrad] - [1.black#1.collgrad -
> 1.black#0.collgrad]
>
> How much does the non-significance matter? Is this not a test of how much
> the effect of collgrad differs between black and white women in additive
> terms?

I'll split this question up into two parts: 1) Does such significance matter in general? 2) Why did I choose this example?

Regarding question 1, the answer is yes, in as much as you would in general care about significance. The caveat refers to the fact that many people rely on significance far too much. Significance is just one way(*) to quantify one type of uncertainty(**) about the coefficient. Significance is not evil, but it does not relieve users of their responsibility to think carefully about their results. Too often I get the impression that people think a result can only be "scientific" if it is statistically significant. To me, "scientific" is just a style of argument that relies on a combination of observation and logic, so that reasonable people can have a productive discussion about it. So statistical significance can have a place in a scientific argument, but it is not strictly necessary, and it is certainly not the main criterion on which to judge the "scientificness" of an argument.

Regarding question 2, the answer is just convenience. It is a dataset that all Stata users have easy access to, so they can replicate and play with my example, and it exhibits the pattern in the parameters I wanted to show.
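For readers who want to replicate this kind of test themselves, here is a minimal sketch in Stata. It assumes the nlsw88 dataset shipped with Stata and a logit of union membership on black interacted with collgrad; the exact model in tip #87 may differ, so treat this as an illustration of the technique rather than a reproduction of the tip.

```stata
* Hedged sketch (model specification is an assumption, not taken
* verbatim from tip #87): fit a logit on the nlsw88 data and test
* whether the additive (probability-scale) effect of collgrad
* differs between black and white women.
sysuse nlsw88, clear
logit union i.black##i.collgrad

* Average marginal effect of collgrad within each group:
margins black, dydx(collgrad)

* Contrast of the two marginal effects, with a test of the null
* that the difference is zero -- the same quantity the lincom in
* the question targets:
margins r.black, dydx(collgrad)
```

Using the `r.` contrast operator inside `margins` avoids having to spell out the four coefficient combinations by hand, as the `lincom` in the question does.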
Best,
Maarten

(*) It is the probability of rejecting the null hypothesis when you should not.

(**) It assumes that the only source of uncertainty comes from the fact that you have drawn a random sample, so it is possible that you have drawn a weird sample by chance. This is almost never the only reason why we are uncertain about our results. There can even be situations where it is not the main reason why we are uncertain about our coefficients, for example when we have census data and thus the population rather than a sample.

---------------------------------
Maarten L. Buis
WZB
Reichpietschufer 50
10785 Berlin
Germany
http://www.maartenbuis.nl
---------------------------------

*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/

**Follow-Ups**:
- **st: Re: Yet another question regarding Stata Tip #87**
  - *From:* Maarten Buis <maartenlbuis@gmail.com>
