

Re: st: Estimating program impact

From   David Hoaglin <>
Subject   Re: st: Estimating program impact
Date   Wed, 6 Jun 2012 20:42:28 -0400

Dear Armen,

Usually baseline data are collected before an intervention has an
opportunity to affect the outcome variable.  Thus, it seems unlikely
that the difference at baseline is due to the intervention.

You mentioned a "control group."  If the experimental study used some
form of randomization to assign people (?) to the intervention group
and the control group, one would not expect the two groups to differ
at baseline.  (Random assignment does not guarantee absence of
differences at baseline for every possible assignment, but such
differences should be absent on average.)

If your study did not use random assignment (which may not have been
feasible), it would be better to use the term "comparison group" and
save "control group" for studies that are randomized.

In a non-randomized study, the difference at baseline means that the
two groups were not comparable at baseline.  You must take that lack
of comparability into account before you can reach any conclusion
about the effect of the program.  The difference of differences is one
way of doing that.  In some situations, it is appropriate to use the
baseline value of the outcome variable as a covariate (and adjust for
the baseline difference).

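For concreteness, the arithmetic behind the difference of differences is
just the change in the intervention group minus the change in the
comparison group.  A toy sketch (in Python rather than Stata, with
made-up group means that are not from your study) illustrates it:

```python
# Difference-in-differences from four group means:
# the change in the treated group minus the change in the comparison group.
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical means: treated group moves 52 -> 70, comparison 45 -> 55.
# Raw end-line gap (70 - 55 = 15) overstates the effect because the
# groups already differed by 7 at baseline; DiD nets that out.
effect = diff_in_diff(52.0, 70.0, 45.0, 55.0)
print(effect)  # 8.0
```

In a regression framework the same quantity is the coefficient on the
group-by-time interaction, which also lets you attach a standard error
to it.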
David Hoaglin

On Wed, Jun 6, 2012 at 6:10 PM, Armen Martirosyan
<> wrote:
> Dear Statalist
> This is more a statistics question than a Stata question, but I would
> appreciate it if somebody could share any thoughts. I am analyzing data
> from an experimental study with one intervention group and one control
> group. For several variables I have detected a statistically significant
> difference between the intervention and control groups at the baseline
> measurement, but no statistically significant difference at the end-line
> measurement. Can I infer that the difference at baseline is actually a
> program effect, or do I still need to do a difference-in-differences
> (DiD) analysis? I think I can make that inference without DiD, but I
> would appreciate it if somebody shared any thoughts on this or any
> previous experience with this type of results.

