



RE: st: Testing equality of odds ratios outside of logistic regression


From: "Mak, Timothy" <[email protected]>
To: "[email protected]" <[email protected]>
Subject: RE: st: Testing equality of odds ratios outside of logistic regression
Date: Mon, 10 May 2010 12:52:34 +0100

Dear Michael, 

I'm not sure the proposed tests answer your question. You want to know (or 'test') whether several predictors predict the outcome equally well, but the tests offered test whether the odds ratios are the same. To me, equal odds ratios neither imply nor are implied by equally good prediction.

Prediction is usually assessed by validation: 

This requires a validation dataset and a training dataset. 
Suppose you use your training data to estimate the following: 
Odds when predictor A +
Odds when predictor A -
Odds when predictor B +
Odds when predictor B - 
etc. 

Now you need to apply a decision rule to these odds: 
e.g. Predict outcome + if odds > 1; Predict outcome - otherwise

Now say you have a validation dataset obtained elsewhere. One possible assessment strategy is: score 1 for each correct prediction and 0 for each incorrect one. Comparing the scores across the different predictors then gives you a comparison of prediction performance. 

The point of this rather pedantic example is that to compare prediction you need a few more things than an odds ratio: (1) a decision rule, (2) a scoring system, and (3) a validation dataset. 
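
For concreteness, here is a rough Stata sketch of (1)-(3). The variable names -outcome-, -predA- and the 0/1 indicator -train- (marking the training observations) are made up, and the 0.5 cutoff is just the "odds > 1" rule above, not a recommendation:

*****************
logit outcome predA if train                  // fit on the training data only
predict phat if !train, pr                    // predicted probabilities in the validation data
gen byte guess = phat > 0.5 if !train         // decision rule: predict + if odds > 1 (p > 0.5)
gen byte score = guess == outcome if !train   // score 1 if correct, 0 if not
summarize score                               // mean score = proportion predicted correctly
*****************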

Cross-validation is often used to overcome (3). For example, leave-one-out cross-validation means that you repeat the above, leaving out one observation at a time from the training dataset and using the left-out observation as your validation data. 
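
As a rough sketch (same made-up variable names as above), leave-one-out cross-validation in Stata could look something like this:

*****************
gen double phat_cv = .
forvalues i = 1/`=_N' {
    quietly logit outcome predA if _n != `i'      // fit on everything except observation i
    quietly predict double tmp if _n == `i', pr   // predict only the left-out observation
    quietly replace phat_cv = tmp if _n == `i'
    drop tmp
}
gen byte correct = (phat_cv > 0.5) == outcome
summarize correct                                 // mean = leave-one-out proportion correct
*****************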

Frequently, (1) and (2) can be handled together by a scoring rule that works directly on the predicted probabilities, e.g.: Score = -(actual outcome - predicted probability)^2
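
With the cross-validated probabilities -phat_cv- from the sketch above, that rule is just:

*****************
gen double score2 = -((outcome - phat_cv)^2)  // quadratic (Brier-type) scoring rule
summarize score2                              // closer to 0 is better
*****************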

One more thing: you usually want a scoring system that reflects the cost of making a wrong decision. Sometimes a false positive is much worse than a false negative, or vice versa. Decision rules are often designed to hold either the false-positive or the false-negative probability at a fixed level while minimizing the other. 
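
For example, you could tabulate the two error rates at a candidate cutoff (again only a sketch, using the made-up names above):

*****************
gen byte guess_cv = phat_cv > 0.5
tab outcome guess_cv, row   // the outcome-0 row gives the false-positive rate,
                            // the outcome-1 row gives the false-negative rate
*****************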

I hope this helps with your question. 

Yours, 
Tim



-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Steve Samuels
Sent: 10 May 2010 01:18
To: [email protected]
Subject: Re: st: Testing equality of odds ratios outside of logistic regression

Actually Michael M.'s response answers your question, because you did
not ask for a simultaneous test.

Steve

On Sun, May 9, 2010 at 8:16 PM, Steve Samuels <[email protected]> wrote:
> You can use -logistic- and test the interactions or -cc- with a test
> for homogeneity
>
> *****************
> use http://www.stata-press.com/data/r11/bdesop, clear
> des
> gen tobac20 = tobacco <=2
> * interaction test: are the tobac20 odds ratios equal across alcohol levels?
> xi: logistic case i.tobac20*i.alcohol [fw=freq]
> testparm _ItobXa*
> * or the Breslow-Day (bd) test of homogeneity of odds ratios:
> cc case tobac20 [fw=freq], by(alcohol) bd
> **********************
> or
> On Sun, May 9, 2010 at 7:49 PM, Michael I. Lichter <[email protected]> wrote:
>> I'm assisting on a paper where we examine the relationship between each of
>> four dichotomous predictor variables and one dichotomous outcome variable.
>> Prediction is our primary objective. The predictors are all measures of more
>> or less the same thing, and we want to know whether, *without controlling
>> for any of the others*, they predict the outcome equally well. We want to be
>> able to say, "If you could only pick one of these variables as a predictor
>> of the outcome, it wouldn't make any difference which one you selected."
>>
>> For each of the predictors we calculate an odds ratio and a corresponding
>> confidence interval. The odds ratios are very similar in magnitude and have
>> confidence intervals that overlap almost entirely. We did not do any formal
>> tests, not knowing of any offhand, and, because this isn't a central point,
>> we didn't think it was very important. When we reported that the odds ratios
>> were essentially equal, a reviewer objected that we had not tested for
>> equality. Any suggestions?
>>
>> In logistic regression, by the way, two of the four variables emerged as
>> significant predictors and two did not, controlling for the others. That is
>> of interest, but it doesn't answer my initial question. At least, I don't
>> think it does.
>>
>
>
>
> --
> Steven Samuels
> [email protected]
> 18 Cantine's Island
> Saugerties NY 12477
> USA
> Voice: 845-246-0774
> Fax:    206-202-4783
>



-- 
Steven Samuels
[email protected]
18 Cantine's Island
Saugerties NY 12477
USA
Voice: 845-246-0774
Fax:    206-202-4783


