Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.


# RE: Re: st: Re: cutoff point for ROC curve

From: Joe Canner <[email protected]>
To: "[email protected]" <[email protected]>
Subject: RE: Re: st: Re: cutoff point for ROC curve
Date: Tue, 15 Oct 2013 13:50:57 +0000

```
Mike,

As was discussed yesterday in a different thread, you can use -roccomp- to compare and plot multiple ROC curves.  For example:

. logit outcome predictor1
. lroc
. predict xb_predictor1 if e(sample), xb
. logit outcome predictor2
. lroc
. predict xb_predictor2 if e(sample), xb
. roccomp outcome xb_predictor1 xb_predictor2, graph summary

Of course, you can also compare and plot more than two ROC curves as desired; just repeat the -logit-, -lroc-, -predict- sequence for each model.  The -roccomp- output also reports a chi-squared test of the equality of the ROC areas.

Regards,
Joe Canner
Johns Hopkins University School of Medicine

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Michael Stewart
Sent: Tuesday, October 15, 2013 9:42 AM
To: statalist
Subject: Re: Re: st: Re: cutoff point for ROC curve

Dear Steve and Clyde,
Thank you very much for your time and advice.
I have one additional question and I was hoping to get advice.
If I have multiple models, is there a way to draw multiple ROC curves in one graph, to better demonstrate the predictive abilities of the different models?
Thank you again for your time and effort.

--
Thank you ,
Yours Sincerely,
Mike

On Mon, Oct 14, 2013 at 5:55 PM, Clyde Schechter <[email protected]> wrote:
> I would advise Michael Stewart not to seek some arbitrary formula for
> the optimal cut-off point.  He doesn't say what is being classified,
> but regardless, the substantive issue is the trade-off between two
> types of misclassification errors: false negatives and false
> positives.  Both types of error have consequences, usually different.
> To find an optimal cut-point requires assigning a loss to each type of
> error and then expressing the expected loss in terms of sensitivity,
> specificity and prevalence of the attribute being identified by the
> classification.  Then you pick the cut-off which minimizes the
> expected loss.
>
> My practical experience with this process is that people are often
> reluctant to quantify the losses associated with each type of error,
> because the losses are often of a qualitatively different nature.  For
> example, a missed diagnosis may lead to loss of life, whereas a false
> positive diagnosis may lead to unnecessary surgery.  How does one
> assign values to those?  Not easily.
>
> So it feels more comfortable to seize on some simple formula, such as
> the sum of sensitivity and specificity.  Nevertheless, if you don't
> really quantify and compare the losses associated with each type of
> error, applying some arbitrary formula will give you only the
> illusion, not the reality, of optimality.  One is simply optimizing an
> arbitrary quantity that bears no relation to the matter at hand.
>
> Clyde Schechter
> Dept. of Family & Social Medicine
> Albert Einstein College of Medicine
> Bronx, New York, USA
> *
> *   For searches and help try:
> *   http://www.stata.com/help.cgi?search
> *   http://www.stata.com/support/faqs/resources/statalist-faq/
> *   http://www.ats.ucla.edu/stat/stata/

```
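Clyde's expected-loss argument can be made concrete. With per-error losses L_FN and L_FP (which, as he stresses, must come from substantive judgment, not from the statistics), the expected loss at a cut-off is L_FN x (1 - sensitivity) x prevalence + L_FP x (1 - specificity) x (1 - prevalence), and the optimal cut-off minimizes that quantity. A minimal Stata sketch of a grid search, assuming hypothetical losses of 10 and 1 and the variable names from Joe's example:

```
logit outcome predictor1
predict phat if e(sample), pr

* Hypothetical per-error losses -- these must come from substantive judgment
local L_FN = 10      // loss per false negative
local L_FP = 1       // loss per false positive

quietly count if outcome == 1 & e(sample)
local npos = r(N)

local best = .       // missing is larger than any number in Stata
forvalues c = 1/99 {
    local cut = `c'/100
    * phat < . guards against missing values, which compare as large
    quietly count if phat >= `cut' & phat < . & outcome == 1
    local fn = `npos' - r(N)                  // false negatives at this cut-off
    quietly count if phat >= `cut' & phat < . & outcome == 0
    local fp = r(N)                           // false positives at this cut-off
    local loss = `L_FN'*`fn' + `L_FP'*`fp'    // total loss at this cut-off
    if `loss' < `best' {
        local best = `loss'
        local bestcut = `cut'
    }
}
display "cut-off minimizing expected loss: `bestcut'"
```

Re-running the sketch with different values of L_FN and L_FP makes Clyde's point visible: the "optimal" cut-off moves as the assumed losses change, so the losses, not a generic formula such as maximizing sensitivity + specificity, drive the answer.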
