
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.



Re: st: info stata


From   Roger Newson <[email protected]>
To   "[email protected]" <[email protected]>
Subject   Re: st: info stata
Date   Fri, 27 May 2011 11:52:59 +0100

In reply to your first question, Newson (2010) gives some guidelines for the validation, in a test set, of predictive scores developed in a training set.
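The usual workflow is along these lines (a minimal sketch, assuming hypothetical variable names: a binary outcome -default-, predictors -x1- and -x2-, and an indicator -train- marking the training sample):

```stata
* Fit the model on the training sample only
logit default x1 x2 if train == 1

* Predicted probabilities are computed for all observations,
* including the validation sample, from the training-sample fit
predict phat, pr

* Assess out-of-sample performance on the validation sample
estat classification if train == 0
```

Note that -estat classification- accepts an -if- qualifier, so the classification table can be restricted to the validation observations while the stored estimates still come from the training fit.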

In reply to your second question, the -senspec- package (downloadable from SSC) is probably what you are looking for.
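A sketch of how -senspec- might be used after a logistic regression (again with hypothetical variable names; check the on-line help for the exact syntax, which may differ):

```stata
* Install the package from SSC (once)
ssc install senspec

* Fit the model and compute predicted probabilities
logit default x1 x2
predict phat, pr

* Generate per-observation sensitivity and specificity,
* treating each observation's phat as a cutoff
senspec default phat, sensitivity(sens) specificity(spec)

* Inspect the first few observations
list default phat sens spec in 1/10
```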

I hope this helps.

Best wishes

Roger


References

Newson RB. Comparing the predictive power of survival models using Harrell’s c or Somers’ D. The Stata Journal 2010; 10(3): 339–358. Download pre-publication draft from
http://www.imperial.ac.uk/nhli/r.newson/papers.htm


Roger B Newson BSc MSc DPhil
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton Campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
UNITED KINGDOM
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: [email protected]
Web page: http://www.imperial.ac.uk/nhli/r.newson/
Departmental Web page:
http://www1.imperial.ac.uk/medicine/about/divisions/nhli/respiration/popgenetics/reph/

Opinions expressed are those of the author, not of the institution.

On 27/05/2011 11:47, [email protected] wrote:
Dear sirs,

I'm a master's degree student analysing a dataset from a credit agency, with
the aim of developing a credit scoring model (using Stata for logistic
regression). I'm having trouble with two points and would be grateful if you
could help me:

1. I have split my dataset into two samples: training and validation. After
developing my model on the training sample and testing its performance through
-estat classification-, how do I work with the validation sample? Should I run
my model (the logit fitted on the training sample) on it and test its
performance through -estat classification-, and then, if the results are
acceptable, continue working on the training sample to estimate predicted
probabilities of default (my dependent variable: 0 or 1)? Or should I work on
the validation sample after testing its performance?

2. Is there a command in Stata that enables me to list, beyond the
classification table, both sensitivity and specificity for each observation in
the dataset?


Thank you very much for your attention,

Eleonora Moselli.
*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/


