Sensitivity and specificity

{{subpages}}
The '''sensitivity and specificity''' of [[diagnostic test]]s are based on [[Bayes Theorem]] and defined as "measures for assessing the results of diagnostic and screening tests. Sensitivity represents the proportion of truly diseased persons in a screened population who are identified as being diseased by the test. It is a measure of the probability of correctly diagnosing a condition. Specificity is the proportion of truly nondiseased persons who are so identified by the screening test. It is a measure of the probability of correctly identifying a nondiseased person. (From Last, Dictionary of Epidemiology, 2d ed)."<ref name="MeSH_SnSp">{{cite web |url=http://www.nlm.nih.gov/cgi/mesh/2007/MB_cgi?term=Sensitivity+and+Specificity |title=Sensitivity and specificity |accessdate=2007-12-09 |author=National Library of Medicine}}</ref>

Successful application of sensitivity and specificity is an important part of practicing [[evidence-based medicine]].
==Calculations==
{| class="wikitable"
|+ Two-by-two table for a diagnostic test
|-
|  || colspan="2" align="center" | Disease ||
|-
|  || Present || Absent ||
|-
| Test result positive || Cell A || Cell B || Total with a positive test
|-
| Test result negative || Cell C || Cell D || Total with a negative test
|-
|  || Total with disease || Total without disease ||
|}
Many of these calculations can be done at http://statpages.org/ctab2x2.html.


===Sensitivity and specificity===

:<math>\mbox{Sensitivity of a test} =\left (\frac{\mbox{Total }with\mbox{ disease and a positive test}}{\mbox{Total }with\mbox{ disease}}\right ) = \left (\frac{\mbox{Cell A}}{\mbox{Cell A} + \mbox{Cell C}}\right )</math>

:<math>\mbox{Specificity of a test}=\left (\frac{\mbox{Total }without\mbox{ disease and a negative test}}{\mbox{Total }without\mbox{ disease}}\right ) = \left (\frac{\mbox{Cell D}}{\mbox{Cell B} + \mbox{Cell D}}\right )</math>

===Predictive value of tests===
The predictive values of [[diagnostic test]]s are defined as "in screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test."<ref name="MeSH_PV">{{cite web |url=http://www.nlm.nih.gov/cgi/mesh/2007/MB_cgi?term=Predictive+Value+of+Tests |title=Predictive value of tests |accessdate=2007-12-09 |author=National Library of Medicine}}</ref>

:<math>\mbox{Positive predictive value}=\left (\frac{\mbox{Total }with\mbox{ disease and a positive test}}{\mbox{Total with a positive test}}\right ) = \left (\frac{\mbox{Cell A}}{\mbox{Cell A} + \mbox{Cell B}}\right )</math>

:<math>\mbox{Negative predictive value}=\left (\frac{\mbox{Total }without\mbox{ disease and a negative test}}{\mbox{Total with a negative test}}\right ) = \left (\frac{\mbox{Cell D}}{\mbox{Cell C} + \mbox{Cell D}}\right )</math>
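
As an illustrative sketch (the function and counts below are hypothetical, not from the article), all four measures can be computed directly from the cell counts of the two-by-two table:

<pre>
def two_by_two_measures(a, b, c, d):
    """Diagnostic test measures from a two-by-two table.

    a: with disease, positive test (true positives)
    b: without disease, positive test (false positives)
    c: with disease, negative test (false negatives)
    d: without disease, negative test (true negatives)
    """
    return {
        "sensitivity": a / (a + c),  # true positives / total with disease
        "specificity": d / (b + d),  # true negatives / total without disease
        "ppv": a / (a + b),          # true positives / total with a positive test
        "npv": d / (c + d),          # true negatives / total with a negative test
    }

# Hypothetical screen of 1,000 patients:
print(two_by_two_measures(a=90, b=80, c=10, d=820))
# {'sensitivity': 0.9, 'specificity': 0.911..., 'ppv': 0.529..., 'npv': 0.987...}
</pre>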
==Summary statistics for diagnostic ability==
While simply reporting the accuracy of a test seems intuitive, accuracy is heavily influenced by the prevalence of disease.<ref name="pmid7069920">{{cite journal |author=Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA |title=Evaluating the yield of medical tests |journal=JAMA |volume=247 |issue=18 |pages=2543–6 |year=1982 |month=May |pmid=7069920 |doi= |url= |issn=}}</ref> For example, if a disease occurs in one of every thousand patients, simply guessing that no patient has the disease yields an accuracy of 99.9%, whereas if 999 of every thousand patients have the disease, the same guess yields an accuracy of only 0.1%.
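
A short sketch (hypothetical, following the example above) makes the dependence on prevalence explicit:

<pre>
# Accuracy of always guessing "no disease" equals the fraction without disease.
for prevalence in (1 / 1000, 999 / 1000):
    accuracy = 1 - prevalence
    print(f"prevalence {prevalence:.3f}: accuracy of guessing all negative = {accuracy:.1%}")
# prevalence 0.001: 99.9%
# prevalence 0.999: 0.1%
</pre>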
With the arrival of many biomarkers that may serve as expensive [[diagnostic test]]s, much research has addressed how to summarize the incremental value of a new, expensive test over existing diagnostic methods.<ref name="pmid19487714">{{cite journal |author=Cook NR, Ridker PM |title=Advances in measuring the effect of individual predictors of cardiovascular risk: the role of reclassification measures |journal=Ann. Intern. Med. |volume=150 |issue=11 |pages=795–802 |year=2009 |month=June |pmid=19487714 |doi= |url=http://www.annals.org/cgi/pmidlookup?view=long&pmid=19487714 |issn=}}</ref><ref name="pmid19075211">{{cite journal |author=Cornell J, Mulrow CD, Localio AR |title=Diagnostic test accuracy and clinical decision making |journal=Ann. Intern. Med. |volume=149 |issue=12 |pages=904–6 |year=2008 |month=December |pmid=19075211 |doi= |url=http://www.annals.org/cgi/content/full/149/12/904 |issn=}}</ref><ref name="pmid17671959">{{cite journal |author=Cook NR |title=Comments on 'Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond' by M. J. Pencina et al., Statistics in Medicine (DOI: 10.1002/sim.2929) |journal=Stat Med |volume=27 |issue=2 |pages=191–5 |year=2008 |month=January |pmid=17671959 |doi=10.1002/sim.2987 |url=http://dx.doi.org/10.1002/sim.2987 |issn=}}</ref> The best method for comparing diagnostic tests depends on whether the new test is meant to replace or to add to the existing diagnostic test.<ref name="pmid20079607">{{cite journal| author=Hayen A, Macaskill P, Irwig L, Bossuyt P| title=Appropriate statistical methods are required to assess diagnostic tests for replacement, add-on, and triage. | journal=J Clin Epidemiol | year= 2010 | volume=  | issue=  | pages=  | pmid=20079607 | url=http://www.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&tool=clinical.uthscsa.edu/cite&retmode=ref&cmd=prlinks&id=20079607 | doi=10.1016/j.jclinepi.2009.08.024 }}</ref>
===Area under the ROC curve===
{{main|Receiver operating characteristic curve}}
The area under the [[receiver operating characteristic curve]] (ROC curve), also written AROC or called the c-index, has been proposed as a summary measure of a test's ability to discriminate diseased from nondiseased patients. The c-index varies from 0 to 1, and a value of 0.5 indicates that the diagnostic test does not add to guessing.<ref name="pmid7063747">{{cite journal |author=Hanley JA, McNeil BJ |title=The meaning and use of the area under a receiver operating characteristic (ROC) curve |journal=Radiology |volume=143 |issue=1 |pages=29–36 |year=1982 |month=April |pmid=7063747 |doi= |url=http://radiology.rsnajnls.org/cgi/pmidlookup?view=long&pmid=7063747 |issn=}}</ref> Variations have been proposed.<ref name="pmid15900606">{{cite journal |author=Walter SD |title=The partial area under the summary ROC curve |journal=Stat Med |volume=24 |issue=13 |pages=2025–40 |year=2005 |month=July |pmid=15900606 |doi=10.1002/sim.2103 |url=http://dx.doi.org/10.1002/sim.2103 |issn=}}</ref><ref name="pmid18687288">{{cite journal |author=Bangdiwala SI, Haedo AS, Natal ML, Villaveces A |title=The agreement chart as an alternative to the receiver-operating characteristic curve for diagnostic tests |journal=J Clin Epidemiol |volume=61 |issue=9 |pages=866–74 |year=2008 |month=September |pmid=18687288 |doi=10.1016/j.jclinepi.2008.04.002 |url=http://linkinghub.elsevier.com/retrieve/pii/S0895-4356(08)00120-0 |issn=}}</ref>
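
As a practical illustration (a sketch assuming the scikit-learn library is available; neither the data nor the tooling comes from the article), the c-index can be computed from a set of test results:

<pre>
from sklearn.metrics import roc_auc_score

# y_true: 1 = diseased, 0 = nondiseased; y_score: continuous test output
y_true  = [1, 1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1]

print(roc_auc_score(y_true, y_score))  # 0.875; 0.5 would mean no better than guessing
</pre>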
===Bayes Information Criterion===
The Bayes Information Criterion (BIC) was proposed by Schwarz in 1978.<ref>Schwarz, G. (1978). [ftp://stat-ftp.berkeley.edu/pub/users/binyu/212A/papers/Schwarz_1978.pdf Estimating the dimension of a model]. Annals of Statistics 6, 461–464. {{doi|10.1214/aos/1176344136}} [http://scholar.google.com/scholar?as_q=&num=10&btnG=Search+Scholar&as_epq=Estimating+the+dimension+of+a+model&as_oq=&as_eq=&as_occt=any&as_sauthors=G+Schwarz&as_publication=Annals+of+Statistics&as_ylo=1978&as_yhi=&as_allsubj=all&hl=en&lr=&client=firefox-a Google Scholar]</ref>
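In its standard form, for a model with <math>k</math> parameters fit to <math>n</math> observations and maximized likelihood <math>\hat{L}</math>, the criterion is:
:<math>\text{BIC} = k \ln n - 2 \ln \hat{L}</math>
Lower values indicate a better trade-off between goodness of fit and model complexity.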
===Diagnostic odds ratio===
The diagnostic odds ratio (DOR) is based on the [[likelihood ratio]]s.<ref name="pmid14615004">{{cite journal |author=Glas AS, Lijmer JG, Prins MH, Bonsel GJ, Bossuyt PM |title=The diagnostic odds ratio: a single indicator of test performance |journal=J Clin Epidemiol |volume=56 |issue=11 |pages=1129–35 |year=2003 |month=November |pmid=14615004 |doi= |url=http://linkinghub.elsevier.com/retrieve/pii/S089543560300177X |issn=}}</ref>
For comparison, the [[likelihood ratio]] is:<ref name="urlAsk the EBM Expert! - Society of General and Internal Medicine (SGIM)">{{cite web |url=http://www.sgim.org/index.cfm?pageId=673 |title=Ask the EBM Expert! - Society of General and Internal Medicine (SGIM) |author=SGIM EBM Task Force and Interest Group |date=2009 |publisher=Society of General Internal Medicine}}</ref>
:<math>\text{Likelihood ratio} = \frac{\mbox{probability of test result with disease}}{\mbox{probability of same result without disease}}</math>
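For a dichotomous test, the likelihood ratios reduce to the standard identities:
:<math>\text{LR}{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} \qquad \text{LR}{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}</math>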
The diagnostic odds ratio is:<ref name="urlAsk the EBM Expert! - Society of General and Internal Medicine (SGIM)"/>
:<math>\text{Diagnostic odds ratio} = \frac{\mbox{odds of test result with disease}}{\mbox{odds of same result without disease}}</math>
Equivalently, in terms of the likelihood ratios:
:<math>\text{Diagnostic odds ratio} = \frac{\mbox{Likelihood ratio +}}{\mbox{Likelihood ratio -}}</math>
For example:
* If the sensitivity and specificity are 95% and 80%, respectively (or vice versa), then the DOR = 76.
* If the sensitivity and specificity are both 95%, then the DOR = 361.
"The DOR ranges from 0 to infinity, with higher values indicating better discriminatory test performance. A value of 1 means that a test does not discriminate between patients with the disorder and those without it...
The DOR does not depend on the prevalence of the disease."<ref name="pmid14615004"/>
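
These worked examples can be verified with a short Python sketch (illustrative only):

<pre>
def diagnostic_odds_ratio(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # likelihood ratio of a positive result
    lr_neg = (1 - sensitivity) / specificity   # likelihood ratio of a negative result
    return lr_pos / lr_neg

print(diagnostic_odds_ratio(0.95, 0.80))  # ~76
print(diagnostic_odds_ratio(0.95, 0.95))  # ~361
</pre>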
===Sum of sensitivity and specificity===
This simple metric is called the Gain in Certainty:<ref name="pmid4014166">{{cite journal |author=Connell FA, Koepsell TD |title=Measures of gain in certainty from a diagnostic test |journal=Am. J. Epidemiol. |volume=121 |issue=5 |pages=744–53 |year=1985 |month=May |pmid=4014166 |doi= |url=http://aje.oxfordjournals.org/cgi/pmidlookup?view=long&pmid=4014166 |issn=}}</ref>
:<math>\mbox{Gain in Certainty} = \left (\mbox{sensitivity} + \mbox{specificity}\right )</math>
It varies from 0 to 2 and a result of 1 indicates that the diagnostic test does not add to guessing.
Similarly, Youden's ''J'' index (''J''*) is:<ref name="pmid15405679">{{cite journal |author=Youden WJ |title=Index for rating diagnostic tests |journal=Cancer |volume=3 |issue=1 |pages=32–5 |year=1950 |month=January |pmid=15405679 |doi= |url= |issn=}}</ref>
:<math>\text{Youden's index} = \left (\mbox{sensitivity} + \mbox{specificity} \right ) - 1</math>
The index can equivalently be written in terms of error rates, since the false positive rate is 1 − specificity and the false negative rate is 1 − sensitivity:
:<math>\text{Youden's index} = 1 - \left (\mbox{false positive rate} + \mbox{false negative rate} \right )</math>
===Number needed to diagnose===
The number needed to diagnose is:<ref>Bandolier (1996) [http://www.medicine.ox.ac.uk/bandolier/band27/b27-2.html How Good is that Test? II]</ref>
:<math>\text{Number Needed to Diagnose} = \frac{1}{\text{Sensitivity} - (1 - \text{Specificity})}</math>
:<math>\text{Number Needed to Diagnose} = \frac{1}{\text{Youden's index}}</math>
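For example (using the same illustrative numbers as above), a test with sensitivity 95% and specificity 80% has a Youden's index of 0.75, giving a number needed to diagnose of <math>1/0.75 \approx 1.3</math>.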
===Predictiveness curve===
A graph of the predictiveness curve has been proposed.<ref>{{Cite journal | doi = 10.1093/aje/kwm305 | volume = 167 | issue = 3 | pages = 362-368 | last = Pepe | first = Margaret S. | coauthors = Ziding Feng, Ying Huang, Gary Longton, Ross Prentice, Ian M. Thompson, Yingye Zheng | title = Integrating the Predictiveness of a Marker with Its Performance as a Classifier | journal = Am. J. Epidemiol. | pmid=17982157 | accessdate = 2008-12-17 | date = 2008-02-01 | url = http://aje.oxfordjournals.org/cgi/content/abstract/167/3/362 }}</ref>
===Proportionate reduction in uncertainty score===
The proportionate reduction in uncertainty score (PRU) has been proposed.<ref name="pmid17158858">{{cite journal |author=Coulthard MG |title=Quantifying how tests reduce diagnostic uncertainty |journal=Arch. Dis. Child. |volume=92 |issue=5 |pages=404–8 |year=2007 |month=May |pmid=17158858 |doi=10.1136/adc.2006.111633 |url=http://adc.bmj.com/cgi/pmidlookup?view=long&pmid=17158858 |issn=}}</ref>
===Integrated sensitivity and specificity===
This measure has been proposed as an alternative to the area under the [[receiver operating characteristic curve]].<ref name="pmid17569110">{{cite journal |author=Pencina MJ, D'Agostino RB, D'Agostino RB, Vasan RS |title=Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond |journal=Stat Med |volume=27 |issue=2 |pages=157–72; discussion 207–12 |year=2008 |month=January |pmid=17569110 |doi=10.1002/sim.2929 |url=http://dx.doi.org/10.1002/sim.2929 |issn=}}</ref>
===Reclassification tables===
{{Image|Reclassification table example.gif|right|350px|Reclassification table example for a test with binary outputs (e.g. normal and abnormal)}}
Reclassification tables have also been proposed as an alternative to the area under the [[receiver operating characteristic curve]].<ref name="pmid19487714"/><ref name="pmid17569110"/> This method allows calculating a 'reclassification index', 'reclassification rate', or 'net reclassification improvement' (NRI):<ref name="pmid17569110"/>
:<math>\text{NRI} = \frac{\text{events reclassified higher} - \text{events reclassified lower}}{\text{events}} + \frac{\text{nonevents reclassified lower} - \text{nonevents reclassified higher}}{\text{nonevents}}</math>
The NRI is analogous to Youden's ''J'' index and the Gain in Certainty, which are both functions of the sum of the sensitivity and specificity. In the special case of two diagnostic tests that have binary results (e.g. normal and abnormal), the NRI is the same as the Gain in Certainty of the second test minus the Gain in Certainty of the first test, or alternatively stated, the change in the sum of the sensitivity and specificity:
:<math>\text{NRI}{}_\text{for tests with binary outcomes} = \left(\text{Sensitivity} + \text{Specificity} \right){}_\text{Second test} - \left(\text{Sensitivity} + \text{Specificity} \right){}_\text{First test}</math>
The NRI, Youden's ''J'', and the Gain in Certainty are all measures that:
* Assume that correctly classifying an abnormal patient is as important as correctly classifying a normal patient
* Sum two rates (sensitivity and specificity) rather than taking a weighted average of the two rates based on the ratio of abnormal to normal patients.
** Summing helps compare two tests that were studied in settings with different prevalences of disease.
** However, the NRI can be misleading, as it is an ''index'' of reclassification and not a ''rate'' of reclassification. In the special case of a prevalence of disease of 50%, the ''index'' of reclassification is exactly double the ''rate'' of reclassification.
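
A minimal Python sketch of the NRI calculation, using hypothetical reclassification counts:

<pre>
def net_reclassification_improvement(events_up, events_down, n_events,
                                     nonevents_down, nonevents_up, n_nonevents):
    """NRI = (events reclassified higher - lower)/events
           + (nonevents reclassified lower - higher)/nonevents."""
    return ((events_up - events_down) / n_events
            + (nonevents_down - nonevents_up) / n_nonevents)

# Hypothetical study with 100 events and 900 nonevents:
print(net_reclassification_improvement(20, 5, 100, 90, 45, 900))  # 0.15 + 0.05 = 0.20
</pre>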
The clinical net reclassification improvement (CNRI) is a variation in which the NRI is calculated only for the subjects at intermediate risk of disease.<ref name="pmid17671959">{{cite journal |author=Cook NR |title=Comments on 'Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond' by M. J. Pencina et al., Statistics in Medicine (DOI: 10.1002/sim.2929) |journal=Stat Med |volume=27 |issue=2 |pages=191–5 |year=2008 |month=January |pmid=17671959 |doi=10.1002/sim.2987 |url=http://dx.doi.org/10.1002/sim.2987 |issn=}}</ref>
===Sequential scoring===
Sequential scoring has been proposed in order to isolate the effect of a new, expensive [[diagnostic test]].<ref name="pmid17729377">{{cite journal |author=Greenland S |title=The need for reorientation toward cost-effective prediction: comments on 'Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond' by M. J. Pencina et al., Statistics in Medicine (DOI: 10.1002/sim.2929) |journal=Stat Med |volume=27 |issue=2 |pages=199–206 |year=2008 |month=January |pmid=17729377 |doi=10.1002/sim.2995 |url=http://dx.doi.org/10.1002/sim.2995 |issn=}}</ref>
==Threats to validity of calculations==
Various biases incurred during the study and analysis of a diagnostic test can affect the validity of the calculations. An example is [[spectrum bias]].
Poorly designed studies may overestimate the accuracy of a diagnostic test.<ref name="pmid10493205">{{cite journal |author=Lijmer JG, Mol BW, Heisterkamp S, ''et al'' |title=Empirical evidence of design-related bias in studies of diagnostic tests |journal=JAMA |volume=282 |issue=11 |pages=1061–6 |year=1999 |month=September |pmid=10493205 |doi= |url=http://jama.ama-assn.org/cgi/pmidlookup?view=long&pmid=10493205 |issn=}}</ref>


==References==
{{reflist}}
 
[[Category:Reviewed Passed]][[Category:Suggestion Bot Tag]]
