What are sensitivity, specificity, Positive Predictive Value (PPV) and Negative Predictive Value (NPV) in relation to medical tests and lab results?


Last updated: January 22, 2026

Understanding Diagnostic Test Performance Metrics

Sensitivity and specificity are intrinsic test characteristics that measure how well a test identifies the presence or absence of disease, while PPV and NPV are probability estimates that tell you how likely disease is present or absent given a test result. Critically, PPV and NPV vary dramatically with disease prevalence, making them context-dependent in a way that sensitivity and specificity are not. 1

Core Definitions: What Each Metric Actually Measures

Sensitivity (True Positive Rate)

  • Sensitivity is the percentage of truly diseased patients who test positive, calculated as True Positives / (True Positives + False Negatives) 2, 1 (all four formulas are worked through in the sketch following these definitions)
  • This measures the test's ability to detect disease when it is actually present 1
  • Sensitivity is a characteristic of the test itself, not a probability statement about your patient 1, 3
  • A highly sensitive test minimizes false negatives, ensuring most diseased patients are identified 2

Specificity (True Negative Rate)

  • Specificity is the percentage of truly disease-free patients who test negative, calculated as True Negatives / (True Negatives + False Positives) 2, 1
  • This measures the test's ability to exclude disease when it is truly absent 1
  • Like sensitivity, specificity is an intrinsic test property independent of disease prevalence 1, 3
  • A highly specific test minimizes false positives, ensuring most disease-free patients are correctly identified 2

Positive Predictive Value (PPV)

  • PPV answers the clinical question: "If my patient tests positive, what is the probability they actually have the disease?" 1
  • Calculated as True Positives / (True Positives + False Positives) 2, 1
  • PPV provides information about disease probability in your patient, not about test performance 1, 3
  • Critical caveat: PPV increases dramatically as disease prevalence increases 1, 3, 4
  • In low-prevalence populations, even highly specific tests can have poor PPV, meaning many positive results are false positives 2, 1

Negative Predictive Value (NPV)

  • NPV answers: "If my patient tests negative, what is the probability they are truly disease-free?" 1
  • Calculated as True Negatives / (True Negatives + False Negatives) 2, 1
  • A high NPV means a negative test strongly suggests absence of disease 1
  • Critical caveat: NPV decreases as disease prevalence increases 1, 3, 4
  • In high-prevalence populations, negative results become less reliable for excluding disease 1
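
To make the four definitions concrete, here is a minimal Python sketch. The 2x2 counts are invented purely for illustration and are not drawn from the cited sources:

```python
# Minimal sketch: all four metrics computed from one hypothetical 2x2 table.
# These counts are invented for illustration (prevalence here is 10%).

TP, FN = 90, 10     # 100 truly diseased patients
FP, TN = 45, 855    # 900 truly disease-free patients

sensitivity = TP / (TP + FN)  # detects disease when present: 90.0%
specificity = TN / (TN + FP)  # excludes disease when absent: 95.0%
ppv = TP / (TP + FP)          # P(disease | positive) in THIS population: 66.7%
npv = TN / (TN + FN)          # P(no disease | negative) in THIS population: 98.8%

print(f"Sensitivity {sensitivity:.1%}, Specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")
```

Note that sensitivity and specificity are each computed entirely within one disease-status group, while PPV and NPV mix the two groups and therefore shift whenever the diseased-to-healthy ratio (the prevalence) shifts.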

The Prevalence Problem: Why Context Matters

The most important pitfall to avoid is applying PPV and NPV values from one population to a different population with different disease prevalence 3:

  • Sensitivity and specificity remain relatively stable across populations (they are test characteristics) 1, 3
  • PPV and NPV vary dramatically with prevalence (they are probability estimates) 1, 3, 4
  • The American College of Chest Physicians excludes NPV calculations from studies with >80% prevalence and PPV calculations from studies with <20% prevalence because the estimates become unreliable 1
  • In a low-prevalence setting (5%), even a test with 95% specificity will have a low PPV, meaning most positive results are false positives 2, 1 (verified numerically in the sketch after this list)
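
A quick Bayes' theorem check confirms that claim. The sketch below assumes, purely for illustration, that sensitivity is also 95%; the source specifies only the 95% specificity:

```python
# Sketch: PPV from sensitivity, specificity, and prevalence via Bayes' theorem.
# The 95% sensitivity is an assumed value for illustration.

def ppv(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev               # P(test positive and diseased)
    false_pos = (1 - spec) * (1 - prev)  # P(test positive and disease-free)
    return true_pos / (true_pos + false_pos)

print(f"PPV at 5% prevalence: {ppv(0.95, 0.95, 0.05):.0%}")  # 50% -- a coin flip
```

Even with excellent test characteristics, half of all positive results in this setting are false positives.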

Practical Example from C. difficile Testing

When testing 10,000 patients at 5% disease prevalence using a two-step approach (a GDH screen followed by toxin confirmation), the sequential testing strategy dramatically improves PPV by reducing false positives, demonstrating how prevalence-aware testing algorithms optimize diagnostic accuracy. 2
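
The following sketch reproduces the logic of that two-step strategy. The per-assay sensitivities and specificities are assumed values for illustration only (and the two assays are treated as conditionally independent); consult the cited guideline for actual assay performance:

```python
# Sketch of the two-step C. difficile strategy described above.
# All per-test characteristics are assumed for illustration.

N, PREV = 10_000, 0.05
GDH_SENS, GDH_SPEC = 0.95, 0.90   # assumed screening-test performance
TOX_SENS, TOX_SPEC = 0.85, 0.99   # assumed confirmatory-test performance

diseased = N * PREV               # 500 patients
healthy = N - diseased            # 9,500 patients

# Step 1: GDH screen -- only positives go on to toxin confirmation.
tp_screen = diseased * GDH_SENS        # ~475 true positives
fp_screen = healthy * (1 - GDH_SPEC)   # ~950 false positives
ppv_screen = tp_screen / (tp_screen + fp_screen)

# Step 2: toxin confirmation, applied only to the screen-positive group.
tp_final = tp_screen * TOX_SENS        # ~404
fp_final = fp_screen * (1 - TOX_SPEC)  # ~9.5
ppv_final = tp_final / (tp_final + fp_final)

print(f"PPV after GDH screen alone: {ppv_screen:.1%}")  # ~33.3%
print(f"PPV after two-step testing: {ppv_final:.1%}")   # ~97.7%
```

The screen's many false positives are mostly eliminated by the highly specific confirmatory test, which only has to be run on the much smaller screen-positive group.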

Clinical Application Algorithm

Step 1: Estimate Pre-Test Probability

Consider patient age, sex, clinical symptoms and severity, family history, and local disease prevalence in your specific practice setting 1

Step 2: Select Test Based on Clinical Goal

For Ruling OUT Disease (SnNOut: Sensitive test, Negative result, rules Out):

  • Choose tests with high sensitivity and high NPV 1
  • Example: D-dimer for pulmonary embolism has high NPV, allowing safe exclusion when negative 1
  • A negative result on a highly sensitive test effectively excludes disease 2, 1

For Ruling IN Disease (SpPIn: Specific test, Positive result, rules In):

  • Choose tests with high specificity and high PPV 1
  • A positive result on a highly specific test effectively confirms disease 2, 1
  • This minimizes false positives and prevents unnecessary treatment 2 (both mnemonics are quantified in the sketch below)
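
Here is a minimal sketch quantifying both mnemonics at the same assumed 10% pre-test probability; the test characteristics are invented to make the contrast stark:

```python
# Sketch: SnNOut vs. SpPIn at an assumed 10% pre-test probability.
# Test characteristics are invented for illustration.

def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

prev = 0.10
# SnNOut: a highly sensitive test makes a NEGATIVE result reassuring.
print(f"NPV of a 99%-sensitive, 60%-specific test: {npv(0.99, 0.60, prev):.1%}")  # ~99.8%
# SpPIn: a highly specific test makes a POSITIVE result convincing.
print(f"PPV of a 60%-sensitive, 99%-specific test: {ppv(0.60, 0.99, prev):.1%}")  # ~87.0%
```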

Step 3: Interpret Results in Context

  • Always consider the pre-test probability when interpreting results 1
  • In low-prevalence settings, positive results may require confirmatory testing even with specific tests 2
  • In high-prevalence settings, negative results may require additional evaluation even with sensitive tests 1 (the prevalence sweep after this list makes both effects concrete)
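
The sweep below makes Step 3 concrete: holding a set of assumed test characteristics fixed, the post-test probabilities swing widely with prevalence:

```python
# Sketch: fixed (assumed) test characteristics, varying prevalence.

SENS, SPEC = 0.90, 0.95  # held constant -- intrinsic test properties

for prev in (0.01, 0.05, 0.20, 0.50, 0.80):
    tp, fp = SENS * prev, (1 - SPEC) * (1 - prev)
    tn, fn = SPEC * (1 - prev), (1 - SENS) * prev
    print(f"prevalence {prev:>4.0%}: PPV {tp/(tp+fp):5.1%}, NPV {tn/(tn+fn):5.1%}")
```

At 1% prevalence the PPV falls to roughly 15% while the NPV is nearly 100%; at 80% prevalence the pattern reverses, with PPV near 99% but NPV dropping to about 70%.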

Key Relationship: The Sensitivity-Specificity Trade-off

Sensitivity and specificity are inversely related; adjusting the test threshold to increase one decreases the other 2, 4, 5 (the simulation after this list illustrates the effect):

  • Lowering the threshold for "positive" increases sensitivity but decreases specificity 2
  • Raising the threshold increases specificity but decreases sensitivity 2
  • This is why some tests use intermediate zones or two cut-offs to optimize accuracy 2
  • The optimal threshold depends on whether false positives or false negatives carry greater clinical consequences 2
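
The sketch below simulates this trade-off on a synthetic continuous biomarker; the overlapping Gaussian distributions are assumed purely for illustration:

```python
# Sketch: the threshold trade-off on a synthetic continuous biomarker.
# Diseased patients score higher on average, but the distributions overlap,
# so no single cut-off separates the groups perfectly (all values assumed).

import random

random.seed(0)
healthy  = [random.gauss(50, 10) for _ in range(5000)]
diseased = [random.gauss(70, 10) for _ in range(5000)]

for cutoff in (50, 60, 70):
    sens = sum(x >= cutoff for x in diseased) / len(diseased)
    spec = sum(x < cutoff for x in healthy) / len(healthy)
    print(f"cut-off {cutoff}: sensitivity {sens:.1%}, specificity {spec:.1%}")
```

Moving the cut-off only slides cases between the false-negative and false-positive cells; it cannot shrink both at once.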

Common Pitfalls to Avoid

  1. Never apply published PPV/NPV values directly to your population if prevalence differs 3 (the sketch after this list shows how large the error can be)
  2. Don't confuse sensitivity/specificity (test characteristics) with PPV/NPV (probability estimates) 1, 3
  3. Remember that even "gold standard" reference tests like amyloid PET show ~5% discordance with quantitative measures; no test achieves perfect concordance 2
  4. When evaluating diagnostic studies, ensure the study population's prevalence matches your clinical setting 1
  5. Be aware that sensitivity and specificity can vary with disease prevalence due to patient spectrum effects, though this is less common than PPV/NPV variation 6
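
As a final illustration of pitfall 1, the sketch below contrasts a PPV transplanted from an assumed enriched study cohort with the PPV recomputed, from the same sensitivity and specificity, for an assumed low-prevalence clinic:

```python
# Sketch of pitfall 1: transplanting a published PPV versus recomputing it
# from the published sensitivity/specificity and LOCAL prevalence.
# All numbers are assumed for illustration.

SENS, SPEC = 0.90, 0.95

def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

study_prev, clinic_prev = 0.50, 0.02  # enriched study cohort vs. primary care
print(f"PPV reported by the study: {ppv(SENS, SPEC, study_prev):.1%}")  # ~94.7%
print(f"PPV in your clinic:        {ppv(SENS, SPEC, clinic_prev):.1%}") # ~26.9%
```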

References

Guideline: Diagnostic Test Performance Metrics. Praxis Medical Insights: Practical Summaries of Clinical Guidelines, 2025.

Guideline: Guideline Directed Topic Overview. Dr.Oracle Medical Advisory Board & Editors, 2025.

Research: Variation of a test's sensitivity and specificity with disease prevalence. CMAJ: Canadian Medical Association Journal, 2013.
