What is the meaning and interpretation of terms like Confidence Interval (CI), Hazard Ratio (HR), p-value, Odds Ratio (OR), and Relative Risk (RR) in medical research papers?

Last updated: January 18, 2026

Understanding Statistical Terms in Medical Research

Statistical measures like confidence intervals, hazard ratios, p-values, odds ratios, and relative risk are essential tools that tell you whether a treatment actually works, how strong the effect is, and how confident you can be in the results.

P-Value: Probability of Results Occurring by Chance

  • A p-value represents the probability of obtaining results at least as extreme as those observed if there were truly no difference between groups (null hypothesis) 1
  • P < 0.05 means there is less than a 5% probability (fewer than 5 times in 100) that a difference this large would occur by chance alone 1
  • The smaller the p-value, the stronger the evidence against the null hypothesis: p-values between 0.01 and 0.05 represent modest evidence, while p < 0.001 represents strong evidence 1
  • Common mistake: P < 0.05 does NOT mean the treatment definitely works; it means that, if there were no real effect, results at least this extreme would occur less than 5% of the time 2
  • P-values should be reported precisely: to two decimal places when p ≥ 0.01, to three decimal places when 0.001 ≤ p < 0.01, or as "p < 0.001" for smaller values 1
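
To make this concrete, here is a minimal sketch of how a p-value is typically obtained for a simple two-group comparison. It assumes Python with SciPy installed; the outcome, group labels, and numbers are hypothetical.

```python
# Hypothetical reductions in systolic blood pressure (mmHg) in two trial arms.
from scipy import stats

treatment = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.1, 12.8]
placebo = [7.4, 8.9, 6.2, 10.1, 7.8, 9.0, 6.5, 8.3]

# Two-sided independent-samples t-test: the p-value is the probability of a
# difference at least as extreme as the one observed, assuming the null
# hypothesis (no true difference between arms) is correct.
t_stat, p_value = stats.ttest_ind(treatment, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value here says the observed difference would be surprising if the null hypothesis were true; it does not by itself say the effect is large or clinically important.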

Confidence Interval (CI): Range of Plausible Values

  • A 95% confidence interval shows the range of values within which the true effect in the population likely resides 3
  • The confidence interval provides both the magnitude of effect AND the precision of that estimate—narrow intervals mean more precision, wide intervals mean more uncertainty 1
  • Point estimates (like a hazard ratio of 0.44) must always be accompanied by 95% confidence intervals to assess both statistical plausibility and clinical relevance 1
  • If a confidence interval for relative risk or hazard ratio crosses 1.0, the result is NOT statistically significant because 1.0 means "no effect" 4
  • Example: A 95% CI of 0.8 to 3.0 indicates substantial uncertainty—the lower limit suggests a potential 20% protective effect while the upper limit suggests up to a 3-fold increased risk 4
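
As a rough illustration of how sample size drives precision, the sketch below computes a normal-approximation 95% confidence interval for the same hypothetical effect at different sample sizes (plain Python, no external packages; all numbers are made up).

```python
import math

def ci_95(mean, sd, n):
    """Normal-approximation 95% CI for a mean: estimate +/- 1.96 * standard error."""
    se = sd / math.sqrt(n)
    return mean - 1.96 * se, mean + 1.96 * se

# Same observed effect (a 3.0-unit reduction, SD 10), three different sample sizes.
for n in (25, 100, 1000):
    low, high = ci_95(3.0, 10.0, n)
    print(f"n = {n:4d}: 95% CI {low:.2f} to {high:.2f}")
# Larger samples give narrower intervals: the same point estimate becomes a
# more precise (and more informative) result.
```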

Relative Risk (RR): Probability Comparison Between Groups

  • Relative risk is the ratio of the probability of an outcome occurring in one group (e.g., treatment) to the probability of that outcome in a comparison group (e.g., placebo) 5
  • RR = 1.0 means no difference between groups; RR < 1.0 means reduced risk (protective); RR > 1.0 means increased risk (harmful) 5
  • Example from research: RR 0.71 (95% CI 0.53 to 0.95) means the treatment group had 29% lower risk of intubation compared to control, and this is statistically significant because the CI doesn't cross 1.0 5
  • In meta-analyses, pooled relative risk with 95% CI and p-value are presented together, and when RR remains significant across sensitivity analyses, this supports a robust effect 1
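
A worked example may help: the sketch below computes a relative risk and its 95% confidence interval from hypothetical event counts in two trial arms, using the usual log-scale (Wald) approximation (plain Python; all counts are invented).

```python
import math

# Hypothetical trial: intubation events out of 600 patients per arm.
events_trt, n_trt = 90, 600    # treatment arm
events_ctl, n_ctl = 126, 600   # control arm

risk_trt = events_trt / n_trt
risk_ctl = events_ctl / n_ctl
rr = risk_trt / risk_ctl

# Standard error of ln(RR) for a 2x2 table, then back-transform the CI limits.
se_log_rr = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR {rr:.2f} (95% CI {low:.2f} to {high:.2f})")
# With these counts the whole interval sits below 1.0, so the roughly 29% lower
# risk would be reported as statistically significant.
```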

Odds Ratio (OR): Odds Comparison Between Groups

  • An odds ratio compares the odds of an outcome between groups defined by an explanatory variable, where the odds are the probability of the outcome divided by the probability of not having it, i.e., p/(1 - p) 5
  • OR interpretation is similar to RR: OR = 1.0 means no difference; OR < 1.0 means reduced odds; OR > 1.0 means increased odds 5
  • Example: OR 1.29 (95% CI 1.01 to 1.64, p = 0.041) means 29% higher odds of the outcome in the intervention group, statistically significant 5
  • ORs are commonly used in case-control studies and logistic regression analyses 5
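
In the same spirit, here is a minimal sketch of an odds ratio and its 95% confidence interval from a hypothetical case-control table (plain Python; the counts are invented and deliberately produce a borderline result).

```python
import math

# Hypothetical case-control counts: exposed vs. unexposed.
a, b = 60, 140   # cases:    exposed, unexposed
c, d = 45, 155   # controls: exposed, unexposed

odds_cases = a / b
odds_controls = c / d
odds_ratio = odds_cases / odds_controls

# Standard error of ln(OR) for a 2x2 table, then back-transform the CI limits.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR {odds_ratio:.2f} (95% CI {low:.2f} to {high:.2f})")
# Here the interval spans 1.0, so the apparently higher odds of the outcome
# among the exposed would not be reported as statistically significant.
```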

Hazard Ratio (HR): Rate of Events Over Time

  • A hazard ratio measures the ratio of hazard rates (the rate at which an event occurs) for a given outcome between two groups over time 5
  • HR = 1.0 means equal rates; HR < 1.0 means lower rate (protective); HR > 1.0 means higher rate (harmful) 5
  • Example: HR 7.94 (95% CI 1.03 to 62.5, p = 0.05) means incomplete surgical resection had nearly 8 times the rate of death compared to complete resection 5
  • HRs are used in survival analysis and time-to-event outcomes like mortality or disease progression 5
  • Adjusted hazard ratios account for confounding variables, making them more reliable than unadjusted ratios 6
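
Hazard ratios come out of time-to-event models rather than simple 2x2 tables. The sketch below fits a Cox proportional hazards model with the third-party lifelines package, assuming it is installed; the survival times, event indicators, and treatment labels are entirely made up.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up in months, event = 1 if death observed (0 = censored),
# treated = 1 for the treatment group.
df = pd.DataFrame({
    "time":    [5, 8, 12, 3, 9, 6, 15, 7, 20, 18, 22, 10],
    "event":   [1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1],
    "treated": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
# In the printed table, exp(coef) for `treated` is the hazard ratio, shown with
# its 95% CI and p-value: below 1.0 suggests a lower event rate over time in
# the treated group, above 1.0 a higher rate.
```

Adding further columns (age, comorbidities, and so on) to the data frame before fitting adjusts for them as covariates, which is how adjusted hazard ratios are obtained.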

Reading Forest Plots and Tables

  • Forest plots display effect sizes (RR, OR, HR) as points with horizontal lines representing confidence intervals 5
  • If the confidence interval line crosses the vertical line at 1.0, the result is not statistically significant 5
  • The size of the square or point often represents the weight or sample size of that study 5
  • Tables typically show: number of events, total participants, effect measure with 95% CI, and p-value 5
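
To show how those elements fit together, here is a minimal sketch of a forest-plot-style display using matplotlib, assuming it is installed; the study names, effect estimates, and intervals are invented.

```python
import matplotlib.pyplot as plt

# Hypothetical effect estimates (RR) with 95% CIs for three studies and a pooled result.
studies = ["Study A", "Study B", "Study C", "Pooled"]
rr = [0.71, 0.85, 1.10, 0.82]
ci_low = [0.53, 0.60, 0.80, 0.68]
ci_high = [0.95, 1.20, 1.51, 0.99]

y = list(range(len(studies)))
xerr = [[r - lo for r, lo in zip(rr, ci_low)],   # distance to lower CI limit
        [hi - r for r, hi in zip(rr, ci_high)]]  # distance to upper CI limit

fig, ax = plt.subplots()
ax.errorbar(rr, y, xerr=xerr, fmt="s", capsize=3)  # squares with CI whiskers
ax.axvline(1.0, linestyle="--")                    # vertical line of no effect
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.invert_yaxis()
ax.set_xlabel("Relative risk (95% CI)")
plt.show()
```

Intervals whose whiskers cross the dashed line (Study B and Study C in this made-up example) are not statistically significant on their own.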

Statistical vs. Clinical Significance

  • Statistical significance (p < 0.05) does NOT automatically mean clinical importance—a tiny effect can be statistically significant with large sample sizes but meaningless to patients 7
  • Clinical significance requires the magnitude of results to be larger than the minimal clinically important difference 7
  • Always examine both the p-value AND the confidence interval to determine if results are both statistically significant and clinically meaningful 1
  • Example: A treatment might reduce hospital stay by 0.5 days (p = 0.03), which is statistically significant but clinically trivial 5
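
The hospital-stay example above can be simulated directly: with a large enough sample, a half-day difference becomes highly "statistically significant" even though its clinical value is questionable. A minimal sketch, assuming NumPy and SciPy and purely simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated lengths of stay (days): true means differ by only 0.5 days.
control = rng.normal(loc=7.0, scale=2.0, size=5000)
treatment = rng.normal(loc=6.5, scale=2.0, size=5000)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"mean difference = {treatment.mean() - control.mean():.2f} days, p = {p_value:.1e}")
# The p-value is far below 0.05 because the sample is huge, but whether half a
# day matters to patients is a separate clinical judgement (compare the effect
# size against the minimal clinically important difference).
```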

Common Abbreviations on Graphs

  • RR = Relative Risk; OR = Odds Ratio; HR = Hazard Ratio; CI = Confidence Interval 5
  • k = number of studies (in meta-analyses) 5
  • n = number of participants 5
  • M = Mean; Mdn = Median; SD = Standard Deviation 5
  • IQR = Interquartile Range (25th to 75th percentile) 5
  • MD = Mean Difference; SMD = Standardized Mean Difference 5
  • H vs. L = High versus Low (comparison groups) 5
  • DR = Dose-Response 5

Critical Pitfalls to Avoid

  • Never interpret p ≥ 0.05 as "proof of no effect"—it only means insufficient evidence to reject the null hypothesis 2
  • Do not confuse statistical significance with clinical relevance—always assess whether the effect size matters to patient outcomes 7
  • Wide confidence intervals indicate high uncertainty, even if p < 0.05, suggesting results should be interpreted cautiously 4
  • Evaluate study quality and design, not just the p-value—a statistically significant result from a poorly designed study is unreliable 1
  • P-values and confidence intervals provide complementary information and should always be reported together 1

References

Understanding Statistical Significance and Confidence Intervals. Praxis Medical Insights: Practical Summaries of Clinical Guidelines, 2025. (Guideline)

Statistical significance versus clinical relevance. Nephrology Dialysis Transplantation, 2017. (Research)

Interpretation of Relative Risk. Praxis Medical Insights: Practical Summaries of Clinical Guidelines, 2025. (Guideline)

Guideline Directed Topic Overview. Dr.Oracle Medical Advisory Board & Editors, 2025. (Guideline)
