Decision Mechanics

Insight. Applied.


Confidence intervals

August 24, 2020 By editor

Statistical confidence intervals are almost always misinterpreted. Consider the following statement.

"The prevalence of the disease P has a 95% confidence interval of 1% <= P <= 5%."

This is commonly taken to imply that there’s a 95% chance that the true prevalence is between 1% and 5%.

This isn’t the case.

Confidence intervals represent uncertainty about the interval itself, rather than about the parameter of interest.

The correct interpretation of the confidence interval defined above is that if we collect many samples from the population and calculate confidence intervals from them, 95% of those confidence intervals will contain the true value of P.

In Bayesian statistics we generally calculate credible intervals, which are compatible with the intuitive interpretation.
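The repeated-sampling interpretation is easy to check by simulation. The sketch below (with made-up numbers: a true prevalence of 3% and a known spread, so we can use a simple z-interval) draws many samples, builds a 95% interval from each, and counts how often the interval contains the true value. Roughly 95% of intervals cover it, but any single interval either contains the true value or it doesn't.

```python
import random
import statistics

# Hypothetical simulation: repeatedly sample from a population with a known
# mean and check how often a 95% confidence interval for the mean covers it.
random.seed(42)

TRUE_MEAN = 0.03   # assumed true prevalence of 3%
SIGMA = 0.01       # assumed known spread, so a z-interval applies
N = 100            # observations per sample
TRIALS = 2000      # number of repeated samples
Z = 1.96           # normal critical value for 95% coverage

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    half_width = Z * SIGMA / N ** 0.5
    if mean - half_width <= TRUE_MEAN <= mean + half_width:
        covered += 1

print(f"Coverage: {covered / TRIALS:.1%}")  # close to 95%
```

The 95% is a property of the interval-generating procedure across repetitions, not of any one interval you happen to compute.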

Filed Under: Data science Tagged With: confidence intervals, statistics

Absence of evidence is not evidence of absence

December 31, 2018 By editor

A statistical note published in the BMJ (formerly the British Medical Journal) reminds us that

…trials that do not show a significant difference between the treatments being compared are often called "negative." This term wrongly implies that the study has shown that there is no difference, whereas usually all that has been shown is an absence of evidence of a difference. These are quite different statements.
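To see why "negative" is the wrong word, consider a simulation with made-up numbers: a treatment with a real but modest effect, tested in trials that are too small to detect it. Most such trials fail to reach significance even though the effect genuinely exists.

```python
import math
import random
import statistics

# Hypothetical sketch: simulate many small two-arm trials of a treatment
# with a real (but modest) effect. Most come out "negative" (p >= 0.05) --
# an absence of evidence, not evidence of absence.
random.seed(1)

TRUE_EFFECT = 0.2   # standardised effect size -- genuinely non-zero
N = 20              # participants per arm -- badly underpowered
TRIALS = 2000
Z_CRIT = 1.96       # two-sided 5% significance threshold

negative = 0
for _ in range(TRIALS):
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    diff = statistics.fmean(treated) - statistics.fmean(control)
    z = diff / math.sqrt(2 / N)   # assumes known unit variance in each arm
    if abs(z) < Z_CRIT:
        negative += 1

print(f"'Negative' trials: {negative / TRIALS:.0%}")
```

With these numbers the vast majority of trials are "negative" despite the effect being real: the studies were simply underpowered, which is exactly the distinction the BMJ note is drawing.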

Filed Under: Data analysis, Data science Tagged With: p-values, statistics

Statisticians publish guidance on p-values

March 8, 2016 By editor


The American Statistical Association has taken the unprecedented step of issuing guidance on the use of p-values. Its statement was prompted by concerns that misuse of p-values was driving bad science.

The journal Basic and Applied Social Psychology has gone further, refusing to accept papers containing p-values at all.

A p-value is “informally” defined in the statement as

…the probability under a specified statistical model that a statistical summary of the data (for example, the sample mean difference between two compared groups) would be equal to or more extreme than its observed value.

This doesn’t really help much, but you have to applaud the attempt. After all, even scientists don’t really understand p-values.
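One way to make the ASA's informal definition concrete is a permutation test, which computes the definition directly: under a null model in which group labels are exchangeable, what fraction of relabellings give a summary at least as extreme as the one observed? The data below are made up for illustration.

```python
import random
import statistics

# Hypothetical permutation test following the ASA's informal definition:
# the p-value is the probability, under a null model where group labels
# are exchangeable, of a mean difference at least as extreme as observed.
random.seed(3)

group_a = [5.1, 4.8, 6.2, 5.5, 5.9, 4.7]   # made-up measurements
group_b = [4.2, 4.9, 4.4, 4.0, 4.6, 4.3]

observed = abs(statistics.fmean(group_a) - statistics.fmean(group_b))
pooled = group_a + group_b
n = len(group_a)

PERMS = 10_000
extreme = 0
for _ in range(PERMS):
    random.shuffle(pooled)   # relabel the pooled data at random
    diff = abs(statistics.fmean(pooled[:n]) - statistics.fmean(pooled[n:]))
    if diff >= observed:
        extreme += 1

p_value = extreme / PERMS
print(f"p = {p_value:.4f}")
```

Note that even here the p-value says nothing about the probability that the groups truly differ; it only describes how surprising the observed summary would be under the specified null model.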

Filed Under: Data science Tagged With: misuse, p-values, statistics


Copyright © 2021 · Decision Mechanics Limited · info@decisionmechanics.com