Post-hoc Analysis

Post-hoc analysis, also known as “after the fact” analysis, is a statistical technique used in data science and research to explore and test hypotheses that were not specified before the data was collected. This method is often used to find patterns, relationships, or effects that were not initially considered or observed during the primary analysis.


What is Post-hoc Analysis?

Post-hoc analysis is a secondary analysis performed after the primary analysis of a dataset. It is used to investigate additional hypotheses or to clarify and interpret the results of the initial analysis. The term “post hoc” is Latin for “after this”, indicating that these analyses are conducted once data collection and the primary analysis are complete.

Why is it Important?

Post-hoc analysis is crucial in data science because it allows researchers to explore data beyond the initial hypotheses and discover unexpected patterns or relationships. It can provide additional insights that can enhance the understanding of the data, validate the results of the primary analysis, or even lead to new research questions.

How is it Used?

Post-hoc analysis is typically used after a significant ANOVA (Analysis of Variance) result to determine which specific groups differ from each other, using pairwise tests such as Tukey’s HSD. It can also be used in regression analysis to check the assumptions of the model, or in machine learning to understand a model’s behavior beyond its primary performance metrics.
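The ANOVA-then-post-hoc workflow described above can be sketched with SciPy. The group values here are made-up illustration data; `f_oneway` runs the omnibus ANOVA and `tukey_hsd` performs the pairwise post-hoc comparisons.

```python
# Sketch: one-way ANOVA followed by Tukey's HSD post-hoc test.
# The group values below are hypothetical illustration data.
from scipy.stats import f_oneway, tukey_hsd

group_a = [20, 21, 19, 22, 20]
group_b = [20, 22, 21, 20, 21]
group_c = [28, 30, 29, 31, 30]

# Step 1: omnibus ANOVA -- is there *any* difference among the means?
f_stat, anova_p = f_oneway(group_a, group_b, group_c)

# Step 2: only if the ANOVA is significant, ask *which* pairs differ.
if anova_p < 0.05:
    result = tukey_hsd(group_a, group_b, group_c)
    # result.pvalue is a 3x3 matrix of pairwise p-values
    print(result)
```

In this sketch, the ANOVA is significant, and Tukey’s HSD then shows that group C differs from the other two while A and B do not differ from each other.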

Examples

  1. In A/B Testing: If a company conducts an A/B test to determine which of two website designs leads to more conversions, a post-hoc analysis might be conducted to see if the design has different effects on different demographic groups.

  2. In Machine Learning: After training a classification model, a data scientist might perform a post-hoc analysis to understand which features are most important in the model’s predictions.
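The A/B testing example above, a post-hoc subgroup analysis, can be sketched with a two-proportion z-test built from the standard library. The segment names and conversion counts are hypothetical.

```python
# Sketch of a post-hoc subgroup analysis for an A/B test, using a
# two-proportion z-test. All conversion counts are hypothetical.
import math

def two_proportion_p_value(conv1, n1, conv2, n2):
    """Two-sided p-value for the difference between two conversion rates."""
    p1, p2 = conv1 / n1, conv2 / n2
    pooled = (conv1 + conv2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Hypothetical per-segment results: (design A conversions, A visitors,
#                                    design B conversions, B visitors)
segments = {
    "mobile":  (40, 400, 70, 400),
    "desktop": (50, 400, 52, 400),
}

p_values = {name: two_proportion_p_value(*counts)
            for name, counts in segments.items()}
```

With these made-up numbers, the design difference is significant for the mobile segment but not for the desktop segment, which is exactly the kind of pattern a post-hoc subgroup analysis can surface.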

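The feature-importance example can be sketched as permutation importance: shuffle one feature column at a time and measure how much the trained model’s accuracy drops. The toy dataset and nearest-centroid “model” below are invented for illustration; feature 0 separates the classes while feature 1 is pure noise.

```python
# Sketch of post-hoc permutation importance on a toy nearest-centroid
# classifier. Data and model are hypothetical illustration choices.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.vstack([
    np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)]),
    np.column_stack([rng.normal(3.0, 1.0, n), rng.normal(0.0, 1.0, n)]),
])
y = np.array([0] * n + [1] * n)

# "Train": one centroid per class.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(features):
    # Predict each row's class as the nearer centroid, score against y.
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return np.mean(dists.argmin(axis=1) == y)

baseline = accuracy(X)

# Permutation importance: accuracy drop when one column is shuffled.
importance = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importance.append(baseline - accuracy(X_perm))
```

Shuffling the informative feature destroys most of the accuracy, while shuffling the noise feature barely changes it, so `importance[0]` comes out much larger than `importance[1]`.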

Limitations

While post-hoc analysis can provide valuable insights, it also has limitations. The most significant is the inflated risk of Type I errors (false positives) that comes with making many comparisons. This risk can be mitigated with statistical adjustments such as the Bonferroni correction. Findings from post-hoc analyses should therefore be treated as exploratory and validated with further research.
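The Bonferroni correction mentioned above is simple enough to show directly: divide the significance level by the number of comparisons. The five p-values here are hypothetical results from a set of post-hoc comparisons.

```python
# Minimal sketch of a Bonferroni correction. The p-values are
# hypothetical results from five post-hoc comparisons.
alpha = 0.05
p_values = [0.003, 0.012, 0.021, 0.040, 0.300]

adjusted_alpha = alpha / len(p_values)   # 0.05 / 5 = 0.01
significant = [p for p in p_values if p < adjusted_alpha]
# Only 0.003 survives the correction, even though four of the five
# raw p-values fall below the unadjusted 0.05 threshold.
```

This illustrates why the correction matters: without it, several of these comparisons would look “significant” purely because many tests were run.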

Related Terms

  • ANOVA (Analysis of Variance): A statistical method used to compare the means of two or more groups.
  • Bonferroni Correction: A method used to adjust the significance level in multiple comparisons to control the family-wise error rate.
  • Type I Error: The incorrect rejection of a true null hypothesis, also known as a false positive.

This glossary entry was last updated on August 14, 2023.