Today I learnt about the two common approaches to statistics, frequentist and Bayesian:
- Defining probability:
FS: Frequentist statistics (FS) defines probability as the long-run frequency of an event over a hypothetical, infinitely repeated sequence of trials. It is founded on the concept of objective randomness.
BS: Bayesian statistics (BS) views probability as a measure of belief or uncertainty. Using Bayes’ theorem, it starts from prior beliefs and updates them as new evidence arrives.
- Parameter Estimation:
FS: The emphasis is on estimating unknown, fixed parameters from observed data; maximum likelihood estimation (MLE) is the standard tool for this.
BS: Bayesian inference treats parameters as random quantities: it combines prior knowledge with the observed data to produce a posterior distribution over the parameters (a small code sketch follows this list).
- Testing hypotheses:
FS: Frequentist hypothesis testing makes decisions about population parameters from sample data, typically by comparing a p-value against a chosen significance level.
BS: Bayesian hypothesis testing compares the probabilities of competing hypotheses given the data, basing decisions on posterior probabilities and Bayes factors (a second sketch follows this list).
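To make the parameter-estimation contrast concrete, here is a minimal Python sketch (NumPy only) comparing the MLE of a normal mean with a conjugate normal-normal Bayesian update. The ages, prior mean, and prior variance below are made-up illustrative values, and the Bayesian side assumes the data variance is known (a plug-in from the sample) so the update stays in closed form.

```python
import numpy as np

# Toy data: ages in a single group (illustrative values, not real study data).
rng = np.random.default_rng(0)
ages = rng.normal(loc=30, scale=5, size=50)

# --- Frequentist: MLE for the mean of a normal model ---
# For a normal likelihood, the MLE of the mean is simply the sample mean.
mle_mean = ages.mean()

# --- Bayesian: conjugate normal-normal update (data variance treated as known) ---
prior_mean, prior_var = 25.0, 10.0   # prior belief about the mean age
lik_var = ages.var(ddof=1)           # plug-in estimate of the data variance
n = len(ages)

post_var = 1.0 / (1.0 / prior_var + n / lik_var)
post_mean = post_var * (prior_mean / prior_var + ages.sum() / lik_var)

print(f"MLE point estimate:    {mle_mean:.2f}")
print(f"Posterior mean +/- sd: {post_mean:.2f} +/- {post_var ** 0.5:.2f}")
```

The frequentist answer is a single number, while the Bayesian answer is a whole distribution whose mean is pulled slightly towards the prior, which is exactly the prior-plus-data behaviour described above.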
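Similarly, here is a sketch of the hypothesis-testing contrast, again on made-up data. The frequentist side uses SciPy's two-sample t-test; the Bayesian side is deliberately simplified: instead of a full Bayes factor (which needs explicit priors for both hypotheses), it reports the posterior probability that the difference is positive under a flat prior and a normal approximation.

```python
import numpy as np
from scipy import stats

# Two illustrative groups of ages (synthetic numbers, not my study data).
rng = np.random.default_rng(1)
group_a = rng.normal(35, 8, size=40)
group_b = rng.normal(30, 8, size=40)

# --- Frequentist: Welch two-sample t-test, decision driven by the p-value ---
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# --- Bayesian flavour: posterior probability that the difference is positive ---
# Under a flat prior and a normal approximation, the posterior for the mean
# difference is roughly N(observed difference, se^2).
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) +
             group_b.var(ddof=1) / len(group_b))
p_diff_positive = stats.norm.sf(0, loc=diff, scale=se)
print(f"P(difference > 0 | data) ~ {p_diff_positive:.3f}")
```

The p-value answers "how surprising is this data if there is no difference?", whereas the posterior probability answers "how plausible is a positive difference given the data?", which is the distinction in the list above.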
Because I was fairly confident that the difference in average ages was approximately 7 (and meaningfully different from zero), I chose a Bayesian t-test so I could encode that belief as an informative prior. The results, however, revealed an intriguing discrepancy: the observed difference landed near the tail of the posterior distribution. This bothered me at first, but it is a good illustration of how sensitive Bayesian analysis can be to the prior specification.
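To show the kind of prior sensitivity I mean, here is a simplified sketch of that analysis. It is not the exact model I ran: it replaces the full Bayesian t-test with a conjugate normal approximation to the observed difference, and the observed difference and standard error below are placeholders until I share the real numbers in the next post.

```python
import numpy as np
from scipy import stats

# Placeholder summaries (the real values will appear in the next post).
obs_diff = 2.5        # observed difference in average ages
se_diff = 1.2         # standard error of that difference

# Informative prior encoding my belief that the true difference is about 7.
prior_mean, prior_sd = 7.0, 1.0

# Conjugate normal-normal update: likelihood ~ N(obs_diff, se_diff^2),
# prior ~ N(prior_mean, prior_sd^2) on the true mean difference.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se_diff**2)
post_mean = post_var * (prior_mean / prior_sd**2 + obs_diff / se_diff**2)
post_sd = np.sqrt(post_var)

# Where does the observed difference sit relative to the posterior?
z = (obs_diff - post_mean) / post_sd
tail_prob = stats.norm.sf(abs(z))   # one-sided tail area

print(f"Posterior for the difference: N({post_mean:.2f}, {post_sd:.2f}^2)")
print(f"Observed difference {obs_diff} lies {abs(z):.2f} posterior sds from the mean "
      f"(tail probability {tail_prob:.3f})")
```

With a prior tightly centred on 7, an observed difference well below 7 gets pulled towards the prior, so the data themselves end up sitting in the tail of the posterior. That is the sensitivity to the prior specification I ran into.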
I will post the full results in my next blog post.