Joe computed a 95% confidence interval for µ from a specific random sample. His confidence interval was 10.1 < µ < 12.2. He claims that the probability that µ is in this interval is 0.95. What is wrong with his claim? Explain.
Joe's claim misinterprets what the 95% refers to. The population mean µ is a fixed (though unknown) constant, and the specific interval 10.1 < µ < 12.2 is also fixed once it has been computed, so µ is either inside that interval or it is not; there is no probability left to attach to this particular interval. The 95% describes the procedure, not the single result: if we repeatedly drew random samples and constructed a 95% confidence interval from each one, about 95% of those intervals would contain the true value of µ. Our confidence is in the long-run coverage of the method, not in any one realized interval.
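To illustrate the long-run coverage idea, here is a minimal simulation sketch (not part of the original answer; the population values mu = 11, sigma = 2, the sample size n = 25, and the number of repetitions are made-up assumptions). It repeatedly draws samples, builds a t-based 95% confidence interval from each, and counts how often the intervals cover the true mean.

```python
# Hypothetical simulation: check the long-run coverage of 95% t-intervals.
# All numeric values here are illustrative assumptions, not from the problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mu, sigma, n, reps = 11.0, 2.0, 25, 10_000

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, size=n)      # draw a fresh random sample
    xbar = sample.mean()
    s = sample.std(ddof=1)                           # sample standard deviation
    margin = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
    lo, hi = xbar - margin, xbar + margin             # 95% CI from this sample
    covered += (lo <= true_mu <= hi)                  # did this interval catch mu?

print(f"Coverage over {reps} intervals: {covered / reps:.3f}")  # roughly 0.95
```

Each individual interval either contains true_mu or it does not; it is only the proportion of intervals over many repetitions that comes out near 0.95, which is exactly the distinction Joe's claim misses.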