Answer:
Suppose [tex]D[/tex] is the event that a given patient has the disease, and [tex]P[/tex] is the event of a positive test result.
We're given that
[tex]\mathbb P(D)=0.01[/tex]
[tex]\mathbb P(P\mid D)=0.98[/tex]
[tex]\mathbb P(P^C\mid D^C)=0.95[/tex]
where [tex]A^C[/tex] denotes the complement of an event [tex]A[/tex].
a. We want to find [tex]\mathbb P(P^C)[/tex]. By the law of total probability, we have
[tex]\mathbb P(P^C)=\mathbb P(P^C\cap D)+\mathbb P(P^C\cap D^C)[/tex]
That is, in order for [tex]P^C[/tex] to occur, it must be the case that either [tex]D[/tex] also occurs, or [tex]D^C[/tex] does. Then from the definition of conditional probability we expand this as
[tex]\mathbb P(P^C)=\mathbb P(D)\mathbb P(P^C\mid D)+\mathbb P(D^C)\mathbb P(P^C\mid D^C)[/tex]
Since [tex]\mathbb P(P^C\mid D)=1-\mathbb P(P\mid D)=1-0.98=0.02[/tex], we get
[tex]\mathbb P(P^C)=0.01\cdot0.02+0.99\cdot0.95=0.9407[/tex]
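If you want to double-check this on a computer, here's a minimal Python sketch of the part-a calculation (the variable names p_d, p_pos_given_d, and p_neg_given_nd are just my own labels, not from the problem):

# Given quantities from the problem statement
p_d = 0.01              # P(D): prevalence of the disease
p_pos_given_d = 0.98    # P(P | D): positive test given disease
p_neg_given_nd = 0.95   # P(P^C | D^C): negative test given no disease

# Law of total probability: P(P^C) = P(D)P(P^C|D) + P(D^C)P(P^C|D^C)
p_neg = p_d * (1 - p_pos_given_d) + (1 - p_d) * p_neg_given_nd
print(round(p_neg, 4))  # 0.9407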
b. We want to find [tex]\mathbb P(D\mid P)[/tex]. We can use Bayes' rule directly, but if you're like me and find the formula a bit hard to remember, it's easy to derive.
By the definition of conditional probability,
[tex]\mathbb P(D\mid P)=\dfrac{\mathbb P(D\cap P)}{\mathbb P(P)}[/tex]
We have the probabilities of [tex]P[/tex]/[tex]P^C[/tex] occurring given that [tex]D[/tex]/[tex]D^C[/tex] occurs, but not the other way around. However, we can expand the numerator so that [tex]P[/tex] is conditioned on [tex]D[/tex]:
[tex]\mathbb P(D\cap P)=\mathbb P(D)\mathbb P(P\mid D)[/tex]
Meanwhile, the law of total probability lets us rewrite the denominator as
[tex]\mathbb P(P)=\mathbb P(P\cap D)+\mathbb P(P\cap D^C)[/tex]
or in terms of conditional probabilities,
[tex]\mathbb P(P)=\mathbb P(D)\mathbb P(P\mid D)+\mathbb P(D^C)\mathbb P(P\mid D^C)[/tex]
so that
[tex]\mathbb P(D\mid P)=\dfrac{\mathbb P(D)\mathbb P(P\mid D)}{\mathbb P(D)\mathbb P(P\mid D)+\mathbb P(D^C)\mathbb P(P\mid D^C)}[/tex]
which is exactly what Bayes' rule states. Since [tex]\mathbb P(P\mid D^C)=1-\mathbb P(P^C\mid D^C)=1-0.95=0.05[/tex], we get
[tex]\mathbb P(D\mid P)=\dfrac{0.01\cdot0.98}{0.01\cdot0.98+0.99\cdot0.05}\approx0.1653[/tex]
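And the same kind of Python check for part b, written to stand on its own (again, the variable names are just mine):

p_d = 0.01              # P(D)
p_pos_given_d = 0.98    # P(P | D)
p_neg_given_nd = 0.95   # P(P^C | D^C)

# Denominator from the law of total probability: P(P)
p_pos = p_d * p_pos_given_d + (1 - p_d) * (1 - p_neg_given_nd)

# Bayes' rule: P(D | P) = P(D)P(P | D) / P(P)
p_d_given_pos = p_d * p_pos_given_d / p_pos
print(round(p_d_given_pos, 4))  # 0.1653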