Naive Bayes Classifier exercise - Mock Exam 2014
Hi! I am really struggling to understand the steps needed to answer the second question of this exercise. In particular, applying Bayes' rule I would have written \(p(y=1 \mid x_1=1, x_2=1) = \frac{p(x_1=1 \mid y=1)\, p(x_2=1 \mid y=1)\, p(y=1)}{p(x_1=1)\, p(x_2=1)}\), but with this formula the result I get is different. Could someone make this clearer? Thanks!
Wait, this hasn't been covered in the lectures this year...
Applying Bayes' theorem, the law of total probability, and the naive Bayes conditional-independence assumption:
$$ \begin{align} p(y=1 \mid x_{1}=1, x_{2} =1) &= \frac{p(x_1=1, x_2=1\mid y=1)p(y=1)}{p(x_1=1,x_2=1)}\\ &= \frac{p(x_1=1, x_2=1\mid y=1)p(y=1)}{p(x_1=1,x_2=1 \mid y=1)p(y=1) + p(x_1=1,x_2=1 \mid y=0)p(y=0)}\\ &= \frac{p(x_1=1 \mid y=1)p(x_2=1 \mid y=1)p(y=1)}{p(x_1=1 \mid y=1)p(x_2=1 \mid y=1)p(y=1) + p(x_1=1 \mid y=0)p(x_2=1 \mid y=0)p(y=0)}\\ &= \frac{0.2 \times 0.5 \times 0.5}{0.2\times 0.5 \times 0.5 + 0.9\times 0.5 \times 0.5}\\ &= \frac{0.2}{0.2+ 0.9}\\ &= \frac{0.2}{1.1}\\ &\approx 0.18 \end{align}$$
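To make the arithmetic easy to check, here is a minimal Python sketch of the same computation (the probability values 0.2, 0.9, and 0.5 are taken from the answer above; the variable names and layout are just illustration). It also shows why the denominator in the original question gives a different result: naive Bayes assumes \(x_1\) and \(x_2\) are independent given \(y\), which does not make them marginally independent, so in general \(p(x_1=1, x_2=1) \neq p(x_1=1)\,p(x_2=1)\).

```python
# Quantities from the computation above (assumed from the mock-exam table):
p_x1_given_y = {1: 0.2, 0: 0.9}  # p(x1=1 | y)
p_x2_given_y = {1: 0.5, 0: 0.5}  # p(x2=1 | y)
p_y = {1: 0.5, 0: 0.5}           # prior p(y)

# Naive Bayes assumption: the joint likelihood factorizes given the class y,
# so p(x1=1, x2=1 | y) = p(x1=1 | y) * p(x2=1 | y).
joint = {y: p_x1_given_y[y] * p_x2_given_y[y] * p_y[y] for y in (0, 1)}

# Law of total probability: the evidence p(x1=1, x2=1) sums over the classes.
evidence = joint[0] + joint[1]

posterior = joint[1] / evidence
print(posterior)  # 0.1818... ≈ 0.18, matching the result above
```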
Thanks! One thing I don't understand is why the \(p(y=1 \mid x_1=1, x_2=1)\) written in the denominator of the third line of your computation should be different from the one we are trying to compute, since the expression is the same.
I updated the equations with the correct (denominator) probabilities.
Since you replied: can this be on the exam? We never talked about the Naive Bayes classifier, Bayes nets, etc.
Yes. For this question you only need Bayes' Theorem and some notions from probability.
Bayes nets are not covered in CS-433 in Fall 2020. See also: http://oknoname.herokuapp.com/forum/topic/485/exam-bayes-net-2019/#c3
Hi @arnout,
In your formula I don't really get how you go from \(p(x_{1}=1, x_{2}=1)\) to \(p(x_{1}=1, x_{2}=1 \mid y=1)\,p(y=1) + p(x_{1}=1, x_{2}=1 \mid y=0)\,p(y=0)\).
This is the (fundamental) law of total probability:
https://en.wikipedia.org/wiki/Law_of_total_probability
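Written out for this exercise, conditioning on the two possible values of \(y\):

$$ \begin{align} p(x_1=1, x_2=1) &= \sum_{y \in \{0,1\}} p(x_1=1, x_2=1 \mid y)\, p(y)\\ &= p(x_1=1, x_2=1 \mid y=1)\, p(y=1) + p(x_1=1, x_2=1 \mid y=0)\, p(y=0)\\ &= 0.1 \times 0.5 + 0.45 \times 0.5 = 0.275 \end{align}$$

The last line additionally uses the naive Bayes factorization \(p(x_1=1, x_2=1 \mid y) = p(x_1=1 \mid y)\, p(x_2=1 \mid y)\) to get \(0.2 \times 0.5 = 0.1\) and \(0.9 \times 0.5 = 0.45\); this is exactly the denominator \(0.275\) in the computation above, before the common factor \(0.25\) is cancelled.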