
q_k in Jensen's Inequality [Lecture 11b EM]

Hey,

Since \(\sum_{k=1}^K \pi_k = 1\), as mentioned in Lecture 11a on GMMs, and this already satisfies the requirement on the non-negative weights \(q\) in Jensen's inequality, I am wondering why we introduce another new variable \(q_k\) in EM instead of using \(r_k\) as \(q_k\) directly. In other words, what is the mathematical or ML meaning of \(q_k\) in EM?
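For concreteness (this is my own restatement of the step, so the notation may differ slightly from the slides), the Jensen step I have in mind is
\[
\log p(x) \;=\; \log \sum_{k=1}^K \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)
\;=\; \log \sum_{k=1}^K q_k \, \frac{\pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)}{q_k}
\;\ge\; \sum_{k=1}^K q_k \, \log \frac{\pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)}{q_k},
\]
which holds for any weights \(q_k > 0\) with \(\sum_{k=1}^K q_k = 1\).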

Thanks!

Top comment

In the lecture \(r_k\) is only required to be positive so that putting it inside the logarithm makes sense, so you cannot take \(q_k\) to be \(r_k\) directly; you would need to normalize it first. In any case that alone will not work for EM: you need your lower bound on the log-likelihood to be tight, so that maximizing the lower bound in the maximization step ensures some progress on the log-likelihood itself.
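As a minimal sketch of the standard argument (my own summary, not verbatim from the lecture): the Jensen lower bound becomes an equality exactly when the ratio inside the logarithm is constant in \(k\), i.e. when \(q_k\) is proportional to \(\pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)\). Normalizing gives the posterior responsibility
\[
q_k \;=\; \frac{\pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)}{\sum_{j=1}^K \pi_j \, \mathcal{N}(x \mid \mu_j, \Sigma_j)} \;=\; p(z = k \mid x),
\]
so the E-step sets \(q_k\) to this value (making the bound tight at the current parameters), and the M-step then maximizes the resulting lower bound over \(\pi_k, \mu_k, \Sigma_k\).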

To understand EM, I would advise reading this: http://www.columbia.edu/~mh2078/MachineLearningORFE/EM_Algorithm.pdf

