A new proof of the equivalence of word2vec’s SGNS and Shifted PPMI

[removed section]

At the heart of the argument was Levy and Goldberg's proof that minimizing the skip-gram negative sampling (SGNS) loss effectively factorizes a shifted PPMI matrix. Starting from the log-likelihood, they worked their way down to a local objective for each word-context pair, set its derivative to zero, and arrived at a function of PMI. One might rightly ask whether the loss function is essential to this proof, or whether there is a deeper link between the two formulations.
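The Levy–Goldberg step can be checked numerically. Their local objective for a pair (w, c), with x = w·c, is ℓ(x) = #(w,c)·log σ(x) + k·(#(w)·#(c)/|D|)·log σ(−x), and setting ℓ′(x) = 0 gives x = PMI(w, c) − log k. The sketch below maximizes ℓ by ternary search (ℓ is concave) and compares the optimum to the shifted PMI value; the corpus counts are made-up toy numbers, purely for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy counts, for illustration only
n_wc = 50.0     # #(w,c): co-occurrences of word w with context c
n_w  = 200.0    # #(w): total occurrences of w
n_c  = 300.0    # #(c): total occurrences of c
D    = 10000.0  # |D|: total number of word-context pairs
k    = 5        # number of negative samples

def local_objective(x):
    # Levy & Goldberg's per-pair SGNS objective, with x = w.c
    return (n_wc * math.log(sigmoid(x))
            + k * (n_w * n_c / D) * math.log(sigmoid(-x)))

# Maximize the concave objective by ternary search on [-10, 10]
lo, hi = -10.0, 10.0
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if local_objective(m1) < local_objective(m2):
        lo = m1
    else:
        hi = m2
x_star = (lo + hi) / 2

# Closed-form prediction: PMI(w,c) - log k
shifted_pmi = math.log(n_wc * D / (n_w * n_c)) - math.log(k)
print(x_star, shifted_pmi)  # the two values agree
```

The numeric optimum matches log(#(w,c)·|D| / (#(w)·#(c))) − log k, which is exactly the entry of the shifted PMI matrix (before the positive clipping that yields shifted PPMI).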

My answer: yes, there is. Read on.