# A Critique of a Critique of Word Similarity Datasets: Sanity Check or Unnecessary Confusion?

Batchkarov et al. (2016) is one of the evaluation/methodology papers much needed in NLP, and I hope we'll see more of them. But with respect to statistical methodology, the paper is troublesome, or at least not good enough for ACL. In this short report, I explain why.

Critical evaluation of word similarity datasets is very important for computational lexical semantics. This short report concerns the sanity check proposed in Batchkarov et al. (2016) to evaluate several popular datasets such as MC, RG, and MEN; the first two reportedly failed it. I argue that this test is unstable, offers no added insight, and needs major revision to fulfill its purported goal.

# What’s wrong with McNemar’s test?

A quick note from EACL: some papers related to the LSDSem workshop (Bugert et al. 2017; Zhou et al. 2015) use McNemar's test to establish statistical significance, and I find that very odd.

McNemar’s test examines “marginal (probability) homogeneity”, which in our case means whether two systems yield (statistically) the same performance. According to source code I found on GitHub, it works as follows:

1. Obtain predictions of System 1 and System 2
2. Compare them to the gold labels to fill this 2×2 contingency table:

   |              | Sys1 correct | Sys1 wrong |
   |--------------|--------------|------------|
   | Sys2 correct | a            | b          |
   | Sys2 wrong   | c            | d          |
3. Compute the test statistic $\chi^2 = {(b-c)^2 \over b+c}$ and the corresponding p-value
4. If the p-value is below a chosen significance level (e.g. the magical 0.05), we reject the null hypothesis that p(Sys1 correct) == p(Sys2 correct)
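For concreteness, the steps above can be sketched in a few lines of Python. This is a minimal sketch, not the actual code from any of the papers: the function names are mine, correctness is assumed to be a binary per-item judgment, and the p-value uses the fact that for one degree of freedom $P(\chi^2_1 > x) = \mathrm{erfc}(\sqrt{x/2})$.

```python
import math


def contingency(gold, pred1, pred2):
    """Fill the 2x2 table from gold labels and two systems' predictions.

    Cell layout matches the table above:
      a: both correct   b: only Sys2 correct
      c: only Sys1 correct   d: both wrong
    """
    a = b = c = d = 0
    for g, p1, p2 in zip(gold, pred1, pred2):
        ok1, ok2 = p1 == g, p2 == g
        if ok1 and ok2:
            a += 1
        elif ok2:
            b += 1
        elif ok1:
            c += 1
        else:
            d += 1
    return a, b, c, d


def mcnemar(b, c):
    """McNemar's chi-squared test (no continuity correction).

    Only the discordant cells b and c matter; a and d are ignored.
    Returns (chi2 statistic, p-value) with df = 1.
    """
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

Note that only the off-diagonal cells enter the statistic: items on which the two systems agree (both right or both wrong) carry no information about which system is better.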

As it happens in those papers, the difference is statistically significant, and therefore the results are meaningful. Happy?