A Critique of a Critique of Word Similarity Datasets: Sanity Check or Unnecessary Confusion?

Batchkarov et al. (2016) is one of the evaluation/methodology papers that NLP badly needs, and I hope we'll have more of them. But I think that, with respect to statistical methodology, the paper is troublesome, or at least not good enough for ACL. In this short report, I explain why.

Critical evaluation of word similarity datasets is very important for computational lexical semantics. This short report concerns the sanity check proposed in Batchkarov et al. (2016) to evaluate several popular datasets such as MC, RG and MEN — the first two reportedly failed. I argue that this test is unstable, offers no added insight, and needs major revision in order to fulfill its purported goal.

Phrase Detectives caught me by surprise

[Screenshot: my Phrase Detectives score]

I got 52% playing Phrase Detectives on Facebook. How am I ever going to get a PhD in Natural Language Processing?

Just kidding: I'm not worried about graduation at all, just a bit surprised by some features of the game. I'm studying the possibility of running a crowd-sourcing task on coreference resolution, so I'm very much interested in how to do crowd-sourcing properly. Please tell me what you think in the comment section! Continue reading

Notes on machine learning and exceptions

Statistical machine learning has been the de-facto standard in NLP research and practice. However, its very success might be hiding its problems. One such problem is exceptions.

Natural language is full of exceptions: idiomatic phrases that defy compositionality, irregular verbs and exceptions to grammatical rules, or unexpected events that, though not linguistic phenomena themselves, happen to be communicated via language. So far, statistical NLP has treated them as inconvenient oddities and, in most cases, swept them under the rug, hoping that they wouldn't reduce the F-score.

But a system doesn't really understand language without handling exceptions, and I will argue that (not) handling them has important consequences for machine learning. Continue reading

What’s wrong with McNemar’s test?

A quick note from EACL: some papers related to the LSDSem workshop (Bugert et al., 2017; Zhou et al., 2015) use McNemar's test to establish statistical significance, and I find it very odd.

McNemar's test examines “marginal (probability) homogeneity”, which in our case means whether two systems yield (statistically) the same performance. According to the source code I found on GitHub, the way it works is as follows (a minimal code sketch is given after the list):

  1. Obtain the predictions of System 1 and System 2.
  2. Compare them to the gold labels to fill in this contingency table:

                        Sys1 correct   Sys1 wrong
         Sys2 correct        a              b
         Sys2 wrong          c              d

  3. Compute the test statistic \chi^2 = {(b-c)^2 \over b+c} and its p-value.
  4. If the p-value is less than a certain level (e.g. the magical 0.05), reject the null hypothesis, which is p(Sys1 correct) == p(Sys2 correct).
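
To make the procedure concrete, here is a minimal sketch in Python (my own illustration using numpy and scipy, not the GitHub code I found; it uses the asymptotic chi-squared version without continuity correction):

    import numpy as np
    from scipy.stats import chi2

    def mcnemar(gold, pred1, pred2):
        """McNemar's test on the predictions of two systems against gold labels."""
        gold, pred1, pred2 = map(np.asarray, (gold, pred1, pred2))
        correct1, correct2 = pred1 == gold, pred2 == gold
        b = int(np.sum(correct2 & ~correct1))  # Sys2 correct, Sys1 wrong
        c = int(np.sum(~correct2 & correct1))  # Sys2 wrong, Sys1 correct
        # Assumes the two systems disagree on at least one item (b + c > 0).
        stat = (b - c) ** 2 / float(b + c)     # chi-squared with 1 degree of freedom
        p_value = chi2.sf(stat, df=1)          # survival function = 1 - CDF
        return stat, p_value

    # Toy usage: two systems scored against six gold labels.
    gold = [1, 0, 1, 1, 0, 1]
    sys1 = [1, 0, 0, 1, 0, 1]
    sys2 = [1, 1, 1, 1, 0, 0]
    print(mcnemar(gold, sys1, sys2))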

As it happens in the papers, the difference is statistically significant and therefore the results are meaningful. Happy?

Not so fast. Continue reading

A paper is the tip of an iceberg

I was reading Clark and Manning (2016) and studying their code. The contrast between the two is just amazing.

This is what the paper has to say:

[Figure: the model architecture as described in the paper]

This is what I found after an hour of reading a JSON file and writing down all the layers of the neural net (the file is data/models/all_pairs/architecture.json, created when you run the experiment):

[Figure: the full layer-by-layer network reconstructed from architecture.json]
Without the source code, this would be a replication nightmare for sure.
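
If you want to do the same inspection programmatically instead of by hand, here is a rough sketch. It assumes architecture.json follows a Keras-style schema in which layers carry a "class_name" or "name" field (deep-coref is built on a modified Keras), so the key names may need adjusting:

    import json

    def collect_layers(node, found=None):
        """Recursively walk an architecture JSON and collect anything that
        looks like a layer (assumption: a dict with a class/name field plus
        a "config" or "output_dim" entry, as in Keras dumps)."""
        if found is None:
            found = []
        if isinstance(node, dict):
            layer_type = node.get("class_name") or node.get("name")
            if layer_type and ("config" in node or "output_dim" in node):
                found.append(layer_type)
            for value in node.values():
                collect_layers(value, found)
        elif isinstance(node, list):
            for item in node:
                collect_layers(item, found)
        return found

    with open("data/models/all_pairs/architecture.json") as f:
        for i, layer in enumerate(collect_layers(json.load(f)), 1):
            print(i, layer)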

References

Clark, K., & Manning, C. D. (2016). Improving Coreference Resolution by Learning Entity-Level Distributed Representations. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 643–653. http://doi.org/10.18653/v1/P16-1061

Long paper accepted for EACL 2017

Title: Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

Conference: EACL 2017 (European Chapter of the Association for Computational Linguistics), Valencia, Spain, 3-7 April 2017.

Abstract:
Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its efficiency. We investigate the portion of errors which are the result of error propagation and confirm that reinforcement learning reduces the occurrence of error propagation.

Full article: arXiv:1702.06794

Slides: view online

Hyperparameter tuning in SURFsara HPC Cloud

Hyperparameter tuning is difficult, not because it is terribly complicated but because obtaining enough resources is often not easy. I'm lucky enough to work at Vrije Universiteit and can therefore access the SURFsara HPC Cloud without too much effort. Compared to Amazon EC2 (the only other cloud solution I have tried), the functionality is rather basic, but I think it suits the needs of many researchers. Using the web interface or the OpenNebula API, you can easily customize an image, attach a hard drive, launch 10 instances and access any of them with a public key. What else do you need to run your experiments? Continue reading
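
The tuning itself is independent of the cloud provider. As a hypothetical illustration (the search space and the jobs-<i>.json naming are mine, not SURFsara's), this is roughly how one could split a random search over the 10 instances, with each instance later reading its own job file:

    import itertools
    import json
    import random

    # Hypothetical search space -- plug in whatever your model actually needs.
    SEARCH_SPACE = {
        "learning_rate": [0.001, 0.01, 0.05, 0.1],
        "hidden_size": [100, 200, 400],
        "dropout": [0.0, 0.3, 0.5],
    }
    N_INSTANCES = 10  # one job file per cloud instance
    N_SAMPLES = 30    # random search: try a subset of the full grid

    random.seed(0)
    grid = [dict(zip(SEARCH_SPACE, values))
            for values in itertools.product(*SEARCH_SPACE.values())]
    configs = random.sample(grid, min(N_SAMPLES, len(grid)))

    # Round-robin the configurations over the instances; instance i runs
    # every configuration listed in its own jobs-i.json.
    for i in range(N_INSTANCES):
        with open("jobs-%d.json" % i, "w") as f:
            json.dump(configs[i::N_INSTANCES], f, indent=2)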

Reproducing Chen & Manning (2014)

Neural dependency parsing is attractive for several reasons: first, distributed representations generalize better; second, fast parsing unlocks new applications; and third, fast training means parsers can be co-trained with other NLP modules and integrated into a bigger system.

Chen & Manning (2014) from Stanford were the first to show that neural dependency parsing works and Google folks were quick to adopt this paradigm to improve the state-of-the-art (e.g. Weiss et al., 2015).

Though Stanford open-sourced their parser as part of CoreNLP, they didn't release the code of their experiments. As anybody in academia probably knows, reproducing experiments is non-trivial, even extremely difficult at times. Since I have painstakingly gone through the process, I think it's a good idea to share it with you.

Continue reading