A paper is the tip of an iceberg

I have been reading Clark and Manning (2016) while studying their code, and the contrast between how much the paper says and how much the code contains is striking.

This is what the paper has to say:

[Figure: the network architecture as depicted in the paper]

This is what I found after an hour of reading a JSON file and writing down all the layers of the neural net (the file is data/models/all_pairs/architecture.json, created when you run the experiment):

[Figure: deep-coref.png, the full layer-by-layer architecture reconstructed from the JSON file]
Without the source code, this would be a replication nightmare for sure.
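
If you want to repeat the exercise, here is a minimal sketch of how one might walk such an architecture JSON and print one line per layer. It assumes a Keras-style model-config layout (a "config" object holding a "layers" list with class_name and config entries); those field names are my assumption, not a guarantee about how this particular file is organized.

import json

# Path mentioned in the post; the file is created when the experiment runs.
ARCH_PATH = "data/models/all_pairs/architecture.json"

def list_layers(path):
    """Print each layer's type and name from a Keras-style architecture JSON.

    Assumes {"config": {"layers": [{"class_name": ..., "config": {...}}, ...]}}
    or an older format where "config" is itself the list of layers.
    """
    with open(path) as f:
        arch = json.load(f)

    layers = arch.get("config", {})
    if isinstance(layers, dict):
        layers = layers.get("layers", [])

    for i, layer in enumerate(layers):
        name = layer.get("config", {}).get("name", "<unnamed>")
        print(f"{i:2d}  {layer.get('class_name', '?'):20s}  {name}")

if __name__ == "__main__":
    list_layers(ARCH_PATH)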

References

Clark, K., & Manning, C. D. (2016). Improving Coreference Resolution by Learning Entity-Level Distributed Representations. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 643–653. http://doi.org/10.18653/v1/P16-1061

Long paper accepted for EACL 2017

Title: Tackling Error Propagation through Reinforcement Learning: A Case of Greedy Dependency Parsing

Conference: EACL 2017 (European Chapter of the Association for Computational Linguistics), in Valencia, 3-7 April 2017.

Abstract:
Error propagation is a common problem in NLP. Reinforcement learning explores erroneous states during training and can therefore be more robust when mistakes are made early in a process. In this paper, we apply reinforcement learning to greedy dependency parsing which is known to suffer from error propagation. Reinforcement learning improves accuracy of both labeled and unlabeled dependencies of the Stanford Neural Dependency Parser, a high performance greedy parser, while maintaining its efficiency. We investigate the portion of errors which are the result of error propagation and confirm that reinforcement learning reduces the occurrence of error propagation.
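
To make the idea concrete, here is a generic REINFORCE-style update over the transitions of a greedy parser. This is only a sketch of the general principle, not the training procedure from the paper; policy, env, and the reward interface are placeholders I made up for illustration. The point is simply that the parser samples its own (possibly erroneous) transitions during training and is rewarded according to the resulting tree.

import torch

def reinforce_update(policy, optimizer, sentence, gold_tree, env):
    """One REINFORCE-style episode over greedy parser transitions.

    `policy` maps a parser state to logits over legal transitions;
    `env` is a transition system with reset/step/done and a per-step
    reward computed against `gold_tree` (e.g. attachment score gain).
    All of these interfaces are hypothetical placeholders.
    """
    state = env.reset(sentence)
    log_probs, rewards = [], []

    while not env.done():
        logits = policy(state)                       # score the legal transitions
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                       # explore, including erroneous states
        log_probs.append(dist.log_prob(action))
        state, reward = env.step(action.item(), gold_tree)
        rewards.append(reward)

    # Undiscounted return-to-go for each step.
    returns = torch.tensor(rewards, dtype=torch.float32).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs) * returns).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()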

Full article: arXiv:1702.06794

Slides: view online