Intuition, knowledge base completion and natural language understanding

Everybody has experienced the gut feeling that we “know” something but can’t explain it: that a new classmate will become your best friend, or that something is wrong in a situation. Human intuition plays an important role in our everyday lives as well as in business, science and technological innovation.

As these examples make clear, intuition is characterized by two events in order: (i) a thought about something, and (ii) a failed attempt to explain it. This stands in stark contrast to automated reasoning methodology: here a conclusion is reached without the chain of reasoning that leads to it. Intuition acts as a bridge that carries us off the island of step-by-step reasoning.

Hinton (1986), published 28 years ago, is an early paper in this direction. Using a neural network to learn the relationships between members of two families, Hinton showed that a machine can answer questions such as “Who is the father of X?” not by the rules of logic but via transformations of distributed patterns. Each person and each relationship (e.g. father, mother, sister, …) is assigned a vector, and a complex non-linear transformation in the neural network produces a probability distribution over candidate answers.
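The idea can be sketched in a few lines: concatenate a person vector and a relation vector, pass them through a hidden layer, and score every person in the vocabulary with a softmax. This is a minimal illustrative sketch with untrained random weights, not Hinton’s exact architecture or data; the names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative vocabulary of people and relations (hypothetical names).
people = ["Colin", "Charlotte", "James", "Victoria"]
relations = ["father", "mother"]
dim = 8  # embedding size

# Each person and relation is assigned a vector; here they are just random,
# whereas in practice they would be learned from the family-tree facts.
person_vecs = {p: rng.normal(size=dim) for p in people}
rel_vecs = {r: rng.normal(size=dim) for r in relations}

# Hidden-layer weights and an output projection onto the person vocabulary.
W_h = rng.normal(size=(dim, 2 * dim)) * 0.1
W_out = rng.normal(size=(len(people), dim)) * 0.1

def answer_distribution(person, relation):
    """Distribution over answers to 'who is the <relation> of <person>?'."""
    x = np.concatenate([person_vecs[person], rel_vecs[relation]])
    h = np.tanh(W_h @ x)                 # non-linear transformation
    logits = W_out @ h
    exp = np.exp(logits - logits.max())  # softmax over all candidate people
    return dict(zip(people, exp / exp.sum()))

dist = answer_distribution("Colin", "father")
```

Training would adjust all the vectors and weight matrices so that the distribution concentrates on the correct answer; the point here is only the shape of the computation, a distributed pattern in, a probability distribution out.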

Research along these lines has enjoyed a surge of interest recently, thanks to the same advances in hardware and training techniques that created the wave of deep learning. In 2013, Socher et al. published encouraging experimental results on completing two large knowledge bases, WordNet and Freebase.
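At the heart of Socher et al.’s Neural Tensor Network is a scoring function that rates how plausible a triple (entity 1, relation, entity 2) is: a bilinear tensor term plus a standard feed-forward term, squashed by tanh and projected to a scalar. The sketch below uses random, untrained parameters for a single relation, purely to show the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 4, 3  # entity embedding dimension, number of tensor slices

# Parameters for one relation R, randomly initialised for illustration.
W = rng.normal(size=(k, d, d))   # bilinear tensor slices
V = rng.normal(size=(k, 2 * d))  # standard feed-forward weights
b = rng.normal(size=k)           # bias
u = rng.normal(size=k)           # projection to a scalar score

def ntn_score(e1, e2):
    """Plausibility score of the triple (e1, R, e2) under relation R."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])  # e1^T W^[i] e2
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

e1, e2 = rng.normal(size=d), rng.normal(size=d)
s = ntn_score(e1, e2)
```

During training, parameters are fitted so that true triples from the knowledge base score higher than corrupted ones; a missing fact is then “completed” when its triple scores above a threshold.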


A “reasoning” example from Socher et al. (2013). Black lines denote relationships given in training, red lines denote relationships the model inferred. The dashed line denotes word vector sharing.

As illustrated in the diagram above, given the facts that a person is named “Francesco Guicciardini” and is a historian born in Florence, Socher et al.’s model rightly inferred that the person is an Italian man. The researchers’ explanation was that the model might have based its decision on similar people in the knowledge base, such as Francesco Patrizi and Matteo Rosselli.

We may replace the knowledge base with a semantically annotated corpus, and the task of knowledge base completion with linguistic parsing. Given the above facts about Francesco Guicciardini and a sentence such as “In 1512, the Italian historian took a diplomatic mission to Spain”, the same technique can infer that “the Italian historian” is in fact Francesco Guicciardini.
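In its simplest form, that inference is a nearest-neighbour lookup: represent the mention and each candidate entity as vectors and pick the candidate most similar to the mention. The feature vectors below are hand-built toys (roughly: historian, Italian, painter); a real system would learn such embeddings from the annotated corpus.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy feature vectors over (historian, Italian, painter); purely illustrative.
mention = np.array([1.0, 1.0, 0.0])          # "the Italian historian"
candidates = {
    "Francesco Guicciardini": np.array([0.9, 0.8, 0.1]),
    "Matteo Rosselli":        np.array([0.1, 0.9, 0.9]),
}

# Resolve the mention to the candidate entity with the closest vector.
best = max(candidates, key=lambda name: cosine(mention, candidates[name]))
# best == "Francesco Guicciardini"
```

Unlike a theorem prover, this resolution comes with a graded similarity score rather than a yes/no proof, which is exactly the probabilistic flavour the post argues for.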

This mode of “reasoning” has great potential in natural language understanding. For accuracy, it provides a means to assign probabilities to different interpretations, something very often out of the reach of theorem provers. For speed, it avoids the exponential cost of logical inference and therefore scales to much larger knowledge bases. Moreover, it is a step towards more human-like natural language understanding. I look forward to seeing it deployed in the most advanced natural language understanding systems in the near future.


Hinton, G. E. (1986). Learning distributed representations of concepts. In Proceedings of the eighth annual conference of the cognitive science society (Vol. 1, p. 12).

Socher, R., Chen, D., Manning, C. D., & Ng, A. Y. (2013). Reasoning With Neural Tensor Networks for Knowledge Base Completion. Advances in Neural Information Processing Systems, 926–934.

