Title
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
Abstract
The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks. Currently, most NLP models treat numbers in text in the same way as other tokens---they embed them as distributed vectors. Is this enough to capture numeracy? We begin by investigating the numerical reasoning capabilities of a state-of-the-art question answering model on the DROP dataset. We find this model excels on questions that require numerical reasoning, i.e., it already captures numeracy. To understand how this capability emerges, we probe token embedding methods (e.g., BERT, GloVe) on synthetic list maximum, number decoding, and addition tasks. A surprising degree of numeracy is naturally present in standard embeddings. For example, GloVe and word2vec accurately encode magnitude for numbers up to 1,000. Furthermore, character-level embeddings are even more precise---ELMo captures numeracy the best of all pre-trained methods---but BERT, which uses sub-word units, is less exact.
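As a concrete illustration of the probing setup described in the abstract, the sketch below implements a number-decoding probe: a small MLP trained to regress a number's value from its frozen token embedding. It is a minimal, self-contained approximation rather than the paper's own code: a randomly initialized embedding table stands in for pre-trained GloVe/ELMo/BERT vectors so the script runs without external downloads, and the dimensions, train/test split, and training schedule are illustrative assumptions.

```python
# Minimal sketch of a number-decoding probe (assumptions noted above).
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_RANGE = 1000   # probe the numbers 0..999
EMB_DIM = 300      # GloVe-style dimensionality (illustrative assumption)

# Frozen embedding table for each number token.
# Placeholder: random vectors stand in for pre-trained GloVe/ELMo/BERT vectors.
embeddings = nn.Embedding(NUM_RANGE, EMB_DIM)
embeddings.weight.requires_grad = False

# The probe: a small MLP that regresses the number's value from its frozen embedding.
probe = nn.Sequential(nn.Linear(EMB_DIM, 100), nn.ReLU(), nn.Linear(100, 1))

# Random 80/20 split so we test generalization to held-out numbers.
perm = torch.randperm(NUM_RANGE)
train_ids, test_ids = perm[:800], perm[800:]

opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(300):
    opt.zero_grad()
    x = embeddings(train_ids)                # frozen inputs
    pred = probe(x).squeeze(-1)
    loss = loss_fn(pred, train_ids.float())  # target = the number's value
    loss.backward()
    opt.step()

with torch.no_grad():
    test_pred = probe(embeddings(test_ids)).squeeze(-1)
    rmse = torch.sqrt(loss_fn(test_pred, test_ids.float()))
print(f"held-out RMSE: {rmse.item():.2f}")
```

With real pre-trained vectors substituted for the placeholder embedding table, the held-out RMSE gives a rough measure of how precisely the embedding method encodes numerical magnitude.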
Year
2019
DOI
10.18653/v1/D19-1534
Venue
EMNLP/IJCNLP (1)
DocType
Conference
Volume
D19-1
Citations
2
PageRank
0.38
References
0
Authors
5
Name             Order  Citations  PageRank
Eric Wallace     1      18         7.45
Yizhong Wang     2      33         4.70
Sujian Li        3      683        59.24
Sameer Singh     4      1060       71.63
Matthew Gardner  5      704        38.49