Title
Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models
Abstract
In this paper, we perform a systematic analysis of how closely the intermediate layers of LSTM and transformer language models correspond to human semantic knowledge. Furthermore, in order to make more meaningful comparisons with theories of human language comprehension in psycholinguistics, we focus on two key stages where the meaning of a particular target word may arise: immediately before the word's presentation to the model (comparable to forward inferencing), and immediately after the word token has been input into the network. Our results indicate that the transformer models are better at capturing semantic knowledge relating to lexical concepts, both during word prediction and when retention is required.
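As a rough illustration of the two probing points described in the abstract (the paper's own implementation is not reproduced here), the following is a minimal sketch using the HuggingFace transformers library. The choice of GPT-2, the example sentence, and the target word are all assumptions for illustration; the comparison against human semantic norms is only indicated in a comment.

```python
# Minimal sketch (not the authors' code): extract layer-wise hidden states
# at two points around a target word, using GPT-2 via HuggingFace transformers.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

sentence = "The chef sliced the bread with a knife"  # hypothetical stimulus
target_word = " knife"                               # hypothetical target word

encoded = tokenizer(sentence, return_tensors="pt")
target_ids = tokenizer(target_word, add_special_tokens=False)["input_ids"]

# Locate the first sub-token of the target word in the input sequence.
seq = encoded["input_ids"][0].tolist()
t = next(i for i in range(len(seq)) if seq[i:i + len(target_ids)] == target_ids)

with torch.no_grad():
    out = model(**encoded)

# out.hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, hidden).
# "Pre-activation" stage: states at the token just before the target, i.e. the
# representations from which the model would predict the upcoming word.
pre_states = [h[0, t - 1] for h in out.hidden_states]
# Post-input stage: states once the target token itself has been processed.
post_states = [h[0, t] for h in out.hidden_states]

# These layer-wise vectors could then be compared against human semantic
# knowledge (e.g. property norms), per layer and per stage.
```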
Year
2021
DOI
10.18653/v1/2021.cmcl-1.25
Venue
CMCL
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order  Citations  PageRank
Steven Derby    1      0          0.34
Barry Devereux  2      0          1.35
Paul Miller     3      27         2.31