Title
Finding Syntactic Representations in Neural Stacks.
Abstract
Neural network architectures have been augmented with differentiable stacks in order to introduce a bias toward learning hierarchy-sensitive regularities. It has, however, proven difficult to assess the degree to which such a bias is effective, as the operation of the differentiable stack is not always interpretable. In this paper, we attempt to detect the presence of latent representations of hierarchical structure through an exploration of the unsupervised learning of constituency structure. Using a technique due to Shen et al. (2018a,b), we extract syntactic trees from the pushing behavior of stack RNNs trained on language modeling and classification objectives. We find that our models produce parses that reflect natural language syntactic constituencies, demonstrating that stack RNNs do indeed infer linguistically relevant hierarchical structure.
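The abstract describes inducing constituency trees from a stack RNN's pushing behavior via the technique of Shen et al. (2018a,b), which greedily builds a binary tree from per-gap "syntactic distance" scores. The sketch below is a minimal illustration of that greedy induction step only; treating push strengths as the distance scores is an assumption drawn from the abstract, and the names `distance_parse`, `tokens`, and `push_strengths` are purely illustrative, not the authors' code.

```python
def distance_parse(tokens, distances):
    """Greedy binary tree induction from syntactic distances (Shen et al., 2018):
    split the span at the gap with the largest distance, then recurse on each side.
    Requires len(distances) == len(tokens) - 1."""
    if len(tokens) == 1:
        return tokens[0]
    split = max(range(len(distances)), key=lambda i: distances[i])
    left = distance_parse(tokens[:split + 1], distances[:split])
    right = distance_parse(tokens[split + 1:], distances[split + 1:])
    return (left, right)


# Hypothetical usage: the numbers stand in for per-gap scores (e.g. push strengths);
# they are made up here for illustration.
tokens = ["the", "cat", "sat", "down"]
push_strengths = [0.2, 0.9, 0.1]
print(distance_parse(tokens, push_strengths))
# -> (('the', 'cat'), ('sat', 'down'))
```

The largest distance marks the most prominent constituent boundary, so recursively splitting at the maximum yields an unlabeled binary parse without any supervised parsing signal.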
Year: 2019
Venue: BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP at ACL 2019
DocType:
Journal:
Volume: abs/1906.01594
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name               Order  Citations  PageRank
William Merrill    1      1          2.04
Lenny Khazan       2      0          0.34
Noah Amsel         3      1          1.02
Yiding Hao         4      1          1.02
Simon Mendelsohn   5      1          1.02
Robert Frank       6      3          1.11