Title
Feature Optimization for Constituent Parsing via Neural Networks
Abstract
The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags, and certain contextual tokens) from the local context, induces the feature representation in the hidden layer, and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back-propagation algorithm. By pre-training the model on a large amount of automatically parsed data and then fine-tuning it on the manually annotated Treebank data, our parser achieves the highest F1 score of 86.6% on Chinese Treebank 5.1 and a competitive F1 score of 90.7% on the English Treebank. More importantly, our parser generalizes well on cross-domain test sets, where it significantly outperforms the Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.
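The following is a minimal sketch (not the authors' code) of the feedforward architecture the abstract describes: primitive units from the local context are embedded, a hidden layer induces the feature representation, and a softmax output layer scores parsing actions, with parameters updated by back-propagation. All sizes, variable names, and the toy training loop are illustrative assumptions, not values from the paper.

```python
# Sketch of the feedforward feature-learning model from the abstract.
# Assumed sizes and names are for illustration only.
import numpy as np

rng = np.random.default_rng(0)

VOCAB, TAGS, ACTIONS = 10_000, 50, 100   # word vocab, POS-tag set, parser actions (assumed)
EMB, HIDDEN = 50, 200                    # embedding and hidden-layer sizes (assumed)
N_WORDS, N_TAGS = 8, 8                   # primitive units drawn from the local context (assumed)

# Parameters: embedding tables, hidden layer, output layer.
E_word = rng.normal(0, 0.01, (VOCAB, EMB))
E_tag  = rng.normal(0, 0.01, (TAGS, EMB))
W1 = rng.normal(0, 0.01, ((N_WORDS + N_TAGS) * EMB, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.01, (HIDDEN, ACTIONS));                  b2 = np.zeros(ACTIONS)

def forward(word_ids, tag_ids):
    """Embed the primitive units, induce the hidden feature vector, score actions."""
    x = np.concatenate([E_word[word_ids].ravel(), E_tag[tag_ids].ravel()])
    h = np.tanh(x @ W1 + b1)                          # induced feature representation
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max()); p /= p.sum()   # softmax over parsing actions
    return x, h, p

def train_step(word_ids, tag_ids, gold_action, lr=0.01):
    """One back-propagation update on the cross-entropy loss for a single example.
    Embedding tables are left fixed here for brevity; the paper learns them jointly."""
    global W1, b1, W2, b2
    x, h, p = forward(word_ids, tag_ids)
    d_logits = p.copy(); d_logits[gold_action] -= 1.0   # dL/dlogits for cross-entropy
    dW2 = np.outer(h, d_logits); db2 = d_logits
    dh = (W2 @ d_logits) * (1.0 - h ** 2)               # back-prop through tanh
    dW1 = np.outer(x, dh); db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return -np.log(p[gold_action])

# Toy usage: both pre-training on automatically parsed data and fine-tuning on
# treebank data would reduce to repeated calls of train_step on (context, action) pairs.
words = rng.integers(0, VOCAB, N_WORDS)
tags  = rng.integers(0, TAGS, N_TAGS)
print("loss:", train_step(words, tags, gold_action=3))
```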
Year
2015
Venue
PROCEEDINGS OF THE 53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1
Field
Back propagation algorithm, Computer science, Feature engineering, Artificial intelligence, Natural language processing, Artificial neural network, Discriminative model, F1 score, Feedforward neural network, Pattern recognition, Speech recognition, Treebank, Parsing
DocType
Conference
Volume
P15-1
Citations
7
PageRank
0.46
References
16
Authors
3
Name | Order | Citations | PageRank
Zhiguo Wang | 1 | 354 | 24.64
Haitao Mi | 2 | 479 | 23.40
Nianwen Xue | 3 | 1654 | 117.65