Abstract |
---|
Dependencies among neighbouring labels in a sequence are an important source of information for sequence labeling problems. However, only dependencies between adjacent labels are commonly exploited in practice, because of the high computational complexity of typical inference algorithms when longer-distance dependencies are taken into account. In this paper, we show that it is possible to design efficient inference algorithms for a conditional random field using features that depend on long consecutive label sequences (high-order features), as long as the number of distinct label sequences used in the features is small. This leads to efficient learning algorithms for these conditional random fields. We show experimentally that exploiting dependencies using high-order features can lead to substantial performance improvements for some problems and discuss conditions under which high-order features can be effective. |
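To make the abstract's notion of a "high-order feature" concrete, the sketch below scores label sequences with a feature that fires whenever a specific multi-label pattern occurs. All names (labels, pattern, weights) are hypothetical, and the partition function is computed by naive enumeration, which is exponential in sequence length; the paper's contribution is precisely avoiding this blow-up when the number of distinct patterns is small.

```python
import itertools
import math

# Hypothetical label set and a single order-3 label pattern feature.
LABELS = ["O", "B", "I"]
PATTERN = ("B", "I", "I")          # high-order feature: fires on this subsequence
W_PATTERN = 1.5                    # illustrative weight for the pattern feature
W_TRANS = {("B", "I"): 0.5}        # illustrative first-order transition weight


def score(labels):
    """Unnormalized log-score: first-order transitions + high-order pattern."""
    s = 0.0
    for a, b in zip(labels, labels[1:]):
        s += W_TRANS.get((a, b), 0.0)
    k = len(PATTERN)
    # Count occurrences of the high-order pattern as a contiguous subsequence.
    s += W_PATTERN * sum(
        1 for i in range(len(labels) - k + 1)
        if tuple(labels[i:i + k]) == PATTERN
    )
    return s


def partition(n):
    """Brute-force Z: sum of exp(score) over all |LABELS|**n label sequences."""
    return sum(math.exp(score(seq))
               for seq in itertools.product(LABELS, repeat=n))


# Probability of one labeling of a length-4 sequence under this toy model.
p = math.exp(score(("B", "I", "I", "O"))) / partition(4)
print(round(p, 4))
```

The enumeration above costs |LABELS|^n per sequence; the paper shows that the same distribution can be handled efficiently when the features use only a small set of distinct label patterns.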
Year | Venue | Field
---|---|---
2009 | NIPS | Conditional random field, Sequence labeling, Pattern recognition, Computer science, Inference, Stochastic process, Image segmentation, Artificial intelligence, Inference engine, Dependency theory (database theory), Machine learning, Computational complexity theory

DocType | Citations | PageRank
---|---|---
Conference | 16 | 0.70

References | Authors
---|---
11 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Nan Ye | 1 | 149 | 12.60 |
Wee Sun Lee | 2 | 3325 | 382.37 |
Hai Leong Chieu | 3 | 760 | 51.41
Dan Wu | 4 | 2318 | 272.22 |