Abstract |
---|
Texts can be distinguished in terms of their content, function, structure or layout (Brinker, 1992; Bateman et al., 2001; Joachims, 2002; Power et al., 2003). These reference points do not necessarily open orthogonal perspectives on text classification. As part of explorative data analysis, text classification aims at automatically dividing sets of textual objects into classes of maximum internal homogeneity and external heterogeneity. This paper deals with classifying texts into text types whose instances serve more or less homogeneous functions. Unlike mainstream approaches, which rely on the vector space model (Sebastiani, 2002) or some of its descendants (Baeza-Yates and Ribeiro-Neto, 1999) and thus on content-related lexical features, we refer solely to structural differentiae. That is, we explore patterns of text structure as determinants of class membership. Our starting point is tree-like text representations which induce feature vectors and tree kernels. These kernels are utilized in supervised learning based on cross-validation as a method of model selection (Hastie et al., 2001), by example of a corpus of press communication. For a subset of categories we show that classification can be performed very well by structural differentiae alone. |
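The approach summarized above (tree-like text representations compared via tree kernels, evaluated by cross-validation) can be illustrated with a minimal sketch. The toy corpus, the node labels, and the production-counting kernel below are illustrative assumptions, not the paper's actual kernel or data; the paper's own formulation should be consulted for the real method.

```python
# Hypothetical sketch of structure-based text classification: each text is
# assumed to be pre-parsed into a tree (label, [children]) of layout units.
# All names and the toy corpus are illustrative, not from the paper.

def productions(tree):
    """Collect all (label, child-labels) productions of a tree."""
    label, children = tree
    out = [(label, tuple(c[0] for c in children))]
    for c in children:
        out.extend(productions(c))
    return out

def tree_kernel(t1, t2):
    """Crude structural kernel: count matching productions between two
    trees (a stand-in for the subset-tree kernels used in the paper)."""
    p2 = productions(t2)
    return sum(p2.count(p) for p in productions(t1))

def leave_one_out(trees, labels):
    """1-nearest-neighbour classification in the kernel-induced
    similarity, evaluated by leave-one-out cross-validation."""
    correct = 0
    for i, t in enumerate(trees):
        sims = [(tree_kernel(t, u), labels[j])
                for j, u in enumerate(trees) if j != i]
        correct += max(sims)[1] == labels[i]
    return correct / len(trees)

# Toy corpus: two structural "text types" (deep vs. flat documents).
deep = ("doc", [("sec", [("par", [("sent", [])]), ("par", [("sent", [])])])])
flat = ("doc", [("par", []), ("par", []), ("par", [])])
trees = [deep, deep, flat, flat]
labels = ["report", "report", "note", "note"]
acc = leave_one_out(trees, labels)  # perfect separation on this toy data
```

The point of the sketch is only that class membership is decided here without any lexical features: two texts are similar if and only if their layout trees share structure.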
Year | Venue | Keywords
---|---|---
2007 | LDV Forum | feature vector, model selection, supervised learning, data analysis, vector space model, cross validation

DocType | Volume | Issue
---|---|---
Journal | 22 | 2

Citations | PageRank | References
---|---|---
3 | 0.42 | 17
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Alexander Mehler | 1 | 186 | 36.63 |
Peter Geibel | 2 | 286 | 26.62 |
Olga Pustylnikov | 3 | 14 | 2.48 |