Abstract |
---|
We study the problem of Minimum Feedback Arc-Set in Tournaments (MFAST) in a
setting where each edge of the input graph is revealed only at a cost, while
computation is assumed to be free. We show that a recent PTAS of Kenyon and
Schudy (2007), designed for the standard computation model (input for free,
computation at a cost), can be converted into an algorithm that observes only
a sample of the input while retaining a similar approximation guarantee.
Viewed as a machine learning problem, MFAST is a supervised learning problem
over a finite input distribution consisting of the ${n\choose 2}$ pairs of
elements, endowed with an inconsistent (agnostic) preference function. The
learner samples preferences and outputs a concept (a permutation), trying to
minimize the loss, defined as the number of pairs on which the output
permutation disagrees with the preference function, summed over the entire
preference space. The algorithm we present is an active learning algorithm,
because the preferences it queries are chosen adaptively by the learner.
Viewed as a noisy sorting problem, our work continues that of Braverman and
Mossel (2008) on noisy sorting without resampling. Their algorithm achieves an
$O(n\log n)$ query complexity and returns an exactly optimal solution (with
high probability), but only for random inputs drawn from certain interesting
distribution families. In our case the input is worst case, and the query
complexity is $O(n\,\mathrm{poly}(\log n,\varepsilon^{-1}))$ for a
multiplicative approximation factor of $(1+\varepsilon)$.
From a practical standpoint, our work provides a sound solution to an open
problem faced by machine learning practitioners, especially in information
retrieval, who use pairwise preferences as labels for the task of learning to
rank items, but wish to avoid obtaining labels for the quadratically many
preference pairs without compromising low error bounds. |
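As a concrete illustration of the loss described in the abstract, the following minimal Python sketch (the names `mfast_loss` and `prefers` are our own, not from the paper) counts the pairs on which an output permutation disagrees with a given, possibly inconsistent, preference function:

```python
from itertools import combinations

def mfast_loss(perm, prefers):
    """Count pairs the permutation orders contrary to the preferences.

    perm:    list of items; earlier position means ranked higher.
    prefers: prefers(u, v) is True iff u is preferred over v
             (for each unordered pair, exactly one direction holds).
    """
    loss = 0
    # combinations yields pairs (u, v) with u placed before v in perm,
    # so the pair is a mistake exactly when v is preferred over u.
    for u, v in combinations(perm, 2):
        if prefers(v, u):
            loss += 1
    return loss
```

For example, with the consistent preference "smaller number first" (`prefers = lambda a, b: a < b`), the permutation `[1, 2, 3]` incurs loss 0 while the reversal `[3, 2, 1]` incurs the maximum loss of ${3\choose 2} = 3$. An active learner in the paper's setting would evaluate `prefers` only on an adaptively chosen sample of pairs rather than on all of them.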
Year | Venue | Keywords |
---|---|---|
2010 | Clinical Orthopaedics and Related Research | information retrieval, machine learning, supervised learning, learning to rank, active learning, computer model |
Field | DocType | Volume |
Query optimization, Learning to rank, Instance-based learning, Stability (learning theory), Active learning (machine learning), Query expansion, Computer science, Theoretical computer science, Computational learning theory, Feedback arc set | Journal | abs/1011.0 |
Citations | PageRank | References |
0 | 0.34 | 5 |
Authors |
---|
1 |