Abstract |
---|
We describe an alternative construction of an existing canonical representation for definite Horn theories, the Guigues-Duquenne basis (or GD basis), which minimizes a natural notion of implicational size. We extend this canonical representation to general Horn formulas by providing a reduction from definite to general Horn CNF. Using these tools, we give a new, simpler validation of the classic Horn query learning algorithm of Angluin, Frazier, and Pitt, and we prove that this algorithm always outputs the GD basis regardless of the counterexamples it receives. |
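The GD basis mentioned in the abstract is built from closures of attribute sets under a definite Horn theory. A minimal sketch of that closure operator via forward chaining, with hypothetical names (`close_under` and the sample implications are illustrative, not from the paper):

```python
def close_under(implications, attrs):
    """Return the closure of `attrs` under a definite Horn theory.

    `implications` is a list of (antecedent, consequent) pairs of sets;
    a definite Horn clause a1 & ... & ak -> b corresponds to the pair
    ({a1, ..., ak}, {b}).
    """
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            # Fire any implication whose antecedent already holds.
            if antecedent <= closure and not consequent <= closure:
                closure |= consequent
                changed = True
    return closure

# Toy theory: a -> b, and (b & c) -> d.
implications = [
    ({"a"}, {"b"}),
    ({"b", "c"}, {"d"}),
]
print(close_under(implications, {"a", "c"}))  # {'a', 'b', 'c', 'd'}
```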
Year | DOI | Venue |
---|---|---|
2011 | 10.1007/s10994-011-5248-5 | Machine Learning |

Keywords | Field | DocType |
---|---|---|
Query learning, Horn formulas, Canonical representation | Query learning, Discrete mathematics, Canonical form, Counterexample, Learnability, Mathematics | Journal |

Volume | Issue | ISSN |
---|---|---|
85 | 3 | 0885-6125 |

Citations | PageRank | References |
---|---|---|
15 | 0.88 | 23 |
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Marta Arias | 1 | 165 | 15.62 |
José L. Balcázar | 2 | 701 | 62.06 |