Abstract |
---|
In this paper, some recent ideas are presented about making machine learning (ML) more effective through mechanisms of argumentation. In this sense, argument-based machine learning (ABML) is defined as a refinement of the usual definition of ML. In ABML, some learning examples are accompanied by arguments, which are the expert's reasons for believing these examples are as they are. ABML thus provides a natural way of introducing domain-specific prior knowledge that differs from traditional, general background knowledge. The task of ABML is to find a theory that explains the "argumented" examples by making reference to the given reasons. ABML, so defined, is motivated by two advantages over standard learning from examples: (1) arguments impose constraints over the space of possible hypotheses, thus reducing search complexity, and (2) induced theories should make more sense to the expert. Ways of realising ABML by extending some existing ML techniques are discussed, and these advantages are demonstrated experimentally. |
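The abstract's first claimed advantage (arguments constrain the hypothesis space) can be illustrated with a minimal sketch. This is not the paper's algorithm; the attribute names, the toy rules, and the "AB-covering" check below are illustrative assumptions, loosely modelled on the idea that an induced rule must both cover an argumented example and mention the reasons the expert gave for it.

```python
# Toy sketch of argument-constrained rule filtering (illustrative only;
# attribute names and the covering check are assumptions, not the
# paper's actual ABML implementation).

def ab_covers(rule, example, reasons):
    """A rule AB-covers an argumented example if it covers the example
    AND mentions every attribute cited in the expert's reasons."""
    covers = all(example.get(attr) == val for attr, val in rule.items())
    uses_reasons = all(attr in rule for attr in reasons)
    return covers and uses_reasons

# One argumented example: class "bad credit", with the expert's argument
# "credit is bad BECAUSE debt is high".
example = {"income": "low", "debt": "high"}
reasons = ["debt"]

candidate_rules = [
    {"income": "low"},                  # covers, but ignores the argument
    {"debt": "high"},                   # covers and uses the reason
    {"income": "low", "debt": "high"},  # covers and uses the reason
]

admissible = [r for r in candidate_rules if ab_covers(r, example, reasons)]
print(admissible)  # only the two rules mentioning "debt" survive
```

The argument prunes the first candidate even though it covers the example, shrinking the search space exactly in the sense the abstract describes; a full ABML learner (e.g. the ABCN2 extension of CN2) would apply such a constraint during rule induction rather than as a post-hoc filter.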
Year | DOI | Venue |
---|---|---|
2007 | 10.1007/11875604_2 | Lecture Notes in Computer Science |
Keywords | Field | DocType
---|---|---|
machine learning, argumentation, rule learning, CN2, inductive logic programming | Inductive logic programming, Data mining, Computer science, Argumentation theory, Beam search, Artificial intelligence, Learning by example, Machine learning, Computational complexity theory, Information and Computer Science | Journal
Volume | Issue | ISSN
---|---|---|
4203 | 10-15 | 0302-9743
Citations | PageRank | References
---|---|---|
2 | 0.45 | 10
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Ivan Bratko | 1 | 1526 | 405.03 |
Martin Mozina | 2 | 19 | 4.67 |
Jure Zabkar | 3 | 18 | 4.36 |