Abstract |
---|
Most current textual reasoning models cannot learn human-like reasoning processes, and thus lack interpretability and logical accuracy. To help address this issue, we propose a novel reasoning model that learns to activate logic rules explicitly via deep reinforcement learning. It takes the form of Memory Networks but features a special memory that stores relational tuples, mimicking the "Image Schema" in human cognitive activities. We redefine textual reasoning as a sequential decision-making process that modifies or retrieves from the memory, where logic rules serve as state-transition functions. Activating logic rules for reasoning involves two problems, variable binding and relation activation, and our model is a first step toward solving them jointly. It achieves an average error rate of 0.7% on bAbI-20, a widely used synthetic reasoning benchmark, using fewer than 1k training samples and no supporting facts. |
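The abstract frames reasoning as a sequential decision-making process over a memory of relational tuples, with logic rules acting as state-transition functions. Below is a minimal sketch of that framing, not the authors' implementation: all names (`TupleMemory`, `transitivity`, `apply_rule`) are hypothetical, and the deep-reinforcement-learning controller that the paper uses to choose which rule to activate (and how to bind its variables) is omitted.

```python
# A minimal sketch of reasoning over a memory of relational tuples.
# Assumptions: all class/function names here are illustrative, not from
# the paper; rule selection is hard-coded rather than learned via RL.

from typing import Callable, List, Set, Tuple

Triple = Tuple[str, str, str]                # (subject, relation, object)
Rule = Callable[[Set[Triple]], Set[Triple]]  # a state-transition function

class TupleMemory:
    """Memory storing relational tuples, standing in for an 'image schema'."""

    def __init__(self, facts: List[Triple]) -> None:
        self.state: Set[Triple] = set(facts)

    def apply_rule(self, rule: Rule) -> None:
        # Activating a rule modifies the memory state; in the paper the
        # choice of rule and its variable bindings are learned, not fixed.
        self.state = rule(self.state)

    def query(self, subject: str, relation: str) -> List[str]:
        # Retrieval step: read matching objects out of the memory.
        return sorted(o for s, r, o in self.state
                      if s == subject and r == relation)

def transitivity(state: Set[Triple]) -> Set[Triple]:
    """Example rule: if (a, is-in, b) and (b, is-in, c), derive (a, is-in, c)."""
    derived = {(a, "is-in", c)
               for a, r1, b in state if r1 == "is-in"
               for b2, r2, c in state if r2 == "is-in" and b2 == b}
    return state | derived

memory = TupleMemory([("apple", "is-in", "box"), ("box", "is-in", "kitchen")])
memory.apply_rule(transitivity)
print(memory.query("apple", "is-in"))  # ['box', 'kitchen']
```

Here a single transitivity rule plays the role of one state-transition function; in the paper, an RL agent would instead select among many such rules and bind their variables at each reasoning step.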
Year | DOI | Venue |
---|---|---|
2018 | 10.1016/j.neunet.2018.06.012 | Neural Networks |
Keywords | Field | DocType
---|---|---
Natural language reasoning, Memory networks, Image schema, Logic rules, Reinforcement learning | Interpretability, Tuple, Word error rate, Image schema, Artificial intelligence, Cognition, Rule of inference, Machine learning, Mathematics, Reinforcement learning | Journal
Volume | Issue | ISSN
---|---|---
106 | 1 | 0893-6080
Citations | PageRank | References
---|---|---
0 | 0.34 | 5
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yiqun Yao | 1 | 1 | 1.70 |
Jiaming Xu | 2 | 284 | 35.34 |
Jing Shi | 3 | 5 | 5.80 |
Bo Xu | 4 | 241 | 36.59 |