Abstract |
---|
In this paper, we present a new memory-augmented neural network called Gated Recurrent Unit with Memory Block (GRU-MB). Our architecture builds on the gated neural architecture of a Gated Recurrent Unit (GRU) and integrates an external memory block, similar to a Neural Turing Machine (NTM). GRU-MB interacts with the memory block through independent read and write gates that decouple the memory from the central feedforward operation. This allows for disciplined memory access and updates, giving our network the ability to choose when to read from memory, update it, or simply ignore it. This capacity to operate independently of the memory allows the network to shield the memory from noise and other distractions, while simultaneously using it to effectively retain and propagate information over extended periods of time. We evolve GRU-MB using neuroevolution and perform experiments on two different deep memory tasks. Results demonstrate that GRU-MB learns significantly faster and more accurately than traditional memory-based methods, and is robust to dramatic increases in the depth of these tasks. |
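The abstract's core idea — a GRU whose external memory is reached only through independent read and write gates, so the network can read, update, or ignore the memory at each step — can be sketched as follows. The paper's exact equations are not reproduced in this record, so the gate placement, weight shapes, and update rules below are illustrative assumptions, not the authors' implementation; parameters here would be evolved via neuroevolution rather than trained by gradient descent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUMemoryBlockCell:
    """Illustrative sketch of a GRU cell coupled to an external memory
    block via independent read and write gates (a GRU-MB-like design;
    the exact GRU-MB equations are an assumption here)."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = input_size + hidden_size
        # Standard GRU parameters: update gate z, reset gate r, candidate state.
        self.Wz = rng.standard_normal((hidden_size, k)) * 0.1
        self.Wr = rng.standard_normal((hidden_size, k)) * 0.1
        self.Wh = rng.standard_normal((hidden_size, k)) * 0.1
        # Independent gates controlling traffic to and from the memory block.
        self.Wread = rng.standard_normal((hidden_size, k)) * 0.1
        self.Wwrite = rng.standard_normal((hidden_size, k)) * 0.1
        self.hidden_size = hidden_size

    def step(self, x, h, m):
        """One timestep: x is the input, h the hidden state, m the memory."""
        xh = np.concatenate([x, h])
        read = sigmoid(self.Wread @ xh)    # how much of the memory to read in
        write = sigmoid(self.Wwrite @ xh)  # how much of the new state to store
        # Memory enters the recurrent computation only through the read gate.
        h_in = h + read * m
        xh_in = np.concatenate([x, h_in])
        z = sigmoid(self.Wz @ xh_in)
        r = sigmoid(self.Wr @ xh_in)
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h_in]))
        h_new = (1 - z) * h_in + z * h_cand
        # Memory is modified only through the write gate; a near-zero write
        # gate leaves it untouched, shielding it from noisy timesteps.
        m_new = (1 - write) * m + write * h_new
        return h_new, m_new
```

With both gates saturated low, the cell degenerates to a plain GRU and the memory persists unchanged over arbitrarily long spans — the decoupling the abstract credits for robustness to increased task depth.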
Year | DOI | Venue |
---|---|---|
2017 | 10.1145/3071178.3071346 | GECCO |
Keywords | Field | DocType |
---|---|---|
Neural Networks, Artificial intelligence, Machine Learning, Memory Augmented Neural Networks, Neuroevolution | Computer science, Recurrent neural network, Distributed memory, Data diffusion machine, Artificial intelligence, Memory map, Overlay, Distributed shared memory, Flat memory model, Machine learning, Auxiliary memory | Conference |
Citations | PageRank | References |
---|---|---|
4 | 0.46 | 20 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Shauharda Khadka | 1 | 7 | 2.51 |
Jen Jen Chung | 2 | 21 | 9.92 |
Kagan Tumer | 3 | 1632 | 168.61 |