Abstract
---|
Modelling multimedia data such as text, images, or videos usually involves analysing, predicting, or reconstructing them. The recurrent neural network (RNN) is a powerful machine learning approach for modelling such data recursively. As a variant, the long short-term memory (LSTM) extends the RNN with the ability to remember information for longer. Whilst the capacity of an LSTM can be increased by widening it or adding layers, this usually requires additional parameters and runtime, which can make learning harder. We therefore propose a Tensor LSTM in which the hidden states are tensorised as multidimensional arrays (tensors) and updated through a cross-layer convolution. Because parameters are shared spatially within the tensor, we can efficiently widen the model without extra parameters by increasing the tensorised size; because the deep computations of each time step are absorbed by the temporal computations of the time series, we can implicitly deepen the model with little extra runtime by delaying the output. Experiments show that our model is well suited to a variety of multimedia data modelling tasks, including text generation, text calculation, image classification, and video prediction.
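The core idea in the abstract — a hidden state stored as a tensor and updated by a convolution whose kernel is shared across all tensor locations, so widening adds no parameters — can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the authors' exact formulation: the shapes, the single 1-D convolution over `P` tensorised locations, and the function name `tlstm_step` are all illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tlstm_step(x_t, h_prev, c_prev, W, b):
    """One step of a simplified tensorised LSTM (illustrative sketch).

    x_t:    projected input vector of shape (M,)
    h_prev: hidden-state tensor of shape (P, M) -- P tensorised locations,
            M channels each
    c_prev: memory-cell tensor of shape (P, M)
    W:      convolution kernel of shape (K, 2*M, 4*M); the SAME kernel is
            applied at every location, so increasing P (widening the model)
            adds no parameters
    b:      gate bias of shape (4*M,)
    """
    P, M = h_prev.shape
    K = W.shape[0]
    pad = K // 2
    # Broadcast the input to every tensor location and concatenate it with
    # the previous hidden tensor along the channel axis.
    z = np.concatenate([np.tile(x_t, (P, 1)), h_prev], axis=1)   # (P, 2M)
    z = np.pad(z, ((pad, pad), (0, 0)))                          # zero-pad P axis
    # Cross-location convolution: each location's gates mix a K-window of
    # its neighbours' states with the shared kernel W.
    gates = np.stack([
        sum(z[p + k] @ W[k] for k in range(K)) + b for p in range(P)
    ])                                                           # (P, 4M)
    i, f, o, g = np.split(gates, 4, axis=1)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)            # memory update
    h = sigmoid(o) * np.tanh(c)                                  # new hidden tensor
    return h, c
```

Under this sketch, "widening without extra parameters" corresponds to choosing a larger `P` while `W` and `b` stay the same size, and "deepening by delaying the output" corresponds to reading the output several time steps after the input is fed in, letting the convolution propagate information across locations in the meantime.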
Year | DOI | Venue |
---|---|---|
2018 | 10.3390/sym10090370 | SYMMETRY-BASEL |
Keywords | Field | DocType
---|---|---|
multimedia data modelling, recurrent neural network (RNN), long short-term memory (LSTM), tensor, convolution, deep learning | Data modeling, Combinatorics, Recurrent neural network, Artificial intelligence, Mathematics | Journal
Volume | Issue | Citations
---|---|---|
10 | 9 | 0
PageRank | References | Authors
---|---|---|
0.34 | 7 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Zhen He | 1 | 20 | 3.56 |
Shao-Bing Gao | 2 | 51 | 3.62 |
Liang Xiao | 3 | 39 | 6.64 |
Daxue Liu | 4 | 116 | 10.89 |
Han-gen He | 5 | 87 | 12.70 |