Title
Deep Recurrent ResNet for Video Super-Resolution
Abstract
In recent years, the performance of video super-resolution has improved significantly with the help of convolutional neural networks (CNNs). Most recent CNN-based works use optical flow to handle video frames: they first compensate for motion and then perform multi-frame super-resolution on the aligned frames. However, this two-step approach has the drawback that the first step can become a bottleneck in overall performance. In this paper, we present a different approach to the video super-resolution problem that uses neither optical flow nor motion compensation. We adopt recent advances in recurrent neural networks, namely long short-term memory (LSTM), together with residual networks to handle consecutive video frames effectively. Compared to the single-frame method, our recurrent model gives superior performance and produces more temporally coherent results.
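The abstract describes the general idea only at a high level: a convolutional-recurrent cell whose hidden state carries information across consecutive frames, combined with residual learning, so no explicit motion compensation is needed. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of that general pattern, with all function names, shapes, and the 1x1 channel-mixing "convolutions" invented for illustration (a real model would use spatial convolutions and learned upsampling).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convlstm_step(x, h, c, W, U, b):
    """One ConvLSTM-style step (illustrative; 1x1 channel mixing only).

    x: (C, H, W) current frame features
    h, c: (F, H, W) hidden and cell state carried across frames
    W: (4F, C), U: (4F, F), b: (4F,) gate parameters
    """
    # "1x1 convolution" = mixing over the channel axis at every pixel.
    z = (np.einsum('oc,chw->ohw', W, x)
         + np.einsum('of,fhw->ohw', U, h)
         + b[:, None, None])
    F = h.shape[0]
    i = sigmoid(z[0 * F:1 * F])   # input gate
    f = sigmoid(z[1 * F:2 * F])   # forget gate
    o = sigmoid(z[2 * F:3 * F])   # output gate
    g = np.tanh(z[3 * F:4 * F])   # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def recurrent_residual_sr(frames, W, U, b, Wout):
    """Process consecutive frames; the hidden state propagates temporal
    information, and a residual is added to each input frame
    (upsampling omitted to keep the sketch short)."""
    C, H, Wd = frames[0].shape
    Fc = U.shape[1]
    h = np.zeros((Fc, H, Wd))
    c = np.zeros_like(h)
    outputs = []
    for x in frames:
        h, c = convlstm_step(x, h, c, W, U, b)
        residual = np.einsum('cf,fhw->chw', Wout, h)
        outputs.append(x + residual)  # residual learning: input + predicted detail
    return outputs
```

Because the hidden state `h` is never reset between frames, every output depends on all earlier frames, which is what yields the temporal coherence the abstract claims, without any explicit flow estimation.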
Year: 2017
Venue: Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
Field: Residual, Computer vision, Bottleneck, Convolutional neural network, Computer science, Motion compensation, Recurrent neural network, Artificial intelligence, Optical flow, Image resolution, Benchmark (computing)
DocType: Conference
ISSN: 2309-9402
Citations: 2
PageRank: 0.35
References: 0
Authors: 2
Name           Order  Citations  PageRank
Bee Lim        1      246        4.81
Kyoung Mu Lee  2      3228       153.84