Title
Improving the Performance of the PNLMS Algorithm Using l1 Norm Regularization
Abstract
The proportionate normalized least mean square (PNLMS) algorithm and its variants are by far the most popular adaptive filters used to identify sparse systems. The convergence of the PNLMS algorithm, though very fast initially, slows down at a later stage, eventually becoming worse than that of sparsity-agnostic adaptive filters like the NLMS. In this paper, we address this problem by introducing a carefully constructed l1 norm (of the coefficients) penalty into the PNLMS cost function, which favors sparsity. This results in certain zero-attracting terms in the PNLMS weight update equation that help shrink the coefficients, especially the inactive taps, thereby arresting the slowdown of convergence and also producing lower steady-state excess mean square error (EMSE). A rigorous convergence analysis of the proposed algorithm is presented that expresses the steady-state mean square deviation of both the active and the inactive taps in terms of a zero-attracting coefficient of the algorithm. The analysis reveals that a further reduction of the EMSE is possible by deploying a variable step size (VSS) simultaneously with a variable zero-attracting coefficient in the weight update process. Simulation results confirm the superior performance of the proposed VSS zero-attracting PNLMS algorithm over existing algorithms, in particular achieving both faster convergence and lower steady-state EMSE simultaneously.
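The zero-attracting mechanism described in the abstract can be sketched as follows. This is a hedged illustration only: the proportionate gain rule shown is the standard PNLMS one, and the parameter names and values (`mu`, `rho`, `delta`, `p`) are assumptions for the sketch, not the paper's reported choices or its VSS/variable-coefficient refinements.

```python
import numpy as np

def za_pnlms_update(w, x, d, mu=0.5, rho=1e-4, delta=1e-2, p=5e-3):
    """One sketched zero-attracting PNLMS iteration (assumed form).

    w : current filter coefficients (length L)
    x : most recent L input samples
    d : desired (reference) sample
    """
    e = d - w @ x                                     # a priori error
    # Proportionate gains: each tap's step size scales with its magnitude,
    # with a floor so that inactive taps still adapt.
    gamma = np.maximum(p * max(delta, np.max(np.abs(w))), np.abs(w))
    g = gamma / gamma.mean()                          # normalize gains
    denom = x @ (g * x) + 1e-8                        # regularized norm
    # PNLMS step plus an l1 zero attractor that shrinks the coefficients,
    # most visibly the inactive taps.
    w_new = w + mu * e * g * x / denom - rho * np.sign(w)
    return w_new, e
```

In a sparse system identification run, the `-rho * np.sign(w)` term keeps the inactive taps pinned near zero, which is what counters the late-stage slowdown of plain PNLMS.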
Year
2016
Venue
IEEE/ACM Trans. Audio, Speech & Language Processing
DocType
Journal
Volume
24
Issue
7
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Rajib Lochan Das | 1 | 35 | 4.97
Mrityunjoy Chakraborty | 2 | 124 | 28.63