Title
Information Potential for Some Probability Density Functions
Abstract
This paper relates to the information theoretic learning methodology, whose goal is to quantify global scalar descriptors (e.g., entropy) of a given probability density function (PDF). In this context, the core concept is the information potential (IP) S_s(x) := ∫_ℝ p^s(t, x) dt, s > 0, of a PDF p(t, x) depending on a parameter x; it is naturally related to the Rényi and Tsallis entropies. We present several such PDFs, viewed also as kernels of integral operators, for which a precise relation exists between S_2(x) and the variance Var[p(t, x)]. For these PDFs we determine explicitly the IP and the Shannon entropy. As an application to Information Theoretic Learning we determine two essential indices used in this theory: the expected value E[log p(t, x)] and the variance Var[log p(t, x)]. The latter is an index of the intrinsic shape of p(t, x) having more statistical power than kurtosis. For a sequence of B-spline functions, considered as kernels of Steklov operators and also as PDFs, we investigate the sequence of IPs and its asymptotic behaviour. Another special sequence of PDFs consists of the kernels of Kantorovich modifications of the classical Bernstein operators. Convexity properties and bounds of the associated IPs, useful in Information Theoretic Learning, are discussed. Several examples and numerical computations illustrate the general results. (C) 2020 Elsevier Inc. All rights reserved.
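The abstract's central quantity, the information potential S_s(x) = ∫_ℝ p^s(t, x) dt, can be checked numerically. The sketch below is illustrative and not from the paper: it approximates S_2 for a Gaussian density (whose parameter σ plays the role of x) by the trapezoidal rule, and compares it with the known closed form S_2 = 1/(2σ√π); the Rényi entropy of order s then follows as H_s = log(S_s)/(1 − s). The function names are my own.

```python
import numpy as np

def gaussian_pdf(t, sigma=1.0):
    """Density of N(0, sigma^2); sigma plays the role of the parameter x."""
    return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def information_potential(pdf, s=2.0, lo=-10.0, hi=10.0, n=200001):
    """Approximate S_s = integral of p^s(t) dt with the composite trapezoidal rule."""
    grid = np.linspace(lo, hi, n)
    vals = pdf(grid) ** s
    dt = grid[1] - grid[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

sigma = 1.0
S2 = information_potential(lambda t: gaussian_pdf(t, sigma), s=2.0)
closed_form = 1.0 / (2.0 * sigma * np.sqrt(np.pi))  # known value of S_2 for a Gaussian
renyi_2 = -np.log(S2)  # Rényi entropy of order 2: H_s = log(S_s) / (1 - s)
```

The agreement between the quadrature value and the closed form is what the paper establishes analytically for several families of PDFs.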
Year
2021
DOI
10.1016/j.amc.2020.125578
Venue
APPLIED MATHEMATICS AND COMPUTATION
Keywords
Probability density function, Information potential, Entropy, Positive linear operators, B-spline functions
DocType
Journal
Volume
389
ISSN
0096-3003
Citations
0
PageRank
0.34
References
0
Authors
3
Name                    Order  Citations  PageRank
Ana-Maria Acu           1      5          4.02
Gülen Başcanbaz-Tunca   2      0          0.34
Ioan Rasa               3      15         8.99