Title
Learning along a Channel: the Expectation part of Expectation-Maximisation.
Abstract
This paper first investigates a form of frequentist learning that is often called Maximal Likelihood Estimation (MLE). It is redescribed as a natural transformation from multisets to distributions that commutes with marginalisation and disintegration. It forms the basis for the next, main topic: learning of hidden states, which is reformulated as learning along a channel. This topic requires a fundamental look at what data is and what its validity is in a particular state. The paper distinguishes two forms, denoted as ‘M’ for ‘multiple states’ and ‘C’ for ‘copied states’. It is shown that M and C forms exist for validity of data, for learning from data, and for learning along a channel. This M/C distinction allows us to capture two completely different examples from the literature which both claim to be instances of Expectation-Maximisation.
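The abstract's central observation — that frequentist learning (MLE) turns a multiset of observations into a distribution by normalising counts, and that this operation commutes with marginalisation — can be illustrated with a small sketch. This is not the paper's categorical formalisation, just an illustrative check in Python; the function names are my own.

```python
from collections import Counter

def learn(multiset):
    """Frequentist learning (MLE): normalise a multiset of
    observations into a probability distribution."""
    total = sum(multiset.values())
    return {x: n / total for x, n in multiset.items()}

def marginalise_multiset(multiset, i):
    """Project a multiset over pairs onto coordinate i."""
    out = Counter()
    for pair, n in multiset.items():
        out[pair[i]] += n
    return dict(out)

def marginalise_dist(dist, i):
    """Marginalise a joint distribution over pairs onto coordinate i."""
    out = {}
    for pair, p in dist.items():
        out[pair[i]] = out.get(pair[i], 0.0) + p
    return out

# A multiset of joint observations over pairs (X, Y).
data = Counter({("a", 0): 3, ("a", 1): 1, ("b", 0): 2, ("b", 1): 2})

# Learning then marginalising ...
lhs = marginalise_dist(learn(data), 0)
# ... agrees with marginalising then learning (naturality).
rhs = learn(marginalise_multiset(data, 0))
assert lhs == rhs
```

Here both routes yield the marginal distribution `{'a': 0.5, 'b': 0.5}`, which is the naturality of the multisets-to-distributions transformation that the abstract refers to.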
Year
2019
DOI
10.1016/j.entcs.2019.09.008
Venue
Electronic Notes in Theoretical Computer Science
Keywords
Probabilistic learning, Maximal Likelihood Estimation, latent variables, Expectation-Maximisation, learning along a channel
Field
Mathematical economics, Frequentist inference, Computer science, Communication channel, Theoretical computer science
DocType
Journal
Volume
347
ISSN
1571-0661
Citations
0
PageRank
0.34
References
0
Authors
1
Name
B. Jacobs
Order
1
Citations / PageRank
1046100.09