Title
Using hidden nodes in Bayesian networks
Abstract
In the construction of a Bayesian network, it is assumed that variables sharing the same parent are conditionally independent given that parent. In practice, this assumption may not hold and will give rise to incorrect inferences. In cases where a dependency is found between such variables, we propose that creating a hidden node, which in effect models the dependency, can solve the problem. To determine the conditional probability matrices for the hidden node, we use a gradient descent method. The objective function to be minimised is the squared error between the measured and computed values of the instantiated nodes. Both forward and backward propagation are used to compute the node probabilities. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network. For parents with more than two children, we use the simplest node-by-node creation approach. We tested our approach on two different networks in an endoscope guidance system and, in both cases, obtained improved results.
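The fitting procedure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the network structure (a parent A, an inserted hidden node H, and two children B1, B2), all CPT values, the learning rate, and the finite-difference gradient are hypothetical stand-ins. The paper derives analytic error gradients and propagates them as messages through the network; here a plain numeric gradient is used instead.

```python
# Illustrative sketch: fit the CPT of a hidden node H inserted between a
# parent A and two dependent children B1, B2 by gradient descent on the
# squared error between measured and computed child probabilities.
# All structure and numbers are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

p_a = np.array([0.6, 0.4])                    # prior P(A)
p_b1_h = np.array([[0.9, 0.2], [0.1, 0.8]])   # fixed CPT P(B1|H): rows B1, cols H
p_b2_h = np.array([[0.7, 0.3], [0.3, 0.7]])   # fixed CPT P(B2|H)

# "Measured" marginals of the instantiated children (hypothetical targets).
target_b1 = np.array([0.55, 0.45])
target_b2 = np.array([0.52, 0.48])

def forward(theta):
    """Forward propagation: theta are unconstrained logits for P(H|A);
    a per-column softmax keeps each column a valid distribution."""
    p_h_a = np.exp(theta) / np.exp(theta).sum(axis=0, keepdims=True)
    p_h = p_h_a @ p_a                          # marginal P(H)
    return p_b1_h @ p_h, p_b2_h @ p_h          # computed P(B1), P(B2)

def loss(theta):
    """Squared error between computed and measured child probabilities."""
    b1, b2 = forward(theta)
    return np.sum((b1 - target_b1) ** 2) + np.sum((b2 - target_b2) ** 2)

theta = rng.normal(size=(2, 2))
lr, eps = 5.0, 1e-6
for _ in range(2000):
    # Central finite-difference gradient; the paper instead computes
    # analytic gradients and propagates them as network messages.
    grad = np.zeros_like(theta)
    for i in range(2):
        for j in range(2):
            d = np.zeros_like(theta)
            d[i, j] = eps
            grad[i, j] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    theta -= lr * grad

print(f"final squared error: {loss(theta):.6f}")
```

With these stand-in targets the error cannot reach exactly zero (the two children over-constrain the single hidden marginal), so the descent settles on a least-squares compromise, which mirrors the paper's use of squared error as the objective rather than an exact fit.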
Year
1996
DOI
10.1016/0004-3702(95)00119-0
Venue
Artif. Intell.
Keywords
bayesian network, hidden node, objective function, gradient descent method, conditional independence, conditional probability
Field
Gradient method, Gradient descent, Conditional probability, Conditional independence, Inference, Bayesian network, Artificial intelligence, Backpropagation, Hidden node problem, Mathematics, Machine learning
DocType
Journal
Volume
88
Issue
1-2
ISSN
0004-3702
Citations
20
PageRank
1.67
References
18
Authors
2
Name | Order | Citations | PageRank
C K Kwoh | 1 | 5594 | 6.55
Duncan Fyfe Gillies | 2 | 971 | 7.86