Abstract |
---|
(1) The outputs of a typical multi-output classification network do not satisfy the axioms of probability; probabilities should be positive and sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. (2) We find that in cases of interest, neural networks are (and should be) somewhat underdetermined because the training data is always... |
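The abstract's first point — that raw output levels are not valid probabilities because they need not be positive or sum to one — is the problem a normalization such as the softmax function (listed among this record's fields) addresses. A minimal sketch, with hypothetical raw output levels:

```python
import math

def softmax(scores):
    """Map arbitrary real-valued network outputs to a valid
    probability distribution: every entry positive, summing to one."""
    m = max(scores)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

raw = [2.0, -1.0, 0.5]  # hypothetical raw output levels from a trained network
probs = softmax(raw)
assert all(p > 0 for p in probs)          # positivity axiom
assert abs(sum(probs) - 1.0) < 1e-12      # normalization axiom
```

The exponential guarantees positivity and the division by the total guarantees normalization, so the output satisfies the probability axioms regardless of the raw score values.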
Year | Venue | Keywords |
---|---|---|
1990 | NIPS | neural-net output level, probability distribution, satisfiability, neural net, neural network |
Field | DocType | ISBN |
---|---|---|
Training set, Feature vector, Softmax function, Computer science, Theoretical computer science, Probability distribution, Preprocessor, Artificial intelligence, Formalism (philosophy), Probability axioms, Artificial neural network, Machine learning | Conference | 1-55860-184-8 |
Citations | PageRank | References |
---|---|---|
41 | 49.82 | 3 |
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
J. S. Denker | 1 | 3245 | 2524.81 |
Yann LeCun | 2 | 26090 | 3771.21 |