Title: (P, p) Retraining Policies
Abstract: Skills that are practiced infrequently need to be retrained. A retraining policy is optimal if it minimizes the cost of keeping the probability that the skill is learned between two bounds. The (P, p) policy is to retrain only when the probability that the skill is learned has dropped to just above the lower bound p, so that this probability is brought up to just below the upper bound P. Under minimal assumptions on the cost function, a set of two easy-to-check conditions involving the relearning and forgetting functions guarantees the optimality of the (P, p) policy. The conditions hold for the power functions proposed in the psychology of learning and forgetting, but not for exponential functions.
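The retrain-when-near-the-lower-bound cycle described in the abstract can be sketched in a few lines. The power-law forgetting form, the threshold values, and all function names below are illustrative assumptions for a minimal simulation, not the paper's actual model or notation:

```python
# Hypothetical sketch of a (P, p) retraining policy.
# Assumptions: a power-law forgetting curve and a retraining step that
# restores the probability to the upper bound P in one shot.

def forget(prob, periods=1, beta=0.5):
    """Power-law decay of the probability that the skill is learned."""
    return prob * (1 + periods) ** (-beta)

def simulate(P=0.9, p=0.3, horizon=20, beta=0.5):
    """Count retrainings over `horizon` periods under the (P, p) policy:
    retrain only when the probability reaches the lower bound p, which
    brings it back up to the upper bound P."""
    prob = P
    retrainings = 0
    for _ in range(horizon):
        prob = forget(prob, 1, beta)   # one period of forgetting
        if prob <= p:                  # lower bound reached: policy triggers
            prob = P                   # retraining restores the upper bound
            retrainings += 1
    return retrainings
```

With these illustrative parameters the probability decays by a factor of about 0.71 per period, so a retraining is triggered every four periods; `simulate()` returns 5 over a 20-period horizon.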
Year: 2007
DOI: 10.1109/TSMCA.2007.902620
Venue: IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Keywords: training, cost function, exponential functions, forgetting functions, lower bound, probability, relearning functions, retraining policy, upper bound, dynamic programming, instruction, inventory management, memory, optimality
Field: Dynamic programming, Forgetting, Power function, Mathematical optimization, Psychology of learning, Exponential function, Computer science, Upper and lower bounds, Artificial intelligence, Machine learning, Retraining
DocType: Journal
Volume: 37
Issue: 5
ISSN: 1083-4427
Citations: 0
PageRank: 0.34
References: 3
Authors: 1
Name: Konstantinos V. Katsikopoulos
Order: 1
Citations, PageRank: 739.68