Title
RankNEAT: Outperforming Stochastic Gradient Search in Preference Learning Tasks
Abstract
Stochastic gradient descent (SGD) is a premium optimization method for training neural networks, especially for learning objectively defined labels such as image objects and events. When a neural network is instead faced with subjectively defined labels, such as human demonstrations or annotations, SGD may struggle to explore the deceptive and noisy loss landscapes caused by the inherent bias and subjectivity of humans. While neural networks are often trained via preference learning algorithms in an effort to eliminate such data noise, the de facto training methods still rely on gradient descent. Motivated by the lack of empirical studies on the impact of evolutionary search on the training of preference learners, we introduce the RankNEAT algorithm, which learns to rank through neuroevolution of augmenting topologies. We test the hypothesis that RankNEAT outperforms traditional gradient-based preference learning within the affective computing domain, in particular for predicting annotated player arousal from the game footage of three dissimilar games. RankNEAT yields superior performance compared to the gradient-based preference learner (RankNet) in the majority of experiments, since its architecture optimization capacity acts as an efficient feature selection mechanism, thereby eliminating overfitting. Results suggest that RankNEAT is a viable and highly efficient evolutionary alternative for preference learning.
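To illustrate the gradient-based baseline referred to in the abstract, the sketch below shows a RankNet-style pairwise preference learner trained with SGD. It is a minimal assumed example: the PyTorch framework, the network architecture, and all names (ScoreNet, ranknet_loss, layer sizes) are illustrative choices and are not taken from the paper.

```python
# Minimal sketch of a RankNet-style pairwise preference learner (assumed PyTorch
# implementation for illustration; not the authors' code). Given a preference
# pair (x_pref, x_nonpref) where x_pref is the human-preferred item, the model
# is trained so that its score for x_pref exceeds its score for x_nonpref.
import torch
import torch.nn as nn


class ScoreNet(nn.Module):
    """Small scoring network; layer sizes are illustrative only."""

    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def ranknet_loss(s_pref: torch.Tensor, s_nonpref: torch.Tensor) -> torch.Tensor:
    # RankNet models P(pref > nonpref) = sigmoid(s_pref - s_nonpref); with a
    # target probability of 1, the cross-entropy reduces to softplus(-(diff)).
    return nn.functional.softplus(-(s_pref - s_nonpref)).mean()


if __name__ == "__main__":
    model = ScoreNet(in_dim=16)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    # Toy preference pairs: x_pref should be ranked above x_nonpref.
    x_pref, x_nonpref = torch.randn(8, 16), torch.randn(8, 16)
    for _ in range(10):
        opt.zero_grad()
        loss = ranknet_loss(model(x_pref), model(x_nonpref))
        loss.backward()
        opt.step()
```

A neuroevolutionary counterpart in the spirit of RankNEAT would instead evolve the scoring network's weights and topology with NEAT, for example using the fraction of correctly ordered preference pairs as the fitness function; that evolutionary loop is not shown here.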
Year
2022
DOI
10.1145/3512290.3528744
Venue
Proceedings of the 2022 Genetic and Evolutionary Computation Conference (GECCO '22)
Keywords
Preference learning, neuroevolution, NEAT, RankNet, vision transformers, stochastic gradient descent, affect modeling, computer games
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Kosmas Pinitas | 1 | 0 | 0.34
Konstantinos Makantasis | 2 | 0 | 0.34
Antonios Liapis | 3 | 0 | 0.34
Georgios N. Yannakakis | 4 | 2332 | 168.42