Title
Performance Evaluation Gaps In A Real-Time Strategy Game Between Human And Artificial Intelligence Players
Abstract
Since 2010, annual StarCraft artificial intelligence (AI) competitions have promoted the development of successful AI players for complex real-time strategy games. In these competitions, AI players are ranked by their win ratio over thousands of head-to-head matches. Although simple and easy to implement, this evaluation scheme may do little to help develop AI players that are competitive against humans. In this paper, we recruited 45 human StarCraft players at different expertise levels (expert/medium/novice) and asked them to play against the top 18 AI players selected from five years of competitions (2011-2015). The results show that human evaluations of the AI players differ substantially from the current standard evaluation and ranking method. In fact, from a human standpoint, there has been little progress in the quality of StarCraft AI players over the years. It is even possible that AI-only tournaments lead to AIs that are unacceptable competitors for humans. This paper is the first to systematically explore the human evaluation of AI players, the evolution of AI players, and the differences between human perception and tournament-based evaluations. The findings can help AI developers in game companies and AI tournament organizers better incorporate the perspective of human users into their AI systems.
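For context, the tournament ranking the abstract refers to is a simple win-ratio ranking over head-to-head matches. The following is a minimal sketch of that scheme, not code from the paper; the function name and the match data are hypothetical and only illustrate how such a ranking is computed.

from collections import defaultdict

def rank_by_win_ratio(matches):
    """Rank AI players by wins / games played.

    matches: iterable of (winner, loser) pairs, one per head-to-head game.
    Returns a list of (player, win_ratio) sorted from best to worst.
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for winner, loser in matches:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    ratios = {bot: wins[bot] / games[bot] for bot in games}
    return sorted(ratios.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical results: BotA beats BotB twice, BotB beats BotC, BotC beats BotA.
print(rank_by_win_ratio([("BotA", "BotB"), ("BotA", "BotB"),
                         ("BotB", "BotC"), ("BotC", "BotA")]))

Note that this ranking only reflects performance against other AIs; it says nothing about how human opponents perceive the bots, which is the gap the paper investigates.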
Year
2018
DOI
10.1109/ACCESS.2018.2800016
Venue
IEEE ACCESS
Keywords
Video game, Starcraft, game, artificial intelligence, game AI competition, human factor, human computer interaction
Field
Real-time strategy, Tournament, Ranking, Computer science, Artificial intelligence, Perception, Competitor analysis
DocType
Journal
Volume
6
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
4
Name             Order  Citations  PageRank
Man-Je Kim       1      7          3.44
Kyung-Joong Kim  2      319        37.39
Seung-Jun Kim    3      1003       62.52
Anind Dey        4      11484      959.91