Title
Learning non-cooperative behaviour for dialogue agents.
Abstract
Non-cooperative dialogue behaviour for artificial agents (e.g. deception and information hiding) has been identified as important in a variety of application areas, including education and healthcare, but it has not yet been addressed using modern statistical approaches to dialogue agents. Deception has also been argued to be a requirement for higher-order intentionality in AI. We develop and evaluate a statistical dialogue agent, trained using Reinforcement Learning, which learns to perform non-cooperative dialogue moves in order to complete its own objectives in a stochastic trading game with imperfect information. We show that, when given the ability to perform both cooperative and non-cooperative dialogue moves, such an agent can learn to bluff and to lie so as to win more games. For example, we show that a non-cooperative dialogue agent learns to win 10.5% more games against a strong rule-based adversary, when compared to an optimised agent which cannot perform non-cooperative moves. This work is the first to show how agents can learn to use dialogue in a non-cooperative way to meet their own goals.
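
To make the abstract's core idea concrete, the following is a minimal sketch (not the authors' code) of a reinforcement-learning agent that chooses between truthful and deceptive dialogue moves in a toy trading game and learns that lying about its goal wins more trades against a withholding adversary. The two-resource game, the "announce" moves, the adversary rule, and the reward values are all illustrative assumptions; the paper's actual setting is a larger stochastic trading game with imperfect information.

import random
from collections import defaultdict

RESOURCES = ("wood", "rock")
# Dialogue moves: announce "I need <r>" -- truthful or deceptive
ACTIONS = [("announce", r) for r in RESOURCES]

def adversary_gives(announced: str) -> str:
    """A simple rule-based adversary (an assumption): it withholds
    whatever the agent says it needs and offers the other resource."""
    return RESOURCES[1 - RESOURCES.index(announced)]

def play_episode(q, epsilon=0.1, alpha=0.2):
    need = random.choice(RESOURCES)  # hidden true goal (the state)
    # epsilon-greedy selection over dialogue moves
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(need, a)])
    received = adversary_gives(action[1])
    reward = 1.0 if received == need else -1.0  # win or lose the trade
    # one-step tabular Q-learning update (each episode is one exchange)
    q[(need, action)] += alpha * (reward - q[(need, action)])
    return reward

if __name__ == "__main__":
    q = defaultdict(float)
    wins = sum(play_episode(q) > 0 for _ in range(5000))
    print(f"win rate: {wins / 5000:.2f}")
    for need in RESOURCES:
        best = max(ACTIONS, key=lambda a: q[(need, a)])
        print(f"needs {need!r} -> learned to say: 'I need {best[1]}'")

Under these assumptions the agent converges on the deceptive policy (announcing the resource it does not need), mirroring the abstract's claim that access to non-cooperative moves lets the learner win more games than a purely truthful agent could.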
Year
2014
DOI
10.3233/978-1-61499-419-0-999
Venue
Frontiers in Artificial Intelligence and Applications
Field
Bluff, Intentionality, Computer science, Deception, Information hiding, Artificial intelligence, Adversary, Perfect information, Machine learning, Reinforcement learning
DocType
Conference
Volume
263
ISSN
0922-6389
Citations
0
PageRank
0.34
References
3
Authors
2
Name                Order  Citations  PageRank
Ioannis Efstathiou  1      15         2.12
Oliver Lemon        2      1072       86.38