| Abstract |
|---|
| We propose a method to speed up reinforcement learning of policies for spoken dialogue systems. This is achieved by combining a coarse-grained abstract representation of states and actions with learning restricted to frequently visited states. The value of unsampled states is approximated by linear interpolation between known states. Experiments show that the proposed method effectively optimizes dialogue strategies for frequently visited dialogue states. |
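The abstract's core approximation idea — estimating the value of an unsampled state from nearby frequently visited states — can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D abstract state positions, the stored values, and the function name are all illustrative assumptions, shown only to make the interpolation step concrete.

```python
def interpolate_value(x, sampled):
    """Approximate V(x) by linear interpolation between the two nearest
    sampled (position, value) pairs; clamp outside the sampled range.
    (Illustrative sketch, not the paper's method.)"""
    pts = sorted(sampled.items())
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    # Find the bracketing pair of sampled states and interpolate.
    for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return v0 + t * (v1 - v0)

# Values learned only for frequently visited abstract states (assumed data).
sampled_values = {0.0: 0.2, 0.5: 0.8, 1.0: 0.4}
print(interpolate_value(0.25, sampled_values))  # midway between 0.2 and 0.8: 0.5
```

The clamping outside the sampled range mirrors the general idea that states never visited during learning still receive a stable, bounded value estimate.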
| Year | DOI | Venue |
|---|---|---|
| 2004 | 10.1007/978-3-540-30211-7_1 | IJCNLP |
| Keywords | Field | DocType |
|---|---|---|
| known state, coarse grained abstract representation, linear interpolation, stable function approximation, fast reinforcement, dialogue state, dialogue system, unsampled state, dialogue policy, optimizes dialogue strategy, function approximation, reinforcement learning | Function approximation, Computer science, Artificial intelligence, Linear interpolation, Machine learning, Reinforcement learning, Speedup | Conference |
| Volume | ISSN | ISBN |
|---|---|---|
| 3248 | 0302-9743 | 3-540-24475-1 |

| Citations | PageRank | References |
|---|---|---|
| 4 | 0.51 | 9 |
| Authors |
|---|
| 3 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Matthias Denecke | 1 | 177 | 24.32 |
| Kohji Dohsaka | 2 | 173 | 18.38 |
| Mikio Nakano | 3 | 488 | 61.92 |