Abstract |
---|
Evaluation is an essential activity in HCI. Once a product is released, it may be of interest to continue evaluating it implicitly, in the wild, and on a large scale. In addition to data logs, which offer a view of how the system is used, attitudinal data (e.g. satisfaction, emotional reactions) also contribute to understanding the user's complete interaction. Compiling these data implicitly and in the wild is a challenge. This work proposes an approach to evaluating interactive systems with a large number of users (in the large), under real conditions (in the wild), and implicitly, i.e. without users being aware that they are participating in the evaluation of the system.
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3335595.3336290 | Proceedings of the XX International Conference on Human Computer Interaction |
Keywords | Field | DocType |
---|---|---|
Implicit evaluation, evaluation model, in-the-large, in-the-wild | Computer science, Human–computer interaction, Multimedia | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-7176-6 | 0 | 0.34 |
References | Authors |
---|---|
0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Dino Babahmetović | 1 | 0 | 0.34 |
Cristina Manresa-Yee | 2 | 120 | 20.73 |