| Abstract |
|---|
| Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings of combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage various pieces of technology that will best suit specific sub-tasks. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination. |
| Year | DOI | Venue |
|---|---|---|
| 2016 | 10.1145/2992154.2996792 | ISS |

| Keywords | DocType | Citations |
|---|---|---|
| Multiple Display Environments, multi-user interaction, collaboration, input devices, large-scale displays, virtual reality, multi-touch screens, motion capture | Conference | 1 |

| PageRank | References | Authors |
|---|---|---|
| 0.40 | 8 | 6 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Thomas Marrinan | 1 | 1 | 0.40 |
| Arthur Nishimoto | 2 | 1 | 0.40 |
| Joseph A. Insley | 3 | 215 | 40.86 |
| Silvio Rizzi | 4 | 23 | 7.48 |
| Andrew E. Johnson | 5 | 437 | 67.01 |
| Michael E. Papka | 6 | 953 | 138.69 |