Title
A modular framework for collaborative multimodal annotation and visualization
Abstract
Artificial Intelligence (AI) research, including machine learning, computer vision, and natural language processing, requires large amounts of annotated data. In the current research and development (R&D) pipeline, each group collects its own datasets using an annotation tool tailored specifically to its needs, followed by a series of engineering efforts to load external datasets and develop custom interfaces, often mimicking components of existing annotation tools. In a departure from this paradigm, my research focuses on reducing these inefficiencies by developing a unified, web-based, fully configurable framework that enables researchers to set up an end-to-end R&D experience, from dataset annotation to deployment with an application-specific AI backend. Extensible and customizable as required by individual projects, the framework has already been featured in a number of research efforts, including conversational AI, explainable AI, and commonsense grounding of language and vision. This submission outlines the milestones to date and planned future work.
Year
2019

DOI
10.1145/3308557.3308730

Venue
Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion
Keywords
HCI, commonsense grounding, conversational AI, explainable AI, language and vision, multimodal annotation

Field
Software deployment, Annotation, Visualization, Computer science, Human–computer interaction, Natural language, Modular design

DocType
Conference
ISBN
978-1-4503-6673-1

Citations
0

PageRank
0.34
References
2

Authors
1
Name
Chris Kim

Order
1

Citations
0

PageRank
0.34