Title: Human-machine communication for assistive IoT technologies
Abstract:
Despite the phenomenal advances in the computational power and functionality of electronic systems, human-machine interaction has largely been limited to simple control panels, the keyboard, the mouse, and the display. Consequently, these systems either rely critically on close human guidance or operate almost independently of the user. An exemplar technology integrated tightly into our lives is the smartphone. However, the term "smart" is a misnomer, since the phone has fundamentally no intelligence to understand its user. Users still have to type, touch, or speak (to some extent) to express their intentions in a form accessible to the phone. Hence, intelligent decision making remains almost entirely a human task. A life-changing experience can be achieved by transforming machines from passive tools into agents capable of understanding human physiology and what their user wants [1]. This can advance human capabilities in unimagined ways by building a symbiotic relationship to solve real-world problems cooperatively. One of the high-impact application areas of this approach is assistive Internet of Things (IoT) technologies for physically challenged individuals. The Annual World Report on Disability reveals that 15% of the world population lives with disability, while 110 to 190 million of these people have difficulty in functioning [1]. Quality of life for this population can improve significantly if we can provide accessibility to smart devices, which supply sensory inputs and assist with everyday tasks. This work demonstrates that smart IoT devices open up the possibility of alleviating the burden on the user by equipping everyday objects, such as a wheelchair, with decision-making capabilities. Moving part of the intelligent decision making to smart IoT objects requires a robust mechanism for human-machine communication (HMC).
To address this challenge, we present examples of multimodal HMC mechanisms, where the modalities are electroencephalogram (EEG) signals, speech commands, and motion sensing. We also introduce an IoT co-simulation framework built from a network simulator (OMNeT++) and the Virtual Robot Experimentation Platform (V-REP) robot simulator. We show how this framework is used to evaluate the effectiveness of different HMC strategies, using automated indoor navigation as a driver application.
Year: 2016
DOI: 10.1145/2968456.2974009
Venue: CODES+ISSS
Keywords: human-machine communication, assistive IoT technologies, electronic systems, human-machine interaction, smartphone, intelligent decision making, human physiology, assistive Internet of Things, HMC, electroencephalogram, EEG, speech commands, motion sensing, network simulator, OMNeT++, robot simulation platform, Virtual Robot Experimentation Platform, V-REP
Field: Modalities, Wheelchair, Population, Computer science, Internet of Things, Network simulation, Phone, Misnomer, Robot, Multimedia
DocType: Conference
ISBN: 978-1-5090-3590-8
Citations: 0
PageRank: 0.34
References: 2
Authors: 3
Name, Order, Citations, PageRank
Alexandra Porter, 1, 8, 1.62
Md Muztoba, 2, 3, 1.07
Ümit Y. Ogras, 3, 203, 15.03