Abstract
---
Audio data from a microphone can be a rich source of information. The speech and audio processing community has explored using audio data to detect emotion, depression, Alzheimer's disease, and even children's age, weight, and height. The mobile community has looked at using smartphone-based audio to detect coughing and other respiratory sounds and to help predict students' GPA. However, audio data in these studies tends to be collected in controlled environments using well-placed, high-quality microphones, or from phone calls. Applying these kinds of analyses to continuous, in-the-wild audio could have tremendous applications, particularly in the context of health monitoring. As part of a health monitoring study, we use smartwatches to collect in-the-wild audio from real patients. In this paper we characterize the quality of the audio data we collected. Our findings include that the smartwatch-based audio is good enough to discern speech and respiratory sounds. However, extracting these sounds is difficult because of the wide variety of noise in the signal, and current tools handle this noise poorly. We also find that the quality of the microphone allows annotators to differentiate the source of speech and coughing, which adds another level of complexity to analyzing this audio.
Year | DOI | Venue
---|---|---
2018 | 10.1145/3211960.3211977 | MobiSys '18: The 16th Annual International Conference on Mobile Systems, Applications, and Services, Munich, Germany, June 2018

DocType | ISBN | Citations
---|---|---
Conference | 978-1-4503-5842-2 | 1

PageRank | References | Authors
---|---|---
0.40 | 0 | 6
Name | Order | Citations | PageRank
---|---|---|---
Daniyal Liaqat | 1 | 8 | 3.60 |
Robert Wu | 2 | 1 | 0.73 |
Andrea Gershon | 3 | 1 | 0.40 |
Hisham Alshaer | 4 | 6 | 3.20 |
Frank Rudzicz | 5 | 231 | 44.82 |
Eyal de Lara | 6 | 1864 | 161.54 |