Abstract | ||
---|---|---|
The audio-based approach to video indexing described by the authors detects music and speech independently, even when they occur simultaneously. The indexed video segments, when presented on the Video Sound Browser, give users random access to the video. Based on video structuring that can link video segments to the director's intentions, the Video in Time system provides different levels of video condensation. | ||
Year | DOI | Venue |
---|---|---|
1998 | 10.1109/93.713301 | MultiMedia, IEEE |
Keywords | Field | DocType |
indexing,multimedia systems,music,speech processing,video signal processing,Video Sound Browser,Video in Time system,audio-based approach,director intentions,indexed video segments,music detection,random video access,speech detection,video condensation levels,video handling,video indexing,video structuring | Video processing,Video capture,Computer science,Multiview Video Coding,Speech recognition,Video tracking,Smacker video,Video denoising,Multimedia,Video compression picture types,Uncompressed video | Journal |
Volume | Issue | ISSN |
5 | 3 | 1070-986X |
Citations | PageRank | References |
36 | 4.27 | 18 |
Authors | ||
---|---|---|
4 | ||
Name | Order | Citations | PageRank |
---|---|---|---|
Kenichi Minami | 1 | 43 | 5.71 |
Akihito Akutsu | 2 | 308 | 77.61 |
H. Hamada | 3 | 39 | 5.77 |
Yoshinobu Tonomura | 4 | 554 | 149.46 |