Abstract |
---|
As frame rates and resolutions of video streams increase, a need for parallel video processing emerges. Most studies offload computation to the cloud, but this is not always possible. For example, solar-powered cameras can be deployed in locations away from power grids. A better choice is to process the data locally on embedded computers without transmitting raw video through networks. Parallel computing alleviates the performance bottleneck of a single embedded computer, but it degrades analysis accuracy because partitioning video streams breaks the continuity of motion. This paper presents a solution for maintaining accuracy in parallel video processing. A video stream is divided into multiple segments processed on different embedded computers. The segments overlap so that continuous motion can be detected. The system balances workload based on the relative speeds of the GPU and CPU to reduce execution time. Experimental results show up to 82.6% improvement in accuracy and 58% reduction in execution time. |
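The abstract's key idea is splitting a video into segments that overlap at their boundaries, so motion crossing a boundary is still seen in full by at least one worker. A minimal sketch of such a partition (not the authors' implementation; the frame count, worker count, and overlap size below are illustrative assumptions):

```python
def split_with_overlap(num_frames, num_workers, overlap):
    """Divide num_frames frames into num_workers segments whose
    boundaries extend `overlap` frames into the neighboring segment,
    so motion that crosses a boundary is fully visible to one worker."""
    base = num_frames // num_workers
    segments = []
    for i in range(num_workers):
        start = max(0, i * base - overlap)
        # Last segment absorbs any remainder frames.
        end = num_frames if i == num_workers - 1 else (i + 1) * base + overlap
        segments.append((start, min(end, num_frames)))
    return segments

# Example: 1000 frames, 4 workers, 30-frame overlap
print(split_with_overlap(1000, 4, 30))
# → [(0, 280), (220, 530), (470, 780), (720, 1000)]
```

Each adjacent pair of segments shares 2 × overlap frames, trading a small amount of duplicated computation for continuity of motion across boundaries.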
Year | Venue | Keywords |
---|---|---|
2017 | IEEE Global Conference on Signal and Information Processing | Video Processing, Parallel Processing, Embedded Computers, Workload Balance
Field | DocType | ISSN
---|---|---|
Bottleneck, Video processing, Computer science, Workload, Video transmission, Frame rate, Execution time, Computation, Cloud computing, Embedded system | Conference | 2376-4066
Citations | PageRank | References
---|---|---|
0 | 0.34 | 7 |
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Bo Fu | 1 | 1 | 0.73 |
Anup Mohan | 2 | 17 | 5.48 |
Yifan Li | 3 | 84 | 23.59 |
Sanghyun Cho | 4 | 0 | 0.34 |
Kent Gauen | 5 | 12 | 2.81 |
Yung-Hsiang Lu | 6 | 2165 | 161.51 |