Abstract |
---|
Owing to the mismatch between the limited resources of small devices and the high computational complexity of deep neural networks (DNNs), DNNs can hardly run on devices such as smartphones and wearables. Offloading DNNs to computing units (fog/edge servers), where each unit collaboratively executes part of a DNN, has therefore gained increasing popularity. Notably, DNNs generally have a chained structure, while streaming tasks are central to artificial-intelligence applications, so reducing the delay of chained DNNs on streaming tasks is crucial. Although existing works have largely advanced DNN offloading, chained DNNs and streaming tasks have received little attention. To address this issue, this paper proposes OCDST, a layer-level offloading model built on an analysis of both. After a chained DNN is offloaded to computing units, the involved units process streaming tasks as a pipeline, and the model significantly reduces average task delay by executing the pipeline stages in parallel. The globally optimal model solution is obtained by an improved depth-first search (DFS) algorithm, which uses DFS to realize the path-establishment, calculation, and record stages. A program-parallelization scheme based on multi-threaded programming and the producer-consumer pattern is also devised to ensure the feasibility of the obtained optimum. Experimental results show that OCDST significantly outperforms recent works, with higher inference speed and faster response. |
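The pipelined, producer-consumer execution described in the abstract can be sketched as follows. This is a minimal single-machine illustration, not the paper's implementation: the three stage functions stand in for layer groups of a partitioned chained DNN, and the queue sizes and thread layout are illustrative assumptions.

```python
import queue
import threading

# Hypothetical partition of a chained DNN into three stages,
# one per computing unit (simple functions stand in for layer groups).
stages = [
    lambda x: x + 1,   # layers assigned to unit 1
    lambda x: x * 2,   # layers assigned to unit 2
    lambda x: x - 3,   # layers assigned to unit 3
]

SENTINEL = object()  # marks the end of the task stream

def worker(fn, inbox, outbox):
    """Each unit repeatedly consumes a task, runs its DNN
    partition, and forwards the result to the next unit."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            break
        outbox.put(fn(item))

# Bounded queues link consecutive units into a pipeline.
queues = [queue.Queue(maxsize=4) for _ in range(len(stages) + 1)]
threads = [
    threading.Thread(target=worker, args=(fn, queues[i], queues[i + 1]))
    for i, fn in enumerate(stages)
]
for t in threads:
    t.start()

# Feed a stream of tasks; stages overlap across consecutive tasks,
# so average per-task delay drops versus serial execution.
for x in range(5):
    queues[0].put(x)
queues[0].put(SENTINEL)

results = []
while True:
    out = queues[-1].get()
    if out is SENTINEL:
        break
    results.append(out)

for t in threads:
    t.join()

print(results)  # → [-1, 1, 3, 5, 7]
```

While task *k* is in the last stage, tasks *k+1* and *k+2* occupy the earlier stages, which is the overlap that reduces the average task delay for a stream.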
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/GLOBECOM46510.2021.9685971 | 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) |
Keywords | DocType | ISSN |
---|---|---|
DNN offloading, chained DNNs, streaming tasks | Conference | 2334-0983 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Guoliang Gao | 1 | 0 | 0.68 |
Liantao Wu | 2 | 0 | 1.69 |
Ziyu Shao | 3 | 216 | 22.69 |
Yang Yang | 4 | 612 | 174.82 |
Zhouyang Lin | 5 | 0 | 0.34 |