[1] LI L L, LIU Z F, TSENG M L, et al. Enhancing the Lithium-ion battery life predictability using a hybrid method[J]. Applied Soft Computing, 2019, 74: 110-121. DOI: 10.1016/j.asoc.2018.10.014.
[2] ATAT R, LIU L J, CHEN H, et al. Enabling cyber-physical communication in 5G cellular networks: challenges, spatial spectrum sensing, and cyber-security[J]. IET Cyber-Physical Systems: Theory and Applications, 2017, 2(1): 49-54. DOI: 10.1049/iet-cps.2017.0010.
[3] LI C L, ZHU L Y, TANG H L, et al. Mobile user behavior based topology formation and optimization in ad hoc mobile cloud[J]. Journal of Systems and Software, 2019, 148: 132-147. DOI: 10.1016/j.jss.2018.11.005.
[4] NOVAK E, TANG Z F, LI Q. Ultrasound proximity networking on smart mobile devices for IoT applications[J]. IEEE Internet of Things Journal, 2019, 6(1): 399-409. DOI: 10.1109/jiot.2018.2848099.
[5] MAO Y Y, YOU C S, ZHANG J, et al. A survey on mobile edge computing: the communication perspective[J]. IEEE Communications Surveys and Tutorials, 2017, 19(4): 2322-2358. DOI: 10.1109/comst.2017.2745201.
[6] WANG S, ZHANG X, ZHANG Y, et al. A survey on mobile edge networks: convergence of computing, caching and communications[J]. IEEE Access, 2017, 5: 6757-6779. DOI: 10.1109/access.2017.2685434.
[7] ABBAS N, ZHANG Y, TAHERKORDI A, et al. Mobile edge computing: a survey[J]. IEEE Internet of Things Journal, 2018, 5(1): 450-465. DOI: 10.1109/jiot.2017.2750180.
[8] KENESHLOO Y, SHI T, RAMAKRISHNAN N, et al. Deep reinforcement learning for sequence-to-sequence models[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(7): 2469-2489.
[9] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529-533. DOI: 10.1038/nature14236.
[10] LUONG N C, HOANG D T, GONG S M, et al. Applications of deep reinforcement learning in communications and networking: a survey[J]. IEEE Communications Surveys and Tutorials, 2019, 21(4): 3133-3174. DOI: 10.1109/comst.2019.2916583.
[11] KIRAN B R, SOBH I, TALPAERT V, et al. Deep reinforcement learning for autonomous driving: a survey[J/OL]. IEEE Transactions on Intelligent Transportation Systems. (2021-01-23) [2022-06-20]. DOI: 10.1109/tits.2021.3054625.
[12] WAN Z Q, JIANG C, FAHAD M, et al. Robot-assisted pedestrian regulation based on deep reinforcement learning[J]. IEEE Transactions on Cybernetics, 2020, 50(4): 1669-1682. DOI: 10.1109/tcyb.2018.2878977.
[13] LIN X, WANG Y Z, XIE Q, et al. Task scheduling with dynamic voltage and frequency scaling for energy minimization in the mobile cloud computing environment[J]. IEEE Transactions on Services Computing, 2015, 8(2): 175-186. DOI: 10.1109/tsc.2014.2381227.
[14] MAHMOODI S E, UMA R N, SUBBALAKSHMI K P. Optimal joint scheduling and cloud offloading for mobile applications[J]. IEEE Transactions on Cloud Computing, 2019, 7(2): 301-313. DOI: 10.1109/tcc.2016.2560808.
[15] ZHOU Y M, LI Z J, GE J D, et al. Multi-objective workflow scheduling based on delay transmission in mobile cloud computing[J]. Journal of Software, 2018, 29(11): 3306-3325. DOI: 10.13328/j.cnki.jos.005479.
[16] SONG F H, XING H L, LUO S X, et al. A multiobjective computation offloading algorithm for mobile-edge computing[J]. IEEE Internet of Things Journal, 2020, 7(9): 8780-8799. DOI: 10.1109/jiot.2020.2996762.
[17] YANG T, YANG J. Offloading decision and resource allocation strategy in mobile edge computing[J]. Computer Engineering, 2021, 47(2): 19-25. DOI: 10.19678/j.issn.1000-3428.0058085.
[18] YANG L, ZHONG C Y, YANG Q H, et al. Task offloading for directed acyclic graph applications based on edge computing in Industrial Internet[J]. Information Sciences, 2020, 540: 51-68. DOI: 10.1016/j.ins.2020.06.001.
[19] WU Q, WU Z W, ZHUANG Y H, et al. Adaptive DAG tasks scheduling with deep reinforcement learning[C]// Proceedings of the 2018 International Conference on Algorithms and Architectures for Parallel Processing, LNTCS 11335. Cham: Springer, 2018: 477-490.
[20] ZHAN W H, WANG J, ZHU Q X, et al. Deep reinforcement learning based offloading scheduling in mobile edge computing[J]. Application Research of Computers, 2021, 38(1): 241-245, 263. DOI: 10.19734/j.issn.1001-3695.2019.10.0594.
[21] YAN J, BI S Z, ZHANG Y J A. Offloading and resource allocation with general task graph in mobile edge computing: a deep reinforcement learning approach[J]. IEEE Transactions on Wireless Communications, 2020, 19(8): 5404-5419. DOI: 10.1109/twc.2020.2993071.