1 |
ASADI A, WANG Q, MANCUSO V. A survey on device-to-device communication in cellular networks[J]. IEEE Communications Surveys & Tutorials, 2014, 16(4): 1801-1819. 10.1109/comst.2014.2319555
|
2 |
SHEN Q, SHAO W, FU X. D2D relay incenting and charging modes that are commercially compatible with B2D services[J]. IEEE Access, 2019, 7: 36446-36458. 10.1109/access.2019.2904090
|
3 |
HASHIM M F, ABDUL RAZAK N I. Ultra-dense networks: integration with device to device (D2D) communication[J]. Wireless Personal Communications, 2019, 106(2): 911-925. 10.1007/s11277-019-06195-3
|
4 |
PAWAR P, TRIVEDI A. Device-to-device communication based IoT system: benefits and challenges[J]. IETE Technical Review, 2019, 36(4): 362-374. 10.1080/02564602.2018.1476191
|
5 |
李余,何希平,唐亮贵. 基于终端直通通信的多用户计算卸载资源优化决策[J]. 计算机应用, 2022, 42(5): 1538-1546. 10.11772/j.issn.1001-9081.2021030458
|
|
LI Y, HE X P, TANG L G. Multi-user computation offloading and resource optimization policy based on device-to-device communication[J]. Journal of Computer Applications, 2022, 42(5): 1538-1546. 10.11772/j.issn.1001-9081.2021030458
|
6 |
TANG R, ZHAO J, QU H, et al. User-centric joint admission control and resource allocation for 5G D2D extreme mobile broadband: a sequential convex programming approach[J]. IEEE Communications Letters, 2017, 21(7): 1641-1644. 10.1109/lcomm.2017.2681664
|
7 |
尼俊红,申振涛,杨会峰. 蜂窝网络下基于max-min公平性的D2D功率分配[J]. 计算机应用, 2017, 37(4): 945-947. 10.11772/j.issn.1001-9081.2017.04.0945
|
|
NI J H, SHEN Z T, YANG H F. D2D power allocation based on max-min fairness underlying cellular systems[J]. Journal of Computer Applications, 2017, 37(4): 945-947. 10.11772/j.issn.1001-9081.2017.04.0945
|
8 |
LYU J, CHEW Y H, WONG W-C. A Stackelberg game model for overlay D2D transmission with heterogeneous rate requirements[J]. IEEE Transactions on Vehicular Technology, 2016, 65(10): 8461-8475. 10.1109/tvt.2015.2511924
|
9 |
YANG Z-Y, KUO Y-W. Efficient resource allocation algorithm for overlay D2D communication[J]. Computer Networks, 2017, 124: 61-71. 10.1016/j.comnet.2017.06.002
|
10 |
SWAIN S N, MISHRA S, MURTHY C S R. A novel spectrum reuse scheme for interference mitigation in a dense overlay D2D network [C]// Proceedings of the 2015 IEEE 26th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications. Piscataway: IEEE, 2015: 1201-1205. 10.1109/pimrc.2015.7343481
|
11 |
李中捷,谢东朋.异构蜂窝网络中联合功率控制的终端直通通信资源分配[J]. 计算机应用, 2018, 38(9): 2610-2615.
|
|
LI Z J, XIE D P. Joint power controlled resource allocation scheme for device-to-device communication in heterogeneous cellular networks[J]. Journal of Computer Applications, 2018, 38(9): 2610-2615.
|
12 |
ZAPPONE A, DI RENZO M, DEBBAH M. Wireless networks design in the era of deep learning: model-based, AI-based, or both?[J]. IEEE Transactions on Communications, 2019, 67(10): 7331-7376. 10.1109/tcomm.2019.2924010
|
13 |
ZHAO N, LIANG Y-C, NIYATO D, et al. Deep reinforcement learning for user association and resource allocation in heterogeneous cellular networks[J]. IEEE Transactions on Wireless Communications, 2019, 18(11): 5141-5152. 10.1109/twc.2019.2933417
|
14 |
NASIR Y S, GUO D. Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks[J]. IEEE Journal on Selected Areas in Communications, 2019, 37(10): 2239-2250. 10.1109/jsac.2019.2933973
|
15 |
TAN J, LIANG Y-C, ZHANG L, et al. Deep reinforcement learning for joint channel selection and power control in D2D networks[J]. IEEE Transactions on Wireless Communications, 2021, 20(2): 1363-1378. 10.1109/twc.2020.3032991
|
16 |
LEE H-S. Channel metamodeling for explainable data-driven channel model[J]. IEEE Wireless Communications Letters, 2021, 10(12): 2678-2682. 10.1109/lwc.2021.3111874
|
17 |
SHEN K, YU W. Fractional programming for communication systems — Part I: power control and beamforming[J]. IEEE Transactions on Signal Processing, 2018, 66(10): 2616-2630. 10.1109/tsp.2018.2812733
|
18 |
LUO Z-Q, ZHANG S. Dynamic spectrum management: complexity and duality[J]. IEEE Journal of Selected Topics in Signal Processing, 2008, 2(1): 57-73. 10.1109/jstsp.2007.914876
|
19 |
马礼智,唐睿,张睿智,等.基于无线能量传输的物联网数据采集系统中资源分配机制的设计[J].信息与控制,2023,52(2):220-234. 10.13976/j.cnki.xk.2023.2034
|
|
MA L Z, TANG R, ZHANG R Z, et al. Design of resource allocation mechanisms for wireless power transfer-based Internet-of-things data collection system[J]. Information and Control, 2023, 52(2): 220-234. 10.13976/j.cnki.xk.2023.2034
|
20 |
SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484-489. 10.1038/nature16961
|
21 |
TANG R, ZHANG R, XU Y, et al. Energy-efficient optimization algorithm in NOMA-based UAV-assisted data collection systems[J]. IEEE Wireless Communications Letters, 2023, 12(1): 158-162. 10.1109/lwc.2022.3219675
|
22 |
ZHANG R, TANG R, XU Y, et al. Resource allocation for UAV-assisted NOMA systems with dual connectivity[J]. IEEE Wireless Communications Letters, 2023, 12(2): 341-345. 10.1109/lwc.2022.3226265
|
23 |
KIRAN B R, SOBH I, TALPAERT V, et al. Deep reinforcement learning for autonomous driving: a survey[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(6): 4909-4926. 10.1109/tits.2021.3054625
|
24 |
MABU S, HATAKEYAMA H, HIRASAWA K, et al. Genetic network programming with reinforcement learning using SARSA algorithm [C]// Proceedings of the 2006 IEEE International Conference on Evolutionary Computation. Piscataway: IEEE, 2006: 463-469.
|
25 |
KIUMARSI B, LEWIS F L, MODARES H, et al. Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics[J]. Automatica, 2014, 50(4): 1167-1175. 10.1016/j.automatica.2014.02.015
|
26 |
ALZUBAIDI L, ZHANG J, HUMAIDI A J, et al. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions[J]. Journal of Big Data, 2021, 8: No. 53. 10.1186/s40537-021-00444-8
|
27 |
LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[EB/OL]. [2023-05-01].
|
28 |
LESHNO M, LIN V Y, PINKUS A, et al. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function[J]. Neural Networks, 1993, 6(6): 861-867. 10.1016/s0893-6080(05)80131-5
|
29 |
KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. [2023-05-01].
|
30 |
FRANÇOIS-LAVET V, HENDERSON P, ISLAM R, et al. An introduction to deep reinforcement learning[J]. Foundations & Trends in Machine Learning, 2018, 11(3/4): 219-354. 10.1561/2200000071
|
31 |
HERBERT S, WASSELL I, LOH T-H, et al. Characterizing the spectral properties and time variation of the in-vehicle wireless communication channel[J]. IEEE Transactions on Communications, 2014, 62(7): 2390-2399. 10.1109/tcomm.2014.2328635
|