With the rapid development of the Internet of Things (IoT), large amounts of data generated in edge scenarios such as sensor networks often need to be transmitted to cloud nodes for processing, which brings huge transmission costs and processing delays. Cloud-edge collaboration provides a solution to these problems. Firstly, on the basis of a comprehensive investigation and analysis of the development of cloud-edge collaboration, and combined with current research ideas and progress in intelligent cloud-edge collaboration, data acquisition and analysis, computation offloading, and model-based intelligent optimization technologies in the cloud-edge architecture were analyzed and discussed emphatically. Secondly, the functions and applications of various technologies in intelligent cloud-edge collaboration were analyzed in depth from both the edge side and the cloud side, and real-world application scenarios of intelligent cloud-edge collaboration technology were discussed. Finally, the current challenges and future development directions of intelligent cloud-edge collaboration were pointed out.
Unmanned Aerial Vehicles (UAVs) are flexible and easy to deploy, and can assist Mobile Edge Computing (MEC) in improving the coverage and communication quality of wireless systems. However, research on UAV-assisted MEC systems faces challenges such as computational latency requirements and resource management. Aiming at the delay problem of a UAV providing auxiliary computing services to multiple ground terminals, a Twin Delayed Deep Deterministic policy gradient (TD3) based Task Offloading Algorithm for Delay Minimization (TD3-TOADM) was proposed. Firstly, the optimization problem was modeled as minimizing the maximum computational delay under energy constraints. Secondly, TD3-TOADM was used to jointly optimize terminal device scheduling, the UAV trajectory and the task offloading ratio so as to minimize the maximum computational delay. Simulation results show that compared with task offloading algorithms based on Actor-Critic (AC), Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), TD3-TOADM reduces the computational delay by more than 8.2%. It can be seen that the TD3-TOADM algorithm has good convergence and robustness, and can obtain a low-delay optimal offloading strategy.
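As a rough illustration of the optimization objective described above, the following Python sketch computes the per-terminal delay for a given offloading ratio and the min-max objective over all terminals; the delay model, parameter names and numbers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): given an offloading ratio
# rho for each ground terminal, the delay of a terminal is the larger of its
# local computing delay and its (transmission + UAV computing) delay; the
# objective minimized by TD3-TOADM is the maximum delay over all terminals.
# All parameter names and values below are illustrative assumptions.

def terminal_delay(task_bits, rho, f_local, f_uav, rate, cycles_per_bit=1000):
    """Delay of one terminal when a fraction rho of its task is offloaded."""
    t_local = (1 - rho) * task_bits * cycles_per_bit / f_local   # local computing
    t_tx = rho * task_bits / rate                                 # uplink transmission
    t_uav = rho * task_bits * cycles_per_bit / f_uav              # UAV-side computing
    return max(t_local, t_tx + t_uav)

def max_delay(task_bits, rhos, f_locals, f_uav, rates):
    """Objective: the maximum computational delay over all ground terminals."""
    return max(terminal_delay(b, r, fl, f_uav, ra)
               for b, r, fl, ra in zip(task_bits, rhos, f_locals, rates))

# Example: 3 terminals, each with a 1 Mb task.
print(max_delay(task_bits=[1e6] * 3,
                rhos=[0.4, 0.6, 0.5],
                f_locals=[1e9, 0.8e9, 1.2e9],   # local CPU frequencies (Hz)
                f_uav=3e9,                       # UAV CPU frequency (Hz)
                rates=[5e6, 4e6, 6e6]))          # uplink rates (bit/s)
```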
Aiming at the problems that quantum hardware is gradually scaling up while classical simulation speed is insufficient, two optimization methods for a quantum simulator implemented on the Sunway supercomputer were proposed. Firstly, the tensor contraction operator library SWTT was reconstructed by improving its tensor transposition and computation strategies, which improved the computing kernel efficiency of partial tensor contractions and reduced redundant memory access. Secondly, a balance between the complexity and efficiency of path computation was achieved by a contraction path adjustment method based on data locality optimization. Test results show that the operator library improvement raises the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times; the path adjustment method improves the floating-point efficiency by about 4 times while inflating the path computational complexity by only a factor of 2. With the two optimization methods, the single-precision and mixed-precision floating-point efficiencies for simulating Google's 53-qubit, 20-layer quantum chip random circuit with one million amplitude samples are improved from 3.98% and 1.69% to 18.48% and 7.42% respectively, and the theoretically estimated simulation time is reduced from 470 s to 226 s for single precision and from 304 s to 134 s for mixed precision, verifying that the two methods significantly improve the quantum computational simulation speed.
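To make the role of the contraction path concrete, the following NumPy sketch (not SWTT or the Sunway implementation) prints the naive versus optimized floating-point cost of contracting a small tensor network; the shapes are arbitrary assumptions.

```python
import numpy as np

# Illustrative only: the paper optimizes SWTT's contraction kernels and paths
# on the Sunway supercomputer; this sketch just uses NumPy to show how the
# order in which a tensor network is contracted changes the floating-point
# cost. The tensor shapes below are arbitrary assumptions.
a = np.random.rand(512, 2)
b = np.random.rand(2, 512)
c = np.random.rand(512, 2)

# einsum_path reports the naive FLOP count of contracting everything at once
# versus the cost of the pairwise contraction order it selected.
path, report = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="optimal")
print(path)
print(report)
```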
Aiming at the problems that the standard Slime Mould Algorithm (SMA) easily falls into local optima, converges slowly and has low solution accuracy, an Improved Slime Mould Algorithm with Multi-Strategy fusion (MSISMA) was proposed. Firstly, Brownian motion and Levy flight were introduced to enhance the search ability of the algorithm. Secondly, the position update formulas of the slime mould were improved according to the different stages of the algorithm, to increase convergence speed and accuracy. Thirdly, an Interval Adaptive Opposition-Based Learning (IAOBL) strategy was adopted to generate a reverse population, which improved the diversity and quality of the population and thereby accelerated convergence. Finally, a convergence stagnation monitoring strategy was introduced, which makes the algorithm jump out of local optima by re-initializing the positions of some slime mould individuals. On 23 selected test functions, the proposed MSISMA was tested and compared with the Equilibrium Slime Mould Algorithm (ESMA), the Slime Mould Algorithm combined with the Adaptive Guided Differential Evolution algorithm (SMA-AGDE), SMA, the Marine Predators Algorithm (MPA) and the Equilibrium Optimizer (EO), and the Wilcoxon rank-sum test was performed on the results of all algorithms. Compared with the above algorithms, MSISMA achieves the best average value on 19 test functions and the best standard deviation on 12 test functions, and improves the optimization accuracy by 23.39% to 55.97% on average. Experimental results show that the convergence speed, solution accuracy and robustness of MSISMA are significantly better than those of the comparison algorithms.
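For readers unfamiliar with the two perturbation operators mentioned above, a minimal Python sketch of Brownian-motion and Levy-flight steps (the latter via Mantegna's method) is given below; the parameter values are illustrative and not those of MSISMA.

```python
import numpy as np

# A minimal sketch of the two perturbation operators the abstract mentions:
# Brownian-motion steps (Gaussian) and Levy-flight steps (heavy-tailed, via
# Mantegna's method). Parameter values are illustrative, not the paper's.

def brownian_step(dim, sigma=1.0, rng=np.random.default_rng()):
    return rng.normal(0.0, sigma, dim)

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    # Mantegna's algorithm for Levy-stable step lengths with exponent beta.
    from math import gamma, sin, pi
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# A candidate solution perturbed by each operator around the current best.
best = np.zeros(5)
print("Brownian move:", best + 0.1 * brownian_step(5))
print("Levy move:    ", best + 0.1 * levy_step(5))
```

The Gaussian steps give small, local exploration, while the heavy-tailed Levy steps occasionally produce long jumps that help escape local optima.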
The purpose of research on Multi-Robot Task Allocation (MRTA) is to improve the task completion efficiency of robots in smart factories. Aiming at the deficiencies of existing algorithms in handling large-scale multi-constrained MRTA, an MRTA Algorithm Combining Genetic Algorithm and Rolling Scheduling (ACGARS) was proposed. Firstly, a coding method based on Directed Acyclic Graphs (DAGs) was adopted in the genetic algorithm to handle the priority constraints among tasks efficiently. Then, prior knowledge was added to the initial population of the genetic algorithm to improve its search efficiency. Finally, a rolling scheduling strategy based on task groups was designed to reduce the scale of the problem to be solved, so that large-scale problems could be solved efficiently. Experimental results on large-scale problem instances show that compared with the schemes generated by the Constructive Heuristic Algorithm (CHA), the MinInterfere Algorithm (MIA), and the Genetic Algorithm with Penalty Strategy (GAPS), the scheme generated by the proposed algorithm shortens the average order completion time by 30.02%, 16.86% and 75.65% respectively when the number of task groups is 20, which verifies that the proposed algorithm can effectively shorten the average waiting time of orders and improve the efficiency of multi-robot task allocation.
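A common way to realize DAG-based coding so that genetic operators never violate precedence constraints is priority-based decoding; the following sketch is one assumed realization of that idea, not ACGARS itself.

```python
from collections import defaultdict

# Assumed sketch (not the paper's exact coding): a chromosome assigns a
# priority to every task; decoding repeatedly picks, among tasks whose DAG
# predecessors have all finished, the one with the highest priority. This
# keeps any decoded schedule consistent with the precedence constraints.

def decode(priorities, edges):
    """priorities: dict task -> float; edges: list of (pred, succ) pairs."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    ready = [t for t in priorities if indeg[t] == 0]
    order = []
    while ready:
        t = max(ready, key=priorities.get)   # highest-priority ready task first
        ready.remove(t)
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

# Tasks A..D with A before B and C, and B before D.
print(decode({"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.5},
             [("A", "B"), ("A", "C"), ("B", "D")]))
```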
To address the limited computing resources and storage space of edge nodes in Edge Computing (EC) networks, an Edge Computing and Service Offloading (ECSO) algorithm based on improved Deep Reinforcement Learning (DRL) was proposed to reduce node processing latency and improve service performance. Specifically, the edge node service offloading problem was formulated as a resource-constrained Markov Decision Process (MDP). Because the request state transition probabilities of an edge node are difficult to predict accurately, a DRL algorithm was used to solve the problem. Considering that the state-action space of an edge node caching services is too large, new action behaviors were defined to replace the original actions, and the optimal action set was obtained by the proposed action selection algorithm; this improved the computation of action rewards, greatly reduced the size of the action space, and increased the training efficiency and reward of the algorithm. Simulation results show that compared with the original Deep Q-Network (DQN) algorithm, the Proximal Policy Optimization (PPO) algorithm and the traditional Most Popular (MP) algorithm, the proposed ECSO algorithm increases the total reward by 7.0%, 12.7% and 65.6% respectively, and reduces the edge node service offloading latency by 13.0%, 18.8% and 66.4% respectively, which verifies the effectiveness of ECSO and shows that it can effectively improve the offloading performance of edge computing services.
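The action-space reduction idea can be illustrated with a toy caching example: instead of scoring every subset of services an edge node could cache, only a small candidate set of single-service swaps is evaluated. The reward definition, service sizes and popularities below are assumptions for demonstration, not the paper's model.

```python
from itertools import combinations

# Toy illustration of shrinking a caching action space: the full space is
# every subset of services (2**N actions); the reduced set keeps the current
# cache or swaps exactly one service. All numbers here are assumptions.

def reward(cached, popularity, capacity):
    if sum(size for _, size in cached) > capacity:
        return float("-inf")                      # infeasible caching action
    return sum(popularity[s] for s, _ in cached)  # hit-rate style reward

services = {"s0": 2, "s1": 1, "s2": 3, "s3": 1}    # service -> storage size
popularity = {"s0": 0.5, "s1": 0.2, "s2": 0.9, "s3": 0.1}
capacity = 4

# Full action space: every subset of services (grows exponentially).
full = [set(c) for k in range(len(services) + 1)
        for c in combinations(services, k)]

# Reduced action set: keep the current cache, or swap in exactly one service.
current = {"s0", "s1"}
reduced = [current] + [current - {o} | {n}
                       for o in current for n in services if n not in current]

best = max(reduced, key=lambda a: reward([(s, services[s]) for s in a],
                                         popularity, capacity))
print(len(full), "full actions vs", len(reduced), "reduced; best:", best)
```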
The K-Means algorithm typically uses the Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data; however, this approach suffers from low efficiency and high computational complexity. Inspired by the significant advantage of the Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed. First, the data was prepared and encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point. Finally, these steps were repeated until the specified number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation framework Qiskit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with that of the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with that of the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of QKMH is lower than those of the comparison algorithms.
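As a purely classical analogue of the Hamming-distance assignment step (the quantum state preparation and Grover-based minimum search are not reproduced here), the following sketch assigns binarized vectors to the nearest cluster center by Hamming distance; the data sizes are arbitrary assumptions.

```python
import numpy as np

# Classical analogue only: the paper encodes data into quantum states and uses
# a quantum Hamming distance plus Grover-style minimum search; this sketch
# just shows the Hamming-distance assignment step on binarized vectors.

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def assign(points, centers):
    """Assign each binary point to the center with the smallest Hamming distance."""
    return [min(range(len(centers)), key=lambda k: hamming(p, centers[k]))
            for p in points]

rng = np.random.default_rng(0)
points = rng.integers(0, 2, size=(6, 16))    # 6 binary vectors of 16 bits
centers = rng.integers(0, 2, size=(2, 16))   # K = 2 cluster centers
print(assign(points, centers))
```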
Aiming at the conflict between the makespan and execution cost of cloud workflows with deadlines, a Hybrid Adaptive Particle Swarm Optimization algorithm for workflow scheduling (HAPSO) was proposed. Firstly, a Directed Acyclic Graph (DAG) cloud workflow scheduling model was established based on deadlines. Secondly, by combining norm ideal points with adaptive weights, the DAG scheduling model was transformed into a multi-objective optimization problem that weighs DAG makespan against execution cost. Finally, on the basis of the Particle Swarm Optimization (PSO) algorithm, an adaptive inertia weight, adaptive learning factors, the probability switching mechanism of the Flower Pollination Algorithm, the Firefly Algorithm (FA) and a particle out-of-bound handling method were added to balance the global and local search abilities of the particle swarm and thereby solve the optimization problem of DAG makespan and execution cost. The optimization results of PSO, Weight Particle Swarm Optimization (WPSO), Ant Colony Optimization (ACO) and HAPSO were compared and analyzed in the experiments. Experimental results show that HAPSO reduces the multi-objective function value that weighs the makespan and execution cost by 40.9% to 81.1% for workflows with 30 to 300 tasks, and that HAPSO effectively balances the makespan and execution cost under workflow deadline constraints. In addition, HAPSO also performs well on the single objective of reducing either the makespan or the execution cost, which verifies its universality.
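A bare-bones Python sketch of two ingredients named above, an ideal-point-normalized weighted objective and a decaying (adaptive) inertia weight inside a plain PSO loop, is given below; the toy objective, constants and bounds are assumptions, and the sketch omits HAPSO's other components (adaptive learning factors, FPA switching, FA, etc.).

```python
import numpy as np

# Minimal sketch, not the paper's HAPSO: it shows (i) how a makespan term and
# a cost term can be combined into one weighted objective after normalizing by
# an ideal point, and (ii) a linearly decreasing inertia weight in a bare PSO
# loop. All constants and the toy objective are assumptions.

def weighted_objective(makespan, cost, ideal=(10.0, 5.0), w=(0.5, 0.5)):
    # Scalarize the two objectives relative to an assumed ideal point.
    return w[0] * makespan / ideal[0] + w[1] * cost / ideal[1]

def toy_eval(x):
    makespan = 10.0 + np.sum((x - 1.0) ** 2)   # stands in for DAG makespan
    cost = 5.0 + np.sum((x + 1.0) ** 2)        # stands in for execution cost
    return weighted_objective(makespan, cost)

rng = np.random.default_rng(1)
pos = rng.uniform(-3, 3, (20, 4))              # 20 particles, 4 dimensions
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([toy_eval(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for t in range(100):
    w = 0.9 - 0.5 * t / 100                    # inertia weight decays over iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)            # simple out-of-bound handling
    vals = np.array([toy_eval(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("best weighted objective:", pbest_val.min())
```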