Project Articles

    Advanced computing

    Design of FPGA accelerator with high parallelism for convolution neural network
    WANG Xiaofeng, JIANG Penglong, ZHOU Hui, ZHAO Xiongbo
    Journal of Computer Applications    2021, 41 (3): 812-819.   DOI: 10.11772/j.issn.1001-9081.2020060996
    Most algorithms based on Convolutional Neural Networks (CNN) are computation-intensive and memory-intensive, which makes them difficult to apply in embedded fields with low-power requirements such as aerospace, mobile robotics and smartphones. To solve this problem, a Field Programmable Gate Array (FPGA) accelerator with high parallelism for CNN was proposed. Firstly, four kinds of parallelism in the CNN algorithm that can be exploited for FPGA acceleration were compared and studied. Then, a Multi-channel Convolutional Rotating-register Pipeline (MCRP) structure was proposed to concisely and effectively utilize the convolution kernel parallelism of the CNN algorithm. Finally, using the strategy of input/output channel parallelism plus convolution kernel parallelism, a CNN accelerator architecture with high parallelism was proposed based on the MCRP structure; to verify the rationality of the design, the architecture was deployed on the XILINX XCZU9EG chip. Making full use of the on-chip Digital Signal Processor (DSP) resources, the proposed CNN accelerator reached a peak computing capacity of 2 304 GOPS (Giga Operations Per Second). Taking the SSD-300 algorithm as the test object, the accelerator achieved an actual computing capacity of 1 830.33 GOPS and a hardware utilization rate of 79.44%. Experimental results show that the MCRP structure can effectively improve the computing capacity of a CNN accelerator, and that a CNN accelerator based on the MCRP structure can generally meet the computing capacity requirements of most applications in the embedded fields.
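    The parallelism dimensions discussed above can be pictured as the loop nest of a convolutional layer. The NumPy sketch below marks, in comments, the loops that a hardware accelerator may unroll; it is a generic software illustration with assumed loop names, not the paper's RTL design.

```python
import numpy as np

def conv_layer(x, w):
    """Naive convolution loop nest; comments mark loops that an FPGA
    accelerator may unroll (hypothetical mapping, not the paper's design)."""
    C_in, H, W = x.shape          # input channels, height, width
    C_out, _, K, _ = w.shape      # output channels, kernel size K x K
    out = np.zeros((C_out, H - K + 1, W - K + 1))
    for co in range(C_out):              # output-channel parallelism
        for ci in range(C_in):           # input-channel parallelism
            for i in range(H - K + 1):   # feature-map (pixel) parallelism
                for j in range(W - K + 1):
                    for ki in range(K):  # convolution-kernel parallelism
                        for kj in range(K):
                            out[co, i, j] += x[ci, i + ki, j + kj] * w[co, ci, ki, kj]
    return out

x = np.random.rand(3, 8, 8)
w = np.random.rand(4, 3, 3, 3)
print(conv_layer(x, w).shape)  # (4, 6, 6)
```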
    Survey of data-driven intelligent cloud-edge collaboration
    Pengxin TIAN, Guannan SI, Zhaoliang AN, Jianxin LI, Fengyu ZHOU
    Journal of Computer Applications    2023, 43 (10): 3162-3169.   DOI: 10.11772/j.issn.1001-9081.2022091418

    With the rapid development of the Internet of Things (IoT), a large amount of data generated in edge scenarios such as sensors often needs to be transmitted to cloud nodes for processing, which brings huge transmission cost and processing delay. Cloud-edge collaboration provides a solution to these problems. Firstly, on the basis of a comprehensive investigation and analysis of the development process of cloud-edge collaboration, and combined with current research ideas and progress in intelligent cloud-edge collaboration, the data acquisition and analysis, computation offloading and model-based intelligent optimization technologies in cloud-edge architectures were analyzed and discussed. Secondly, the functions and applications of these technologies in intelligent cloud-edge collaboration were analyzed in depth from the edge side and the cloud side respectively, and real-world application scenarios of intelligent cloud-edge collaboration were discussed. Finally, the current challenges and future development directions of intelligent cloud-edge collaboration were pointed out.

    Constrained multi-objective optimization algorithm based on coevolution
    ZHANG Xiangfei, LU Yuming, ZHANG Pingsheng
    Journal of Computer Applications    2021, 41 (7): 2012-2018.   DOI: 10.11772/j.issn.1001-9081.2020081344
    In view of the difficulty that constrained multi-objective optimization algorithms have in effectively balancing convergence and diversity, a new constrained multi-objective optimization algorithm based on coevolution was proposed. Firstly, a population with a certain number of feasible solutions was obtained by using a feasible-solution search method based on steady-state evolution. Then, this population was divided into two sub-populations, and both convergence and diversity were achieved through coevolution of the two sub-populations. Finally, the standard constrained multi-objective optimization problems CF1-CF7 and DOC1-DOC7 as well as practical engineering problems were used in simulation experiments to test the solution performance of the proposed algorithm. Experimental results show that compared with the Nondominated Sorting Genetic Algorithm Ⅱ based on Constrained Dominance Principle (NSGA-Ⅱ-CDP), the Two-Phase algorithm (ToP), the Push and Pull Search algorithm (PPS) and the Two-Archive Evolutionary Algorithm for Constrained multiobjective optimization (C-TAEA), the proposed algorithm achieves good results on both Inverted Generational Distance (IGD) and HyperVolume (HV), indicating that it can effectively balance convergence and diversity.
    Improved butterfly optimization algorithm based on cosine similarity
    CHEN Jun, HE Qing
    Journal of Computer Applications    2021, 41 (9): 2668-2677.   DOI: 10.11772/j.issn.1001-9081.2020111776
    Aiming at the problems that the Butterfly Optimization Algorithm (BOA) tends to fall into local optima and converges poorly, a Multi-Strategy Improved BOA (MSBOA) was proposed. Firstly, a cosine-similarity position adjustment strategy was introduced into the algorithm: a rotation transformation operator and a scaling transformation operator were used to update the positions, so as to effectively maintain the population diversity of the algorithm. Secondly, a dynamic switching probability was introduced to balance the transition between the local and global phases of the algorithm. Finally, a hybrid inertia weight strategy was added to accelerate convergence. The effectiveness and robustness of the proposed algorithm were verified on 16 benchmark test functions and the CEC2014 test functions, together with the Wilcoxon rank-sum test. Experimental results show that compared with BOA, several BOA variants with different improvement strategies and some swarm intelligence algorithms, MSBOA achieves significant improvements in both convergence accuracy and convergence speed.
    Chaotic elite Harris hawks optimization algorithm
    TANG Andi, HAN Tong, XU Dengwu, XIE Lei
    Journal of Computer Applications    2021, 41 (8): 2265-2272.   DOI: 10.11772/j.issn.1001-9081.2020101610
    Aiming at the shortcomings of the Harris Hawks Optimization (HHO) algorithm, such as low convergence accuracy, slow convergence and a tendency to fall into local optima, a Chaotic Elite HHO (CEHHO) algorithm was proposed. Firstly, an elite hierarchy strategy was introduced to make full use of the dominant population, enhancing population diversity and improving the convergence speed and accuracy of the algorithm. Secondly, the Tent chaotic map was used to adjust the key parameters of the algorithm. Thirdly, a nonlinear energy factor adjustment strategy was adopted to balance the exploitation and exploration of the algorithm. Finally, a Gaussian random walk strategy was used to disturb the optimal individual, and when the search stagnated, the random walk strategy enabled the algorithm to jump out of the local optimum effectively. The optimization ability of the algorithm was evaluated through simulation experiments on 20 benchmark functions in different dimensions. Experimental results show that the improved algorithm outperforms the Whale Optimization Algorithm (WOA), the Grey Wolf Optimization (GWO) algorithm, the Particle Swarm Optimization (PSO) algorithm and the Biogeography-Based Optimization (BBO) algorithm, and its performance is significantly better than that of the original HHO algorithm, which proves the effectiveness of the improvements.
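    For reference, a Tent chaotic map is commonly implemented as in the sketch below; the exact map parameters used by CEHHO are not given in the abstract, so the fully chaotic setting mu = 2 is assumed.

```python
import numpy as np

def tent_map(x0, n, mu=2.0):
    """Generate n values of the tent chaotic map in (0, 1).
    mu = 2.0 is the classic fully chaotic setting (assumed, not from the paper)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        # keep the iterate strictly inside (0, 1) to avoid fixed points
        x = min(max(x, 1e-6), 1 - 1e-6)
        seq[i] = x
    return seq

print(tent_map(0.37, 5))
```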
    Real-valued Cartesian genetic programming algorithm based on quasi-oppositional mutation
    FU Anbing, WEI Wenhong, ZHANG Yuhui, GUO Wenjing
    Journal of Computer Applications    2021, 41 (2): 479-485.   DOI: 10.11772/j.issn.1001-9081.2020060791
    Concerning the problems that traditional Cartesian Genetic Programming (CGP) lacks diversity in its mutation operation and that its evolutionary strategy has limitations, an ADvanced Real-Valued Cartesian Genetic Programming algorithm based on quasi-oppositional mutation (AD-RVCGP) was proposed. Firstly, the (1+λ) evolutionary strategy was adopted in the evolution process of AD-RVCGP, as in traditional CGP, that is, λ offspring were generated from a single parent only through mutation. Secondly, three mutation operators, namely the quasi-oppositional mutation operator, the terminal mutation operator and the single-point mutation operator, were selected dynamically during evolution, and the information of oppositional individuals was used in the mutation operation. Finally, different parents were selected to generate the next-generation individuals according to the state of the evolution stage. On symbolic regression problems, the convergence speed of the proposed AD-RVCGP was about 30% faster than that of traditional CGP, and its running time was about 20% shorter. In addition, the error between the optimal solution obtained by AD-RVCGP and the true optimum was smaller than that between the solution obtained by traditional CGP and the true optimum. Experimental results show that AD-RVCGP achieves high convergence speed and precision in problem solving.
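    A minimal sketch of quasi-oppositional point generation for a real-valued gene, following the usual quasi-opposition-based learning definition (a uniform sample between the interval centre and the opposite point); the exact operator in AD-RVCGP may differ.

```python
import random

def quasi_opposite(x, lower, upper):
    """Return a quasi-opposite value of gene x in [lower, upper]:
    a uniform sample between the interval centre and the opposite point."""
    opposite = lower + upper - x          # opposite point
    centre = (lower + upper) / 2.0        # interval centre
    lo, hi = sorted((centre, opposite))
    return random.uniform(lo, hi)

random.seed(1)
print(quasi_opposite(0.8, 0.0, 1.0))   # value drawn between 0.2 and 0.5
```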
    Improved QMIX algorithm from communication and exploration for multi-agent reinforcement learning
    DENG Huiyi, LI Yongzhen, YIN Qiyue
    Journal of Computer Applications    2023, 43 (1): 202-208.   DOI: 10.11772/j.issn.1001-9081.2021111886
    Non-stationarity, which breaks the Markov assumption followed by most single-agent reinforcement learning algorithms, is one of the main challenges in multi-agent environments: during learning, each agent may be caught in an infinite loop created by the environment formed by the other agents. To solve this problem, the implementation of the Centralized Training with Decentralized Execution (CTDE) structure in reinforcement learning was studied, and the QMIX algorithm was improved from the two perspectives of agent communication and exploration by introducing a Variance Control-Based (VBC) communication model and a curiosity mechanism. The proposed algorithm was validated in micro control scenarios of the StarCraft Ⅱ Learning Environment (SC2LE). Experimental results show that compared with the QMIX algorithm, the proposed algorithm improves performance and obtains a trained model with faster convergence.
    Improved imperialist competitive algorithm inspired by historical facts of Spring and Autumn Period
    WANG Guilin, LI Bin
    Journal of Computer Applications    2021, 41 (2): 470-478.   DOI: 10.11772/j.issn.1001-9081.2020060974
    Aiming at the problem that the imperialist competitive algorithm easily suffers premature convergence and falls into the curse of dimensionality when solving high-dimensional functions, an improved imperialist competitive algorithm inspired by the historical facts of the vassal states vying for power in the Spring and Autumn Period of China was proposed. Firstly, the competition strategy of "cooperative confrontation" was introduced in the country initialization process to enhance information interaction and preserve high-quality populations. Secondly, a colonial rule strategy of gradual infiltration and assimilation at all levels of the state was adopted in the empire assimilation process to improve the exploitation ability of the algorithm. Finally, a mechanism for detecting and escaping local optima was added to avoid the impact of premature convergence on optimization performance. In simulation experiments, the optimization ability, convergence speed and applicability to high-dimensional functions of the improved algorithm were verified on 8 classic benchmark functions, and three schemes for escaping local optima were compared and analyzed. Furthermore, experiments on the CEC2017 test functions, comparing the improved algorithm with five representative advanced algorithms from recent algorithm-improvement research, showed that the improved algorithm has higher optimization accuracy and stronger stability. The Kendall correlation coefficient verification showed a significant difference in optimization performance between the improved algorithm and the original algorithm, and that the improvement of the assimilation mechanism played a key role in the performance gain.
    Deep pipeline 5×5 convolution method based on two-dimensional Winograd algorithm
    HUANG Chengcheng, DONG Xiaoxiao, LI Zhao
    Journal of Computer Applications    2021, 41 (8): 2258-2264.   DOI: 10.11772/j.issn.1001-9081.2020101668
    Aiming at problems of the two-dimensional Winograd convolution algorithm such as high memory bandwidth demand, high computational complexity, long design and exploration cycles, and the inter-layer computing delay of cascaded convolutions, a double-buffered 5×5 convolutional layer design method based on the two-dimensional Winograd algorithm was proposed. Firstly, a column buffer structure was used to lay out the data, so that overlapping data between adjacent tiles were reused and the memory bandwidth demand was reduced. Then, repeated intermediate results in the addition stage of the Winograd algorithm were identified and reused to reduce the cost of additions, decreasing the energy consumption and design area of the accelerator system. Finally, following the computation flow of the Winograd algorithm, a 6-stage pipeline structure was designed to realize efficient computation of 5×5 convolutions. Experimental results show that, with essentially no effect on the prediction accuracy of the Convolutional Neural Network (CNN), this 5×5 convolution method reduces the multiplication cost by 83% compared with traditional convolution, with a speedup of 5.82; compared with generating 5×5 convolutions by cascading 3×3 two-dimensional Winograd convolutions, the proposed method reduces the multiplication cost by 12%, the memory bandwidth demand by about 24.2%, and the computing time by 20%.
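    For context, the sketch below shows the standard two-dimensional Winograd F(2×2, 3×3) transform, the building block that the cascading baseline relies on; the proposed 5×5 pipeline itself is a hardware design and is not reproduced here.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray)
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]])
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]])
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]])

def winograd_f2x2_3x3(d, g):
    """2x2 output tile from a 4x4 input tile d and a 3x3 kernel g,
    using 16 multiplications instead of 36."""
    U = G @ g @ G.T             # transformed kernel, 4x4
    V = BT @ d @ BT.T           # transformed input tile, 4x4
    return AT @ (U * V) @ AT.T  # inverse transform -> 2x2 output

d = np.random.rand(4, 4)
g = np.random.rand(3, 3)
# direct (cross-correlation) reference for the same tile
ref = np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)] for i in range(2)])
print(np.allclose(winograd_f2x2_3x3(d, g), ref))  # True
```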
    Consensus of two-layer multi-agent systems subjected to cyber-attack
    WANG Yunyan, HU Aihua
    Journal of Computer Applications    2021, 41 (5): 1399-1405.   DOI: 10.11772/j.issn.1001-9081.2020081159
    The consensus problem of two-layer multi-agent systems subjected to cyber-attacks was studied. For two-layer multi-agent systems composed of a leaders' layer and a followers' layer, the following situation was considered: neighboring agents in the leaders' layer were cooperative, adjacent agents in the followers' layer were cooperative or competitive, and there was a restraining relationship between some corresponding agents of the leaders' layer and the followers' layer. The consensus problem among the nodes of the leaders' layer, the followers' layer and the whole two-layer system under cyber-attack was discussed. Based on Linear Matrix Inequalities (LMI), Lyapunov stability theory and graph theory, sufficient criteria were given for consensus among the nodes of the leaders' layer multi-agent system, bipartite consensus among the nodes of the followers' layer multi-agent system, and node-to-node bipartite consensus between the nodes of the two layers. Finally, numerical simulation examples were given in which consensus of the two-layer multi-agent systems under cyber-attack was realized, verifying the validity of the proposed criteria.
    Task offloading algorithm for UAV-assisted mobile edge computing
    Xiaolin LI, Yusang JIANG
    Journal of Computer Applications    2023, 43 (6): 1893-1899.   DOI: 10.11772/j.issn.1001-9081.2022040548

    Unmanned Aerial Vehicles (UAV) are flexible and easy to deploy, and can assist Mobile Edge Computing (MEC) in improving the coverage and communication quality of wireless systems. However, research on UAV-assisted MEC systems faces challenges such as computational latency requirements and resource management. Aiming at the delay problem of a UAV providing auxiliary computing services to multiple ground terminals, a Twin Delayed Deep Deterministic policy gradient (TD3) based Task Offloading Algorithm for Delay Minimization (TD3-TOADM) was proposed. Firstly, the optimization problem was modeled as minimizing the maximum computational delay under energy constraints. Secondly, TD3-TOADM was used to jointly optimize terminal equipment scheduling, the UAV trajectory and the task offloading ratio to minimize the maximum computational delay. Simulation results show that compared with task offloading algorithms based on Actor-Critic (AC), Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG), TD3-TOADM reduces the computational delay by more than 8.2%. TD3-TOADM therefore has good convergence and robustness, and can obtain a low-delay optimal offloading strategy.

    Acceleration and optimization of quantum computing simulator implemented on new Sunway supercomputer
    Xinmin SHI, Yong LIU, Yaojian CHEN, Jiawei SONG, Xin LIU
    Journal of Computer Applications    2023, 43 (8): 2486-2492.   DOI: 10.11772/j.issn.1001-9081.2022091456

    Aiming at the growing scale of quantum hardware and the insufficient speed of classical simulation, two optimization methods were proposed for a quantum computing simulator implemented on the new Sunway supercomputer. Firstly, the tensor contraction operator library SWTT was reconstructed by improving the tensor transposition and computation strategies, which improved the kernel efficiency of partial tensor contractions and reduced redundant memory access. Secondly, a contraction path adjustment method based on data-locality optimization was used to balance the complexity and efficiency of path computation. Test results show that the operator library improvement raises the simulation efficiency of the "Sycamore" quantum supremacy circuit by 5.4% and the single-step tensor contraction efficiency by up to 49.7 times; the path adjustment method improves the floating-point efficiency by about 4 times while inflating the path computational complexity by a factor of 2. Together, the two methods raise the single-precision and mixed-precision floating-point efficiencies for simulating a random circuit of Google's 53-qubit, 20-layer quantum chip with million-amplitude sampling from 3.98% and 1.69% to 18.48% and 7.42% respectively, and reduce the theoretically estimated simulation time from 470 s to 226 s in single precision and from 304 s to 134 s in mixed precision, verifying that the two methods significantly improve the speed of quantum computing simulation.

    Maximum common induced subgraph algorithm based on vertex conflict learning
    WANG Yu, LIU Yanli, CHEN Shaowu
    Journal of Computer Applications    2021, 41 (6): 1756-1760.   DOI: 10.11772/j.issn.1001-9081.2020091381
    Traditional branching strategies for the Maximum Common induced Subgraph (MCS) problem rely on static properties of the graphs and lack learning from the search history. To solve these problems, a branching strategy based on vertex conflict learning was proposed. Firstly, the reduction of the upper bound was used as the reward given to a branch vertex for completing a matching action. Secondly, since an updated incumbent solution is actually the result of successive decisions at the branch vertices, appropriate rewards were given to the branch vertices on the complete search path to strengthen their positive effect on the search. Finally, a value function of matching actions was designed, and the vertex with the maximum cumulative reward was selected as the new branch vertex. On the basis of the Maximum common induced subgraph Split (McSplit) algorithm, an improved McSplit Reinforcement Learning and Routing (McSplitRLR) algorithm incorporating the new branching strategy was developed. Experimental results show that, under the same computer and solution time limit, and excluding simple instances solved by all comparison algorithms within 10 seconds, McSplitRLR solves 109 and 33 more hard instances than the state-of-the-art McSplit and McSplit Solution-Biased Search (McSplitSBS) algorithms respectively, with the solution rate increased by 5.6% and 1.6% respectively.
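    The reward bookkeeping described above can be sketched as follows; variable names are hypothetical, and in the real algorithm this logic sits inside the McSplit branch-and-bound loop.

```python
from collections import defaultdict

class ConflictLearner:
    """Cumulative-reward branching: vertices that historically reduce the
    bound or lie on improving paths are branched on first."""
    def __init__(self):
        self.reward = defaultdict(float)

    def on_match(self, v, bound_before, bound_after):
        # reward = reduction of the upper bound caused by matching v
        self.reward[v] += bound_before - bound_after

    def on_incumbent_improved(self, path_vertices, bonus=1.0):
        # reinforce every branch vertex on the path to the new best solution
        for v in path_vertices:
            self.reward[v] += bonus

    def select_branch_vertex(self, candidates):
        # pick the candidate with the largest cumulative reward
        return max(candidates, key=lambda v: self.reward[v])

learner = ConflictLearner()
learner.on_match('a', bound_before=10, bound_after=7)
learner.on_match('b', bound_before=10, bound_after=9)
learner.on_incumbent_improved(['b'])
print(learner.select_branch_vertex(['a', 'b']))  # 'a'
```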
    Self-organized migrating algorithm for multi-task optimization with information filtering
    CHENG Meiying, QIAN Qian, NI Zhiwei, ZHU Xuhui
    Journal of Computer Applications    2021, 41 (6): 1748-1755.   DOI: 10.11772/j.issn.1001-9081.2020091390
    The Self-Organized Migrating Algorithm (SOMA) can only solve a single task, and its "implicit parallelism" is not fully exploited. To address these shortcomings, a new Self-Organized Migrating Algorithm for Multi-task optimization with Information Filtering (SOMAMIF) was proposed to solve multiple tasks concurrently. Firstly, a unified multi-task search space was constructed, and sub-populations were set according to the number of tasks. Secondly, the current optimal fitness of each sub-population was monitored, and a need for information transfer was generated when the evolution of a task stagnated for a number of successive generations. Thirdly, useful information was selected from the remaining sub-populations and useless information was filtered out according to a probability, ensuring positive transfer while readjusting the population structure. Finally, the time and space complexity of SOMAMIF were analyzed. Experimental results show that SOMAMIF converges rapidly to the global optimum 0 when solving multiple high-dimensional function problems simultaneously; and compared with those on the original datasets, the average classification accuracies obtained on two datasets by SOMAMIF combined with fractal technology to extract the key home-returning constraints of college students with different household registrations increase by 0.348 66 percentage points and 0.598 57 percentage points respectively.
    Improved slime mould algorithm with multi-strategy fusion
    Zhongrui QIU, Hong MIAO, Chengbi ZENG
    Journal of Computer Applications    2023, 43 (3): 812-819.   DOI: 10.11772/j.issn.1001-9081.2022020243

    Aiming at the problems that the standard Slime Mould Algorithm (SMA) easily falls into local optima, converges slowly and has low solution accuracy, an Improved Slime Mould Algorithm with Multi-Strategy fusion (MSISMA) was proposed. Firstly, Brownian motion and Levy flight were introduced to enhance the search ability of the algorithm. Secondly, the position update formula of the slime mould was improved according to the different stages of the algorithm, to increase convergence speed and accuracy. Thirdly, an Interval Adaptive Opposition-Based Learning (IAOBL) strategy was adopted to generate a reverse population, which improved the diversity and quality of the population and thus the convergence speed. Finally, a convergence stagnation monitoring strategy was introduced, which makes the algorithm jump out of local optima by re-initializing the positions of some slime mould individuals. On 23 selected test functions, MSISMA was tested and compared with the Equilibrium Slime Mould Algorithm (ESMA), the Slime Mould Algorithm combined with Adaptive Guided Differential Evolution (SMA-AGDE), SMA, the Marine Predators Algorithm (MPA) and the Equilibrium Optimizer (EO), and the Wilcoxon rank-sum test was performed on the running results of all algorithms. Compared with the above algorithms, MSISMA achieves the best average value on 19 test functions and the best standard deviation on 12 test functions, with the optimization accuracy improved by 23.39% to 55.97% on average. Experimental results show that the convergence speed, solution accuracy and robustness of MSISMA are significantly better than those of the comparison algorithms.
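    For illustration, Levy-flight steps are typically drawn with Mantegna's algorithm as in the sketch below; the step-scaling constants used in MSISMA are not stated in the abstract, so beta = 1.5 is assumed.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5):
    """One Levy-flight step via Mantegna's algorithm (beta = 1.5 assumed)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# move an individual towards the best position with a Levy-scaled step
best = np.zeros(5)
x = np.random.uniform(-10, 10, 5)
x = x + 0.01 * levy_step(5) * (x - best)
print(x)
```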

    Task allocation strategy in unmanned aerial vehicle-assisted mobile edge computing
    WANG Daiwei, XU Gaochao, LI Long
    Journal of Computer Applications    2021, 41 (10): 2928-2936.   DOI: 10.11772/j.issn.1001-9081.2020121917
    In the scenario where an Unmanned Aerial Vehicle (UAV) acts as the data collector for computation offloading to provide Mobile Edge Computing (MEC) services to User Equipment (UE), a wireless communication strategy achieving efficient UE coverage by the UAV was designed. Firstly, for a given UE distribution, an optimization method based on Successive Convex Approximation (SCA) was used to obtain an approximately optimal UAV flight trajectory and communication strategy that minimize the global energy. In addition, for scenarios with large-scale UE distributions or large numbers of tasks, an adaptive clustering algorithm was proposed to divide the ground UEs into as few clusters as possible while ensuring that the offloading data of all UEs in each cluster can be collected in one flight. Finally, the computation offloading data collection tasks of the UEs in each cluster were assigned to one flight, thereby reducing the number of flights required by a single UAV, or the number of UAVs dispatched, to complete the task. Simulation results show that the proposed method generates fewer clusters than the K-Means algorithm, converges quickly, and is suitable for UAV-assisted computation offloading scenarios with widely distributed UEs.
    Optimal path convergence method based on artificial potential field method and informed sampling
    LI Wei, JIN Shijun
    Journal of Computer Applications    2021, 41 (10): 2912-2918.   DOI: 10.11772/j.issn.1001-9081.2020122021
    The Rapidly exploring Random Tree star (RRT*) algorithm guarantees probabilistic completeness and asymptotic optimality in path planning, but still suffers from slow convergence and a large, dense sampling space. To speed up convergence, a method for quickly obtaining the optimal path based on the artificial potential field method and informed-set sampling was proposed. First, the artificial potential field method was used to construct an initial path from the start point to the goal point. Then, the positions of and distance between the start and goal points, together with the cost of the initial path, were used as parameters to construct the initial informed sampling set. Finally, sampling was restricted to the informed set, whose range was adjusted while the algorithm ran, to accelerate path convergence. Simulation experiments show that the Potential Informed-RRT* (PI-RRT*) algorithm, which combines the artificial potential field with informed sampling, reduces the number of sampling points by about 67% and shortens the running time by about 74.5% on average compared with the RRT* algorithm, and reduces the number of sampling points by about 40%-50% and shortens the running time by about 62.5% on average compared with the Informed RRT* (Informed-RRT*) algorithm. The proposed method greatly reduces redundant sampling and running time, is more efficient, and converges to the optimal path faster.
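    A two-dimensional sketch of informed-set sampling: points are drawn uniformly from the ellipse whose foci are the start and goal and whose major-axis length equals the current best path cost, following the standard Informed RRT* construction; here the potential-field initial path is assumed to supply c_best.

```python
import numpy as np

def sample_informed(start, goal, c_best):
    """Uniform sample inside the 2D informed ellipse defined by start, goal
    and the current best path cost c_best (c_best >= straight-line distance)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)            # distance between the foci
    centre = (start + goal) / 2.0
    direction = (goal - start) / c_min              # major-axis direction
    theta = np.arctan2(direction[1], direction[0])
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])         # rotation to world frame
    L = np.diag([c_best / 2.0,
                 np.sqrt(c_best ** 2 - c_min ** 2) / 2.0])  # ellipse radii
    # uniform sample in the unit disc, then stretch, rotate and translate
    r, phi = np.sqrt(np.random.rand()), 2 * np.pi * np.random.rand()
    x_ball = np.array([r * np.cos(phi), r * np.sin(phi)])
    return C @ L @ x_ball + centre

np.random.seed(0)
print(sample_informed((0, 0), (10, 0), c_best=12.0))
```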
    Task allocation strategy considering service quality of spatial crowdsourcing workers and its glowworm swarm optimization algorithm solution
    RAN Jiamin, NI Zhiwei, PENG Peng, ZHU Xuhui
    Journal of Computer Applications    2021, 41 (3): 794-802.   DOI: 10.11772/j.issn.1001-9081.2020060940
    Focusing on the task allocation problem in spatial crowdsourcing and considering the influence of spatial crowdsourcing workers' service quality on allocation results, a task allocation strategy with worker service quality evaluation was proposed. Firstly, in each spatio-temporal setting, an evaluation element for spatial crowdsourcing workers was added to establish a multi-objective model that fully considers the workers' service quality and distance cost. Secondly, the convergence speed and global optimization ability of the discrete glowworm swarm optimization algorithm were improved by refining its initialization and coding strategy, position movement strategy and neighborhood search strategy. Finally, the improved algorithm was used to solve the model. Experimental results on simulated and real datasets show that, compared with other swarm intelligence algorithms, the proposed algorithm improves the total task allocation score by 2% to 25% on datasets of different scales. By considering the service quality of workers, the proposed algorithm effectively improves the efficiency of task allocation and the final total score.
    Two-stage task offloading strategy based on game theory in cloud-edge environment
    WANG Yijie, FAN Jiafei, WANG Chenyu
    Journal of Computer Applications    2021, 41 (5): 1392-1398.   DOI: 10.11772/j.issn.1001-9081.2020071091
    Mobile Edge Computing (MEC) provides an effective solution to the conflict between computation-intensive applications and resource-constrained mobile devices. However, most studies on MEC offloading consider only the resource allocation between mobile devices and MEC servers, ignoring the huge computing resources of cloud computing centers. To make full use of both cloud and MEC resources, a cloud-edge collaborative task offloading strategy was proposed. Firstly, the task offloading problem over cloud and edge servers was transformed into a game. Then, the existence and uniqueness of the Nash Equilibrium (NE) of this game were proved, and the solution of the game was obtained. Finally, a two-stage task offloading algorithm based on game theory was proposed to compute the offloading scheme, and the performance of this algorithm was evaluated with several indicators. Simulation results show that the total overhead of the proposed algorithm is reduced by 72.8%, 47.9% and 2.65% compared with local execution, cloud server execution and MEC server execution respectively. The numerical results confirm that the proposed strategy achieves higher energy efficiency and lower task offloading overhead, and scales well as the number of mobile devices increases.
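    As a toy illustration of the game-theoretic idea (not the paper's exact model), the sketch below iterates best responses: each device repeatedly picks the execution location that minimizes its own overhead given the others' choices, until no device wants to deviate; the cost numbers and the congestion assumption on the MEC server are invented for the example.

```python
def best_response_offloading(n_devices, local_cost, mec_base, cloud_cost, rounds=50):
    """Iterate best responses; MEC cost grows with the number of devices
    sharing the edge server (simple congestion assumption)."""
    choice = ['local'] * n_devices
    for _ in range(rounds):
        changed = False
        for i in range(n_devices):
            n_mec = sum(1 for j, c in enumerate(choice) if c == 'mec' and j != i)
            costs = {'local': local_cost[i],
                     'mec': mec_base[i] * (1 + n_mec),   # congestion on the edge
                     'cloud': cloud_cost[i]}
            best = min(costs, key=costs.get)
            if best != choice[i]:
                choice[i], changed = best, True
        if not changed:          # no device deviates: an equilibrium is reached
            break
    return choice

print(best_response_offloading(4, [5, 6, 7, 8], [2, 2, 3, 3], [4, 4, 4, 4]))
```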
    Multi-robot task allocation algorithm combining genetic algorithm and rolling scheduling
    Fuqin DENG, Huanzhao HUANG, Chaoen TAN, Lanhui FU, Jianmin ZHANG, Tinlun LAM
    Journal of Computer Applications    2023, 43 (12): 3833-3839.   DOI: 10.11772/j.issn.1001-9081.2022121916

    The purpose of research on Multi-Robot Task Allocation (MRTA) is to improve the task completion efficiency of robots in smart factories. Aiming at the deficiency of the existing algorithms in dealing with large-scale multi-constrained MRTA, an MRTA Algorithm Combining Genetic Algorithm and Rolling Scheduling (ACGARS) was proposed. Firstly, the coding method based on Directed Acyclic Graph (DAG) was adopted in genetic algorithm to efficiently deal with the priority constraints among tasks. Then, the prior knowledge was added to the initial population of genetic algorithm to improve the search efficiency of the algorithm. Finally, a rolling scheduling strategy based on task groups was designed to reduce the scale of the problem to be solved, thereby solving large-scale problems efficiently. Experimental results on large-scale problem instances show that compared with the schemes generated by Constructive Heuristic Algorithm (CHA), MinInterfere Algorithm (MIA), and Genetic Algorithm with Penalty Strategy (GAPS), the scheme generated by the proposed algorithm has the average order completion time shortened by 30.02%, 16.86% and 75.65% respectively when the number of task groups is 20, which verifies that the proposed algorithm can effectively shorten the average waiting time of orders and improve the efficiency of multi-robot task allocation.
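    One way to realize a DAG-based encoding is to initialize each chromosome as a random topological order of the tasks, so that every individual automatically satisfies the priority constraints; the sketch below is a generic illustration and omits the prior-knowledge seeding and robot assignment used by ACGARS.

```python
import random
from collections import defaultdict

def random_topological_order(tasks, edges):
    """Return a random task order consistent with precedence edges (u -> v)."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = random.choice(ready)          # random choice keeps the population diverse
        ready.remove(t)
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

random.seed(3)
print(random_topological_order(['t1', 't2', 't3', 't4'],
                               [('t1', 't3'), ('t2', 't3'), ('t3', 't4')]))
```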

    Edge computing and service offloading algorithm based on improved deep reinforcement learning
    Tengfei CAO, Yanliang LIU, Xiaoying WANG
    Journal of Computer Applications    2023, 43 (5): 1543-1550.   DOI: 10.11772/j.issn.1001-9081.2022050724

    To address the limited computing resources and storage space of edge nodes in Edge Computing (EC) networks, an Edge Computing and Service Offloading (ECSO) algorithm based on improved Deep Reinforcement Learning (DRL) was proposed to reduce node processing latency and improve service performance. Specifically, the edge node service offloading problem was formulated as a resource-constrained Markov Decision Process (MDP). Because the request state transition probabilities of an edge node are difficult to predict accurately, a DRL algorithm was used to solve the problem. Considering that the state-action space of an edge node caching services is too large, new action behaviors were defined to replace the original actions, and the optimal action set was obtained with the proposed action selection algorithm; this improved the computation of action rewards, greatly reduced the size of the action space, and improved the training efficiency and reward of the algorithm. Simulation results show that compared with the original Deep Q-Network (DQN) algorithm, the Proximal Policy Optimization (PPO) algorithm and the traditional Most Popular (MP) algorithm, the proposed ECSO algorithm increases the total reward by 7.0%, 12.7% and 65.6% respectively, and reduces the edge node service offloading latency by 13.0%, 18.8% and 66.4% respectively, verifying its effectiveness in improving the offloading performance of edge computing services.

    Improved grey wolf optimizer for location selection problem of railway logistics distribution center
    HAO Pengfei, CHI Rui, QU Zhijian, TU Hongbin, CHI Xuexin, ZHANG Diyou
    Journal of Computer Applications    2021, 41 (10): 2905-2911.   DOI: 10.11772/j.issn.1001-9081.2020121994
    The Grey Wolf Optimizer (GWO) based on a single mechanism easily falls into local optima and converges slowly. To solve these problems, an Improved Grey Wolf Optimization (IGWO) algorithm was proposed to solve the practical location selection problem of railway logistics distribution centers. Firstly, on the basis of the standard GWO, good point set theory was introduced to initialize the population, improving the diversity of the initial population. Then, a D-value Elimination Strategy (DES) was used to increase the global optimization ability, achieving an efficient optimization mode. Simulation results show that, compared with the standard GWO, IGWO improves the fitness value by 3% and the accuracy of the optimal value by up to 7 units on 10 test functions. Compared with the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Genetic Algorithm (GA), IGWO speeds up location selection by 39.6%, 46.5% and 65.9% respectively, a significant improvement. The proposed algorithm can be applied to railway logistics center location selection.
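    Good-point-set initialization is commonly implemented with the prime-based construction sketched below; the abstract does not detail the construction used in IGWO, so this is an assumed variant.

```python
import numpy as np

def _next_prime(n):
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def good_point_set(n_points, dim, lb, ub):
    """Good-point-set initialization: deterministic low-discrepancy points
    built from a prime p >= 2*dim + 3 (common construction, assumed here)."""
    p = _next_prime(2 * dim + 3)
    j = np.arange(1, dim + 1)
    r = 2 * np.cos(2 * np.pi * j / p)                 # good point generators
    i = np.arange(1, n_points + 1).reshape(-1, 1)
    frac = np.mod(i * r, 1.0)                         # fractional parts in [0, 1)
    return lb + frac * (ub - lb)

print(good_point_set(5, 3, lb=-10.0, ub=10.0))
```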
    Survey of research progress on crowdsourcing task assignment for evaluation of workers’ ability
    MA Hua, CHEN Yuepeng, TANG Wensheng, LOU Xiaoping, HUANG Zhuoxuan
    Journal of Computer Applications    2021, 41 (8): 2232-2241.   DOI: 10.11772/j.issn.1001-9081.2020101629
    With the rapid development of internet technology and the sharing economy, crowdsourcing, as a new crowd computing mode, has been widely applied and has recently become a research focus. Considering the characteristics of crowdsourcing applications, and to ensure the completion quality of crowdsourcing tasks, existing studies have proposed different crowdsourcing task assignment methods from the perspective of evaluating workers' ability. Firstly, the concept and classification of crowdsourcing were introduced, and the workflow and task characteristics of crowdsourcing platforms were analyzed. On this basis, existing research on the evaluation of workers' ability was summarized. Then, crowdsourcing task assignment methods and the related challenges were reviewed from three aspects: matching-based, planning-based and role-based collaboration. Finally, research directions for future work were put forward.
    Computation offloading policy for machine learning in mobile edge computing environments
    GUO Mian, ZHANG Jinyou
    Journal of Computer Applications    2021, 41 (9): 2639-2645.   DOI: 10.11772/j.issn.1001-9081.2020111734
    Concerning the challenges of diverse data sources, non-independent and identically distributed data, and the heterogeneity of the computing capabilities and energy consumption of edge devices in the Internet of Things (IoT), a computation offloading policy was proposed for a Mobile Edge Computing (MEC) network that deploys both centralized learning and federated learning. Firstly, a system model of computation offloading for centralized and federated learning was built, considering the network transmission delay, computation delay and energy consumption of the two learning models. Then, with system delay minimization as the objective, and under constraints on energy consumption and on the number of training rounds required for a given machine learning accuracy, a computation offloading optimization model for machine learning was constructed. After that, the corresponding computation offloading game was formulated and analyzed. Based on the analysis results, an Energy-Constrained Delay-Greedy (ECDG) algorithm was proposed, which finds optimal solutions of the model via a two-stage policy of greedy decision making and energy-constrained decision updating. Compared with the centralized-greedy algorithm and Federated Learning with Client Selection (FedCS), ECDG achieves the lowest average learning delay, 1/10 of that of the centralized-greedy algorithm and 1/5 of that of FedCS. Experimental results show that ECDG can automatically select the optimal machine learning model through computation offloading, efficiently reducing the average machine learning delay, improving the energy efficiency of edge devices and satisfying the Quality of Service (QoS) requirements of IoT applications.
    Hybrid particle swarm optimization with multi-region sampling strategy to solve multi-objective flexible job-shop scheduling problem
    ZHANG Wenqiang, XING Zheng, YANG Weidong
    Journal of Computer Applications    2021, 41 (8): 2249-2257.   DOI: 10.11772/j.issn.1001-9081.2020101675
    The Flexible Job-shop Scheduling Problem (FJSP) is a widely applied combinatorial optimization problem. Aiming at the problems that solving multi-objective FJSP is complex and that algorithms easily fall into local optima, a Hybrid Particle Swarm Optimization algorithm with a Multi-Region Sampling strategy (HPSO-MRS) was proposed to optimize both the makespan and the total machine delay time. The multi-region sampling strategy reorganized the particles according to the regions of the Pareto front they belong to and guided their moving directions within the multiple sampled regions, thus adjusting the convergence ability of particles in multiple directions and improving the uniformity of distribution to a certain extent. In addition, for encoding and decoding, a decoding strategy with an interpolation mechanism was used to eliminate potential local left shifts; for particle updating, the update method of traditional Particle Swarm Optimization (PSO) was combined with the crossover and mutation operators of the Genetic Algorithm (GA), which improved the diversity of the search and prevented the algorithm from falling into local optima. The proposed algorithm was tested on the benchmark problems Mk01-Mk10 and compared with the Hybrid Particle Swarm Optimization algorithm (HPSO), Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ), Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) in terms of effectiveness and efficiency. Significance analysis showed that HPSO-MRS was significantly better than the comparison algorithms on the convergence indicators Hyper Volume (HV) and Inverted Generational Distance (IGD) in 85% and 77.5% of the control groups respectively, and on the distribution indicator Spacing in 35% of the control groups, while never being significantly worse on any of the three indicators. Compared with the others, the proposed algorithm therefore has better convergence and distribution performance.
    Machine breakdown rescheduling of flexible job shop based on improved imperialist competitive algorithm
    ZHANG Guohui, LU Xixi, HU Yifan, SUN Jinghe
    Journal of Computer Applications    2021, 41 (8): 2242-2248.   DOI: 10.11772/j.issn.1001-9081.2020101664
    For the flexible job shop rescheduling problem with machine breakdown, an improved Imperialist Competitive Algorithm (ICA) was proposed. Firstly, a flexible job shop dynamic rescheduling model was established with the maximum completion time, machine energy consumption and total delay time as the objective functions, and the three objectives were combined by linear weighting. Then, the improved ICA was designed to retain excellent information for the next generation: a roulette selection mechanism was added after the assimilation and revolution steps of the standard ICA, so that the excellent genes in the initial empires were retained and the updated empires were of higher quality and closer to the optimal solution. Finally, after a machine breakdown, an event-driven rescheduling strategy was adopted to reschedule the unprocessed operations after the breakdown point. Using production examples, simulation experiments were carried out on three hypothetical machine breakdown scenarios, and the proposed algorithm was compared with an improved Genetic Algorithm (GA) and a Genetic and Simulated Annealing Algorithm (GASA). Experimental results show that the proposed improved ICA is effective and feasible.
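    The roulette selection step mentioned above can be sketched as follows; since the rescheduling objectives are minimized, selection probability is assumed here to be proportional to 1/(1 + cost).

```python
import random

def roulette_select(population, costs):
    """Roulette-wheel selection for a minimization problem:
    selection probability proportional to 1 / (1 + cost) (assumed mapping)."""
    fitness = [1.0 / (1.0 + c) for c in costs]
    total = sum(fitness)
    pick = random.uniform(0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

random.seed(7)
print(roulette_select(['s1', 's2', 's3'], costs=[120.0, 95.0, 140.0]))
```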
    Multi-user task offloading strategy based on stable allocation
    MAO Yingchi, XU Xuesong, LIU Pengfei
    Journal of Computer Applications    2021, 41 (3): 786-793.   DOI: 10.11772/j.issn.1001-9081.2020060861
    With the emergence of many computation-intensive applications, mobile devices cannot meet user requirements on delay and energy consumption due to their limited computing capabilities. Mobile Edge Computing (MEC) offloads user task computation to MEC servers through wireless channels, significantly reducing task response delay and energy consumption. Concerning the multi-user task offloading problem, a Multi-User Task Offloading strategy based on Stable Allocation (MUTOSA) was proposed to minimize energy consumption while guaranteeing the users' delay requirements. Firstly, with joint consideration of delay and energy consumption, the multi-user task offloading problem in the independent-task scenario was modeled. Then, an adjustment strategy based on the idea of deferred acceptance in stable allocation from game theory was proposed. Finally, the multi-user task offloading problem was solved through continuous iteration. Experimental results show that, compared with the benchmark strategy and a heuristic strategy, the proposed strategy can meet the delay requirements of more users, increases user satisfaction by about 10% on average, and reduces the total energy consumption of user devices by about 50%. The proposed strategy thus effectively reduces energy consumption while guaranteeing users' delay requirements, and can effectively improve the user experience of delay-sensitive applications.
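    A compact Gale-Shapley-style deferred-acceptance sketch for assigning users to capacity-limited servers; the preference lists and capacities are illustrative assumptions, not the paper's exact model.

```python
def deferred_acceptance(user_prefs, server_prefs, capacity):
    """Users propose to servers in preference order; each server tentatively
    keeps its best proposals up to capacity and rejects the rest."""
    rank = {s: {u: i for i, u in enumerate(prefs)} for s, prefs in server_prefs.items()}
    next_choice = {u: 0 for u in user_prefs}
    held = {s: [] for s in server_prefs}          # tentative assignments
    free = list(user_prefs)
    while free:
        u = free.pop()
        prefs = user_prefs[u]
        if next_choice[u] >= len(prefs):
            continue                              # u has exhausted its preference list
        s = prefs[next_choice[u]]
        next_choice[u] += 1
        held[s].append(u)
        held[s].sort(key=lambda x: rank[s][x])    # server keeps its favourites
        if len(held[s]) > capacity[s]:
            rejected = held[s].pop()              # worst proposal bounced back
            free.append(rejected)
    return held

users = {'u1': ['m1', 'm2'], 'u2': ['m1', 'm2'], 'u3': ['m1', 'm2']}
servers = {'m1': ['u2', 'u1', 'u3'], 'm2': ['u1', 'u2', 'u3']}
print(deferred_acceptance(users, servers, capacity={'m1': 1, 'm2': 2}))
```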
    Coevolutionary ant colony optimization algorithm for mixed-variable optimization problem
    WEI Mingyan, CHEN Yu, ZHANG Liang
    Journal of Computer Applications    2021, 41 (5): 1412-1418.   DOI: 10.11772/j.issn.1001-9081.2020081200
    For Mixed-Variable Optimization Problems (MVOP) containing both continuous and categorical variables, a coevolution strategy was proposed to search the mixed-variable decision space, and a Coevolutionary Ant Colony Optimization Algorithm for MVOP (CACOA MV) was developed. In CACOA MV, continuous and categorical sub-populations were generated by continuous and discrete Ant Colony Optimization (ACO) strategies respectively, the sub-vectors of continuous and categorical variables were evaluated with the help of cooperators, and the two sub-populations were updated separately to realize an efficient coevolutionary search of the mixed-variable decision space. Furthermore, a pheromone smoothing mechanism was introduced to improve global exploration of the categorical variable solution space, and a "best + random cooperators" restart strategy oriented to the coevolution framework was proposed to enhance the efficiency of the coevolutionary search. Comparison with the Mixed-Variable Ant Colony Optimization (ACO MV) algorithm and the Success-History based Adaptive Differential Evolution algorithm with linear population size reduction and Ant Colony Optimization (L-SHADE ACO) demonstrates that CACOA MV performs better local exploitation and thus improves the approximation quality of the final results in the objective space; comparison with the set-based Differential Evolution algorithm with Mixed-Variables (DE MV) shows that CACOA MV better approximates the global optimal solutions in the decision space and has stronger global exploration ability. In conclusion, with the coevolution strategy, CACOA MV keeps a balance between global exploration and local exploitation, which results in better optimization ability.
    Quantum K-Means algorithm based on Hamming distance
    Jing ZHONG, Chen LIN, Zhiwei SHENG, Shibin ZHANG
    Journal of Computer Applications    2023, 43 (8): 2493-2498.   DOI: 10.11772/j.issn.1001-9081.2022091469

    K-Means algorithms typically use Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, which suffers from low efficiency and high computational complexity. Inspired by the significant advantage of Hamming distance in similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed to calculate similarity. First, the data was prepared and encoded into quantum states, and the quantum Hamming distance between the points to be clustered and the K cluster centers was calculated. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point to be clustered. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation computing framework QisKit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of QKMH is 10 percentage points higher than that of the Manhattan-distance-based quantum K-Means algorithm and 4.6 percentage points higher than that of the latest optimized Euclidean-distance-based quantum K-Means algorithm, while the time complexity of QKMH is lower than those of the comparison algorithms.
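    A classical sketch of the distance measure at the core of QKMH: K-Means over binary feature vectors with Hamming distance and a bitwise majority-vote centroid update; the quantum state preparation and the Grover minimum search are not reproduced here.

```python
import numpy as np

def hamming_kmeans(X, k, iters=20, seed=0):
    """K-Means on binary vectors using Hamming distance; centroids are
    updated by bitwise majority vote (classical analogue of the QKMH idea)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Hamming distance of every point to every centroid
        d = (X[:, None, :] != centroids[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centroids = centroids.copy()
        for c in range(k):
            members = X[labels == c]
            if len(members):
                new_centroids[c] = (members.mean(axis=0) >= 0.5).astype(int)
        if np.array_equal(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.array([[0, 0, 0, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 1, 0]])
print(hamming_kmeans(X, k=2))
```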

    Hybrid adaptive particle swarm optimization algorithm for workflow scheduling
    Xuesen MA, Xuemei XU, Gonghui JIANG, Yan QIAO, Tianbao ZHOU
    Journal of Computer Applications    2023, 43 (2): 474-483.   DOI: 10.11772/j.issn.1001-9081.2022010001

    Aiming at the conflict between the makespan and execution cost of cloud workflows with deadlines, a Hybrid Adaptive Particle Swarm Optimization algorithm for workflow scheduling (HAPSO) was proposed. Firstly, a Directed Acyclic Graph (DAG) cloud workflow scheduling model with deadlines was established. Secondly, through the combination of norm ideal points and adaptive weights, the DAG scheduling model was transformed into a multi-objective optimization problem that weighs DAG makespan against execution cost. Finally, on the basis of the Particle Swarm Optimization (PSO) algorithm, an adaptive inertia weight, adaptive learning factors, the probability switching mechanism of the flower pollination algorithm, the Firefly Algorithm (FA) and a particle out-of-bound handling method were added to balance the global and local search abilities of the particle swarm and solve the optimization problem of DAG makespan and execution cost. The optimization results of PSO, Weight Particle Swarm Optimization (WPSO), Ant Colony Optimization (ACO) and HAPSO were compared and analyzed in experiments. Experimental results show that HAPSO reduces the multi-objective function value that weighs makespan against execution cost by 40.9% to 81.1% for workflows with 30 to 300 tasks, and that HAPSO effectively balances makespan and execution cost under workflow deadline constraints. In addition, HAPSO also performs well on the single objectives of reducing makespan or execution cost, which verifies its generality.
