Project Articles

    National Open Distributed and Parallel Computing Conference 2021 (DPCS 2021)

    Performance interference analysis and prediction for distributed machine learning jobs
    Hongliang LI, Nong ZHANG, Ting SUN, Xiang LI
    Journal of Computer Applications    2022, 42 (6): 1649-1655.   DOI: 10.11772/j.issn.1001-9081.2021061404

    By analyzing job performance interference in distributed machine learning, it was found that the interference is caused by uneven allocation of GPU resources, such as memory overload and bandwidth competition. To this end, a mechanism for quickly predicting performance interference between jobs was designed and implemented, which can adaptively predict the degree of job interference according to the given GPU parameters and job types. First, the GPU parameters and interference rates during the operation of distributed machine learning jobs were obtained through experiments, and the influences of various parameters on performance interference were analyzed. Second, several GPU parameter-interference rate models were established by using multiple prediction technologies, and their job interference rate errors were analyzed. Finally, an adaptive job interference rate prediction algorithm was proposed to automatically select the prediction model with the smallest error for a given equipment environment and job set, so as to predict job interference rates quickly and accurately. Experiments with five commonly used neural network tasks were designed on two GPU devices and the results were analyzed. The results show that the proposed Adaptive Interference Prediction (AIP) mechanism can quickly complete the model selection and performance interference prediction without any pre-assumed information, with a consumption time of less than 300 s and a prediction error rate in the range of 2% to 13%, so it can be applied to scenarios such as job scheduling and load balancing.
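
    To make the selection step concrete, the following minimal sketch (an illustration, not the paper's implementation; the candidate model set and all names are assumptions) fits several regressors mapping GPU parameters to interference rate and keeps the one with the lowest validation error:

```python
# Sketch of adaptive prediction-model selection: fit several candidate
# regressors and keep whichever has the smallest validation error.
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def select_interference_model(gpu_params, interference_rates):
    """Return the candidate model with the smallest validation error."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        gpu_params, interference_rates, test_size=0.2, random_state=0)
    candidates = [
        LinearRegression(),
        RandomForestRegressor(n_estimators=100, random_state=0),
        MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    ]
    best_model, best_err = None, float("inf")
    for model in candidates:
        model.fit(X_tr, y_tr)
        err = mean_absolute_percentage_error(y_val, model.predict(X_val))
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err
```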

    Multimodal sequential recommendation algorithm based on contrastive learning
    Tengyue HAN, Shaozhang NIU, Wen ZHANG
    Journal of Computer Applications    2022, 42 (6): 1683-1688.   DOI: 10.11772/j.issn.1001-9081.2021081417

    A multimodal sequential recommendation algorithm based on contrastive learning was proposed to improve the accuracy of sequential recommendation by using the multimodal information of commodities. Firstly, to obtain visual representations such as the color and shape of a product, the visual modal information of the product was extracted by a contrastive learning framework, in which data augmentation was performed by changing the color and cropping the center area of the product image. Secondly, the textual information of each commodity was embedded into a low-dimensional space, so that a complete multimodal representation of each commodity could be obtained. Finally, a Recurrent Neural Network (RNN) was used to model the sequential interactions of the multimodal information in chronological order, and the obtained user preference representation was used for commodity recommendation. The proposed algorithm was tested on two public datasets and compared with the existing sequential recommendation algorithm LESSR. Experimental results show that the ranking performance of the proposed algorithm is improved, and the recommendation performance remains basically unchanged once the feature dimension reaches 50.
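
    The overall model shape can be sketched as follows (a simplified illustration under assumed dimensions; the embedding tables stand in for the contrastively learned visual features and the text features described above):

```python
# Illustrative sketch, not the paper's model: each item is represented by the
# concatenation of a visual and a textual embedding; a GRU consumes the user's
# interaction sequence and its last hidden state scores candidate items.
import torch
import torch.nn as nn

class MultimodalSeqRec(nn.Module):
    def __init__(self, n_items, vis_dim=50, txt_dim=50, hidden=50):
        super().__init__()
        self.vis_emb = nn.Embedding(n_items, vis_dim)  # stand-in for visual features
        self.txt_emb = nn.Embedding(n_items, txt_dim)  # stand-in for text features
        self.rnn = nn.GRU(vis_dim + txt_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_items)

    def forward(self, item_seq):                       # item_seq: (B, T) item ids
        x = torch.cat([self.vis_emb(item_seq), self.txt_emb(item_seq)], dim=-1)
        _, h = self.rnn(x)                             # h: (1, B, hidden)
        return self.out(h.squeeze(0))                  # scores over all items
```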

    New computing power network architecture and application case analysis
    Zheng DI, Yifan CAO, Chao QIU, Tao LUO, Xiaofei WANG
    Journal of Computer Applications    2022, 42 (6): 1656-1661.   DOI: 10.11772/j.issn.1001-9081.2021061497

    With the proliferation of Artificial Intelligence (AI) computing power to the edge of the network and even to terminal devices, computing power networks with end-edge-supercloud collaboration have become the best computing solution, and the emerging opportunities have spawned deep integration between end-edge-supercloud computing and the network. However, the full development of such an integrated system remains unsolved in terms of adaptability, flexibility, and value. Therefore, a blockchain-assisted computing power network for ubiquitous AI named ACPN was proposed. In ACPN, the end-edge-supercloud collaboration provides the infrastructure for the framework, the computing power resource pool formed by this infrastructure provides safe and reliable computing power for users, the network satisfies users' demands by scheduling resources, and the neural networks and execution platform in the framework provide interfaces for AI task execution. At the same time, the blockchain guarantees the reliability of resource transactions and encourages more computing power contributors to join the platform. The framework thus provides adaptability for users of the computing power network, flexibility for the scheduling of networked computing power resources, and value for computing power providers. This new computing power network architecture was illustrated through a concrete application case.

    Detection algorithm of audio scene sound replacement falsification based on ResNet
    Mingyu DONG, Diqun YAN
    Journal of Computer Applications    2022, 42 (6): 1724-1728.   DOI: 10.11772/j.issn.1001-9081.2021061432

    A ResNet-based fake sample detection algorithm was proposed to detect faked audio-scene samples produced by sound replacement, a falsification with low cost that is hard to perceive. The Constant Q Cepstral Coefficient (CQCC) features of the audio were extracted first; then the input features were learned by a Residual Network (ResNet) structure combining multi-layer residual blocks with feature normalization, and the classification results were output finally. On the TIMIT and Voicebank databases, the highest detection accuracy of the proposed algorithm reaches 100%, and the lowest false acceptance rate reaches 1.37%. In realistic scenes, the highest detection accuracy of the algorithm is up to 99.27% when detecting audios recorded by three different recording devices, which contain the devices' background noise and the audio of the original scene. Experimental results show that using the CQCC features of audio is effective for detecting scene replacement traces in audio.
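
    A simplified CQCC-style front end might look like the sketch below (an assumption-laden illustration: it omits the uniform resampling step of the full CQCC pipeline and uses default constant-Q settings):

```python
# Simplified CQCC-style features: constant-Q spectrogram -> log power -> DCT.
import librosa
import numpy as np
from scipy.fftpack import dct

def cqcc_like(path, n_coeff=20):
    y, sr = librosa.load(path, sr=16000)
    C = np.abs(librosa.cqt(y, sr=sr))          # constant-Q magnitude spectrogram
    log_pow = np.log(C ** 2 + 1e-10)           # log power spectrum
    return dct(log_pow, axis=0, norm="ortho")[:n_coeff]  # cepstral coefficients
```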

    Efficient wireless federated learning algorithm based on 1‑bit compressive sensing
    Zhenyu ZHANG, Guoping TAN, Siyuan ZHOU
    Journal of Computer Applications    2022, 42 (6): 1675-1682.   DOI: 10.11772/j.issn.1001-9081.2021061374

    In a wireless Federated Learning (FL) architecture, model parameter data need to be exchanged continuously between the client and the server to update the model, causing large communication overhead and power consumption on the client. Many existing methods reduce the communication overhead by data quantization and data sparsification. To reduce the overhead further, a wireless FL algorithm based on 1-bit compressive sensing was proposed. In the uplink of the wireless FL architecture, the update parameters of the local model, including the update amplitude and trend, were first recorded on the client. Then, the amplitude and trend information was sparsified, and the threshold required for updating was determined. Finally, 1-bit compressive sensing was performed on the update trend information, thereby compressing the uplink data. On this basis, the data size was further compressed by setting a dynamic threshold. Experimental results on the MNIST dataset show that the 1-bit compressive sensing process with a dynamic threshold can achieve the same results as lossless transmission, while reducing the model parameter data transmitted by the client during uplink communication to 1/25 of that of the normal FL process without this method; when the global model is trained to the same level, the total uploaded data size is reduced to 2/11 of the original and the transmission energy consumption to 1/10 of the original.
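
    The client-side compression idea can be sketched as follows (illustrative only; the paper's exact sparsification, thresholding and measurement design may differ, and the server-side 1-bit recovery step is omitted):

```python
# Client-side sketch: sparsify the local update with a (dynamic) threshold,
# then send only the 1-bit signs of random projections of the update trend
# plus one amplitude scalar.
import numpy as np

def compress_update(delta, keep_ratio=0.1, m_ratio=0.25, seed=0):
    rng = np.random.default_rng(seed)                    # seed shared with the server
    thr = np.quantile(np.abs(delta), 1.0 - keep_ratio)   # dynamic threshold
    mask = np.abs(delta) >= thr
    trend = np.where(mask, np.sign(delta), 0.0)          # sparsified update trend
    amplitude = np.abs(delta[mask]).mean()               # update amplitude
    A = rng.standard_normal((int(m_ratio * delta.size), delta.size))
    bits = np.sign(A @ trend)                            # 1-bit compressive measurements
    return bits, amplitude
```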

    Multi-objective task offloading algorithm based on deep Q-network
    Shiquan DENG, Xuguo YE
    Journal of Computer Applications    2022, 42 (6): 1668-1674.   DOI: 10.11772/j.issn.1001-9081.2021061367

    For a Mobile Device (MD) with limited computing resources and battery capacity in Mobile Edge Computing (MEC), its computing capacity can be enhanced and its energy consumption reduced by offloading its computation-intensive applications to an edge server. However, an unreasonable task offloading strategy brings a bad experience to users, since it increases the application completion time and energy consumption. To overcome this challenge, firstly, a multi-objective task offloading problem model with minimizing the application completion time and energy consumption as the optimization targets was built in a dynamic MEC network by analyzing the mobility of the mobile device and the sequential dependencies between tasks. Then, a Markov Decision Process (MDP) model, including state space, action space and reward function, was designed for this problem, and a Multi-Objective Task Offloading Algorithm based on Deep Q-Network (MTOA-DQN) was proposed, in which a trajectory is used as the smallest unit of the experience buffer to improve the original DQN. The proposed MTOA-DQN outperforms three comparison algorithms, including the MultiObjective Evolutionary Algorithm based on Decomposition (MOEA/D), Adaptive DAG (Directed Acyclic Graph) Tasks Scheduling (ADTS) and the original DQN, in terms of cumulative reward and cost in a number of test scenarios, verifying the effectiveness and reliability of the algorithm.
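
    The trajectory-level replay and the two-objective reward can be sketched as below (an illustration; the weight w and the buffer interface are assumptions, not the paper's definitions):

```python
# Sketch: the buffer stores whole trajectories rather than single transitions,
# and the reward trades off completion time against energy with weight w.
import random
from collections import deque

class TrajectoryBuffer:
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)

    def push(self, trajectory):          # trajectory: list of (s, a, r, s') tuples
        self.buffer.append(trajectory)

    def sample(self, batch_size):
        trajs = random.sample(self.buffer, batch_size)
        return [step for traj in trajs for step in traj]  # flatten for the DQN update

def reward(delay, energy, w=0.5):
    return -(w * delay + (1 - w) * energy)   # minimize both objectives
```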

    Research on Bloom filter: a survey
    Wendi HUA, Yuan GAO, Meng LYU, Ping XIE
    Journal of Computer Applications    2022, 42 (6): 1729-1747.   DOI: 10.11772/j.issn.1001-9081.2021061392

    Bloom Filter (BF) is a binary vector data structure based on a hashing strategy. With its idea of sharing hash collisions, its one-way misjudgment characteristic (false positives only, never false negatives) and its very small, constant query time complexity, BF is often used to represent set membership and to serve as an "accelerator" for membership query operations. As one of the best mathematical tools for the membership query problem in computer engineering, BF has been widely applied and developed in network engineering, storage systems, databases, file systems, distributed systems and other fields. In recent years, in order to adapt to various hardware environments and application scenarios, a large number of variant optimization schemes of BF, based on the ideas of changing the structure and optimizing the algorithm, have appeared. With the development of the big data era, improving the characteristics and operation logic of BF has become an important direction for membership query research.
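
    For reference, a textbook Bloom filter (the baseline structure the surveyed variants start from, not any particular optimized scheme) can be implemented as:

```python
# Standard Bloom filter: k hash functions set/test bits in an m-bit array.
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):          # derive k hash positions from SHA-256
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):        # may return a false positive, never a false negative
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# bf = BloomFilter(); bf.add("10.0.0.1"); assert "10.0.0.1" in bf
```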

    Lip language recognition algorithm based on single-tag radio frequency identification
    Yingqi ZHANG, Dawei PENG, Sen LI, Ying SUN, Qiang NIU
    Journal of Computer Applications    2022, 42 (6): 1762-1769.   DOI: 10.11772/j.issn.1001-9081.2021061390

    In recent years, a wireless platform for speech recognition using multiple customized, stretchable Radio Frequency Identification (RFID) tags has been proposed; however, it is difficult for such tags to accurately capture the large frequency shifts caused by stretching, and multiple tags need to be detected and recalibrated when they fall off or wear out naturally. In response to these problems, a lip language recognition algorithm based on a single RFID tag was proposed, in which one flexible, easily concealable and non-invasive general-purpose RFID tag is attached to the face, allowing lip language recognition even if the user makes no sound and relies only on facial micro-motions. Firstly, a model was established to process the Received Signal Strength (RSS) and phase changes of the individual tag received by an RFID reader over time and frequency. Then, a Gaussian function was used to smooth and denoise the raw data, and the Dynamic Time Warping (DTW) algorithm was used to evaluate and analyze the collected signal features, solving the problem of mismatched pronunciation lengths. Finally, a wireless speech recognition system was built to recognize and distinguish the facial expressions corresponding to speech, thus achieving lip language recognition. Experimental results show that the proposed algorithm achieves an accuracy of more than 86.5% on RSS when identifying 200 groups of digital signal features from different users.
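
    The alignment step relies on the standard dynamic programming form of DTW, sketched here for two one-dimensional signal sequences of different lengths:

```python
# Textbook DTW distance, as used to align RSS/phase sequences whose
# pronunciation lengths do not match.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local matching cost
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]
```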

    Malicious code detection method based on attention mechanism and residual network
    Yang ZHANG, Jiangbo HAO
    Journal of Computer Applications    2022, 42 (6): 1708-1715.   DOI: 10.11772/j.issn.1001-9081.2021061410

    As existing malicious code detection methods based on deep learning suffer from insufficient feature extraction and low accuracy, a malicious code detection method based on the attention mechanism and the Residual Network (ResNet), called ARMD, was proposed. To support the training of this method, the hash values of 47 580 malicious and benign code samples were obtained from the Kaggle website, and the APIs called by each sample were extracted with the analysis tool VirusTotal. The called APIs were then consolidated into 1 000 distinct APIs used as detection features, from which the training samples were constructed. Each sample was labeled as benign or malicious according to the VirusTotal analysis results, and the SMOTE (Synthetic Minority Over-sampling Technique) algorithm was used to balance the data samples. Finally, a ResNet injected with the attention mechanism was built and trained to perform malicious code detection. Experimental results show that the detection accuracy of ARMD is 97.76%, and that, compared with existing detection methods based on Convolutional Neural Network (CNN) and ResNet models, ARMD improves the average precision by at least 2%, verifying its effectiveness.
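
    The feature construction and balancing steps might be sketched as follows (assumed details: each sample is a multi-hot vector over the 1 000 retained APIs, and SMOTE is applied via the imbalanced-learn library, whose use here is an assumption):

```python
# Build multi-hot API-call feature vectors, then balance classes with SMOTE.
import numpy as np
from imblearn.over_sampling import SMOTE

def multi_hot(call_lists, api_vocab):
    index = {api: i for i, api in enumerate(api_vocab)}   # 1000 retained APIs
    X = np.zeros((len(call_lists), len(api_vocab)), dtype=np.float32)
    for row, calls in enumerate(call_lists):
        for api in calls:
            if api in index:
                X[row, index[api]] = 1.0
    return X

# X = multi_hot(samples, vocab)                           # samples: lists of API names
# X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y) # equalize benign/malicious
```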

    Coupling related code smell detection method based on deep learning
    Shan SU, Yang ZHANG, Dongwen ZHANG
    Journal of Computer Applications    2022, 42 (6): 1702-1707.   DOI: 10.11772/j.issn.1001-9081.2021061403

    Heuristic and machine learning based code smell detection methods have been proved to have limitations, and most of them focus on common code smells. To solve these problems, a deep learning based method was proposed to detect three relatively rare code smells related to coupling: Intensive Coupling, Dispersed Coupling and Shotgun Surgery. First, the metrics of the three code smells were extracted and the obtained data were processed. Second, a deep learning model combining a Convolutional Neural Network (CNN) with an attention mechanism was constructed, where the introduced attention mechanism assigns weights to the metric features. The datasets were extracted from 21 open source projects, and the detection methods were validated on 10 open source projects and compared with a plain CNN model. Experimental results show that the proposed model achieves better performance, with precisions of 93.61% and 99.76% for Intensive Coupling and Dispersed Coupling respectively, while the plain CNN model achieves the better result for Shotgun Surgery, with a precision of 98.59%.
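
    One plausible form of the metric-attention idea (an assumption about the layer's shape, not the paper's exact architecture) is a small scoring network whose softmax output reweights the metric vector before the convolutional layers:

```python
# Illustrative feature-attention layer: score each code metric, then rescale
# the metric vector by the softmax of the scores.
import torch
import torch.nn as nn

class MetricAttention(nn.Module):
    def __init__(self, n_metrics):
        super().__init__()
        self.score = nn.Linear(n_metrics, n_metrics)

    def forward(self, x):                             # x: (batch, n_metrics)
        weights = torch.softmax(self.score(x), dim=-1)
        return x * weights                            # per-metric attention weighting
```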

    Recommendation model of penetration path based on reinforcement learning
    Haini ZHAO, Jian JIAO
    Journal of Computer Applications    2022, 42 (6): 1689-1694.   DOI: 10.11772/j.issn.1001-9081.2021061424

    The core problem of penetration testing is the planning of penetration test paths. Manual planning relies on the experience of testers, while automated generation of penetration paths is mainly based on prior knowledge of network security and specific vulnerabilities or network scenarios, which incurs high cost and lacks flexibility. To address these problems, a reinforcement learning based penetration path recommendation model named QLPT (Q Learning Penetration Test) was proposed, which gives the optimal penetration path for a target through multiple rounds of vulnerability selection and reward feedback. Penetration experiments on an open source cyber range show that the paths recommended by QLPT are highly consistent with those of manual penetration testing, verifying the feasibility and accuracy of the model; compared with the automated penetration testing framework Metasploit, QLPT adapts to penetration scenarios more flexibly.
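
    QLPT's learning core is tabular Q-learning; a minimal sketch of the action choice and value update (the state and action encodings are assumptions) is:

```python
# Tabular Q-learning core: states index penetration stages, actions index
# candidate vulnerabilities/exploits, reward reflects the exploit outcome.
import numpy as np

def choose_action(Q, s, eps=0.1, rng=np.random.default_rng(0)):
    if rng.random() < eps:                        # explore a random vulnerability
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[s]))                   # exploit the best-known one

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```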

    Traceable and revocable multi-authority attribute-based encryption scheme for vehicular ad hoc networks
    Jingwen WU, Xinchun YIN, Jianting NING
    Journal of Computer Applications    2022, 42 (6): 1695-1701.   DOI: 10.11772/j.issn.1001-9081.2021061449

    Ensuring the confidentiality of message transmission is a fundamental security requirement for communication in Vehicular Ad hoc NETworks (VANETs). When symmetric group keys are used to encrypt messages, it is hard for the system manager to trace inner attackers. Therefore, an attribute-based encryption scheme for VANETs was proposed. The scheme enables the tracing and revocation of malicious vehicles and a fine-grained division of vehicle access rights; meanwhile, it allows multiple authority centers to distribute attributes and the corresponding keys independently, preventing a compromised authority center from forging attribute keys managed by other authorities, and thus guaranteeing high security for communication and collaboration among multiple institutions. The scheme was proven indistinguishable under the q-DPBDHE2 (q-Decisional Parallel Bilinear Diffie-Hellman Exponent) assumption, and a comparison of encryption and decryption overheads with similar schemes shows that, with 10 attributes, the decryption overhead of the proposed scheme is 459.541 ms, indicating that the scheme is suitable for communication encryption in VANETs.

    Hydrological model based on temporal convolutional network
    Qingqing NIE, Dingsheng WAN, Yuelong ZHU, Zhijia LI, Cheng YAO
    Journal of Computer Applications    2022, 42 (6): 1756-1761.   DOI: 10.11772/j.issn.1001-9081.2021061366

    Water level prediction provides auxiliary decision support for flood warning. For accurate water level prediction and to provide a scientific basis for natural disaster prevention, a prediction model combining the Modified Grey Wolf Optimization (MGWO) algorithm and the Temporal Convolutional Network (TCN), namely MGWO-TCN, was proposed. In view of the premature convergence and stagnation of the original Grey Wolf Optimization (GWO) algorithm, the idea of the Differential Evolution (DE) algorithm was introduced to increase the diversity of the grey wolf population. The convergence factor used during the update and the mutation operator used during the mutation of the grey wolf population were improved to adjust the parameters adaptively, thereby improving the convergence speed and balancing the global and local search capabilities of the algorithm. The proposed MGWO algorithm was then used to optimize the key parameters of the TCN to improve its prediction performance. When MGWO-TCN was used for river water level prediction, the Root Mean Square Error (RMSE) of its prediction results was 0.039. Experimental results show that, compared with the comparison models, MGWO-TCN has better optimization ability and higher prediction accuracy.
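
    A compact sketch of GWO with a DE-style mutation step is given below (illustrative: the adaptive convergence factor and mutation operator of MGWO are simplified to their textbook forms):

```python
# GWO with a DE-style mutation: wolves move toward the three best solutions,
# and a differential mutation keeps the population diverse.
import numpy as np

def mgwo(f, dim, n_wolves=20, iters=100, lb=-1.0, ub=1.0, F=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]     # three best wolves
        a = 2.0 * (1.0 - t / iters)                     # convergence factor 2 -> 0
        for i in range(n_wolves):
            moves = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                moves.append(leader - A * np.abs(C * leader - X[i]))
            cand = np.clip(np.mean(moves, axis=0), lb, ub)
            p, q = X[rng.integers(n_wolves)], X[rng.integers(n_wolves)]
            mutant = np.clip(cand + F * (p - q), lb, ub)    # DE-style mutation
            X[i] = mutant if f(mutant) < f(cand) else cand  # greedy selection
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)]
```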

    Refined short-term traffic flow prediction model and migration deployment scheme
    Jiachen GUO, Yushen YANG, Yan WANG, Shilong MAO, Lijun SUN
    Journal of Computer Applications    2022, 42 (6): 1748-1755.   DOI: 10.11772/j.issn.1001-9081.2021061411

    Refined short-term traffic flow prediction is the premise of rational decision making in an Intelligent Transportation System (ITS). In order to establish lane-changing models for self-driving cars, predict vehicle trajectories and guide vehicle routes, timely traffic flow prediction for each lane has become an urgent problem to solve. However, refined short-term traffic flow prediction faces two challenges: first, with the increasing diversity of traffic flow data, traditional prediction methods cannot meet the ITS requirements of high precision and short time delay; second, training a prediction model for each lane wastes enormous resources. To solve these problems, a refined short-term traffic flow prediction model combining the Convolutional-Gated Recurrent Unit (Conv-GRU) with Grey Relational Analysis (GRA) was proposed to predict lane-level flow. Considering that deep learning models take a long time to train but relatively little time to run inference, a cloud-fog deployment scheme was designed. Meanwhile, to avoid training a prediction model for each lane, a model migration deployment scheme was proposed, which trains prediction models only for some lanes and then migrates the trained models to the associated lanes, identified through GRA, for prediction. Extensive comparison experiments on a real-world dataset show that the proposed model predicts more accurately than traditional deep learning prediction methods and runs in less time than the Convolutional-Long Short-Term Memory (Conv-LSTM) network. Furthermore, the proposed model realizes model migration while maintaining high-precision prediction, saving about 49% of training time compared with training a prediction model for each lane.
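
    The lane-association step relies on grey relational analysis; the textbook grey relational degree between two lane-flow series, with the usual resolution coefficient rho = 0.5, is sketched below (mean normalization is an assumed preprocessing choice):

```python
# Grey relational degree between a reference lane-flow series and a comparison
# series; a higher degree suggests the source lane's model can be migrated.
import numpy as np

def grey_relational_degree(ref, cmp_, rho=0.5):
    ref, cmp_ = np.asarray(ref, float), np.asarray(cmp_, float)
    ref, cmp_ = ref / ref.mean(), cmp_ / cmp_.mean()   # mean normalization
    diff = np.abs(ref - cmp_)
    coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    return coeff.mean()                                # grey relational degree
```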

    Reversible data hiding in encrypted image based on multi-objective optimization
    Xiangyu ZHANG, Yang YANG, Guohui FENG, Chuan QIN
    Journal of Computer Applications    2022, 42 (6): 1716-1723.   DOI: 10.11772/j.issn.1001-9081.2021061495

    Focusing on the issues that Reserving Room Before Encryption (RRBE) embedding algorithms require a series of pre-processing steps and that Vacating Room After Encryption (VRAE) embedding algorithms offer less embedding space, an algorithm for reversible data hiding in encrypted images based on multi-objective optimization was proposed to improve the embedding rate while reducing the algorithm's workflow and workload. In this algorithm, two representative algorithms from RRBE and VRAE were combined and applied to the same carrier, and performance indicators such as the amount of embedded information, the distortion of the directly decrypted image, the extraction error rate and the computational complexity were formulated as optimization sub-objectives. The efficiency coefficient method was then used to build a model that solves for the relatively optimal ratio in which the two algorithms are applied. Experimental results show that the proposed algorithm reduces the computational complexity of using an RRBE algorithm alone, enables users to flexibly allocate the optimization objectives according to the needs of actual application scenarios, and at the same time obtains better image quality and a satisfactory amount of embedded information.
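
    The efficiency coefficient method can be sketched as follows (the [0.6, 1.0] coefficient range and the weighted linear aggregation are textbook conventions assumed here, not the paper's exact formulation; evaluate and ratios are hypothetical names):

```python
# Efficiency coefficient aggregation: map each indicator onto [0.6, 1.0]
# between its worst and best values, then combine with weights.
import numpy as np

def efficiency_score(values, worst, best, weights):
    values, worst, best = (np.asarray(v, float) for v in (values, worst, best))
    d = 0.6 + 0.4 * (values - worst) / (best - worst)   # per-indicator coefficient
    return float(np.dot(weights, d))

# Pick the RRBE/VRAE application ratio with the highest aggregate score, e.g.:
# best_ratio = max(ratios, key=lambda r: efficiency_score(evaluate(r), w0, b0, w))
```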

    Dynamic service deployment strategy in resource constrained mobile edge computing
    Jingling YUAN, Huihua MAO, Nana WANG, Yao XIANG
    Journal of Computer Applications    2022, 42 (6): 1662-1667.   DOI: 10.11772/j.issn.1001-9081.2021061615

    The emergence of Mobile Edge Computing (MEC) enables mobile users to access services deployed on edge servers with low latency. However, MEC faces various challenges, especially the service deployment problem. The number and resources of edge servers are usually limited, so only a limited number of services can be deployed on them; in addition, user mobility changes the popularity of different services in different regions. In this context, deploying suitable services for dynamic service requests becomes a critical problem. To address it, with the goal of deploying appropriate services based on awareness of dynamic user requirements so as to minimize interaction delay, the service deployment problem was formulated as a global optimization problem, and a cluster-based resource aggregation algorithm was proposed to perform the initial deployment of suitable services under resource constraints such as computing capacity and bandwidth. Moreover, considering the influence of dynamic user requests on service popularity and edge server load, a dynamic adjustment algorithm was developed to update the deployed services so that the Quality of Service (QoS) always meets user expectations. The performance of this deployment strategy was verified through a series of simulation experiments, whose results show that, compared with existing benchmark algorithms, the proposed strategy can reduce service interaction delay and achieve a more stable load balance.
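
    The initial deployment step can be illustrated with a greedy sketch (an assumption: services ranked by regional popularity are placed while CPU and bandwidth budgets hold; the paper's cluster-based aggregation is more elaborate):

```python
# Greedy initial deployment: place the most popular services on an edge
# server while its CPU and bandwidth budgets are not exceeded.
def deploy_services(services, cpu_cap, bw_cap):
    """services: list of (name, popularity, cpu, bw); returns deployed names."""
    deployed, cpu_used, bw_used = [], 0.0, 0.0
    for name, _, cpu, bw in sorted(services, key=lambda s: -s[1]):
        if cpu_used + cpu <= cpu_cap and bw_used + bw <= bw_cap:
            deployed.append(name)
            cpu_used, bw_used = cpu_used + cpu, bw_used + bw
    return deployed
```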
