Project Articles

    The 35th CCF National Conference of Computer Applications (CCF NCCA 2020)

    Visibility forecast model based on LightGBM algorithm
    YU Dongchang, ZHAO Wenfang, NIE Kai, ZHANG Ge
    Journal of Computer Applications    2021, 41 (4): 1035-1041.   DOI: 10.11772/j.issn.1001-9081.2020081589
    In order to improve the accuracy of visibility forecast, especially of low-visibility forecast, an ensemble learning model based on random forest and LightGBM for visibility forecast was proposed. Firstly, based on the meteorological forecast data of the numerical modeling system, combined with meteorological observation data and PM2.5 concentration observation data, the random forest method was used to construct the feature vectors. Secondly, for the missing data with different time spans, three missing value processing methods were designed to replace the missing values, creating a data sample set with good continuity for training and testing. Finally, a visibility forecast model based on LightGBM was established, and its parameters were optimized by using the grid search method. The proposed model was compared with Support Vector Machine(SVM), Multiple Linear Regression(MLR) and Artificial Neural Network(ANN) in terms of performance. Experimental results show that for different levels of visibility, the proposed visibility forecast model based on LightGBM algorithm obtains the highest Threat Score(TS); when the visibility is less than 2 km, the average correlation coefficient between the visibility values predicted by the model at observation stations and the observed visibility values is 0.75, and the average mean square error between them is 6.49. It can be seen that the forecast model based on LightGBM can effectively improve the accuracy of visibility forecast.
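    The parameter optimization step can be illustrated with a minimal grid search sketch. The parameter names and values below are hypothetical stand-ins (the abstract does not list the actual search space), and a toy scoring function replaces LightGBM's cross-validated error:

```python
from itertools import product

# Hypothetical parameter grid for a LightGBM-style model; the paper's
# actual search space is not given, so these names/values are illustrative.
param_grid = {
    "num_leaves": [15, 31, 63],
    "learning_rate": [0.05, 0.1],
}

def grid_search(score_fn, grid):
    """Exhaustively score every parameter combination, return the best."""
    best_params, best_score = None, float("inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)          # e.g. cross-validated MSE
        if score < best_score:            # lower error is better
            best_params, best_score = params, score
    return best_params, best_score

# Toy score: pretend 31 leaves / learning rate 0.1 minimises validation error.
toy = lambda p: abs(p["num_leaves"] - 31) + abs(p["learning_rate"] - 0.1) * 100
best, _ = grid_search(toy, param_grid)
```

    In practice the score function would wrap LightGBM training plus cross-validation; the exhaustive loop is the whole idea of grid search.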
    Construction and correlation analysis of national food safety standard graph
    QIN Li, HAO Zhigang, LI Guoliang
    Journal of Computer Applications    2021, 41 (4): 1005-1011.   DOI: 10.11772/j.issn.1001-9081.2020081311
    National Food Safety Standards(NFSS) are not only the operation specifications of food producers, but also the law enforcement criteria of food safety supervision. However, there are various NFSSs with a wide range of contents and complicated inter-reference relationships. To systematically study the contents and structures of NFSSs, it is necessary to extract the knowledge and mine the reference relationships in NFSSs. First, the contents of the standard files and the reference relationships between the standard files were extracted as knowledge triplets through Knowledge Graph(KG) technology, and the triplets were used to construct the NFSS knowledge graph. Then, this knowledge graph was linked to a food production process ontology built manually based on Hazard Analysis Critical Control Point(HACCP) standards, so that the food safety standards and the related food production processes could be cross-referenced. At the same time, the Louvain community discovery algorithm was used to analyze the standard reference network in the knowledge graph, and the highly cited standards in NFSSs as well as their types were obtained. Finally, a question answering system was built using gStore's Application Programming Interface(API) and Django, which realized knowledge retrieval and reasoning based on natural language, so that the high-impact NFSSs in the graph could be found under specified requirements.
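    The knowledge-triplet representation behind the reference network can be sketched in a few lines. The standard identifiers and predicate names below are invented for illustration, not taken from the actual NFSS graph:

```python
from collections import Counter

# Hypothetical (subject, predicate, object) triplets; real NFSS entries
# would use actual standard numbers and a richer predicate vocabulary.
triples = [
    ("std:A", "references", "std:C"),
    ("std:B", "references", "std:C"),
    ("std:C", "references", "std:D"),
]

def citation_counts(triples):
    """Count how often each standard is referenced by others - the raw
    signal behind finding high-citation standards in the reference network."""
    return Counter(obj for subj, pred, obj in triples if pred == "references")

counts = citation_counts(triples)   # std:C is referenced twice here
```

    Community detection (Louvain) would then run over the graph induced by these "references" edges.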
    Few-shot segmentation method for multi-modal magnetic resonance images of brain tumor
    DONG Yang, PAN Haiwei, CUI Qianna, BIAN Xiaofei, TENG Teng, WANG Bangju
    Journal of Computer Applications    2021, 41 (4): 1049-1054.   DOI: 10.11772/j.issn.1001-9081.2020081388
    Brain tumor Magnetic Resonance Imaging(MRI) has problems such as multi-modality, lack of training data, class imbalance, and large differences between private databases, which lead to difficulties in segmentation. In order to solve these problems, the few-shot segmentation method was introduced, and a Prototype network based on U-net(PU-net) was proposed to segment brain tumor Magnetic Resonance(MR) images. First, the U-net structure was modified to extract the features of various tumors, which were used to calculate the prototypes. Then, on the basis of the prototype network, the prototypes were used to classify the spatial locations pixel by pixel, so as to obtain the probability maps and segmentation results of various tumor regions. Aiming at the problem of class imbalance, the adaptive weighted cross-entropy loss function was used to reduce the influence of the background class on loss calculation. Finally, a prototype verification mechanism was added, in which the probability maps obtained by segmentation were fused with the query image to verify the prototypes. The proposed method was tested on the public dataset BraTS2018, and the obtained results were as follows: an average Dice coefficient of 0.654, a positive prediction rate of 0.662, a sensitivity of 0.687, a Hausdorff distance of 3.858, and a mean Intersection Over Union(mIOU) of 61.4%. Compared with Prototype Alignment Network(PANet) and Attention-based Multi-Context Guiding Network(A-MCG), all indicators of the proposed method were improved. The results show that the introduction of the few-shot segmentation method has a good effect on brain tumor MR image segmentation, and the adaptive weighted cross-entropy loss function is also helpful, which can play an effective auxiliary role in the diagnosis and treatment of brain tumors.
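    The prototype-based pixel classification at the core of such methods can be sketched as follows. The 2-D toy features stand in for the modified U-net encoder outputs; all values are illustrative:

```python
def mean(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_prototype(pixel_feat, prototypes):
    """Assign a pixel's feature vector to the closest class prototype."""
    return min(prototypes, key=lambda c: sq_dist(pixel_feat, prototypes[c]))

# Toy 2-D support features per class; real features are high-dimensional.
support = {
    "tumor":      [(1.0, 1.0), (1.2, 0.8)],
    "background": [(0.0, 0.0), (0.1, -0.1)],
}
# A prototype is the mean of a class's support features.
prototypes = {cls: mean(feats) for cls, feats in support.items()}
label = nearest_prototype((0.9, 1.1), prototypes)   # -> "tumor"
```

    Repeating the nearest-prototype assignment at every spatial location yields the per-class probability maps described above (softmax over negative distances in the full model).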
    Sealed-bid auction scheme based on blockchain
    LI Bei, ZHANG Wenyin, WANG Jiuru, ZHAO Wei, WANG Haifeng
    Journal of Computer Applications    2021, 41 (4): 999-1004.   DOI: 10.11772/j.issn.1001-9081.2020081329
    With the rapid development of Internet technology, many traditional auctions are gradually being replaced by electronic auctions, in which security and privacy protection draw more and more concern. Concerning the problems in current electronic bidding and auction systems, such as the risk of bidder privacy being leaked, the expensive cost of the third-party auction center, and collusion between the third-party auction center and bidders, a sealed-bid auction scheme based on blockchain smart contract technology was proposed. In the scheme, an auction environment without a third party was constructed by making full use of features of the blockchain such as decentralization, tamper-proofing and trustworthiness, and the security deposit strategy of the blockchain was used to restrict the behaviors of bidders, which improved the security of the electronic sealed-bid auction. At the same time, the Pedersen commitment was used to protect the auction price from being leaked, and the Bulletproofs zero-knowledge proof protocol was used to verify the correctness of the winning bid price. Security analysis and experimental results show that the proposed auction scheme meets the security requirements, and the time consumption of every stage is within an acceptable range, meeting daily auction requirements.
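    The Pedersen commitment used to seal the bids can be illustrated over a toy prime-order group. The parameters below are deliberately tiny and purely illustrative; real deployments use elliptic-curve groups with properly generated parameters whose discrete-log relation between the generators is unknown:

```python
# Toy Pedersen commitment over a small prime group (illustration only).
p = 467          # small prime modulus
g, h = 2, 3      # generators; their discrete-log relation is assumed unknown

def commit(value, blinding):
    """C = g^value * h^blinding mod p hides `value` until it is opened."""
    return (pow(g, value, p) * pow(h, blinding, p)) % p

def open_commitment(c, value, blinding):
    """Verify that (value, blinding) is a valid opening of commitment c."""
    return c == commit(value, blinding)

bid, r = 42, 17
c = commit(bid, r)
ok = open_commitment(c, bid, r)            # correct opening verifies
forged = open_commitment(c, bid + 1, r)    # a different bid fails
```

    The scheme is additively homomorphic — multiplying two commitments commits to the sum of their values with the sum of the blindings — which is what range proofs like Bulletproofs exploit when proving statements about a committed bid without revealing it.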
    Message aggregation technology of runtime system for graph computing
    ZHANG Lufei, SUN Rujun, QIN Fang
    Journal of Computer Applications    2021, 41 (4): 984-989.   DOI: 10.11772/j.issn.1001-9081.2020081290
    The main communication mode of graph computing applications is spatiotemporally random point-to-point fine-grained communication. However, existing high-performance computer network systems perform poorly when dealing with a large number of fine-grained communications, which affects the overall performance. Communication optimization in the application layer can improve the performance of graph computing applications effectively, but it places a great burden on application developers. Therefore, a structure-dynamic message aggregation technique was proposed and implemented, which introduced many intermediate points into the communication path by building virtual topologies, so as to greatly improve the effect of message aggregation. By contrast, the traditional message aggregation strategy generally aggregates only at the communication source or destination, with limited aggregation opportunities. In addition, the proposed technique adapted to different hardware conditions and application features by flexibly adjusting the structure and configuration of the virtual topology. At the same time, a runtime system with message aggregation for graph computing was proposed and implemented, which allowed the runtime system to dynamically select parameters when executing iterations, so as to reduce the burden on developers. Experimental results on a system with 256 nodes show that typical graph computing applications can achieve more than 100% performance improvement after being optimized by the proposed message aggregation technique.
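    Source-side aggregation, the simplest form of the idea, can be sketched as below. The paper's contribution goes further by also aggregating at the intermediate nodes of a virtual topology, which this sketch does not model; the node name and batch size are illustrative:

```python
from collections import defaultdict

class AggregatingRouter:
    """Buffer fine-grained messages per next hop and flush them as batches.

    A minimal sketch of source-side aggregation only; intermediate-node
    aggregation over a virtual topology is not modelled here.
    """
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffers = defaultdict(list)
        self.sent_batches = []            # stand-in for actual network sends

    def send(self, next_hop, msg):
        buf = self.buffers[next_hop]
        buf.append(msg)
        if len(buf) >= self.batch_size:
            self.flush(next_hop)

    def flush(self, next_hop):
        if self.buffers[next_hop]:
            self.sent_batches.append((next_hop, list(self.buffers[next_hop])))
            self.buffers[next_hop].clear()

router = AggregatingRouter(batch_size=4)
for i in range(10):
    router.send("node7", i)
router.flush("node7")   # flush the remainder at the end of an iteration
# 10 fine-grained messages become 3 network sends instead of 10.
```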
    Beijing Opera character recognition based on attention mechanism with HyperColumn
    QIN Jun, LUO Yifan, TIE Jun, ZHENG Lu, LYU Weilong
    Journal of Computer Applications    2021, 41 (4): 1027-1034.   DOI: 10.11772/j.issn.1001-9081.2020081274
    In order to overcome the difficulty of visual feature extraction and meet the real-time recognition demand of Beijing Opera characters, a Convolutional Neural Network based on HyperColumn Attention(HCA-CNN) was proposed to extract and recognize the fine-grained features of Beijing Opera characters. The idea of HyperColumn features, originally used for image segmentation and fine-grained positioning, was applied to the attention mechanism used for key area positioning in the network. Multi-layer superposition features were formed by concatenating the features of the backbone classification network pixel by pixel through the HyperColumn set, so as to better take into account both the early shallow spatial features and the late deep category semantic features, improving the accuracy of the positioning task and the backbone network classification task. At the same time, the lightweight MobileNetV2 was adopted as the backbone network, which better met the real-time requirement of video application scenarios. In addition, the BeiJing Opera Role(BJOR) dataset was created and ablation experiments were carried out on it. Experimental results show that, compared with the traditional fine-grained Recurrent Attention Convolutional Neural Network(RA-CNN), HCA-CNN not only improves the accuracy index by 0.63 percentage points, but also reduces Memory Usage and Params by 162.84 MB and 131.5 MB respectively, and reduces the numbers of multiply-add operations(Mult-Adds) and floating-point operations(FLOPs) by 39 885×10⁶ and 51 886×10⁶ respectively. It is verified that the proposed HCA-CNN can effectively improve the accuracy and efficiency of Beijing Opera character recognition, and can meet the requirements of practical applications.
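    The hypercolumn construction — concatenating one pixel's feature vectors across layers — can be sketched as follows. The toy feature maps are illustrative; real ones would come from backbone stages and be upsampled to a common resolution first:

```python
def hypercolumn(feature_maps, y, x):
    """Concatenate the feature vectors of one pixel across all layers.

    `feature_maps` is a list of layers, each a 2-D grid of feature vectors;
    upsampling to a common spatial resolution is assumed already done.
    """
    column = []
    for fmap in feature_maps:
        column.extend(fmap[y][x])
    return column

# Two toy layers with 2- and 3-channel features at a 2x2 resolution.
shallow = [[(0.1, 0.2), (0.3, 0.4)], [(0.5, 0.6), (0.7, 0.8)]]
deep    = [[(1.0, 1.1, 1.2), (1.0, 1.1, 1.2)],
           [(1.0, 1.1, 1.2), (1.0, 1.1, 1.2)]]
col = hypercolumn([shallow, deep], 0, 1)   # -> [0.3, 0.4, 1.0, 1.1, 1.2]
```

    The resulting per-pixel column mixes shallow spatial detail with deep semantic channels, which is exactly the property the attention mechanism above exploits for key-area positioning.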
    Image super-resolution reconstruction algorithm based on Laplacian pyramid generative adversarial network
    DUAN Youxiang, ZHANG Hanxiao, SUN Qifeng, SUN Youkai
    Journal of Computer Applications    2021, 41 (4): 1020-1026.   DOI: 10.11772/j.issn.1001-9081.2020081299
    Concerning the problems of poor reconstruction performance at large scale factors and the need for separate training at each scale in current image super-resolution reconstruction algorithms, an image super-resolution reconstruction algorithm based on Laplacian pyramid Generative Adversarial Network(GAN) was proposed. The pyramid-structure generator of the proposed algorithm was used to realize multi-scale image reconstruction, reducing the difficulty of learning large scale factors through progressive up-sampling, and dense connections were used between layers to enhance feature propagation, which effectively avoided the vanishing gradient problem. In the algorithm, the Markovian discriminator was used to map the input data into a result matrix and guide the generator to pay attention to local features of the image during training, which enriched the details of the reconstructed images. Experimental results show that, when performing 2-times, 4-times and 8-times image reconstruction on Set5 and other benchmark datasets, the average Peak Signal-to-Noise Ratio(PSNR) of the proposed algorithm reaches 33.97 dB, 29.15 dB and 25.43 dB respectively, and the average Structural SIMilarity(SSIM) reaches 0.924, 0.840 and 0.667 respectively, outperforming those of other algorithms such as Super Resolution Convolutional Neural Network(SRCNN), fast and accurate image Super-Resolution with deep Laplacian pyramid Network(LapSRN) and Super-Resolution GAN(SRGAN), and the images reconstructed by the proposed algorithm retain more vivid textures and fine-grained details in subjective vision.
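    The progressive up-sampling idea can be illustrated on a 1-D signal. Nearest-neighbour upsampling stands in for learned transposed convolutions, and the high-frequency residuals — which the GAN generator would predict — are given explicitly here as toy values:

```python
def upsample2x(signal):
    """Nearest-neighbour 2x upsampling (stand-in for a learned
    transposed convolution)."""
    out = []
    for v in signal:
        out.extend([v, v])
    return out

def progressive_reconstruct(low_res, residuals):
    """Each pyramid level doubles the resolution, then adds a predicted
    high-frequency residual (Laplacian pyramid reconstruction)."""
    img = low_res
    for res in residuals:
        img = [a + b for a, b in zip(upsample2x(img), res)]
    return img

lr = [1.0, 2.0]
residuals = [[0.1, -0.1, 0.2, -0.2],   # level 1: length 2 -> 4
             [0.0] * 8]                # level 2: length 4 -> 8
sr = progressive_reconstruct(lr, residuals)
```

    Stacking more levels extends the same recurrence to larger factors, which is why a single pyramid model can emit 2x, 4x and 8x outputs without separate training.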
    Tag recommendation method combining network structure information and text content
    CHE Bingqian, ZHOU Dong
    Journal of Computer Applications    2021, 41 (4): 976-983.   DOI: 10.11772/j.issn.1001-9081.2020081275
    Recommending appropriate tags for texts is an effective way to better organize and use text content. At present, most tag recommendation methods recommend tags mainly by mining the text content. However, most data does not exist independently; for example, the co-occurrence of words between texts in a corpus can form a complex network structure. Previous studies have shown that the network structure information between texts and the text content information summarize the semantics of the same text from two different perspectives, and the information extracted from the two aspects can complement and explain each other. Based on this, a tag recommendation method was proposed to simultaneously model the network structure information and the content information of texts. Firstly, Graph Convolutional neural Network(GCN) was used to extract the network structure information between texts, then Recurrent Neural Network(RNN) was used to extract the text content information, and finally the attention mechanism was used to combine the two kinds of information for tag recommendation. Compared with baseline methods, such as the tag recommendation method based on GCN and the tag recommendation method with Topical attention-based Long Short-Term Memory(TLSTM) neural network, the proposed method has better performance. For example, on the Mathematics Stack Exchange dataset, the precision, recall and F1 of the proposed method are improved by 2.3%, 3.8% and 7.0% respectively compared with the optimal baseline method.
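    The final fusion step — weighting the structure view against the content view — can be sketched with a toy dot-product attention. The vectors and query below are illustrative stand-ins, not actual GCN/RNN embeddings:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_fuse(structure_vec, content_vec, query):
    """Weight the two views of a text by dot-product relevance to a query,
    then return their weighted sum (a minimal attention-fusion sketch)."""
    views = [structure_vec, content_vec]
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in views]
    weights = softmax(scores)
    dim = len(structure_vec)
    return [sum(w * vec[i] for w, vec in zip(weights, views))
            for i in range(dim)]

fused = attention_fuse([1.0, 0.0], [0.0, 1.0], query=[2.0, 0.0])
# The structure view dominates because it aligns with the query.
```

    In the full model the fused vector would feed a classifier over the tag vocabulary; the attention weights let either view dominate per text.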
    Attention fusion network based video super-resolution reconstruction
    BIAN Pengcheng, ZHENG Zhonglong, LI Minglu, HE Yiran, WANG Tianxiang, ZHANG Dawei, CHEN Liyuan
    Journal of Computer Applications    2021, 41 (4): 1012-1019.   DOI: 10.11772/j.issn.1001-9081.2020081292
    Video super-resolution methods based on deep learning mainly focus on the inter-frame and intra-frame spatio-temporal relationships in the video, but previous methods have many shortcomings in the feature alignment and fusion of video frames, such as inaccurate motion information estimation and insufficient feature fusion. Aiming at these problems, a video super-resolution model based on Attention Fusion Network(AFN) was constructed with the use of the back-projection principle and a combination of multiple attention mechanisms and fusion strategies. Firstly, at the feature extraction stage, in order to deal with multiple motions between the neighboring frames and the reference frame, the back-projection architecture was used to obtain the error feedback of motion information. Then, a temporal, spatial and channel attention fusion module was used to perform multi-dimensional feature mining and fusion. Finally, at the reconstruction stage, the obtained high-dimensional features were convolved to reconstruct high-resolution video frames. By learning different weights of features within and between video frames, the correlations between video frames were fully explored, and an iterative network structure was adopted to process the extracted features gradually from coarse to fine. Experimental results on two public benchmark datasets show that AFN can effectively process videos with multiple motions and occlusions, and achieves significant improvements in quantitative indicators compared to some mainstream methods. For instance, for the 4-times reconstruction task, the Peak Signal-to-Noise Ratio(PSNR) of the frames reconstructed by AFN is 13.2% higher than that of Frame Recurrent Video Super-Resolution network(FRVSR) on the Vid4 dataset and 15.3% higher than that of the Video Super-Resolution network using Dynamic Upsampling Filters(VSR-DUF) on the SPMCS dataset.
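    The back-projection principle the model borrows can be shown in its classical iterative form on a 1-D signal. Average pooling stands in for the unknown degradation operator, and in the learned model networks replace these hand-written operators:

```python
def downsample2x(signal):
    """2x average pooling as a toy degradation model."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def upsample2x(signal):
    """Nearest-neighbour 2x upsampling."""
    out = []
    for v in signal:
        out.extend([v, v])
    return out

def back_projection(lr, steps=10):
    """Refine an SR estimate so that downsampling it reproduces the
    low-resolution input (classical iterative back-projection)."""
    sr = upsample2x(lr)                       # initial coarse estimate
    for _ in range(steps):
        err = [l - d for l, d in zip(lr, downsample2x(sr))]   # LR-space error
        sr = [s + e for s, e in zip(sr, upsample2x(err))]     # feed it back
    return sr

sr = back_projection([1.0, 3.0])
```

    The "error feedback" in the abstract is this loop's LR-space residual, computed with learned up/down projections instead of fixed filters.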
    Pattern recognition of motor imagery EEG based on deep convolutional network
    HUO Shoujun, HAO Yan, SHI Huiyu, DONG Yanqing, CAO Rui
    Journal of Computer Applications    2021, 41 (4): 1042-1048.   DOI: 10.11772/j.issn.1001-9081.2020081300
    Concerning the low classification accuracy of Motor Imagery ElectroEncephaloGram(MI-EEG) signals, a new Convolutional Neural Network(CNN) model based on a deep framework was introduced. Firstly, the time-frequency information at two resolutions was obtained by using Short-Time Fourier Transform(STFT) and Continuous Wavelet Transform(CWT), combined with the channel position information, and used as the input of the CNN in the form of a three-dimensional tensor. Secondly, two network models based on different convolution strategies, namely MixedCNN and StepByStepCNN, were designed to perform feature extraction and classification on the two types of inputs. Finally, in order to solve the overfitting problem caused by insufficient training samples, the mixup data augmentation strategy was introduced. Experimental results on BCI Competition Ⅱ dataset Ⅲ show that the highest accuracy (93.57%) was achieved by training MixedCNN on the CWT samples augmented by mixup, which was 19.1%, 20.2%, 11.7% and 2.3% higher than those of four other analysis methods: Common Spatial Pattern(CSP)+Support Vector Machine(SVM), Adaptive Autoregressive model(AAR)+Linear Discriminant Analysis(LDA), Discrete Wavelet Transform(DWT)+Long Short-Term Memory(LSTM), and STFT+Stacked AutoEncoder(SAE). The proposed method can provide a reference for MI-EEG classification tasks.
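    The STFT feature extraction can be sketched with a naive windowed DFT. Real work would use an optimized FFT library (e.g. scipy), and the window/hop values here are illustrative:

```python
import cmath
import math

def stft_mag(signal, win, hop):
    """Magnitude spectrogram via a naive windowed DFT - a toy stand-in
    for STFT; one such time x frequency grid forms a channel of the
    3-D input tensor."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        spectrum = []
        for k in range(win // 2 + 1):        # keep non-negative frequencies
            coeff = sum(s * cmath.exp(-2j * cmath.pi * k * n / win)
                        for n, s in enumerate(seg))
            spectrum.append(abs(coeff))
        frames.append(spectrum)
    return frames

# A pure sine at 4 cycles per 32-sample window concentrates energy in bin 4.
sig = [math.sin(2 * math.pi * 4 * t / 32) for t in range(64)]
spec = stft_mag(sig, win=32, hop=16)
```

    Stacking such grids from STFT and CWT with channel-position maps yields the three-dimensional tensor the CNNs consume.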
    Review of privacy protection mechanisms in wireless body area network
    QIN Jing, AN Wen, JI Changqing, WANG Zumin
    Journal of Computer Applications    2021, 41 (4): 970-975.   DOI: 10.11772/j.issn.1001-9081.2020081293
    As a network structure composed of several wearable or implantable devices as well as their transmission and processing nodes, Wireless Body Area Network(WBAN) is one of the important application directions of the medical Internet of Things(IoT). Devices in the network collect physiological data from users and send them to remote medical servers via wireless technology; the health-care provider then accesses the server through the network to provide services to the wearers. However, due to the openness and mobility of the wireless network, if the information in the WBAN is stolen, forged or attacked in the channel, the wearers' privacy will be leaked and even the personal safety of users will be endangered. The research on privacy protection mechanisms in WBAN was reviewed. On the basis of analyzing the data transmission characteristics of the network, the privacy protection mechanisms based on authentication, encryption and biological signals were summarized, and the advantages and disadvantages of these mechanisms were compared, so as to provide a reference for enhancing prevention awareness and improving prevention technology in WBAN applications.
    Strategy of energy-aware virtual machine migration based on three-way decision
    YANG Ling, JIANG Chunmao
    Journal of Computer Applications    2021, 41 (4): 990-998.   DOI: 10.11772/j.issn.1001-9081.2020081294
    As an important way to reduce the energy consumption of data centers in cloud computing, virtual machine migration is widely used. By combining the trisecting-acting-outcome model of three-way decision, a Virtual Machine Migration scheduling strategy based on Three-Way Decision(TWD-VMM) was proposed. First, with data center energy consumption as the optimization target, a hierarchical threshold tree was built to search all possible threshold values and obtain the pair of thresholds with the lowest total energy consumption, thereby dividing hosts into three regions: the high-load region, the medium-load region and the low-load region. Second, different migration strategies were used for hosts with different loads. Specifically, for high-load hosts, multi-dimensional resource balance and host load reduction after pre-migration were adopted as targets; for low-load hosts, the multi-dimensional resource balance of hosts after pre-placement was mainly considered; for medium-load hosts, virtual machines migrated from other regions were accepted as long as the hosts still met the medium-load characteristics. Experiments were conducted on the CloudSim simulator, and TWD-VMM was compared with Threshold-based energy-efficient VM Scheduling in cloud datacenters(TVMS), Virtual machine migration Scheduling method optimising Energy-Efficiency of data center(EEVS) and Virtual Machine migration Scheduling to Reduce Energy consumption in datacenter(REVMS) algorithms in aspects including host load, balance of host multi-dimensional resource utilization and total data center energy consumption. The results show that the TWD-VMM algorithm effectively improves host resource utilization and balances host load, with an average energy consumption reduction of 27%.
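    The trisection step can be sketched as follows. The host names, load values and thresholds are illustrative; in the paper the threshold pair comes from the hierarchical threshold tree search rather than being fixed:

```python
def classify_hosts(hosts, low, high):
    """Trisect hosts by load into low/medium/high regions - the
    'trisecting' part of the three-way decision model."""
    regions = {"low": [], "medium": [], "high": []}
    for name, load in hosts.items():
        if load < low:
            regions["low"].append(name)
        elif load > high:
            regions["high"].append(name)
        else:
            regions["medium"].append(name)
    return regions

# Toy host loads (fractions of capacity) and illustrative thresholds.
hosts = {"h1": 0.15, "h2": 0.55, "h3": 0.92}
regions = classify_hosts(hosts, low=0.3, high=0.8)
# h1 -> low (consolidation source), h3 -> high (migration source),
# h2 -> medium (candidate migration target)
```

    The per-region migration strategies described above then act on these three sets.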
Honorary Editor-in-Chief: ZHANG Jingzhong
Editor-in-Chief: XU Zongben
Associate Editor: SHEN Hengtao XIA Zhaohui
Domestic Post Distribution Code: 62-110
Foreign Distribution Code: M4616
No. 9, 4th Section of South Renmin Road, Chengdu 610041, China
Tel: 028-85224283-803