
Table of Contents

    10 April 2021, Volume 41 Issue 4
    2020 CCF China Blockchain Conference (CCF CBCC 2020)
    Overview of blockchain consensus mechanism for internet of things
    TIAN Zhihong, ZHAO Jindong
    2021, 41(4):  917-929.  DOI: 10.11772/j.issn.1001-9081.2020111722
    With the continuous development of digital currency, blockchain technology has attracted increasing attention, and research on its key technology, the consensus mechanism, is particularly important. Applying blockchain technology to the Internet of Things (IoT) is one of the hot issues. The consensus mechanism is one of the core technologies of blockchain and strongly affects IoT in terms of degree of decentralization, transaction processing speed, transaction confirmation delay, security, and scalability. Firstly, the architectural characteristics of IoT and the lightweight problem caused by resource limitations were described, the problems faced when implementing blockchain in IoT were briefly summarized, and the demands on blockchain in IoT were analyzed with reference to the operation flow of Bitcoin. Secondly, consensus mechanisms were divided into proof, Byzantine and Directed Acyclic Graph (DAG) classes; the working principles of these classes were studied, their suitability for IoT was analyzed in terms of communication complexity, their advantages and disadvantages were summarized, and existing architectures combining consensus mechanisms with IoT were surveyed and analyzed. Finally, the problems of IoT, such as high operating cost, poor scalability and security risks, were studied in depth. The analysis shows that the Internet of Things Application (IOTA) and Byteball consensus mechanisms based on DAG technology have the advantages of fast transaction processing, good scalability and strong security under a large number of transactions, and that they are the future development directions of blockchain consensus mechanisms in the IoT field.
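    To make the proof class concrete, the sketch below shows the hash-puzzle loop at the heart of proof-of-work in the Bitcoin flow discussed above. It is a minimal illustration, not any production consensus implementation; the difficulty, block payload and encoding are arbitrary choices.
    ```python
    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
        """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("tx1;tx2;tx3", difficulty=4)
    print(nonce, digest)
    ```
    The communication cost of proof-class mechanisms is low because only the winning block is broadcast, one of the trade-offs the survey weighs against the Byzantine and DAG classes.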
    Formal verification of smart contract for access control in IoT applications
    BAO Yulong, ZHU Xueyang, ZHANG Wenhui, SUN Pengfei, ZHAO Yingqi
    2021, 41(4):  930-938.  DOI: 10.11772/j.issn.1001-9081.2020111732
    The advancement of network technologies such as Bluetooth and WiFi has promoted the development of the Internet of Things (IoT). IoT facilitates people's lives, but it also brings serious security issues. Without secure access control, illegal access to IoT devices may cause losses to users in many aspects. Traditional access control methods usually rely on a trusted central node, which is not suitable for an IoT environment with distributed nodes. Blockchain technology and smart contracts provide a more effective solution for access control in IoT applications. However, it is difficult to ensure the correctness of smart contracts used for access control in IoT applications with general testing methods. To solve this problem, a method was proposed to formally verify the correctness of smart contracts for access control using the model checking tool Verds. In the method, a state transition system was used to define the semantics of the Solidity smart contract, Computation Tree Logic (CTL) formulas were used to describe the properties to be verified, and the smart contract interaction and user behavior were modeled to form the input model of Verds together with the properties to be verified; Verds was then used to check whether these properties hold. The core of this method is the translation from a subset of Solidity to the input model of Verds. Experimental results on two smart contracts for access control of IoT resources show that the proposed method can verify typical scenarios and expected properties of access control contracts, thereby improving the reliability of smart contracts.
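    The verification idea, which encodes the contract as a state transition system and exhaustively checks a temporal property, can be pictured with a toy explicit-state check of the CTL invariant AG(access → granted). The contract model below is a hypothetical stand-in, not the paper's Solidity-to-Verds translation or the Verds input language.
    ```python
    from collections import deque

    # Toy access-control contract state: (owner_granted, user_has_access).
    # Transitions model grant/revoke/access actions.
    def successors(state):
        granted, access = state
        yield (True, access)        # owner grants
        yield (False, False)        # owner revokes (access is dropped too)
        if granted:
            yield (granted, True)   # user accesses a granted resource

    def check_invariant(init, prop):
        """Explicit-state BFS check of the CTL property AG prop."""
        seen, frontier = {init}, deque([init])
        while frontier:
            s = frontier.popleft()
            if not prop(s):
                return False, s     # counterexample state found
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return True, None

    # Safety property: access implies granted (no illegal access).
    ok, cex = check_invariant((False, False), lambda s: (not s[1]) or s[0])
    print("AG(access -> granted):", ok, cex)
    ```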
    Data trading scheme based on blockchain and trusted computing
    ZHANG Xuewang, YIN Zijie, FENG Jiaqi, YE Caijin, FU Kang
    2021, 41(4):  939-944.  DOI: 10.11772/j.issn.1001-9081.2020111723
    Aiming at the problems that data is easily copied and that data confidentiality is hard to guarantee in the current data trading process, a data trading scheme based on blockchain and trusted computing was proposed. First, the blockchain was used to record data information, trading information and data usage records, which facilitates confirming the rights of data assets and tracing data provenance. Then, trusted computing and encryption algorithms were used to ensure the security of trading data transmission. Finally, the algorithms provided by data owners and demanders were run in the trusted computing environment to complete the calculation; the results were then encrypted and returned to the demanders. In the proposed scheme, demanders can use the data for calculation without the raw data being revealed, and transmission security is guaranteed through trusted encryption.
    Digital music copyright management system based on blockchain
    ZHANG Guochao, TANG Huayun, CHEN Jianhai, SHEN Rui, HE Qinming, HUANG Butian
    2021, 41(4):  945-955.  DOI: 10.11772/j.issn.1001-9081.2020111731
    It is of great significance to apply blockchain technology to digital music copyright management in view of the difficulties of the traditional music copyright industry in copyright confirmation, infringement monitoring, right protection and evidence collection, and royalty settlement. A digital music copyright management system was designed and built on the VNT Chain blockchain platform. In the system, blockchain technology was used to provide proof of music copyright and realize evidence solidification, the Shazam algorithm was used to provide proof of originality for music copyright, and smart contracts were used to guarantee the security and reliability of transactions. The system includes six function modules: user management, copyright registration, copyright trading, infringement monitoring, evidence solidification and music ecology, covering the main parts of copyright management. According to the different needs of business data, blockchain, the InterPlanetary File System (IPFS) and MySQL were adopted as storage engines for the system. Experimental results show that copyright registration adds about 1.9 s per piece of music, and that storing the feature fingerprint data of one song on IPFS costs about 8 MB on average, which meets the expected system performance requirements.
    Kubernetes-based Fabric chaincode management and high availability technology
    LIU Hongyu, LIANG Xiubo, WU Junhan
    2021, 41(4):  956-962.  DOI: 10.11772/j.issn.1001-9081.2020111977
    The core question for a Blockchain as a Service (BaaS) platform is how to deploy the blockchain network on a cloud computing platform. Fabric deployment can be divided into static components and dynamic chaincodes according to component startup time, and chaincode deployment is the core and most complex part of Fabric cloudification. Because Fabric has no interfaces for Kubernetes, current industry solutions implement chaincode deployment through a series of auxiliary technologies, but these solutions do not bring the chaincodes into the Kubernetes management environment along with the static components. In response to the existing problems of BaaS schemes, the following work was done: 1) a comprehensive study of the underlying infrastructure, especially of highly available Kubernetes platforms in production environments; 2) the cloud deployment of Fabric on Kubernetes was designed and implemented; in particular, for the chaincode part, a brand-new container control plug-in was used to support Kubernetes at the code level and bring chaincodes into Kubernetes environment management; 3) a function computing service was used to manage Fabric chaincodes, realizing a brand-new chaincode execution mode that changes from "start-wait-call-wait" to "start-call-exit". The above work on Fabric cloud deployment, especially chaincode deployment management, has reference value for optimizing BaaS platforms based on Fabric and Kubernetes.
    Data sharing model of smart grid based on double consortium blockchains
    ZHANG Lihua, WANG Xinyi, HU Fangzhou, HUANG Yang, BAI Jiayi
    2021, 41(4):  963-969.  DOI: 10.11772/j.issn.1001-9081.2020111721
    Considering the difficulty of data sharing and the risk of privacy disclosure in blockchain-based smart grid cloud servers, a Data Sharing model based on Double Consortium Blockchains in smart grid (DSDCB) was proposed. Firstly, electricity data was stored off-chain by the InterPlanetary File System (IPFS), the IPFS file fingerprints were stored on-chain, and the electricity data was shared with the other consortium blockchain based on multi-signature notary technology. Secondly, to prevent privacy leakage, proxy re-encryption and secure multi-party computation were combined to share single-node or multi-node data securely. Finally, a fully homomorphic encryption algorithm was used to aggregate ciphertext data reasonably without decrypting the electricity data. The single-node cross-chain data sharing model of DSDCB resists 51% attacks, sybil attacks, replay attacks and man-in-the-middle attacks. It was verified that the secure multi-party cross-chain data sharing model of DSDCB guarantees data security and privacy when the number of malicious participants is less than k and the number of honest participants is more than 1. Simulation comparisons show that the computational cost of the DSDCB model is lower than those of Proxy Broadcast Re-Encryption (PBRE) and the Data Sharing scheme based on Conditional PBRE (CPBRE-DS), and that the model is more feasible than the Fully Homomorphic Non-interactive Verifiable Secret Sharing (FHNVSS) scheme.
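    As one illustration of the secure multi-party building block used here, the sketch below aggregates electricity readings with additive secret sharing, so that no single party sees another's raw value. It is a standard textbook construction under assumed toy parameters, not the paper's exact protocol.
    ```python
    import random

    PRIME = 2**61 - 1  # field modulus for the toy shares

    def share(value: int, n: int) -> list[int]:
        """Split `value` into n additive shares over Z_p."""
        shares = [random.randrange(PRIME) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares: list[int]) -> int:
        return sum(shares) % PRIME

    # Three grid nodes aggregate readings without any node seeing another's value.
    readings = [120, 340, 95]
    all_shares = [share(r, 3) for r in readings]
    # Each party sums the share column it holds; the column sums are then combined.
    column_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    print(reconstruct(column_sums) == sum(readings))  # True
    ```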
    The 35th CCF National Conference of Computer Applications (CCF NCCA 2020)
    Review of privacy protection mechanisms in wireless body area network
    QIN Jing, AN Wen, JI Changqing, WANG Zumin
    2021, 41(4):  970-975.  DOI: 10.11772/j.issn.1001-9081.2020081293
    As a network composed of several wearable or implantable devices together with their transmission and processing nodes, the Wireless Body Area Network (WBAN) is one of the important application directions of the medical Internet of Things (IoT). Devices in the network collect physiological data from users and send it to remote medical servers wirelessly; health-care providers then access the servers through the network to provide services to the wearers. However, due to the openness and mobility of wireless networks, if the information in a WBAN is stolen, forged or attacked in the channel, the wearers' privacy will be leaked and even the personal safety of users may be endangered. Research on privacy protection mechanisms in WBAN was reviewed. On the basis of analyzing the data transmission characteristics of the network, privacy protection mechanisms based on authentication, encryption and biological signals were summarized, and their advantages and disadvantages were compared, so as to provide a reference for enhancing prevention awareness and improving prevention technology in WBAN applications.
    Tag recommendation method combining network structure information and text content
    CHE Bingqian, ZHOU Dong
    2021, 41(4):  976-983.  DOI: 10.11772/j.issn.1001-9081.2020081275
    Recommending appropriate tags for texts is an effective way to better organize and use text content. At present, most tag recommendation methods recommend tags mainly by mining the text content. However, most data does not exist independently; for example, word co-occurrence between texts in a corpus forms a complex network structure. Previous studies have shown that the network structure information between texts and the text content information summarize the semantics of the same text from two different perspectives, and that the information extracted from the two aspects can complement and explain each other. Based on this, a tag recommendation method was proposed that simultaneously models the network structure information and the content information of texts. Firstly, a Graph Convolutional neural Network (GCN) was used to extract the structure information of the network between texts, then a Recurrent Neural Network (RNN) was used to extract the text content information, and finally an attention mechanism was used to combine the two kinds of information for tag recommendation. Compared with baseline methods, such as a GCN-based tag recommendation method and a Topical attention-based Long Short-Term Memory (TLSTM) neural network, the proposed method has better performance. For example, on the Mathematics Stack Exchange dataset, the precision, recall and F1 of the proposed method are improved by 2.3%, 3.8% and 7.0% respectively compared with the best baseline method.
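    The final fusion step can be pictured as learned attention weights over a structure vector and a content vector. The module below is a minimal PyTorch sketch under assumed dimensions; the class name and layer sizes are illustrative, not the paper's exact network.
    ```python
    import torch
    import torch.nn as nn

    class AttentionFusion(nn.Module):
        """Fuse a structure vector (e.g. from a GCN) and a content vector
        (e.g. from an RNN) with learned attention weights."""
        def __init__(self, dim: int):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, structure: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
            stacked = torch.stack([structure, content], dim=1)   # (B, 2, dim)
            weights = torch.softmax(self.score(stacked), dim=1)  # (B, 2, 1)
            return (weights * stacked).sum(dim=1)                # (B, dim)

    fused = AttentionFusion(128)(torch.randn(4, 128), torch.randn(4, 128))
    print(fused.shape)  # torch.Size([4, 128])
    ```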
    Message aggregation technology of runtime system for graph computing
    ZHANG Lufei, SUN Rujun, QIN Fang
    2021, 41(4):  984-989.  DOI: 10.11772/j.issn.1001-9081.2020081290
    The main communication mode of graph computing applications is spatiotemporally random point-to-point fine-grained communication. However, existing high-performance computer network systems perform poorly when dealing with a large number of fine-grained communications, which affects overall performance. Communication optimization in the application layer can improve graph computing performance effectively, but it places a great burden on application developers. Therefore, a structure-dynamic message aggregation technique was proposed and implemented, which introduces many intermediate points on the communication path by building virtual topologies, so as to greatly increase the opportunities for message aggregation. By contrast, traditional message aggregation strategies generally operate only at the communication source or destination, with limited aggregation chances. In addition, the proposed technique adapts to different hardware conditions and application features by flexibly adjusting the structure and configuration of the virtual topology. At the same time, a runtime system with message aggregation for graph computing was proposed and implemented, which allows the runtime system to dynamically select parameters during iterations, reducing the burden on developers. Experimental results on a system with 256 nodes show that typical graph computing applications achieve more than 100% performance improvement after being optimized by the proposed message aggregation technique.
    Strategy of energy-aware virtual machine migration based on three-way decision
    YANG Ling, JIANG Chunmao
    2021, 41(4):  990-998.  DOI: 10.11772/j.issn.1001-9081.2020081294
    As an important way to reduce the energy consumption of data centers in cloud computing, virtual machine migration is widely used. By combining the trisecting-acting-outcome model of three-way decision, a Virtual Machine Migration scheduling strategy based on Three-Way Decision (TWD-VMM) was proposed. First, a hierarchical threshold tree was built to search all possible thresholds and obtain the threshold pair with the lowest total energy consumption, taking data center energy consumption as the optimization target. Three regions were thus created: a high-load region, a medium-load region and a low-load region. Second, different migration strategies were used for hosts with different loads. Specifically, for high-load hosts, multidimensional resource balance and host load reduction after pre-migration were adopted as targets; for low-load hosts, multidimensional resource balance after pre-placement was mainly considered; medium-load hosts would accept virtual machines migrated from other regions as long as they still exhibited medium-load characteristics. The experiments were conducted on the CloudSim simulator, and TWD-VMM was compared with the Threshold-based energy-efficient VM Scheduling in cloud datacenters (TVMS), the Virtual machine migration Scheduling method optimising Energy-Efficiency of data centers (EEVS) and the Virtual Machine migration Scheduling to Reduce Energy consumption in datacenters (REVMS) algorithms in terms of host load, balance of host multidimensional resource utilization and total data center energy consumption. The results show that the TWD-VMM algorithm effectively improves host resource utilization and balances host loads, with an average energy consumption reduction of 27%.
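    The trisecting step can be pictured as partitioning hosts by a threshold pair (α, β). A minimal sketch with made-up loads follows; in the paper the pair is found by the hierarchical threshold tree, whereas here it is given as constants.
    ```python
    def trisect(hosts: dict, alpha: float, beta: float):
        """Partition hosts into high/medium/low load regions by a threshold pair (alpha > beta)."""
        high = {h for h, load in hosts.items() if load >= alpha}
        low = {h for h, load in hosts.items() if load <= beta}
        medium = set(hosts) - high - low
        return high, medium, low

    hosts = {"h1": 0.92, "h2": 0.55, "h3": 0.18, "h4": 0.74}
    print(trisect(hosts, alpha=0.8, beta=0.2))
    # ({'h1'}, {'h2', 'h4'}, {'h3'})
    ```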
    Sealed-bid auction scheme based on blockchain
    LI Bei, ZHANG Wenyin, WANG Jiuru, ZHAO Wei, WANG Haifeng
    2021, 41(4):  999-1004.  DOI: 10.11772/j.issn.1001-9081.2020081329
    With the rapid development of Internet technology, many traditional auctions are gradually being replaced by electronic auctions, and the security and privacy protection problems in them draw more and more concern. Concerning the problems in current electronic bidding and auction systems, such as the risk of bidder privacy being leaked, the expensive cost of third-party auction centers, and collusion between a third-party auction center and a bidder, a sealed-bid auction scheme based on blockchain smart contract technology was proposed. In the scheme, an auction environment without a third party was constructed by making full use of blockchain features such as decentralization, tamper-proofing and trustworthiness, and the security deposit strategy of the blockchain was used to restrict the behaviors of bidders, which improved the security of the electronic sealed-bid auction. At the same time, Pedersen commitments were used to protect auction prices from being leaked, and the Bulletproofs zero-knowledge proof protocol was used to verify the correctness of the winning bid price. Security analysis and experimental results show that the proposed auction scheme meets the security requirements, with the time consumption of every stage within an acceptable range, so as to meet daily auction requirements.
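    A Pedersen commitment binds a bid while hiding it until the opening phase. The sketch below uses a deliberately tiny toy group, with parameters chosen for readability rather than security, and is not the group or curve the paper's contracts would use.
    ```python
    import secrets

    # Toy group: p = 2q + 1 with p, q prime; g = 4 generates the order-q
    # subgroup of quadratic residues. In a real deployment h's discrete log
    # with respect to g must be unknown to everyone.
    p, q = 2579, 1289
    g = 4
    h = pow(g, 77, p)

    def commit(m: int, r: int) -> int:
        """Pedersen commitment C = g^m * h^r mod p."""
        return (pow(g, m, p) * pow(h, r, p)) % p

    m = 1000                   # the sealed bid
    r = secrets.randbelow(q)   # blinding factor kept secret until opening
    c = commit(m, r)
    # Opening phase: the bidder reveals (m, r); anyone re-checks the commitment.
    print(commit(m, r) == c)
    ```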
    Construction and correlation analysis of national food safety standard graph
    QIN Li, HAO Zhigang, LI Guoliang
    2021, 41(4):  1005-1011.  DOI: 10.11772/j.issn.1001-9081.2020081311
    National Food Safety Standards (NFSS) are not only the operating specifications of food producers but also the law enforcement criteria of food safety supervision. However, NFSSs are numerous, cover a wide range of contents and have complicated inter-reference relationships. To systematically study the contents and structures of NFSSs, it is necessary to extract the knowledge in them and mine their reference relationships. First, the contents of the standard files and the reference relationships between them were extracted as knowledge triplets using Knowledge Graph (KG) technology, and the triplets were used to construct the NFSS knowledge graph. Then, this knowledge graph was linked to a food production process ontology built manually from Hazard Analysis Critical Control Point (HACCP) standards, so that food safety standards and the related food production processes can reference each other. At the same time, the Louvain community discovery algorithm was used to analyze the standard reference network in the knowledge graph, obtaining the highly cited standards and their types in NFSSs. Finally, a question answering system was built using gStore's Application Programming Interface (API) and Django, which realizes knowledge retrieval and reasoning based on natural language and can find the high-impact NFSSs in the graph under specified requirements.
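    The reference-network analysis can be sketched with networkx: build a directed citation graph from reference triplets, rank standards by in-degree, and run Louvain community detection (available as louvain_communities in networkx >= 2.8). The GB standard numbers below are placeholders, not extracted data.
    ```python
    import networkx as nx
    from networkx.algorithms.community import louvain_communities

    # Hypothetical (citing, "references", cited) triplets from the knowledge graph.
    triples = [
        ("GB 2760", "references", "GB 2761"),
        ("GB 2762", "references", "GB 2761"),
        ("GB 2760", "references", "GB 5009.1"),
        ("GB 2763", "references", "GB 5009.1"),
    ]

    g = nx.DiGraph((s, o) for s, _, o in triples)
    # Highly cited standards = highest in-degree in the reference network.
    print(sorted(g.in_degree(), key=lambda kv: kv[1], reverse=True))
    # Community structure of the (undirected) reference network.
    print(louvain_communities(g.to_undirected(), seed=1))
    ```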
    Attention fusion network based video super-resolution reconstruction
    BIAN Pengcheng, ZHENG Zhonglong, LI Minglu, HE Yiran, WANG Tianxiang, ZHANG Dawei, CHEN Liyuan
    2021, 41(4):  1012-1019.  DOI: 10.11772/j.issn.1001-9081.2020081292
    Video super-resolution methods based on deep learning mainly focus on the inter-frame and intra-frame spatio-temporal relationships in a video, but previous methods have many shortcomings in the feature alignment and fusion of video frames, such as inaccurate motion information estimation and insufficient feature fusion. Aiming at these problems, a video super-resolution model based on an Attention Fusion Network (AFN) was constructed using the back-projection principle combined with multiple attention mechanisms and fusion strategies. Firstly, at the feature extraction stage, in order to deal with the multiple motions between neighboring frames and the reference frame, the back-projection architecture was used to obtain error feedback on the motion information. Then, a temporal, spatial and channel attention fusion module was used to perform multi-dimensional feature mining and fusion. Finally, at the reconstruction stage, the obtained high-dimensional features were convolved to reconstruct high-resolution video frames. By learning different weights of features within and between video frames, the correlations between video frames were fully explored, and an iterative network structure was adopted to process the extracted features gradually from coarse to fine. Experimental results on two public benchmark datasets show that AFN can effectively process videos with multiple motions and occlusions, and achieves significant improvements in quantitative indicators compared with some mainstream methods. For instance, for the 4-times reconstruction task, the Peak Signal-to-Noise Ratio (PSNR) of frames reconstructed by AFN is 13.2% higher than that of the Frame Recurrent Video Super-Resolution network (FRVSR) on the Vid4 dataset and 15.3% higher than that of the Video Super-Resolution network using Dynamic Upsampling Filters (VSR-DUF) on the SPMCS dataset.
    Image super-resolution reconstruction algorithm based on Laplacian pyramid generative adversarial network
    DUAN Youxiang, ZHANG Hanxiao, SUN Qifeng, SUN Youkai
    2021, 41(4):  1020-1026.  DOI: 10.11772/j.issn.1001-9081.2020081299
    Concerning the poor reconstruction performance at large scale factors and the need to train separately for each scale in current image super-resolution reconstruction algorithms, an image super-resolution reconstruction algorithm based on a Laplacian pyramid Generative Adversarial Network (GAN) was proposed. The pyramid-structured generator of the proposed algorithm realizes multi-scale image reconstruction, reducing the difficulty of learning large scale factors through progressive up-sampling, and dense connections between layers enhance feature propagation, effectively avoiding the vanishing gradient problem. In the algorithm, a Markovian discriminator was used to map the input data into a result matrix, guiding the generator to pay attention to the local features of the image during training, which enriches the details of the reconstructed images. Experimental results show that, for 2-times, 4-times and 8-times image reconstruction on Set5 and other benchmark datasets, the average Peak Signal-to-Noise Ratio (PSNR) of the proposed algorithm reaches 33.97 dB, 29.15 dB and 25.43 dB respectively, and the average Structural SIMilarity (SSIM) reaches 0.924, 0.840 and 0.667 respectively, outperforming algorithms such as the Super-Resolution Convolutional Neural Network (SRCNN), the deep Laplacian pyramid Network for fast and accurate image Super-Resolution (LapSRN) and the Super-Resolution GAN (SRGAN); moreover, the images reconstructed by the proposed algorithm retain more vivid textures and fine-grained details in subjective vision.
    Beijing Opera character recognition based on attention mechanism with HyperColumn
    QIN Jun, LUO Yifan, TIE Jun, ZHENG Lu, LYU Weilong
    2021, 41(4):  1027-1034.  DOI: 10.11772/j.issn.1001-9081.2020081274
    In order to overcome the difficulty of visual feature extraction and meet the demand for real-time recognition of Beijing Opera characters, a Convolutional Neural Network based on HyperColumn Attention (HCA-CNN) was proposed to extract and recognize the fine-grained features of Beijing Opera characters. The idea of hypercolumn features, originally used for image segmentation and fine-grained positioning, was applied to the attention mechanism used for key area positioning in the network. Multi-layer stacked features were formed by concatenating the backbone classification network's feature maps pixel by pixel through hypercolumn sets, so as to better take into account both early shallow spatial features and late deep category-semantic features, improving the accuracy of the positioning task and the backbone classification task. At the same time, the lightweight MobileNetV2 was adopted as the backbone network, which better meets the real-time requirement of video application scenarios. In addition, the BeiJing Opera Role (BJOR) dataset was created and ablation experiments were carried out on it. Experimental results show that, compared with the traditional fine-grained Recurrent Attention Convolutional Neural Network (RA-CNN), HCA-CNN not only improves the accuracy index by 0.63 percentage points, but also reduces the memory usage and parameters by 162.84 MB and 131.5 MB respectively, and reduces the multiply-add operations (Mult-Adds) and floating-point operations (FLOPs) by 39 885×10^6 and 51 886×10^6 respectively. This verifies that the proposed HCA-CNN can effectively improve the accuracy and efficiency of Beijing Opera character recognition and meets the requirements of practical applications.
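    The hypercolumn construction itself is simple: upsample feature maps from different backbone depths to a common resolution and concatenate them along channels, so each pixel gets one stacked descriptor. A minimal PyTorch sketch follows; the feature shapes are assumed values for a MobileNetV2-like backbone, not measurements from the paper.
    ```python
    import torch
    import torch.nn.functional as F

    def hypercolumn(features: list[torch.Tensor], size: tuple[int, int]) -> torch.Tensor:
        """Upsample each backbone feature map to a common resolution and
        concatenate along channels, giving one hypercolumn vector per pixel."""
        aligned = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                   for f in features]
        return torch.cat(aligned, dim=1)

    # Assumed shallow and deep maps from a MobileNetV2-style backbone:
    shallow = torch.randn(1, 24, 56, 56)
    deep = torch.randn(1, 320, 7, 7)
    print(hypercolumn([shallow, deep], size=(56, 56)).shape)  # torch.Size([1, 344, 56, 56])
    ```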
    Visibility forecast model based on LightGBM algorithm
    YU Dongchang, ZHAO Wenfang, NIE Kai, ZHANG Ge
    2021, 41(4):  1035-1041.  DOI: 10.11772/j.issn.1001-9081.2020081589
    In order to improve the accuracy of visibility forecasts, especially low-visibility forecasts, an ensemble learning model for visibility forecasting based on random forest and LightGBM was proposed. Firstly, based on the meteorological forecast data of a numerical modeling system, combined with meteorological observation data and PM2.5 concentration observations, the random forest method was used to construct the feature vectors. Secondly, for missing data with different time spans, three missing-value processing methods were designed to replace the missing values, creating a data sample set with good continuity for training and testing. Finally, a visibility forecast model based on LightGBM was established, and its parameters were optimized by grid search. The proposed model was compared with Support Vector Machine (SVM), Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) models. Experimental results show that, for different levels of visibility, the proposed model obtains the highest Threat Score (TS); when the visibility is less than 2 km, the average correlation coefficient between the visibility values predicted by the model at observation stations and the observed values is 0.75, and the average mean square error between them is 6.49. It can be seen that the forecast model based on LightGBM can effectively improve the accuracy of visibility forecasts.
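    The LightGBM-plus-grid-search step can be sketched with the standard scikit-learn interface. The features and targets below are synthetic stand-ins for the forecast/observation/PM2.5 features, and the parameter grid is an arbitrary example, not the paper's search space.
    ```python
    import numpy as np
    import lightgbm as lgb
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))       # stand-in feature vectors
    y = rng.gamma(2.0, 3.0, size=500)   # stand-in visibility values (km)

    search = GridSearchCV(
        lgb.LGBMRegressor(objective="regression"),
        param_grid={"num_leaves": [31, 63], "learning_rate": [0.05, 0.1]},
        cv=3, scoring="neg_mean_squared_error",
    )
    search.fit(X, y)
    print(search.best_params_)
    ```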
    Pattern recognition of motor imagery EEG based on deep convolutional network
    HUO Shoujun, HAO Yan, SHI Huiyu, DONG Yanqing, CAO Rui
    2021, 41(4):  1042-1048.  DOI: 10.11772/j.issn.1001-9081.2020081300
    Concerning the low classification accuracy of Motor Imagery ElectroEncephaloGram (MI-EEG) signals, a new Convolutional Neural Network (CNN) model based on a deep framework was introduced. Firstly, time-frequency information at two resolutions was obtained using the Short-Time Fourier Transform (STFT) and the Continuous Wavelet Transform (CWT). Then, it was combined with the channel position information and fed to the CNN as three-dimensional tensors. Secondly, two network models based on different convolution strategies, MixedCNN and StepByStepCNN, were designed to perform feature extraction and classification on the two types of inputs. Finally, in order to solve the overfitting problem caused by insufficient training samples, the mixup data augmentation strategy was introduced. Experimental results on BCI Competition Ⅱ dataset Ⅲ showed that training MixedCNN on the CWT samples augmented by mixup achieved the highest accuracy (93.57%), which was 19.1%, 20.2%, 11.7% and 2.3% higher than those of the four other analysis methods: Common Spatial Pattern (CSP) + Support Vector Machine (SVM), Adaptive AutoRegressive model (AAR) + Linear Discriminant Analysis (LDA), Discrete Wavelet Transform (DWT) + Long Short-Term Memory (LSTM), and STFT + Stacked AutoEncoder (SAE). The proposed method can provide a reference for MI-EEG classification tasks.
    Few-shot segmentation method for multi-modal magnetic resonance images of brain tumor
    DONG Yang, PAN Haiwei, CUI Qianna, BIAN Xiaofei, TENG Teng, WANG Bangju
    2021, 41(4):  1049-1054.  DOI: 10.11772/j.issn.1001-9081.2020081388
    Brain tumor segmentation in Magnetic Resonance Imaging (MRI) is difficult because of multi-modality, lack of training data, class imbalance and large differences between private databases. In order to solve these problems, the few-shot segmentation method was introduced, and a Prototype network based on U-net (PU-net) was proposed to segment brain tumor Magnetic Resonance (MR) images. First, the U-net structure was modified to extract the features of various tumors, which were used to calculate the prototypes. Then, on the basis of the prototype network, the prototypes were used to classify spatial locations pixel by pixel, so as to obtain the probability maps and segmentation results of the tumor regions. Aiming at the class imbalance problem, an adaptively weighted cross-entropy loss function was used to reduce the influence of the background class on the loss calculation. Finally, a prototype verification mechanism was added: the probability maps obtained by segmentation were fused with the query image to verify the prototypes. The proposed method was tested on the public dataset BraTS2018, obtaining an average Dice coefficient of 0.654, a positive prediction rate of 0.662, a sensitivity of 0.687, a Hausdorff distance of 3.858, and a mean Intersection Over Union (mIOU) of 61.4%. Compared with the Prototype Alignment Network (PANet) and the Attention-based Multi-Context Guiding network (A-MCG), all indicators of the proposed method were improved. The results show that introducing the few-shot segmentation method has a good effect on brain tumor MR image segmentation, that the adaptively weighted cross-entropy loss function is also helpful, and that the method can play an effective auxiliary role in the diagnosis and treatment of brain tumors.
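    The prototype mechanism can be sketched in two steps: compute a class prototype by masked average pooling over support features, then classify each query pixel by cosine similarity to the prototypes. This is a common prototype-network formulation under assumed shapes, not necessarily PU-net's exact computation.
    ```python
    import torch
    import torch.nn.functional as F

    def masked_average_prototype(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        """Prototype = average of support features over the labeled region.
        feat: (C, H, W) feature map; mask: (H, W) binary mask."""
        num = (feat * mask.unsqueeze(0)).sum(dim=(1, 2))
        return num / mask.sum().clamp(min=1)

    def segment_by_prototypes(feat: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        """Label every pixel by cosine similarity to each class prototype.
        feat: (C, H, W); prototypes: (K, C) -> (H, W) label map."""
        f = F.normalize(feat.flatten(1).T, dim=1)   # (H*W, C)
        p = F.normalize(prototypes, dim=1)          # (K, C)
        sims = f @ p.T                              # (H*W, K)
        return sims.argmax(dim=1).view(feat.shape[1:])

    feat = torch.randn(64, 32, 32)
    mask = (torch.rand(32, 32) > 0.7).float()
    protos = torch.stack([masked_average_prototype(feat, mask),
                          masked_average_prototype(feat, 1 - mask)])
    print(segment_by_prototypes(feat, protos).shape)  # torch.Size([32, 32])
    ```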
    Artificial intelligence
    Overview of information extraction of free-text electronic medical records
    CUI Bowen, JIN Tao, WANG Jianmin
    2021, 41(4):  1055-1063.  DOI: 10.11772/j.issn.1001-9081.2020060796
    Information extraction technology can extract the key information from free-text electronic medical records, helping hospitals with information management and subsequent information analysis. The main process of free-text electronic medical record information extraction was briefly introduced; recent research results on single extraction and joint extraction methods for the three most important types of information, namely named entities, entity assertions and entity relations, were studied; and the methods, datasets and final effects of these results were compared and summarized. In addition, the features, advantages and disadvantages of several popular new methods were analyzed, the commonly used datasets in the field of free-text electronic medical record information extraction were summarized, and the current status and research directions of related fields in China were analyzed.
    Auto-encoder based multi-view attributed network representation learning model
    FAN Wei, WANG Huimin, XING Yan
    2021, 41(4):  1064-1070.  DOI: 10.11772/j.issn.1001-9081.2020061006
    Most traditional network representation learning methods cannot consider both the rich structure information and the attribute information in a network, resulting in poor performance of subsequent tasks such as classification and clustering. In order to solve this problem, an Auto-Encoder based Multi-View Attributed Network Representation learning model (AE-MVANR) was proposed. Firstly, the topological structure information of the network was transformed into a Topological Structure View (TSV), and the co-occurrence frequencies of the same attributes between nodes were calculated to construct an Attributed Structure View (ASV). Then, a random walk algorithm was used to obtain a series of node sequences on the two views separately. Finally, by feeding all the generated sequences into an auto-encoder model for training, node representation vectors that integrate structure and attribute information were obtained. Extensive classification and clustering experiments on several real-world datasets demonstrate that AE-MVANR outperforms widely used network representation learning methods based solely on structure information as well as those based on both network structure and node attribute information. Specifically, its maximum increase in classification accuracy is 43.75%, and for clustering, its maximum increase in Normalized Mutual Information (NMI) is 137.95%, its maximum increase in Silhouette Coefficient is 1 314.63% and its maximum decrease in Davies-Bouldin Index (DBI) is 45.99%.
    Weight allocation and case base maintenance method of case-based reasoning classifier
    YAN Aijun, WEI Zhiyuan
    2021, 41(4):  1071-1077.  DOI: 10.11772/j.issn.1001-9081.2020071016
    As feature weight allocation and case base maintenance have an important influence on the performance of Case-Based Reasoning (CBR) classifiers, a CBR algorithm model named Ant lion and Expectation maximization of Gaussian mixture model CBR (AGECBR) was proposed, in which the Ant Lion Optimizer (ALO) was used to allocate feature weights and the Expectation Maximization algorithm of the Gaussian Mixture Model (GMMEM) was used for case base maintenance. Firstly, the ALO was used to allocate the feature weights; in this process, the classification accuracy of CBR was used as the fitness function of the ALO to iteratively optimize the feature weights, so as to achieve their optimal allocation. Secondly, the expectation maximization algorithm of the Gaussian mixture model was used to perform clustering analysis on each case in the case base, and the noise cases and redundant cases were deleted, so as to realize case base maintenance. Experiments were carried out on UCI standard datasets, in which AGECBR achieved an average classification accuracy 3.83-5.44 percentage points higher than Back Propagation (BP), k-Nearest Neighbor (kNN) and other classification algorithms. Experimental results show that the proposed method can effectively improve the accuracy of CBR classification.
    Topic-expanded emotional conversation generation based on attention mechanism
    YANG Fengrui, HUO Na, ZHANG Xuhong, WEI Wei
    2021, 41(4):  1078-1083.  DOI: 10.11772/j.issn.1001-9081.2020071063
    More and more studies focus on emotional conversation generation. However, existing studies tend to focus only on emotional factors and to ignore the relevance and diversity of topics in dialogues, as well as the emotional tendencies closely related to those topics, which may lead to a decline in the quality of generated responses. Therefore, a topic-expanded emotional conversation generation model integrating topic information and emotional factors was proposed. Firstly, in this model, the conversation context was globally encoded, a topic model was introduced to obtain the global topic words, and an external affective dictionary was used to obtain the global affective words. Secondly, in a fusion module, the topic words were expanded by semantic similarity and topic-related affective words were extracted by dependency syntax analysis. Finally, the context, topic words and affective words were input into a decoder based on the attention mechanism, prompting the decoder to generate topic-related emotional responses. Experimental results show that the model can generate rich and emotion-related responses. Compared with the Topic-Enhanced Emotional Conversation Generation (TE-ECG) model, the proposed model achieves an average increase of 16.3% and 15.4% in unigram diversity (distinct-1) and bigram diversity (distinct-2) respectively; compared with Seq2SeqA (Sequence to Sequence model with Attention), it achieves an average increase of 26.7% and 28.7% in distinct-1 and distinct-2 respectively.
    β-distribution reduction based on discernibility matrix in interval-valued decision systems
    LI Leitao, ZHANG Nan, TONG Xiangrong, YUE Xiaodong
    2021, 41(4):  1084-1092.  DOI: 10.11772/j.issn.1001-9081.2020040563
    At present, the scale of interval-valued data is getting larger and larger. When classic attribute reduction methods are used to process such data, the data must first be preprocessed, which leads to the loss of original information. To solve this problem, a β-distribution reduction algorithm for interval-valued decision systems was proposed. Firstly, the concept and the reduction target of the β-distribution in interval-valued decision systems were given, and the related theories were proved. Then, the discernibility matrix and discernibility function of β-distribution reduction were constructed for the above reduction target, and the β-distribution reduction algorithm for interval-valued decision systems was proposed. Finally, 14 UCI datasets were selected for experimental verification. On the Statlog dataset, when the similarity threshold is 0.6 and the number of objects is 100, 200, 400, 600 and 846 respectively, the average reduction length of the β-distribution reduction algorithm is 1.6, 2.2, 1.4, 2.4 and 2.6 respectively, that of the Distribution Reduction Algorithm based on Discernibility Matrix (DRADM) is 2.0, 3.0, 3.0, 4.0 and 4.0 respectively, and that of the Maximum Distribution Reduction Algorithm based on Discernibility Matrix (MDRADM) is 2.0, 3.0, 3.0, 4.0 and 3.0 respectively. The effectiveness of the proposed β-distribution reduction algorithm is verified by these experimental results.
    Robust multi-view clustering algorithm based on adaptive neighborhood
    LI Xingfeng, HUANG Yuqing, REN Zhenwen, LI Yihong
    2021, 41(4):  1093-1099.  DOI: 10.11772/j.issn.1001-9081.2020060828
    Since existing adaptive neighborhood based multi-view clustering algorithms do not consider noise and the loss of consensus graph information, a Robust Multi-View Graph Clustering (RMVGC) algorithm based on adaptive neighborhood was proposed. Firstly, to avoid the influence of noise and outliers, the Robust Principal Component Analysis (RPCA) model was used to learn multiple clean low-rank representations from the original data. Secondly, adaptive neighborhood learning was employed to directly fuse the multiple clean low-rank representations into a clean consensus affinity graph, thus reducing the information loss in the graph fusion process. Experimental results demonstrate that the Normalized Mutual Information (NMI) of the proposed RMVGC algorithm is improved by 5.2, 1.36, 27.2, 4.66 and 5.85 percentage points, respectively, compared with current popular multi-view clustering algorithms on the MRSCV1, BBCSport, COIL20, ORL and UCI digits datasets. Meanwhile, the proposed algorithm maintains the local structure of data, enhances robustness against noise in the original data and improves the quality of the affinity graph, and therefore achieves great clustering performance on multi-view datasets.
    Accurate object tracking algorithm based on distance weighting overlap prediction and ellipse fitting optimization
    WANG Ning, SONG Huihui, ZHANG Kaihua
    2021, 41(4):  1100-1105.  DOI: 10.11772/j.issn.1001-9081.2020060869
    In order to solve problems of Discriminative Correlation Filter (DCF) tracking algorithms such as model drift, rough scale estimation and tracking failure when the tracked object undergoes rotation or non-rigid deformation, an accurate object tracking algorithm based on Distance-Weighted Overlap Prediction and Ellipse Fitting Optimization (DWOP-EFO) was proposed. Firstly, both the overlap and the center distance between bounding boxes were used to evaluate dynamic anchor boxes, which narrows the spatial distance between the prediction result and the object region and eases the model drift problem. Secondly, in order to further improve tracking accuracy, a lightweight object segmentation network was applied to segment the object from the background, and an ellipse fitting algorithm was applied to optimize the segmentation contour and output a stable rotated bounding box, achieving accurate estimation of the object scale. Finally, a scale-confidence optimization strategy was used to gate the output so that only high-confidence scale results are used. The proposed algorithm can alleviate the model drift problem, enhance the robustness of the tracker and improve its accuracy. Experiments were conducted on two widely used evaluation datasets, the Visual Object Tracking challenge (VOT2018) and the Object Tracking Benchmark (OTB100). Experimental results demonstrate that the proposed algorithm improves the Expected Average Overlap (EAO) index by 2.2 percentage points compared with Accurate Tracking by Overlap Maximization (ATOM) and by 1.9 percentage points compared with Learning Discriminative Model Prediction for tracking (DiMP). Meanwhile, on OTB100, the proposed algorithm outperforms ATOM by 1.3 percentage points on the success rate index and performs particularly well on the non-rigid deformation attribute. The proposed algorithm runs at over 25 frame/s on average on the evaluation datasets, realizing real-time tracking.
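    The ellipse-fitting step can be sketched with OpenCV: fit an ellipse to the largest contour of a segmentation mask and read off a rotated bounding box. The toy mask below stands in for the segmentation network's output.
    ```python
    import cv2
    import numpy as np

    # Toy binary segmentation mask of the tracked object.
    mask = np.zeros((200, 200), dtype=np.uint8)
    cv2.ellipse(mask, ((100, 100), (120, 60), 30), 255, thickness=-1)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    # Fit an ellipse; its center/axes/angle define a rotated bounding box.
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    box = cv2.boxPoints(((cx, cy), (major, minor), angle))  # 4 corners of the rotated box
    print(np.round(box, 1))
    ```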
    Data science and technology
    Survey on online hashing algorithm
    GUO Yicun, CHEN Huahui
    2021, 41(4):  1106-1112.  DOI: 10.11772/j.issn.1001-9081.2020071047
    In current large-scale data retrieval tasks, learning to hash methods can learn compact binary codes, which save storage space and allow fast similarity computation in Hamming space. Therefore, for approximate nearest neighbor search, hashing methods are often used to speed up nearest neighbor retrieval. In most current hashing methods, offline learning models are trained in batches, which cannot adapt to the data changes that appear in large-scale streaming data environments, resulting in reduced retrieval efficiency. Online hashing methods therefore learn adaptive hash functions, continuing to learn as data arrives so that they can be applied to similarity retrieval in real time. Firstly, the basic principles of learning to hash and the inherent requirements for realizing online hashing were explained. Secondly, the different learning methods of online hashing were introduced from perspectives such as the reading method of streaming data, the learning mode and the model update method under online conditions. Thirdly, online learning algorithms were further divided into six categories: those based on passive-aggressive algorithms, matrix factorization, unsupervised clustering, similarity supervision, mutual information measurement and codebook supervision; the advantages, disadvantages and characteristics of these algorithms were analyzed. Finally, the development directions of online hashing were summarized and discussed.
    Reliability analysis models for replication-based storage systems with proactive fault tolerance
    LI Jing, LUO Jinfei, LI Bingchao
    2021, 41(4):  1113-1121.  DOI: 10.11772/j.issn.1001-9081.2020071067
    The proactive fault tolerance mechanism, which predicts disk failures and prompts the system to migrate and back up endangered data in advance, can be used to enhance storage system reliability. Since existing research cannot accurately evaluate the reliability of replication-based storage systems with proactive fault tolerance, several state transition models were proposed for such systems. The models were then implemented based on Monte Carlo simulation to simulate the running of replication-based storage systems with proactive fault tolerance, and the expected number of data-loss events during a given period was counted. The Weibull distribution was used to model the time distributions of device failure and failure repair events, and the impacts of the proactive fault tolerance mechanism, node failures, node failure repairs, disk failures and disk failure repairs on system reliability were evaluated quantitatively. Experimental results show that when the accuracy of the prediction model reaches 50%, system reliability can be improved by 1-3 times, and that 3-way replication systems are more sensitive to system parameters than 2-way replication systems. With the proposed models, system administrators can easily assess system reliability under different fault tolerance schemes and system parameters, and thus build storage systems with high reliability and high availability.
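    The Monte Carlo idea can be sketched with a deliberately crude 2-way replication stand-in: sample Weibull lifetimes for two replica disks and count runs where the second failure lands inside the first one's repair window. The parameters are arbitrary toy values, and a faithful model would also track node failures, repairs and the failure predictor, as the paper's models do.
    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def p_double_failure(mission_h, shape, scale, repair_h, n_runs=200_000):
        """Crude Monte Carlo stand-in for data loss in a 2-way replication
        system: both replica disks fail within the mission period, the second
        inside the first's repair window. Weibull-distributed lifetimes."""
        t = scale * rng.weibull(shape, size=(n_runs, 2))
        both_fail = (t < mission_h).all(axis=1)
        overlap = np.abs(t[:, 0] - t[:, 1]) < repair_h
        return (both_fail & overlap).mean()

    print(p_double_failure(mission_h=8760, shape=1.2, scale=5e3, repair_h=200))
    ```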
    Improved wavelet clustering algorithm based on peak grid
    LONG Chaoqi, JIANG Yu, XIE Yu
    2021, 41(4):  1122-1127.  DOI: 10.11772/j.issn.1001-9081.2020071042
    Aiming at the difference in clustering effects of the wavelet clustering algorithm under different grid division scales, an improved method based on peak grids was proposed. The algorithm mainly improves the detection of connected regions in wavelet clustering. First, the spatial grids after the wavelet transform were sorted according to their grid values; then, breadth-first search was used to traverse each spatial grid to detect the peak connected regions in the transformed data; finally, the connected regions were labeled and mapped back to the original data space to obtain the clustering result. Experimental results on 8 synthetic datasets (4 convex and 4 non-convex) and 2 real datasets from the UCI repository show that the improved algorithm performs well at low grid division scales; compared with the original wavelet clustering algorithm, it reduces the required grid division scale by 25% to 60% and the clustering time by 14% while achieving the same clustering effect.
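    The connected-region detection step can be sketched as a breadth-first search over grid cells whose value exceeds a threshold; the grid values and threshold below are made-up, and the wavelet transform producing them is omitted.
    ```python
    from collections import deque

    def peak_connected_regions(grid, threshold):
        """Label connected regions of cells whose value exceeds `threshold`,
        using breadth-first search over 4-neighbourhoods."""
        rows, cols = len(grid), len(grid[0])
        labels = [[0] * cols for _ in range(rows)]
        region = 0
        for i in range(rows):
            for j in range(cols):
                if grid[i][j] > threshold and labels[i][j] == 0:
                    region += 1
                    queue = deque([(i, j)])
                    labels[i][j] = region
                    while queue:
                        x, y = queue.popleft()
                        for u, v in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                            if (0 <= u < rows and 0 <= v < cols
                                    and grid[u][v] > threshold and labels[u][v] == 0):
                                labels[u][v] = region
                                queue.append((u, v))
        return region, labels

    grid = [[0, 3, 3, 0],
            [0, 3, 0, 0],
            [0, 0, 0, 4],
            [0, 0, 4, 4]]
    print(peak_connected_regions(grid, threshold=1))  # 2 regions
    ```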
    Cyber security
    Internet rumor propagation model considering non-supportive comments
    LI Yan, CHEN Qiaoping
    2021, 41(4):  1128-1135.  DOI: 10.11772/j.issn.1001-9081.2020071135
    In existing rumor propagation models, the impact of non-supportive comments on internet rumor propagation has not been analyzed in detail. Therefore, an SIICR1R2 (Susceptible-Infected-Infected with non-supportive comment-Removed1-Removed2) internet rumor propagation model was proposed by introducing rumor spreaders who attach non-supportive comments. Firstly, a steady-state analysis of the model was performed to prove the stability of the rumor-free equilibrium and the rumor propagation equilibrium. Secondly, the theoretical results were verified by numerical simulation, and the impacts of the non-supportive comment probability, the recovery probability, the propagation probability and the persuasiveness of non-supportive comments on internet rumor propagation were analyzed. The analysis shows that increasing the non-supportive comment probability inhibits internet rumor propagation, although the effect depends on the recovery probability, and that enhancing the persuasiveness of non-supportive comments and reducing the propagation probability can effectively reduce the influence range of internet rumors. Simulations on WS (Watts-Strogatz) small-world networks and BA (Barabási-Albert) scale-free networks confirm that non-supportive comments can suppress internet rumor propagation. Finally, according to the analysis results, rumor prevention and control strategies were put forward.
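    A mean-field simulation of the five compartments (S, I, Ic, R1, R2) can be sketched with forward Euler steps. The rate terms below are illustrative assumptions, not the paper's equations: spreading splits between plain spreaders and spreaders with non-supportive comments with probability theta, and both recover at rate mu.
    ```python
    def step(s, i, ic, r1, r2, beta=0.4, theta=0.3, mu=0.1, dt=0.01):
        """One Euler step of an assumed SIICR1R2-style mean-field dynamic."""
        new_inf = beta * s * (i + ic)   # susceptibles meet spreaders
        si = new_inf * (1 - theta)      # become plain spreaders
        sic = new_inf * theta           # become spreaders with non-supportive comments
        rec_i, rec_ic = mu * i, mu * ic # spreaders lose interest
        return (s - new_inf * dt,
                i + (si - rec_i) * dt,
                ic + (sic - rec_ic) * dt,
                r1 + rec_i * dt,
                r2 + rec_ic * dt)

    state = (0.99, 0.01, 0.0, 0.0, 0.0)
    for _ in range(2000):
        state = step(*state)
    print([round(x, 3) for x in state])
    ```
    Raising theta in this toy shifts the removed mass from R1 to R2; reproducing the paper's inhibition effect would require its actual transition rates.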
    Parallel implementation and analysis of SKINNY encryption algorithm using CUDA
    XIE Wenbo, WEI Yongzhuang, LIU Zhenghong
    2021, 41(4):  1136-1141.  DOI: 10.11772/j.issn.1001-9081.2020071060
    Focusing on the low efficiency of the SKINNY encryption algorithm on a Central Processing Unit (CPU), a fast implementation method based on a Graphics Processing Unit (GPU) was proposed. First, an optimization scheme was proposed by combining the structural characteristics of the SKINNY algorithm, merging its five step-by-step operations into one whole calculation. Moreover, the characteristics of the Electronic CodeBook (ECB) mode and the counter (CTR) mode of the algorithm were analyzed, and parallel design schemes covering parallel granularity and memory allocation were given. Experimental results illustrate that the efficiency and throughput of the SKINNY algorithm implemented with the Compute Unified Device Architecture (CUDA) are significantly improved compared with the traditional CPU implementation. More specifically, for data of 16 MB or larger, the SKINNY implementation in ECB mode achieves a maximum efficiency improvement of 99.85% and a maximum speedup of 671, while the implementation in CTR mode achieves a maximum efficiency improvement of 99.87% and a maximum speedup of 765. In particular, the throughput of the proposed SKINNY-256 (ECB) parallel algorithm is 1.29 times and 2.55 times those of the existing AES-256 (ECB) and SKINNY_ECB parallel algorithms, respectively.
    Malicious code detection based on multi-channel image deep learning
    JIANG Kaolin, BAI Wei, ZHANG Lei, CHEN Jun, PAN Zhisong, GUO Shize
    2021, 41(4):  1142-1147.  DOI: 10.11772/j.issn.1001-9081.2020081224
    Existing deep learning-based malicious code detection methods have problems such as weak deep-level feature extraction, relatively complex models and insufficient generalization. At the same time, code reuse is widespread among malicious samples of the same type, giving their visualized code similar visual features, and this similarity can be used for malicious code detection. Therefore, a malicious code detection method based on multi-channel image visual features and AlexNet was proposed. In the method, the code to be detected was first converted into multi-channel images; then AlexNet was used to extract and classify the color texture features of the images, so as to detect possible malicious code. Meanwhile, multi-channel image feature extraction, Local Response Normalization (LRN) and other techniques were used together, which effectively improved the generalization ability of the model while reducing its complexity. Tests on the balanced Malimg dataset showed that the average classification accuracy of the proposed method was 97.8%, with an accuracy increase of 1.8% and a detection efficiency increase of 60.2% compared with the VGGNet method. Experimental results show that the color texture features of multi-channel images better reflect the type information of malicious code, that the simple network structure of AlexNet can effectively improve detection efficiency, and that local response normalization improves the generalization ability and detection effect of the model.
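    The code-to-image step can be sketched by mapping raw bytes to a three-channel array; interpreting consecutive byte triplets as RGB channels is one plausible construction, and the paper's exact channel definition may differ.
    ```python
    import numpy as np

    def bytes_to_multichannel_image(code: bytes, width: int = 256) -> np.ndarray:
        """Map a binary's bytes to an RGB-style image: consecutive byte
        triplets become the three channels; the tail is zero-padded."""
        buf = np.frombuffer(code, dtype=np.uint8)
        pad = (-len(buf)) % (3 * width)
        buf = np.pad(buf, (0, pad))
        return buf.reshape(-1, width, 3)

    sample = bytes(range(256)) * 12      # stand-in for an executable's raw bytes
    img = bytes_to_multichannel_image(sample)
    print(img.shape)                     # (height, 256, 3)
    ```
    The resulting arrays can then be resized to AlexNet's input resolution and fed to the classifier.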
    Image grey level encryption based on cat map
    LI Shanshan, ZHAO Li, ZHANG Hongli
    2021, 41(4):  1148-1152.  DOI: 10.11772/j.issn.1001-9081.2020071029
    In order to prevent the leakage of private image content during transmission over public channels from endangering information security, a new grayscale image encryption method was proposed. Iteration of coupled logistic maps was used to generate two-dimensional chaotic sequences: one sequence was used to generate the coefficients of the cat map, and the other was used to scramble the pixel positions. Whereas the traditional cat map based image encryption method only encrypts pixel positions, the proposed method applies different cat map coefficients to different pixel groups, so as to transform the grey value of each pixel in a group. In addition, bidirectional diffusion was adopted to improve security. The proposed method has simple encryption and decryption processes, high execution efficiency, and no limitation on image size. Security analysis shows that the proposed encryption method is very sensitive to secret keys and has good stability under multiple attack methods.
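    The position-scrambling core of the generalized cat map can be sketched as follows; in the paper the coefficients come from the chaotic sequences and the map is additionally applied to grey values per pixel group, whereas here a and b are fixed toy values.
    ```python
    import numpy as np

    def cat_map_scramble(img: np.ndarray, a: int, b: int, rounds: int = 1) -> np.ndarray:
        """Scramble pixel positions of a square image with the generalized
        Arnold cat map (x, y) -> (x + b*y, a*x + (a*b + 1)*y) mod N,
        an area-preserving bijection (determinant = 1)."""
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "cat map needs a square image"
        out = img.copy()
        for _ in range(rounds):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            u = (x + b * y) % n
            v = (a * x + (a * b + 1) * y) % n
            scrambled = np.empty_like(out)
            scrambled[u, v] = out[x, y]
            out = scrambled
        return out

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(cat_map_scramble(img, a=1, b=1))
    ```
    Decryption applies the inverse map (or iterates the map until the period is reached, since the map is periodic on an N×N grid).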
    Network and communications
    Indoor intrusion detection based on direction-of-arrival estimation algorithm for single snapshot
    REN Xiaokui, LIU Pengfei, TAO Zhiyong, LIU Ying, BAI Lichun
    2021, 41(4):  1153-1159.  DOI: 10.11772/j.issn.1001-9081.2020071030
    Abstract ( )   PDF (1270KB) ( )  
    References | Related Articles | Metrics
    Intrusion detection methods based on Channel State Information(CSI) are vulnerable to environmental layout and noise interference, resulting in low detection rates. To solve this problem, an indoor intrusion detection method based on a Direction-Of-Arrival(DOA) estimation algorithm for a single snapshot was proposed. Firstly, by exploiting the spatially selective fading of wireless signals, the CSI data received by the antenna array was decomposed mathematically, and the unknown DOA estimation problem was transformed into an over-complete representation problem. Secondly, the sparsity of the sparse signal was constrained by the l1 norm, and accurate DOA information was obtained by solving the sparse regularized optimization problem, so as to provide reliable feature parameters at the data level for the final detection results. Finally, the Indoor Safety Index Number(ISIN) was evaluated according to the DOA changes between consecutive moments, and indoor intrusion detection was thereby realized. In experiments, the method was verified in real indoor scenes and compared with the traditional data preprocessing methods of principal component analysis and discrete wavelet transform. Experimental results show that the proposed method can accurately detect the occurrence of intrusion in different complex indoor environments, with an average detection rate of more than 98%, and is more robust than the comparison algorithms.
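    The sparse-recovery view can be made concrete with a small Python sketch: the single-snapshot measurement is written over an over-complete dictionary of steering vectors and an l1-regularized problem is solved, here with plain ISTA iterations as a stand-in for the paper's solver. The uniform linear array, angle grid and step sizes are illustrative assumptions.

```python
import numpy as np

def steering_matrix(n_ant: int, grid_deg: np.ndarray, d_over_lambda: float = 0.5) -> np.ndarray:
    k = np.arange(n_ant)[:, None]
    phase = -2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(grid_deg))[None, :]
    return np.exp(phase)

def ista_doa(y: np.ndarray, A: np.ndarray, lam: float = 0.1, iters: int = 500) -> np.ndarray:
    step = 1.0 / np.linalg.norm(A, 2) ** 2                 # Lipschitz step size
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        r = x - step * (A.conj().T @ (A @ x - y))          # gradient step on the data term
        mag = np.abs(r)
        x = np.where(mag > 0, r / np.maximum(mag, 1e-12), 0) * np.maximum(mag - step * lam, 0)
    return x                                                # complex soft-thresholding = l1 prox

grid = np.arange(-90, 90.5, 0.5)
A = steering_matrix(8, grid)
true = steering_matrix(8, np.array([23.0]))[:, 0]           # single source at 23 degrees
y = true + 0.05 * (np.random.randn(8) + 1j * np.random.randn(8))
x = ista_doa(y, A)
print(grid[np.argmax(np.abs(x))])                           # estimated DOA, near 23
```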
    Data center adaptive multi-path load balancing algorithm based on software defined network
    XU Hongliang, YANG Guiqin, JIANG Zhanjun
    2021, 41(4):  1160-1164.  DOI: 10.11772/j.issn.1001-9081.2020060845
    Abstract ( )   PDF (916KB) ( )  
    References | Related Articles | Metrics
    Traditional multi-path load balancing algorithms cannot effectively perceive the running state of the network, fail to comprehensively consider the real-time transmission states of the links, and mostly lack adaptability. In order to solve these problems, an adaptive multi-path Load Balancing Algorithm based on Spider Monkey Optimization(SMO-LBA) was proposed for Software Defined Network(SDN), following SDN's idea of centralized control with a global network view. Firstly, the perceptual ability of the data center network was used to obtain real-time link state information of the multiple paths. Then, based on the global exploration and local exploitation abilities of the spider monkey optimization algorithm, the link idle rate was used as the fitness value of each path, and the paths were dynamically evaluated and updated by introducing adaptive weights. Finally, the path with the lowest link occupancy rate in the data center network was determined as the optimal forwarding path. Simulation experiments were carried out on the Mininet platform with a fat tree topology. Experimental results show that SMO-LBA can improve the throughput and average link utilization of the data center network, and achieve adaptive load balancing of the network.
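    A simplified Python sketch of the path-evaluation idea: each candidate path's fitness is its bottleneck link idle rate, and an adaptive weight nudges the score toward recently well-performing paths. The topology, the weight update rule and the omission of the full SMO population mechanics are all illustrative assumptions.

```python
def path_idle_rate(path, link_util):
    # A path is only as idle as its busiest link (bottleneck view).
    return 1.0 - max(link_util[l] for l in path)

def pick_path(paths, link_util, weights):
    scored = [(weights[i] * path_idle_rate(p, link_util), i) for i, p in enumerate(paths)]
    return max(scored)[1]

paths = [("s1-a", "a-s2"), ("s1-b", "b-s2"), ("s1-c", "c-s2")]
link_util = {"s1-a": 0.7, "a-s2": 0.4, "s1-b": 0.2, "b-s2": 0.3, "s1-c": 0.9, "c-s2": 0.1}
weights = [1.0, 1.0, 1.0]
best = pick_path(paths, link_util, weights)
weights[best] *= 1.05   # adaptive reinforcement of the chosen path (assumption)
print(paths[best])      # expected: the s1-b path, bottleneck utilization 0.3
```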
    Multimedia computing and computer simulation
    Enhanced vehicle 3D surround view based on coordinate inverse mapping
    TAN Zhaoyi, CHEN Baifan
    2021, 41(4):  1165-1171.  DOI: 10.11772/j.issn.1001-9081.2020071039
    Abstract ( )   PDF (4343KB) ( )  
    References | Related Articles | Metrics
    Current state-of-the-art vehicle 3D surround view systems can realistically display the 3D surroundings of the vehicle body, but they still distort the display of 3D objects close to the vehicle, greatly degrading the display quality and practicality. To solve this problem, an enhanced vehicle 3D surround view synthesis method was proposed. First, the You Only Look Once v4(YOLOv4) network was used to detect the positions of vehicles and pedestrians in the images. Then, based on coordinate dimension-increasing inverse mapping, the positions of the detected objects were mapped into the world coordinate system. Finally, 3D models were placed and rendered at the corresponding inverse-mapped positions to replace the distorted 3D objects, so as to provide effective position information of the surrounding objects. Experimental results show that the enhanced vehicle 3D surround view generated by the proposed method has good real-time performance and display quality, and can effectively resolve the display defects of current vehicle 3D surround views.
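    A sketch of the dimension-increasing inverse mapping step under the common ground-plane assumption: a detected object's foot pixel is back-projected through an assumed pinhole camera model onto the z = 0 world plane, giving the 3D position where a replacement model can be rendered. The intrinsics K and pose (R, t) below are made-up values, not calibration data from the paper.

```python
import numpy as np

def pixel_to_ground(u: float, v: float, K, R, t) -> np.ndarray:
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # lift the pixel to a 3D ray
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t                                # camera center in world frame
    s = -cam_center[2] / ray_world[2]                    # intersect the ray with z = 0
    return cam_center + s * ray_world

K = np.array([[800.0, 0, 640], [0, 800, 360], [0, 0, 1]])  # assumed intrinsics
R = np.diag([1.0, -1.0, -1.0])                             # camera looking straight down
t = np.array([0.0, 0.0, 2.5])                              # camera 2.5 m above the ground
print(pixel_to_ground(700, 500, K, R, t))                  # a 3D point on the ground plane
```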
    Improved remote sensing image fusion algorithm based on channel attention feedback network
    WU Lei, YANG Xiaomin
    2021, 41(4):  1172-1178.  DOI: 10.11772/j.issn.1001-9081.2020071064
    Abstract ( )   PDF (5163KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of the feedforward Convolutional Neural Network(CNN), such as its small receptive field, insufficient acquisition of context information, and the fact that its feature extraction convolutional layers can only extract shallow features, an improved remote sensing image fusion algorithm based on a channel attention feedback network was proposed. Firstly, the detail features of PANchromatic(PAN) images and the spectral features of Low-resolution MultiSpectral(LMS) images were initially extracted through two convolutional layers. Secondly, the extracted features were combined with the deep features fed back from the network and input into the channel attention mechanism module to obtain initially refined features. Thirdly, deep features with stronger characterization capability were generated by the feedback module. Finally, High-resolution MultiSpectral(HMS) images were obtained by passing the generated deep features through a reconstruction layer with deconvolution. Experimental results on three different satellite image datasets show that the proposed algorithm can well extract the detail features of PAN images and the spectral features of LMS images, and that the HMS images recovered by this algorithm are subjectively clearer and objectively better than those of the comparison algorithms; at the same time, the Root Mean Square Error(RMSE) of the proposed method is more than 50% lower than that of the traditional methods and more than 10% lower than that of the feedforward convolutional network methods.
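    A minimal PyTorch sketch of an SE-style channel attention block of the kind the abstract describes (squeeze to per-channel statistics, excite with a tiny bottleneck MLP, rescale the feature map); the paper's exact block layout and feedback wiring are not reproduced here.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # excite: rescale the channels

feats = torch.randn(2, 32, 64, 64)                   # e.g. combined PAN/LMS features
print(ChannelAttention(32)(feats).shape)             # torch.Size([2, 32, 64, 64])
```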
    Image segmentation model without initial contour
    LUO Qin, WANG Yan
    2021, 41(4):  1179-1183.  DOI: 10.11772/j.issn.1001-9081.2020071058
    Abstract ( )   PDF (4070KB) ( )  
    References | Related Articles | Metrics
    In order to enhance the robustness to the initial contour and improve the segmentation efficiency for images with intensity inhomogeneity or noise, a region-based active contour model was proposed. First, a global intensity fitting force and a local intensity fitting force were designed separately, and the model's fitting term was obtained as their linear combination; the weight between the two fitting forces was adjusted to improve the robustness of the model to the initial contour. Then, the length term of the evolution curve was employed to keep the curve smooth. Experimental results show that, compared with the Region-Scalable Fitting(RSF) model and the Selective Local or Global Segmentation(SLGS) model, the proposed model reduces the number of iterations by about 57% and 31%, and the segmentation time by about 62% and 14%, respectively. The proposed model can quickly and accurately segment noisy images and images with intensity inhomogeneity without an initial contour, and also performs well on practical images such as medical and infrared images.
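    As a hedged illustration of what such a combined fitting term can look like (an assumed form, not the paper's exact functional), one common construction mixes a Chan-Vese-style global term with an RSF-style local term through a weight ω ∈ [0, 1]:

```latex
% Illustrative combined fitting energy. H is the Heaviside function, c_i are
% global region means, f_i(x) are local means weighted by a Gaussian kernel
% K_sigma, and omega balances the two fitting forces.
E_{\text{fit}}(\phi) =
  \omega \sum_{i=1}^{2} \lambda_i \int_\Omega \lvert I(x) - c_i \rvert^2 \, M_i(\phi(x)) \, dx
  + (1 - \omega) \sum_{i=1}^{2} \lambda_i \int_\Omega \!\! \int_\Omega
      K_\sigma(x - y) \, \lvert I(y) - f_i(x) \rvert^2 \, M_i(\phi(y)) \, dy \, dx,
\qquad M_1(\phi) = H(\phi), \quad M_2(\phi) = 1 - H(\phi).
```

    A larger ω favors the global force, which is robust to the initial contour, while a smaller ω favors the local force, which handles intensity inhomogeneity — the trade-off the abstract describes.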
    Frontier and comprehensive applications
    Optimization of maintenance strategy model for multi-component system based on performance-based contract
    XU Feixue, LIU Qinming, OUYANG Hailing, YE Chunming
    2021, 41(4):  1184-1191.  DOI: 10.11772/j.issn.1001-9081.2020071033
    Abstract ( )   PDF (1141KB) ( )  
    References | Related Articles | Metrics
    Aiming at the low maintenance efficiency of multi-component series systems and the low profit of suppliers, and considering the economic dependence among components, a maintenance strategy model for multi-component systems based on performance-based contracts was proposed. First, the Weibull distribution was used to describe the lifetime law of each component in the system, and different maintenance strategies were applied by judging the relationship between the usage of each component, the preventive maintenance threshold and the opportunistic maintenance threshold. Second, the probability of each maintenance activity and the corresponding number of maintenances within a unit renewal cycle were calculated, and a maintenance strategy model based on performance-based contracts was established with the preventive and opportunistic maintenance thresholds as decision variables and the maximization of supplier profit as the objective. Finally, the Grey Wolf Optimizer(GWO) algorithm was used to solve the proposed model. Numerical example analysis showed that, compared with the Genetic Algorithm(GA) and Particle Swarm Optimization(PSO) algorithm, the improved GWO algorithm improved the precision by 22.6% and 7.6% respectively, and that with a linear revenue function, the profit margin of the proposed performance-based model reached 25.3%, an increase of 5.2% over the traditional cost model. The proposed model and algorithm can effectively address the low maintenance quality and efficiency of suppliers, and provide a basis for suppliers and operators to jointly formulate maintenance contracts.
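    A compact Python sketch of the standard GWO update used to search the two maintenance thresholds (preventive, opportunistic); the profit function here is a toy stand-in, not the paper's performance-contract model, and the paper's improvements to GWO are not reproduced.

```python
import numpy as np

def toy_profit(x):
    # Hypothetical smooth profit surface peaking at thresholds (0.6, 0.8).
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.8) ** 2)

def gwo(fitness, dim=2, wolves=20, iters=100, lo=0.0, hi=1.0):
    X = np.random.uniform(lo, hi, (wolves, dim))
    for it in range(iters):
        scores = np.array([fitness(x) for x in X])
        alpha, beta, delta = X[np.argsort(scores)[::-1][:3]]   # three best wolves
        a = 2 - 2 * it / iters                                 # linearly decreasing a
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])  # encircling step
            X[i] = np.clip(new / 3, lo, hi)
    return max(X, key=fitness)

print(gwo(toy_profit))   # should approach the toy optimum [0.6, 0.8]
```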
    Fuzzy multi-objective charging scheduling algorithm for electric vehicle based on load balance
    ZHOU Meiling, CHEN Huaili
    2021, 41(4):  1192-1198.  DOI: 10.11772/j.issn.1001-9081.2020071013
    Abstract ( )   PDF (1148KB) ( )  
    References | Related Articles | Metrics
    Single-phase charging of Electric Vehicles(EV) in residential areas causes three-phase imbalance and large load peak-valley differences in the distribution network. Therefore, a multi-objective charging scheduling strategy for EVs considering load balance was proposed. Based on a three-phase network, the total delay time and charge balance were taken as the objective functions, and constraints such as the load peak-valley difference and three-phase imbalance were considered to establish EV charging scheduling models for both static and online scheduling problems. The multi-objective solutions were obtained by an improved Non-dominated Sorting Genetic Algorithm-Ⅱ(NSGA-Ⅱ), whose results were optimized through designed crossover operators, adaptively adjusted mutation probability and local optimization. The Pareto optimal front was obtained by maintaining an external archive of fixed capacity with crowding distance, and the fuzzy membership method was used to select the compromise optimal solution. The influence of the number of simultaneously active charging points and of the three-phase imbalance value on the optimization results was analyzed through an example, and comparison with a disorderly charging strategy verified the validity of the proposed model and strategy.
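    A Python sketch of the crowding-distance computation NSGA-Ⅱ uses to keep the external archive spread along the Pareto front; the two objectives here would be total delay time and charge balance, and the numeric front below is made up.

```python
import numpy as np

def crowding_distance(objs: np.ndarray) -> np.ndarray:
    n, m = objs.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        span = objs[order[-1], j] - objs[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf      # boundary points always kept
        if span == 0:
            continue
        # Each interior point is credited with the gap between its neighbors.
        dist[order[1:-1]] += (objs[order[2:], j] - objs[order[:-2], j]) / span
    return dist

front = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 4.0], [5.0, 3.0], [8.0, 1.0]])
print(crowding_distance(front))   # larger distance = less crowded = preferred
```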
    Modified backtracking search algorithm for solving photovoltaic model parameter identification problem
    ZHANG Weiwei, TAO Cong, FAN Yan, YU Kunjie, WEN Xiaoyu, ZHANG Weizheng
    2021, 41(4):  1199-1206.  DOI: 10.11772/j.issn.1001-9081.2020071041
    Abstract ( )   PDF (1336KB) ( )  
    References | Related Articles | Metrics
    In order to identify photovoltaic model parameters accurately and reliably, a Modified Backtracking Search Algorithm(MBSA) was proposed. In the algorithm, firstly, some individuals were selected to learn from the information of both the current and historical populations, while the others learned from the best individual of the current population and moved away from the worst, so as to maintain population diversity and improve convergence speed. Then, the performance of each individual in the population was quantified as a probability, based on which individuals adaptively selected different evolution strategies to balance exploration and exploitation. Finally, an elite strategy based on chaotic local search was used to further improve the quality of the population. The proposed algorithm was tested on different photovoltaic models, including the single diode, double diode and photovoltaic module models. Experimental results show that the proposed strategies significantly improve the convergence speed and parameter identification accuracy of the Backtracking Search Algorithm(BSA). The proposed algorithm was also compared with eight advanced algorithms, such as the Logistic Chaotic JAYA(LCJAYA) algorithm and the Multiple Learning BSA(MLBSA): it has the best robustness among them, and its identification accuracy is better than those of the JAYA, LCJAYA, Improved JAYA(IJAYA) and Teaching-Learning-Based Optimization(TLBO) algorithms on the single diode, double diode and photovoltaic module models. Under different illumination conditions and temperatures, real manufacturer data of three types of photovoltaic modules (thin-film, mono-crystalline and multi-crystalline) were used for measurement tests, and the results predicted by the proposed algorithm were consistent with the actual measurements. Simulation results show that the proposed algorithm identifies photovoltaic model parameters accurately and stably.
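    For context, the single diode model whose five parameters (Iph, Isd, Rs, Rsh, n) are being identified can be written down directly. The Python sketch below evaluates its implicit current equation and refines I at one operating point by damped fixed-point iteration — a stand-in for the residual an optimizer would minimize over a full measured I-V curve. The sample parameter values follow the commonly used RTC France cell and are an assumption.

```python
import numpy as np

def single_diode_residual(V, I, Iph, Isd, Rs, Rsh, n, T=306.15):
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = n * k * T / q                    # ideality factor times thermal voltage
    # Kirchhoff balance: photocurrent minus diode and shunt currents minus I.
    return Iph - Isd * (np.exp((V + I * Rs) / Vt) - 1) - (V + I * Rs) / Rsh - I

I = 0.0
for _ in range(50):                       # damped fixed-point refinement of I
    I += 0.1 * single_diode_residual(0.5, I, Iph=0.76, Isd=3.2e-7,
                                     Rs=0.036, Rsh=53.7, n=1.48)
print(I)                                  # operating-point current at V = 0.5 V
```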
    Path planning algorithm in complex environment using self-adjusting sampling space
    ZHANG Kang, CHEN Jianping
    2021, 41(4):  1207-1213.  DOI: 10.11772/j.issn.1001-9081.2020060863
    Abstract ( )   PDF (3715KB) ( )  
    References | Related Articles | Metrics
    To overcome the low pathfinding efficiency and slow convergence of Rapidly-exploring Random Tree star(RRT*) in high-dimensional, complex environments, an Unmanned Aerial Vehicle(UAV) path planning algorithm with a self-adjusting sampling space, named Adjust Sampling space-RRT*(AS-RRT*), was proposed. In this algorithm, the tree was guided to grow more efficiently by adaptively adjusting the sampling space, which was realized through three strategies: biased sampling, node selection and node learning. Firstly, light and dark areas in the sampling space were defined for biased sampling, and the probability weights of the light and dark areas were determined by the current expansion failure rate, ensuring that the search for the initial path was both exploratory and directional. Then, once the initial path was found, the nodes were periodically filtered: high-quality nodes served as learning samples to generate the new sampling distribution, and the lowest-quality nodes were replaced by new nodes once the algorithm reached its maximum number of nodes. Comparative simulation experiments were conducted in multiple types of environments. The results show that the proposed algorithm mitigates the inherent randomness of sampling-based algorithms to a certain extent; compared with traditional RRT* algorithms, it needs less pathfinding time in the same environment and generates lower-cost paths in the same time, with the improvements being more obvious in three-dimensional space.
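    A Python sketch of the biased-sampling idea: the sampler chooses between the free "dark" region and a goal-biased "light" region with a probability tied to the recent expansion failure rate. The region shapes, the failure statistic and the 2D world are simplifying assumptions.

```python
import random

def sample_point(goal, bounds, failure_rate):
    # High failure rate -> explore the whole space; low -> press toward the goal.
    if random.random() < failure_rate:
        return (random.uniform(*bounds[0]), random.uniform(*bounds[1]))  # dark area
    jitter = 2.0
    return (goal[0] + random.uniform(-jitter, jitter),
            goal[1] + random.uniform(-jitter, jitter))                   # light area

bounds = ((0.0, 50.0), (0.0, 50.0))
failures, attempts = 3, 10
print(sample_point((45.0, 40.0), bounds, failures / attempts))
```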
    Circular pointer instrument recognition system based on MobileNetV2
    LI Huihui, YAN Kun, ZHANG Lixuan, LIU Wei, LI Zhi
    2021, 41(4):  1214-1220.  DOI: 10.11772/j.issn.1001-9081.2020060765
    Abstract ( )   PDF (2333KB) ( )  
    References | Related Articles | Metrics
    Aiming at the problems of large numbers of model parameters, high computational cost and low accuracy when deep learning algorithms are applied to pointer instrument recognition, an intelligent detection and recognition system for circular pointer instruments was proposed, combining an improved pre-trained MobileNetV2 network model with the circular Hough transform. Firstly, the Hough transform was used to eliminate the interference of non-circular areas in complex scenes. Then, the circular areas were extracted to construct the datasets. Finally, circular pointer instrument recognition was realized by the improved pre-trained MobileNetV2 network model, whose performance was measured with the average confusion matrix. Experimental results show that the recognition rate of the proposed system on the circular pointer instrument recognition task reaches 99.76%. A comparison with five other network models shows that the proposed model and ResNet50 both achieve the highest accuracy, but the proposed model reduces the number of parameters and the computational cost by 90.51% and 92.40% respectively compared with ResNet50, which makes it well suited to further deployment of industrial-grade real-time circular pointer instrument detection and recognition on mobile terminals and embedded devices.
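    A Python sketch of the pre-processing stage: the circular Hough transform isolates the dial so that only the circular region is handed to the classifier. The thresholds and radii are illustrative, and "meter.jpg" is a hypothetical file name.

```python
import cv2
import numpy as np

img = cv2.imread("meter.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical sample image
assert img is not None, "meter.jpg is a placeholder path"
img = cv2.medianBlur(img, 5)                         # suppress speckle before Hough
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=60, minRadius=40, maxRadius=300)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)    # strongest detected circle
    roi = img[max(y - r, 0):y + r, max(x - r, 0):x + r]
    roi = cv2.resize(roi, (224, 224))                # MobileNetV2 input size
    cv2.imwrite("dial_roi.png", roi)
```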
    Cerebral infarction image recognition based on semi-supervised method
    OU Lili, SHAO Fengjing, SUN Rencheng, SUI Yi
    2021, 41(4):  1221-1226.  DOI: 10.11772/j.issn.1001-9081.2020071034
    Abstract ( )   PDF (1167KB) ( )  
    References | Related Articles | Metrics
    In the field of image recognition, images with insufficient labeled data cannot be well recognized by supervised models. In order to solve this problem, a semi-supervised model based on Generative Adversarial Network(GAN) was proposed: by combining the advantages of semi-supervised GANs and deep convolutional GANs, and replacing the sigmoid activation function with softmax in the output layer, the Semi-Supervised Deep Convolutional GAN(SS-DCGAN) model was established. Firstly, the generated samples were defined as pseudo-samples and used to guide the training process. Secondly, a semi-supervised training method was adopted to update the parameters of the model. Finally, the recognition of abnormal (cerebral infarction) images was realized. Experimental results show that the SS-DCGAN model can recognize abnormal images well with little labeled data, achieving a recognition rate of 95.05%, which is a significant advantage over Residual Network 32(ResNet32) and Ladder networks.
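    A minimal PyTorch sketch of the sigmoid-to-softmax change the abstract describes: instead of a single real/fake output, the discriminator ends in a (K + 1)-way softmax covering K real classes plus one "generated" class, so labeled, unlabeled and generated images can all contribute to training. The layer sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

K = 2  # e.g. normal vs. cerebral infarction

disc = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, K + 1),     # logits: K real classes + 1 fake class
)

x = torch.randn(4, 1, 64, 64)           # a batch of real or generated images
probs = torch.softmax(disc(x), dim=1)   # replaces the usual sigmoid output
print(probs.shape)                      # torch.Size([4, 3])
```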