Project Articles


    Survey on application of binary reverse analysis in detecting software supply chain pollution
    WU Zhenhua, ZHANG Chao, SUN He, YAN Xuexiong
    Journal of Computer Applications    2020, 40 (1): 103-115.   DOI: 10.11772/j.issn.1001-9081.2019071245
    In recent years, Software Supply Chain (SSC) security incidents have occurred frequently, posing great challenges to software security research. Since millions of new software packages are released every day, it is essential to detect SSC pollution automatically. The problem of SSC pollution was first analyzed and discussed. Then, focusing on the requirements of pollution detection in the downstream of the SSC, automatic program reverse analysis methods and their applications in SSC pollution detection were introduced. Finally, the shortcomings of and challenges faced by existing technologies in solving the SSC pollution problem were summarized and analyzed, and several research directions worth pursuing to overcome these challenges were pointed out.
    Review on deep learning-based pedestrian re-identification
    YANG Feng, XU Yu, YIN Mengxiao, FU Jiacheng, HUANG Bing, LIANG Fangxuan
    Journal of Computer Applications    2020, 40 (5): 1243-1252.   DOI: 10.11772/j.issn.1001-9081.2019091703
    Pedestrian Re-IDentification (Re-ID) is a hot issue in the field of computer vision, and mainly focuses on how to associate a specific person captured by different cameras at different physical locations. Traditional Re-ID methods were mainly based on the extraction of low-level features, such as local descriptors, color histograms and human poses. In recent years, in view of the problems of traditional methods such as pedestrian occlusion and pose misalignment, deep learning-based person Re-ID methods based on region, attention mechanism, pose and Generative Adversarial Network (GAN) were proposed, and their experimental results became significantly better than before. Therefore, the research on deep learning in pedestrian Re-ID was summarized and classified; differently from previous reviews, the pedestrian Re-ID methods were divided into four categories for discussion in this review. Firstly, deep learning-based pedestrian Re-ID methods were summarized along four lines: region, attention, pose and GAN. Then the performances of these methods on the mainstream datasets were analyzed in terms of the mAP (mean Average Precision) and Rank-1 indicators. The results show that the deep learning-based methods can reduce model overfitting by enhancing the connection between local features and narrowing domain gaps. Finally, the development directions of pedestrian Re-ID research were forecasted.
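    The mAP and Rank-1 indicators mentioned above can be illustrated with a minimal sketch; the relevance lists below are toy data, not results from any surveyed method:

```python
def rank1_and_map(ranked_lists):
    """Compute Rank-1 accuracy and mean Average Precision (mAP).
    Each ranked list is a sequence of 0/1 relevance flags for one
    query, ordered from best gallery match to worst."""
    rank1_hits = 0
    ap_sum = 0.0
    for rel in ranked_lists:
        rank1_hits += rel[0]                      # is the top result correct?
        hits, precisions = 0, []
        for i, r in enumerate(rel, start=1):
            if r:
                hits += 1
                precisions.append(hits / i)       # precision at each hit
        ap_sum += sum(precisions) / max(hits, 1)  # average precision per query
    n = len(ranked_lists)
    return rank1_hits / n, ap_sum / n

# Two toy queries: relevance of the top-4 gallery matches each.
r1, mAP = rank1_and_map([[1, 0, 1, 0], [0, 1, 0, 0]])
```

Rank-1 only checks the top match, while mAP rewards placing all correct matches early in the ranking, which is why surveys report both.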
    Survey of person re-identification technology based on deep learning
    WEI Wenyu, YANG Wenzhong, MA Guoxiang, HUANG Mei
    Journal of Computer Applications    2020, 40 (9): 2479-2492.   DOI: 10.11772/j.issn.1001-9081.2020010038
    As one of the intelligent video surveillance technologies, person Re-identification (Re-id) has great research significance for maintaining social order and stability; it aims to retrieve a specific person across different camera views. Since traditional hand-crafted feature methods struggle to handle the complex camera environments in person Re-id tasks, a large number of deep learning-based person Re-id methods have been proposed, greatly promoting the development of person Re-id technology. In order to deeply understand deep learning-based person Re-id technology, a large body of related literature was collated and analyzed. First, a comprehensive introduction was given from three aspects: image, video and cross-modality. Image-based person Re-id technology was divided into two categories, supervised and unsupervised, and each category was summarized respectively. Then, the related datasets were listed, and the performance of recent algorithms on image and video datasets was compared and analyzed. At last, the difficulties in the development of person Re-id technology were summarized, and possible future research directions of this technology were discussed.
    Review of facial action unit detection
    YAN Jingwei, LI Qiang, WANG Chunmao, XIE Di, WANG Baoqing, DAI Jun
    Journal of Computer Applications    2020, 40 (1): 8-15.   DOI: 10.11772/j.issn.1001-9081.2019061043
    Facial action unit detection aims at enabling computers to detect action units automatically from given facial images or videos. Thanks to a great amount of research during the past 20 years, especially the construction of more and more facial action unit databases and the rise of deep learning-based methods, facial action unit detection technology has developed rapidly. Firstly, the concept of the facial action unit and the commonly used facial action unit databases were introduced, and the traditional methods, including steps such as pre-processing, feature extraction and classifier learning, were summarized. Then, several important research areas, such as region learning, facial action unit correlation learning and weakly supervised learning, were systematically reviewed and analyzed. Finally, the shortcomings of existing research and potential development trends of facial action unit detection were discussed.
    Review of speech segmentation and endpoint detection
    YANG Jian, LI Zhenpeng, SU Peng
    Journal of Computer Applications    2020, 40 (1): 1-7.   DOI: 10.11772/j.issn.1001-9081.2019061071
    Speech segmentation is an indispensable basic task in speech recognition and speech synthesis, and its quality has a great impact on downstream systems. Although manual segmentation and labeling is highly accurate, it is quite time-consuming and laborious, and requires domain experts. As a result, automatic speech segmentation has become a research hotspot in speech processing. Firstly, regarding the current progress of automatic speech segmentation, several ways of classifying speech segmentation methods were explained. Alignment-based methods and boundary detection-based methods were introduced respectively, and neural network speech segmentation methods, which can be applied in both frameworks, were expounded in detail. Then, some new speech segmentation technologies based on bio-inspired signals, game theory and other methods were introduced, and the performance evaluation metrics widely used in the speech segmentation field were given, compared and analyzed. Finally, the above contents were summarized and important future research directions of speech segmentation were put forward.
    Review of anomaly detection algorithms for multidimensional time series
    HU Min, BAI Xue, XU Wei, WU Bingjian
    Journal of Computer Applications    2020, 40 (6): 1553-1564.   DOI: 10.11772/j.issn.1001-9081.2019101805

    With the continuous development of information technology, the scale of time series data has grown exponentially, which brings opportunities and challenges to the development of time series anomaly detection algorithms and has gradually made this field a new research hotspot in data analysis. However, research in this area is still at an early stage and is not yet systematic. Therefore, by sorting out and analyzing the domestic and foreign literature, the research content of multidimensional time series anomaly detection was divided, in logical order, into three aspects: dimension reduction, time series pattern representation and anomaly pattern detection, and the mainstream algorithms were summarized to comprehensively show the current research status and characteristics of anomaly detection. On this basis, the research difficulties and trends of multidimensional time series anomaly detection algorithms were summarized, in order to provide a useful reference for related theoretical and applied research.
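    As a minimal illustration of the anomaly pattern detection stage discussed above, the following sketch flags points in a multidimensional series that deviate sharply from recent history. It uses a simple trailing-window z-score rule and hypothetical data, far simpler than the surveyed algorithms:

```python
import math

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag time steps whose value in any dimension lies more than
    `threshold` standard deviations from the trailing-window mean.
    `series` is a list of equal-length vectors, one per time step."""
    flags = []
    for t in range(len(series)):
        if t < window:                 # not enough history yet
            flags.append(False)
            continue
        past = series[t - window:t]
        score = 0.0
        for d in range(len(series[t])):
            vals = [p[d] for p in past]
            mean = sum(vals) / window
            var = sum((v - mean) ** 2 for v in vals) / window
            std = math.sqrt(var) or 1e-12   # avoid division by zero
            score = max(score, abs(series[t][d] - mean) / std)
        flags.append(score > threshold)
    return flags

# A 2-dimensional series with one spike in the first dimension.
data = [[1.0, 2.0]] * 8 + [[50.0, 2.0]] + [[1.0, 2.0]]
flags = zscore_anomalies(data)
```

Real multidimensional detectors first apply the dimension reduction and pattern representation steps the review describes; this sketch skips both for clarity.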

    Survey of sub-topic detection technology based on internet social media
    LI Shanshan, YANG Wenzhong, WANG Ting, WANG Lihua
    Journal of Computer Applications    2020, 40 (6): 1565-1573.   DOI: 10.11772/j.issn.1001-9081.2019101871

    Against the background of the rise of various internet platforms, data in internet social media spreads fast, has high user participation and achieves complete coverage compared with traditional media. People pay attention to and comment on a wide variety of topics, and the information related to one topic may contain deeper, more fine-grained sub-topics. A survey of sub-topic detection based on internet social media, a newly emerging and developing research field, was presented. Obtaining topic and sub-topic information through social media and participating in discussions is changing people's lives in an all-round way. However, the technologies in this field are not yet mature, and the research is still at an early stage in China. Firstly, the development background and basic concepts of sub-topic detection in internet social media were described. Secondly, the sub-topic detection technologies were divided into seven categories, and each category was introduced, compared and summarized. Thirdly, sub-topic detection methods were divided into online and offline methods; the two were compared, and the general and frequently used technologies of each were listed. Finally, the current shortcomings and future development trends of this field were summarized.

    Overview of content and semantic based 3D model retrieval
    PEI Yandong, GU Kejiang
    Journal of Computer Applications    2020, 40 (7): 1863-1872.   DOI: 10.11772/j.issn.1001-9081.2019112034
    Retrieval of multimedia data is one of the most important issues in information reuse. As a key step of 3D modeling, 3D model retrieval has been deeply studied in recent years due to the widespread use of 3D modeling. Regarding the current progress of 3D model retrieval technology, content-based retrieval technologies were firstly introduced. According to the extracted features, these technologies were divided into four categories: based on statistical data, based on geometric shape, based on topological structure, and based on visual features. The main achievements, advantages and disadvantages of each technology were presented respectively. Then the semantic-based retrieval technologies, which consider semantic information to bridge the "semantic gap", were introduced. They were divided into three categories: relevance feedback, active learning and ontology technology, and the relationships and characteristics of these technologies were introduced. Finally, future research directions of 3D model retrieval were concluded and proposed.
    Review of image edge detection algorithms based on deep learning
    LI Cuijin, QU Zhong
    Journal of Computer Applications    2020, 40 (11): 3280-3288.   DOI: 10.11772/j.issn.1001-9081.2020030314
    Edge detection is the process of extracting the important information of sharp changes in an image. It is a research hotspot in the field of computer vision and the basis of many middle- and high-level vision tasks such as image segmentation, target detection and recognition. In recent years, in view of the problems of thick edge contour lines and low detection accuracy, edge detection algorithms based on deep learning, such as spectral clustering, multi-scale fusion and cross-layer fusion, have been proposed. In order to help more researchers understand the research status of edge detection, firstly, the theory and methods of traditional edge detection were introduced. Then, the main deep learning-based edge detection methods of recent years were summarized and classified according to their implementation technologies. The analysis of the key technologies of these methods shows that multi-scale multi-level fusion and the selection of the loss function are important research directions. Various methods were compared through evaluation indicators: the Optimal Dataset Scale (ODS) of edge detection algorithms on the Berkeley Segmentation Data Set and benchmark 500 (BSDS500) has increased from 0.598 to 0.828, which is close to the level of human vision. Finally, the development directions of edge detection algorithm research were forecasted.
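    Roughly speaking, the ODS indicator cited above is the F-measure obtained at the single confidence threshold that works best over the whole dataset. The sketch below uses toy per-image precision/recall pairs and omits the boundary-matching step of the actual BSDS500 protocol:

```python
def ods_f_measure(pr_curves):
    """`pr_curves` maps each candidate threshold to a list of
    (precision, recall) pairs, one pair per image. ODS keeps the
    one threshold whose dataset-wide F-measure is highest."""
    best = 0.0
    for thr, pairs in pr_curves.items():
        p = sum(pr[0] for pr in pairs) / len(pairs)   # dataset precision
        r = sum(pr[1] for pr in pairs) / len(pairs)   # dataset recall
        f = 2 * p * r / (p + r) if p + r else 0.0
        best = max(best, f)
    return best

# Hypothetical threshold -> per-image (P, R) results for two images.
curves = {
    0.5: [(0.9, 0.6), (0.8, 0.7)],
    0.7: [(0.95, 0.4), (0.9, 0.5)],
}
ods = ods_f_measure(curves)
```

The companion OIS metric instead picks the best threshold per image, which is why it is always at least as high as ODS.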
    Review of privacy protection mechanisms in wireless body area network
    QIN Jing, AN Wen, JI Changqing, WANG Zumin
    Journal of Computer Applications    2021, 41 (4): 970-975.   DOI: 10.11772/j.issn.1001-9081.2020081293
    As a network composed of several wearable or implantable devices as well as their transmission and processing nodes, the Wireless Body Area Network (WBAN) is one of the important application directions of the medical Internet of Things (IoT). Devices in the network collect physiological data from users and send them to remote medical servers by wireless technology; health-care providers then access the servers through the network to provide services to the wearers. However, due to the openness and mobility of the wireless network, if the information in the WBAN is stolen, forged or attacked in the channel, the wearers' privacy will be leaked, and the personal safety of users may even be endangered. The research on privacy protection mechanisms in WBAN was reviewed. On the basis of analyzing the data transmission characteristics of the network, the privacy protection mechanisms based on authentication, encryption and biological signals were summarized, and the advantages and disadvantages of these mechanisms were compared, so as to provide a reference for enhancing prevention awareness and improving prevention technology in WBAN applications.
    Summarization of natural language generation
    LI Xueqing, WANG Shi, WANG Zhujun, ZHU Junwu
    Journal of Computer Applications    2021, 41 (5): 1227-1235.   DOI: 10.11772/j.issn.1001-9081.2020071069
    Natural Language Generation (NLG) technologies use artificial intelligence and linguistic methods to automatically generate understandable natural language texts. NLG reduces the difficulty of communication between humans and computers; it is widely used in machine news writing, chatbots and other fields, and has become one of the research hotspots of artificial intelligence. Firstly, the current mainstream methods and models of NLG were listed, and their advantages and disadvantages were compared in detail. Then, for three NLG technologies, namely text-to-text, data-to-text and image-to-text, the application fields, existing problems and current research progress were summarized and analyzed respectively. Furthermore, the common evaluation methods of the above generation technologies and their scopes of application were described. Finally, the development trends and research difficulties of NLG technologies were given.
    Survey of sentiment analysis based on image and text fusion
    MENG Xiangrui, YANG Wenzhong, WANG Ting
    Journal of Computer Applications    2021, 41 (2): 307-317.   DOI: 10.11772/j.issn.1001-9081.2020060923
    With the continuous improvement of information technology, the amount of sentiment-bearing image-text data on various social platforms is growing rapidly, and sentiment analysis with image and text fusion has attracted wide attention. Single-modality sentiment analysis methods can no longer meet the demands of multi-modal data. Aiming at the technical problems of extracting and fusing image and text sentiment features, firstly, the widely used image-text sentiment analysis datasets were listed, and the extraction methods of text features and image features were introduced. Then, the current fusion modes of image features and text features were focused on, and the problems existing in image-text sentiment analysis were briefly described. Finally, future research directions of sentiment analysis were summarized and prospected. In order to develop a deeper understanding of image-text fusion technology, the literature research method was adopted to review the studies of image-text sentiment analysis, which is helpful for comparing the differences between fusion methods and finding more valuable research schemes.
    Review of pre-trained models for natural language processing tasks
    LIU Ruiheng, YE Xia, YUE Zengying
    Journal of Computer Applications    2021, 41 (5): 1236-1246.   DOI: 10.11772/j.issn.1001-9081.2020081152
    In recent years, deep learning technology has developed rapidly. In Natural Language Processing (NLP) tasks, with text representation technology rising from the word level to the document level, the unsupervised pre-training method using a large-scale corpus has been proved to be able to effectively improve the performance of models in downstream tasks. Firstly, according to the development of text feature extraction technology, typical models were analyzed from word level and document level. Secondly, the research status of the current pre-trained models was analyzed from the two stages of pre-training target task and downstream application, and the characteristics of the representative models were summed up. Finally, the main challenges faced by the development of pre-trained models were summarized and the prospects were proposed.
    Survey on online hashing algorithm
    GUO Yicun, CHEN Huahui
    Journal of Computer Applications    2021, 41 (4): 1106-1112.   DOI: 10.11772/j.issn.1001-9081.2020071047
    In current large-scale data retrieval tasks, learning to hash methods can learn compact binary codes, which saves storage space and allows fast similarity computation in Hamming space. Therefore, for approximate nearest neighbor search, hashing methods are often used to speed up nearest neighbor search. In most current hashing methods, offline learning models are trained in batch mode, so they cannot adapt to the data changes that may appear in large-scale streaming data environments, resulting in reduced retrieval efficiency. Therefore, online hashing methods were proposed, which learn adaptive hash functions, realize continuous learning as data arrive, and make the methods applicable to real-time similarity retrieval. Firstly, the basic principles of learning to hash and the inherent requirements of online hashing were explained. Secondly, the different learning methods of online hashing were introduced from perspectives such as the reading method, learning mode and model update method for streaming data under online conditions. Thirdly, the online learning algorithms were further divided into six categories: those based on passive-aggressive algorithms, matrix factorization, unsupervised clustering, similarity supervision, mutual information measurement and codebook supervision, and the advantages, disadvantages and characteristics of these algorithms were analyzed. Finally, the development directions of online hashing were summarized and discussed.
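    The fast similarity computation in Hamming space mentioned above reduces to an XOR followed by a bit count on the binary codes. A minimal sketch with hypothetical 4-bit codes:

```python
def hamming_distance(code_a, code_b):
    """Number of differing bits between two binary hash codes stored
    as integers; the core comparison in hashing-based approximate
    nearest neighbor search."""
    return bin(code_a ^ code_b).count("1")

def nearest(query, database):
    """Index of the database code closest to the query in Hamming space."""
    return min(range(len(database)),
               key=lambda i: hamming_distance(query, database[i]))

# Toy 4-bit codes; a learned hash function would produce these.
idx = nearest(0b1010, [0b0101, 0b1000, 0b1110])
```

Because XOR and popcount are single machine instructions on modern CPUs, comparing millions of codes per second is practical, which is what makes compact binary codes attractive for retrieval.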
    Review of spatio-temporal trajectory sequence pattern mining methods
    KANG Jun, HUANG Shan, DUAN Zongtao, LI Yixiu
    Journal of Computer Applications    2021, 41 (8): 2379-2385.   DOI: 10.11772/j.issn.1001-9081.2020101571
    With the rapid development of global positioning technology and mobile communication technology, huge amounts of trajectory data have appeared. These data truly reflect the movement patterns and behavior characteristics of moving objects in the spatio-temporal environment, and they contain a wealth of information with important application value for fields such as urban planning, traffic management, service recommendation and location prediction. Applications of spatio-temporal trajectory data in these fields usually need to be realized through sequence pattern mining. Spatio-temporal trajectory sequence pattern mining aims to find frequently occurring sequence patterns in a spatio-temporal trajectory dataset, such as location patterns (frequent trajectories, hot spots), activity periodic patterns and semantic behavior patterns, so as to mine the hidden information in spatio-temporal data. The research progress of spatio-temporal trajectory sequence pattern mining in recent years was summarized. Firstly, the data characteristics and applications of spatio-temporal trajectory sequences were introduced. Then, the mining process of spatio-temporal trajectory patterns was described: the research situation in this field was introduced from the perspectives of mining location patterns, periodic patterns and semantic patterns based on spatio-temporal trajectory sequences. Finally, the problems of current spatio-temporal trajectory sequence pattern mining methods were elaborated, and future development trends were prospected.
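    As a toy illustration of mining frequent location patterns from trajectory sequences, the sketch below counts contiguous location subsequences and keeps those meeting a minimum support. The place names are invented, and real miners handle gaps, time constraints and far larger data:

```python
from collections import Counter

def frequent_patterns(trajectories, length=2, min_support=2):
    """Count contiguous location subsequences of a given length and
    keep those appearing in at least `min_support` trajectories."""
    counts = Counter()
    for traj in trajectories:
        seen = set()                       # count each pattern once per trajectory
        for i in range(len(traj) - length + 1):
            seen.add(tuple(traj[i:i + length]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

trips = [["home", "cafe", "office"],
         ["home", "cafe", "gym"],
         ["park", "cafe", "office"]]
pats = frequent_patterns(trips)
```

Here support is counted per trajectory, the usual convention in sequential pattern mining, so a pattern repeated within one trip still contributes only once.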
    Survey of research progress on crowdsourcing task assignment for evaluation of workers’ ability
    MA Hua, CHEN Yuepeng, TANG Wensheng, LOU Xiaoping, HUANG Zhuoxuan
    Journal of Computer Applications    2021, 41 (8): 2232-2241.   DOI: 10.11772/j.issn.1001-9081.2020101629
    With the rapid development of internet technology and the sharing economy mode, crowdsourcing, as a new crowd computing mode, has been widely applied and has recently become a research focus. Aiming at the characteristics of crowdsourcing applications and to ensure the completion quality of crowdsourcing tasks, existing research has proposed different crowdsourcing task assignment methods from the perspective of evaluating workers' ability. Firstly, the concept and classification of crowdsourcing were introduced, and the workflow and task characteristics of crowdsourcing platforms were analyzed. On this basis, the existing research on the evaluation of workers' ability was summarized. Then, the crowdsourcing task assignment methods and the related challenges were reviewed from three different aspects: matching-based, planning-based and role-based collaboration. Finally, directions for future work were put forward.
    Review of remote sensing image change detection
    REN Qiuru, YANG Wenzhong, WANG Chuanjian, WEI Wenyu, QIAN Yunyun
    Journal of Computer Applications    2021, 41 (8): 2294-2305.   DOI: 10.11772/j.issn.1001-9081.2020101632
    As a key technology of land use/land cover detection, change detection aims to detect the changed parts and their types in remote sensing data of the same region in different periods. In view of the problems of traditional change detection methods, such as heavy manual labor and poor detection results, a large number of change detection methods based on remote sensing images have been proposed. In order to further understand remote sensing image-based change detection technology and support further study of change detection methods, a comprehensive review of change detection was carried out by sorting, analyzing and comparing a large number of change detection researches. Firstly, the development process of change detection was described. Then, the research progress of change detection was summarized in detail from three aspects: data selection and preprocessing, change detection technology, and post-processing and precision evaluation, where the change detection technology was mainly summarized from the perspectives of analysis unit and comparison method. Finally, the problems in each stage of change detection were summarized and future development directions were proposed.
    Review of deep learning-based medical image segmentation
    CAO Yuhong, XU Hai, LIU Sun'ao, WANG Zixiao, LI Hongliang
    Journal of Computer Applications    2021, 41 (8): 2273-2287.   DOI: 10.11772/j.issn.1001-9081.2020101638
    As a fundamental and key task in computer-aided diagnosis, medical image segmentation aims to accurately recognize target regions such as organs, tissues and lesions at the pixel level. Unlike natural images, medical images show high complexity in texture and have boundaries that are difficult to judge because of the ambiguity caused by the heavy noise resulting from the limitations of imaging technology and equipment. Furthermore, annotating medical images highly depends on the expertise and experience of experts, leading to limited available annotations for training and potential annotation errors. Since medical images suffer from ambiguous boundaries, limited annotated data and large annotation errors, it is a great challenge for auxiliary diagnosis systems based on traditional image segmentation algorithms to meet the demands of clinical applications. Recently, with the wide application of Convolutional Neural Networks (CNN) in computer vision and natural language processing, deep learning-based medical image segmentation algorithms have achieved tremendous success. Firstly, the latest research progress of deep learning-based medical image segmentation was summarized, including the basic architectures, loss functions and optimization methods of medical image segmentation algorithms. Then, regarding the limitation of annotated medical image data, the mainstream semi-supervised research on medical image segmentation was summed up and analyzed. Besides, the studies related to measuring the uncertainty of annotation errors were introduced. Finally, the characteristics, as well as the potential future trends, of medical image segmentation were summarized and analyzed.
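    One concrete instance of the loss functions surveyed above, and a common choice for medical segmentation because it tolerates heavy foreground/background imbalance, is the soft Dice loss. The sketch below uses plain lists and invented pixel values rather than any framework tensor:

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened maps: 1 minus the Dice
    coefficient. `pred` holds per-pixel foreground probabilities,
    `target` holds 0/1 ground-truth labels; `eps` avoids 0/0 on
    empty masks."""
    inter = sum(p * t for p, t in zip(pred, target))   # soft intersection
    denom = sum(pred) + sum(target)                    # soft union proxy
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Toy 4-pixel image: one true-positive, one false-positive pixel.
loss = dice_loss([1.0, 1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
```

Unlike per-pixel cross-entropy, the Dice term is normalized by the sizes of the predicted and true regions, so a small lesion contributes as much to the gradient as a large organ.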
    Overview of information extraction of free-text electronic medical records
    CUI Bowen, JIN Tao, WANG Jianmin
    Journal of Computer Applications    2021, 41 (4): 1055-1063.   DOI: 10.11772/j.issn.1001-9081.2020060796
    Information extraction technology can extract the key information in free-text electronic medical records, helping the information management and subsequent information analysis of hospitals. Therefore, the main process of free-text electronic medical record information extraction was briefly introduced, the research results of recent years on single extraction and joint extraction methods for the three most important types of information (named entity, entity assertion and entity relation) were studied, and the methods, datasets and final effects of these results were compared and summarized. In addition, the features, advantages and disadvantages of several popular new methods were analyzed, the datasets commonly used in the field of free-text electronic medical record information extraction were summarized, and the current status and research directions of related fields in China were analyzed.
    Overview of blockchain consensus mechanism for internet of things
    TIAN Zhihong, ZHAO Jindong
    Journal of Computer Applications    2021, 41 (4): 917-929.   DOI: 10.11772/j.issn.1001-9081.2020111722
    With the continuous development of digital currency, blockchain technology has attracted more and more attention, and research on its key technology, the consensus mechanism, is particularly important. The application of blockchain technology in the Internet of Things (IoT) is one of the hot issues. The consensus mechanism is one of the core technologies of blockchain, and it has an important impact on the IoT in terms of degree of decentralization, transaction processing speed, transaction confirmation delay, security and scalability. Firstly, the architecture characteristics of the IoT and the lightweight problem caused by resource limitation were described, the problems faced in implementing blockchain in the IoT were briefly summarized, and the demands on blockchain in the IoT were analyzed in combination with the operation flow of bitcoin. Secondly, the consensus mechanisms were divided into the proof class, the Byzantine class and the Directed Acyclic Graph (DAG) class; the working principles of these classes were studied, their suitability for the IoT was analyzed in terms of communication complexity, their advantages and disadvantages were summarized, and the existing combined architectures of consensus mechanisms and the IoT were investigated and analyzed. Finally, IoT problems such as high operating cost, poor scalability and security risks were deeply studied. The analysis shows that the Internet of Things Application (IOTA) and Byteball consensus mechanisms based on DAG technology have the advantages of fast transaction processing, good scalability and strong security when the number of transactions is large, and they are the development directions of blockchain consensus mechanisms in the IoT field.
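    The proof class of consensus mentioned above can be illustrated with a toy proof-of-work loop; this is illustrative only, not the mechanism of IOTA or Byteball, and real difficulty targets are far higher:

```python
import hashlib

def proof_of_work(data, difficulty):
    """Find a nonce such that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. The cost of this search is what
    makes forging blocks expensive in proof-class consensus."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

# Tiny difficulty so the toy search finishes instantly.
nonce = proof_of_work(b"block", 2)
```

The review's lightweight concern for the IoT is visible even here: the search cost grows exponentially with difficulty, which is why resource-constrained devices favor DAG-class alternatives.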
    Review of event causality extraction based on deep learning
    WANG Zhujun, WANG Shi, LI Xueqing, ZHU Junwu
    Journal of Computer Applications    2021, 41 (5): 1247-1255.   DOI: 10.11772/j.issn.1001-9081.2020071080
    Causality extraction is a kind of relation extraction task in Natural Language Processing (NLP); it mines event pairs with causality from text by constructing event graphs, and plays an important role in applications in finance, security, biology and other fields. Firstly, concepts such as event extraction and causality were introduced, and the evolution of the mainstream methods and the common datasets of causality extraction were described. Then, the current mainstream causality extraction models were listed. Based on a detailed analysis of pipeline-based models and joint extraction models, the advantages and disadvantages of the various methods and models were compared. Furthermore, the experimental performance and related experimental data of the models were summarized and analyzed. Finally, the research difficulties and future key research directions of causality extraction were given.
    Reference | Related Articles | Metrics
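    To make the pipeline idea concrete, the following is a minimal pattern-based sketch: the regex cue phrases are hypothetical stand-ins for the trained event extractor and relation classifier that the surveyed deep learning models actually use.

```python
import re

# Hypothetical cue patterns; pipeline models surveyed in the paper replace
# these with a learned event extractor followed by a relation classifier.
CAUSAL_PATTERNS = [
    re.compile(r"(?P<cause>.+?) (?:causes|caused|leads to|led to) (?P<effect>.+)"),
    re.compile(r"(?P<effect>.+?) (?:because of|due to) (?P<cause>.+)"),
]

def extract_causality(sentence: str):
    """Return (cause, effect) pairs found by the cue patterns."""
    pairs = []
    for pattern in CAUSAL_PATTERNS:
        m = pattern.match(sentence)
        if m:
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs
```

    Joint extraction models, by contrast, predict events and their causal links in one pass, avoiding the error propagation between the two stages sketched here.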
    Knowledge graph survey: representation, construction, reasoning and knowledge hypergraph theory
    TIAN Ling, ZHANG Jinchuan, ZHANG Jinhao, ZHOU Wangtao, ZHOU Xue
    Journal of Computer Applications    2021, 41 (8): 2161-2186.   DOI: 10.11772/j.issn.1001-9081.2021040662
    Abstract1315)      PDF (2811KB)(2475)       Save
    Knowledge Graphs (KGs) strongly support the research of knowledge-driven artificial intelligence. In view of this, the existing technologies of knowledge graphs and knowledge hypergraphs were analyzed and summarized. Firstly, starting from the definition and development history of the knowledge graph, the classification and architecture of knowledge graphs were introduced. Secondly, the existing knowledge representation and storage methods were explained. Then, based on the construction process of knowledge graphs, several knowledge graph construction techniques were analyzed. Specifically, for knowledge reasoning, an important part of the knowledge graph, three typical knowledge reasoning approaches were analyzed: logic rule-based, embedding representation-based, and neural network-based. Furthermore, the research progress of knowledge hypergraphs was introduced along with heterogeneous hypergraphs. To effectively represent and extract hyper-relational characteristics and to realize the modeling of hyper-relational data as well as fast knowledge reasoning, a three-layer architecture of the knowledge hypergraph was proposed. Finally, the typical application scenarios of knowledge graphs and knowledge hypergraphs were summed up, and future research was prospected.
    Reference | Related Articles | Metrics
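    Of the three reasoning approaches named above, the embedding-based one can be illustrated with the TransE scoring idea: a triple (h, r, t) is plausible when the head embedding translated by the relation embedding lands near the tail embedding. The embeddings below are hand-built toys (a trained model would learn them from triples by margin ranking), so this is a sketch of the scoring function only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Toy embeddings for illustration; TransE learns these from known triples.
entities = {name: rng.normal(size=dim)
            for name in ["Paris", "France", "Tokyo", "Japan"]}
relations = {"capital_of": rng.normal(size=dim)}
# Construct one triple that holds exactly: France = Paris + capital_of.
entities["France"] = entities["Paris"] + relations["capital_of"]

def transe_score(head: str, relation: str, tail: str) -> float:
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(
        entities[head] + relations[relation] - entities[tail]))
```

    Reasoning then amounts to ranking candidate tails by this score, e.g. preferring (Paris, capital_of, France) over (Paris, capital_of, Japan).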
    Research advances in disentangled representation learning
    Keyang CHENG, Chunyun MENG, Wenshan WANG, Wenxi SHI, Yongzhao ZHAN
    Journal of Computer Applications    2021, 41 (12): 3409-3418.   DOI: 10.11772/j.issn.1001-9081.2021060895
    Abstract500)   HTML50)    PDF (877KB)(263)       Save

    The purpose of disentangled representation learning is to model the key factors that affect the form of data, so that the change of one key factor only causes the change of data on a certain feature, while the other features are not affected. This helps machine learning meet challenges in model interpretability, object generation and manipulation, zero-shot learning and other issues. Therefore, disentangled representation learning has always been a research hotspot in the field of machine learning. Starting from the history and motivations of disentangled representation learning, the research status and applications of disentangled representation learning were summarized, and the invariance, reusability and other characteristics of disentangled representation learning were analyzed. The research on the factors of variation via generative entangling, the research on the factors of variation with manifold interaction, and the research on the factors of variation using adversarial training were introduced, as well as the latest research trends such as β-VAE, a variant of the Variational Auto-Encoder (VAE). At the same time, the typical applications of disentangled representation learning were shown, and future research directions were prospected.

    Table and Figures | Reference | Related Articles | Metrics
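    The β-VAE trend mentioned above boils down to a reweighted VAE objective: reconstruction error plus a KL term scaled by β > 1, which pressures the approximate posterior toward an isotropic Gaussian prior and thereby encourages disentangled latent factors. A NumPy sketch of the loss (variable names and the squared-error reconstruction term are illustrative choices):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """β-VAE objective: reconstruction error + β * KL(q(z|x) || N(0, I)).

    With beta == 1 this is the standard VAE evidence lower bound (negated);
    beta > 1 trades reconstruction quality for disentanglement.
    """
    recon = np.sum((x - x_recon) ** 2)  # squared-error reconstruction term
    # Closed-form KL between a diagonal Gaussian q and the standard normal prior.
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl

# Perfect reconstruction and a posterior equal to the prior give zero loss.
x = np.ones(4)
loss = beta_vae_loss(x, x, mu=np.zeros(2), logvar=np.zeros(2))
```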
    Survey of communication overhead of federated learning
    Xinyuan QIU, Zecong YE, Xiaolong CUI, Zhiqiang GAO
    Journal of Computer Applications    2022, 42 (2): 333-342.   DOI: 10.11772/j.issn.1001-9081.2021020232
    Abstract978)   HTML165)    PDF (1356KB)(1627)       Save

    To resolve the contradiction between data sharing demands and privacy protection requirements, federated learning was proposed. As a distributed machine learning paradigm, federated learning requires a large number of model parameters to be exchanged between the participants and the central server, resulting in high communication overhead. At the same time, federated learning is increasingly deployed on mobile devices with limited communication bandwidth and limited power, and the limited network bandwidth and the sharply increasing number of clients will worsen the communication bottleneck. To address the communication bottleneck of federated learning, the basic workflow of federated learning was analyzed first; then, from the perspective of methodology, three mainstream types of methods, based respectively on reducing the frequency of model updates, model compression and client selection, as well as special methods such as model partition, were introduced, and a deep comparative analysis of the specific optimization schemes was carried out. Finally, the development trends of research on federated learning communication overhead were summarized and prospected.

    Table and Figures | Reference | Related Articles | Metrics
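    The model-compression family surveyed above can be illustrated with top-k sparsification of a client update: instead of uploading the full dense parameter delta, the client sends only the k largest-magnitude entries as (index, value) pairs. A minimal sketch (real systems typically add error feedback and quantization on top):

```python
import numpy as np

def top_k_sparsify(update: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a model update.

    Returns the kept indices, their values, and the dense sparsified
    vector; uploading (indices, values) instead of `update` cuts the
    client-to-server traffic roughly to k / update.size of the original.
    """
    idx = np.argsort(np.abs(update))[-k:]   # positions of the k largest magnitudes
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return idx, update[idx], sparse

update = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
idx, vals, sparse = top_k_sparsify(update, 2)
```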