With the continuous development of information technology, the scale of time series data has grown exponentially, which brings both opportunities and challenges to the development of time series anomaly detection algorithms and has gradually made this field a new research hotspot in data analysis. However, research in this area is still at an early stage and the existing work is not systematic. Therefore, by sorting out and analyzing the domestic and foreign literature, the research content of multidimensional time series anomaly detection was divided, in logical order, into three aspects: dimension reduction, time series pattern representation and anomaly pattern detection, and the mainstream algorithms were summarized to comprehensively present the current research status and characteristics of anomaly detection. On this basis, the research difficulties and development trends of multidimensional time series anomaly detection algorithms were summarized in order to provide a useful reference for related theoretical and applied research.
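As a minimal illustration of the three aspects above (not an algorithm from the surveyed literature), the following Python sketch reduces a multidimensional series with PCA, represents it as fixed-length sliding-window patterns, and flags windows that are unusually far from their nearest neighbour; the window length, number of components and threshold quantile are illustrative assumptions.

# Illustrative pipeline: dimension reduction -> pattern representation -> anomaly pattern detection.
import numpy as np
from sklearn.decomposition import PCA

def detect_anomalous_windows(series, window=20, n_components=2, quantile=0.99):
    """series: array of shape (T, D); returns indices of anomalous windows."""
    # 1. Dimension reduction: project the D-dimensional series onto principal components.
    reduced = PCA(n_components=n_components).fit_transform(series)
    # 2. Pattern representation: cut the reduced series into fixed-length sliding windows.
    patterns = np.array([reduced[i:i + window].ravel()
                         for i in range(len(reduced) - window + 1)])
    # 3. Anomaly pattern detection: score each window by its distance to the nearest other window
    #    (the O(n^2) pairwise distances are acceptable for a short illustrative series).
    dists = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    scores = dists.min(axis=1)
    return np.where(scores > np.quantile(scores, quantile))[0]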
Against the background of the rise of various internet platforms, data in internet social media has the characteristics of fast transmission, high user participation and wide coverage compared with traditional media. People pay attention to and publish comments on a wide variety of topics, and the information related to one topic may contain deeper and more fine-grained sub-topics. Therefore, a survey of sub-topic detection based on internet social media, a newly emerging and developing research field, was presented. The practice of obtaining topic and sub-topic information through social media and participating in the discussion is changing people's lives in an all-round way; however, the technologies in this field are not yet mature, and the research in China is still at an early stage. Firstly, the development background and basic concepts of sub-topic detection in internet social media were described. Secondly, the sub-topic detection technologies were divided into seven categories, and each category was introduced, compared and summarized. Thirdly, the sub-topic detection methods were divided into online and offline methods, the two kinds of methods were compared, and the general technologies as well as the frequently used technologies of each were listed. Finally, the current shortcomings and future development trends of this field were summarized.
The purpose of disentangled representation learning is to model the key factors that affect the form of data, so that a change in one key factor only changes the data in a certain feature while the other features are unaffected. It helps machine learning to meet challenges in model interpretability, object generation and manipulation, zero-shot learning and other problems. Therefore, disentangled representation learning has always been a research hotspot in the field of machine learning. Starting from the history and motivation of disentangled representation learning, the research status and applications of disentangled representation learning were summarized, its characteristics such as invariance and reusability were analyzed, and the research on learning the factors of variation via generative entangling, via manifold interaction and via adversarial training was introduced, as well as the latest research trends such as β-VAE, a variant of the Variational Auto-Encoder (VAE). At the same time, typical applications of disentangled representation learning were presented, and future research directions were prospected.
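For reference, β-VAE modifies the standard VAE objective by weighting the KL term between the approximate posterior and the prior with a factor β > 1, which encourages a more factorized, disentangled latent representation. A minimal numerical sketch of this objective follows; the Gaussian-decoder squared-error reconstruction term and β = 4 are illustrative assumptions.

# Illustrative beta-VAE objective: reconstruction term plus a KL term scaled by beta (beta > 1
# strengthens the pressure towards a factorized, disentangled latent posterior).
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """x, x_recon: (batch, dim); mu, logvar: (batch, latent_dim) parameters of q(z|x)."""
    # Reconstruction error (Gaussian decoder assumption -> squared error).
    recon = np.sum((x - x_recon) ** 2, axis=1)
    # Closed-form KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I).
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)
    return np.mean(recon + beta * kl)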
To resolve the contradiction between the demand for data sharing and the requirements of privacy protection, federated learning was proposed. As a form of distributed machine learning, federated learning requires a large number of model parameters to be exchanged between the participants and the central server, resulting in high communication overhead. At the same time, federated learning is increasingly deployed on mobile devices with limited communication bandwidth and limited power, and the limited network bandwidth together with the sharply increasing number of clients makes the communication bottleneck worse. To address this communication bottleneck, the basic workflow of federated learning was analyzed first; then, from a methodological perspective, three mainstream types of methods, based respectively on reducing the frequency of model updates, model compression and client selection, as well as special methods such as model partition, were introduced, and a deep comparative analysis of the specific optimization schemes was carried out. Finally, the development trends of research on communication overhead in federated learning were summarized and prospected.
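As a hedged illustration of the model-compression family mentioned above (one common idea, not a specific scheme from the survey), the following sketch transmits only the k largest-magnitude entries of a client's model update together with their indices, instead of the full parameter vector.

# Illustrative update compression by top-k sparsification.
import numpy as np

def compress_update(update, k):
    """update: flat parameter-update vector; returns (indices, values) of its k largest entries."""
    idx = np.argsort(np.abs(update))[-k:]          # positions of the k largest-magnitude components
    return idx, update[idx]

def decompress_update(idx, values, dim):
    """Server side: rebuild a sparse update vector of length dim from the transmitted pairs."""
    full = np.zeros(dim)
    full[idx] = values
    return full

With k much smaller than the model dimension, the per-round upload drops roughly from dim floats to 2k numbers at the cost of an approximation error, which is the kind of accuracy/communication trade-off the compression-based methods analyze.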
With the widespread use of mobile devices and emerging mobile applications, the exponential growth of traffic in mobile networks has caused problems such as network congestion, large delay and poor user experience, which cannot satisfy the needs of mobile users. Edge caching technology can greatly relieve the transmission pressure on wireless networks by reusing the popular contents in the network, while reducing the network delay of user requests and thus improving users' network experience, so it has become one of the key technologies of 5G/Beyond 5G Mobile Edge Computing (MEC). Focusing on mobile edge caching technology, firstly, the application scenarios, main characteristics, execution process and evaluation indicators of mobile edge caching were introduced. Secondly, the edge caching strategies taking energy efficiency, delay, hit ratio and revenue maximization as optimization goals were analyzed and compared, and their key research points were summarized. Thirdly, the deployment of MEC servers supporting 5G was described, and on this basis, the green mobility-aware caching strategy in 5G networks and the caching strategy in 5G heterogeneous cellular networks were analyzed. Finally, the research challenges and future development directions of edge caching strategies were discussed from the aspects of security, mobility-aware caching, edge caching based on reinforcement learning and federated learning, and edge caching for Beyond 5G/6G networks.
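To make the hit-ratio-oriented strategies above concrete, here is a minimal sketch of a popularity-based (LFU-style) edge cache that tracks its own hit ratio; the eviction rule and capacity handling are simplifying assumptions, not a strategy taken from the surveyed works.

# Illustrative hit-ratio-oriented edge cache: keep the most frequently requested contents at the
# edge node and evict the least popular one when the cache is full.
from collections import Counter

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = set()
        self.popularity = Counter()
        self.hits = self.requests = 0

    def request(self, content_id):
        self.requests += 1
        self.popularity[content_id] += 1
        if content_id in self.cache:
            self.hits += 1               # served locally, no backhaul transmission needed
            return
        if len(self.cache) >= self.capacity:
            # evict the cached content with the lowest observed popularity
            victim = min(self.cache, key=lambda c: self.popularity[c])
            self.cache.remove(victim)
        self.cache.add(content_id)

    def hit_ratio(self):
        return self.hits / self.requests if self.requests else 0.0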
Event extraction aims to extract the events that users are interested in from unstructured information and then present them to users in a structured form. It has a wide range of applications in information collection, information retrieval, document synthesis, and question answering. From an overall perspective, event extraction algorithms can be divided into four categories: pattern matching algorithms, trigger-word-based methods, ontology-based algorithms, and cutting-edge joint model methods. In the research process, different evaluation methods and datasets can be used according to the related needs, and different event representation methods are also relevant to event extraction research. Distinguished by task type, meta-event extraction and subject event extraction are the two basic tasks of event extraction. Among them, meta-event extraction has three kinds of methods, based on pattern matching, machine learning and neural networks respectively, while subject event extraction can be performed in two ways: based on the event framework or based on ontology. Event extraction research has achieved excellent results in single languages such as Chinese and English, but cross-lingual event extraction still faces many problems. Finally, the related work on event extraction was summarized and future research directions were prospected in order to provide guidance for subsequent research.
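As a minimal illustration of the pattern matching / trigger-word family of meta-event extraction methods, the following sketch matches sentences against a hand-built trigger dictionary; the trigger words, event types and example sentence are made-up assumptions, not drawn from any benchmark dataset.

# Illustrative trigger-word based meta-event extraction: a dictionary maps trigger words to event
# types, and any sentence containing a trigger is reported as a candidate event mention.
TRIGGERS = {"acquired": "Merger-Acquisition", "resigned": "Personnel-End", "injured": "Life-Injure"}

def extract_meta_events(sentences):
    events = []
    for sent in sentences:
        for trigger, event_type in TRIGGERS.items():
            if trigger in sent.lower():
                events.append({"type": event_type, "trigger": trigger, "sentence": sent})
    return events

print(extract_meta_events(["Company A acquired Company B yesterday."]))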
The unique advantages of Named Data Networking (NDN) make it a candidate for the next-generation internet architecture. By analyzing the communication principle of NDN and comparing it with the traditional Transmission Control Protocol/Internet Protocol (TCP/IP) architecture, the advantages of the new architecture were described. On this basis, the key elements of the design of this network architecture were summarized and analyzed. In addition, to help researchers better understand this new network architecture, the successful applications of NDN after years of development were summarized. Following mainstream technology trends, the support of NDN for cutting-edge blockchain technology was focused on, and based on this support, the research and development of applications combining NDN and blockchain technology were discussed and prospected.
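To illustrate the name-based communication principle that distinguishes NDN from host-addressed TCP/IP, the following is a minimal sketch of a single router satisfying Interest packets from its Content Store when possible; the producer table, content names and one-router topology are toy assumptions.

# Illustrative NDN-style exchange: consumers send Interest packets carrying content names, and a
# router answers from its in-network Content Store when possible, so data is fetched by name
# rather than by host address.
class NdnRouter:
    def __init__(self, producer_table):
        self.content_store = {}              # in-network cache: name -> data
        self.producer_table = producer_table

    def on_interest(self, name):
        if name in self.content_store:       # cache hit: satisfy the Interest locally
            return self.content_store[name]
        data = self.producer_table[name]()   # otherwise forward towards the producer
        self.content_store[name] = data      # cache the returned Data packet for later Interests
        return data

router = NdnRouter({"/video/clip1": lambda: b"clip1-bytes"})
print(router.on_interest("/video/clip1"))    # fetched from the producer, then cached
print(router.on_interest("/video/clip1"))    # satisfied from the Content Store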
Imbalanced data classification is an important research topic in machine learning, but most of the existing imbalanced data classification algorithms focus on binary classification, and there are relatively few studies on imbalanced multi-class classification. However, datasets in practical applications usually have multiple classes and imbalanced data distributions, and the diversity of classes further increases the difficulty of imbalanced data classification, so the multi-class classification problem has become an urgent research topic. The imbalanced multi-class classification algorithms proposed in recent years were reviewed. According to whether a decomposition strategy is adopted, imbalanced multi-class classification algorithms were divided into decomposition methods and ad hoc methods. Furthermore, according to the different decomposition strategies adopted, the decomposition methods were divided into two frameworks: One Vs. One (OVO) and One Vs. All (OVA). And according to the different technologies used, the ad hoc methods were divided into data-level methods, algorithm-level methods, cost-sensitive methods, ensemble methods and deep network-based methods. The advantages and disadvantages of these methods and their representative algorithms were systematically described, the evaluation indicators of imbalanced multi-class classification methods were summarized, the performance of the representative methods was deeply analyzed through experiments, and the future development directions of imbalanced multi-class classification were discussed.
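As a minimal sketch of the OVA decomposition framework combined with one simple imbalance countermeasure (class weighting, used here for illustration rather than as any specific algorithm from the survey), the following trains one balanced binary classifier per class and predicts the class with the highest score.

# Illustrative One Vs. All (OVA) decomposition for imbalanced multi-class data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ova(X, y):
    models = {}
    for cls in np.unique(y):
        y_bin = (y == cls).astype(int)
        # class_weight='balanced' reweights the minority "one" class in each binary subproblem
        models[cls] = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y_bin)
    return models

def predict_ova(models, X):
    scores = np.column_stack([m.predict_proba(X)[:, 1] for m in models.values()])
    classes = np.array(list(models.keys()))
    return classes[np.argmax(scores, axis=1)]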
With the widespread application of deep learning, human beings increasingly rely on a large number of complex systems that adopt deep learning techniques. However, the black-box property of deep learning models poses challenges to the use of these models in mission-critical applications and raises ethical and legal concerns. Therefore, making deep learning models interpretable is the first problem to be solved to make them trustworthy. As a result, research in the field of interpretable artificial intelligence has emerged, focusing mainly on explaining model decisions or behaviors explicitly to human observers. A review of the interpretability of deep learning was carried out to build a good foundation for further in-depth research and for the establishment of more efficient and interpretable deep learning models. Firstly, the interpretability of deep learning was outlined, and the requirements and definitions of interpretability research were clarified. Then, several typical models and algorithms of interpretability research were introduced from the three aspects of explaining the logic rules, the decision attribution and the internal structure representation of deep learning models. In addition, three common methods for constructing intrinsically interpretable models were pointed out. Finally, the four evaluation indicators of fidelity, accuracy, robustness and comprehensibility were briefly introduced, and the possible future development directions of deep learning interpretability were discussed.
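As a hedged illustration of the decision-attribution perspective, the following sketch estimates per-feature contributions of a black-box model by perturbing each input feature and measuring the change in the output; the perturbation size and the scalar-output assumption are illustrative choices, not a method singled out by the review.

# Illustrative perturbation-based attribution: how much does each feature move the model's score?
import numpy as np

def attribution_scores(model, x, epsilon=1e-3):
    """model: callable mapping a 1-D float feature vector to a scalar score; x: input to explain."""
    base = model(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += epsilon
        scores[i] = (model(perturbed) - base) / epsilon   # finite-difference sensitivity
    return scores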
In recent years, deep learning has been widely used in many fields. However, due to the highly nonlinear operations of deep neural network models, the interpretability of these models is poor; they are often referred to as "black box" models and cannot be applied to some key fields with high performance requirements. Therefore, it is very necessary to study the interpretability of deep learning. Firstly, deep learning was introduced briefly. Then, centering on the interpretability of deep learning, the existing research work was analyzed from eight aspects, including hidden layer visualization, Class Activation Mapping (CAM), sensitivity analysis, the frequency principle, robustness perturbation testing, information theory, interpretable modules and optimization methods. At the same time, the applications of deep learning in the fields of network security, recommender systems, medicine and social networks were demonstrated. Finally, the existing problems and future development directions of deep learning interpretability research were discussed.
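For reference, Class Activation Mapping locates the image regions a convolutional classifier relies on by weighting the last convolutional feature maps with the fully connected weights of the target class (Zhou et al.). A minimal sketch follows, where feature_maps and fc_weights stand in for values taken from a trained network.

# Illustrative Class Activation Mapping (CAM): class-specific map as a weighted sum of the last
# convolutional feature maps, using the global-average-pooling classifier weights.
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) activations of the last conv layer for one image;
    fc_weights: (num_classes, C) weights of the global-average-pooling classifier."""
    w = fc_weights[class_idx]                               # (C,) weights for the class of interest
    cam = np.tensordot(w, feature_maps, axes=([0], [0]))    # (H, W) weighted sum over channels
    cam = np.maximum(cam, 0)                                # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam        # normalize to [0, 1] for visualization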