Autonomous Underwater Vehicle (AUV) task planning is a key technology that determines the level of cluster intelligence. Existing task planning models consider only homogeneous AUV clusters and single-dive task planning. Therefore, a multi-dive task planning model for heterogeneous AUV clusters was proposed. Firstly, the model accounted for the energy constraints of each AUV, the engineering cost of AUVs making multiple round trips to the mother ship for recharging, the efficiency differences between individuals in a heterogeneous cluster, and the diversity of tasks. Then, to solve the model efficiently, an optimization algorithm based on discrete particle swarm optimization was proposed. The algorithm introduced matrix coding to describe particle velocity and position, and a task-loss model to evaluate particle quality, improving the particle update process and achieving efficient target optimization. Simulation experiments show that the algorithm not only solves the multi-dive task planning problem of heterogeneous AUV clusters, but also reduces task loss by 11% compared with a task planning model using a genetic algorithm.
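The abstract leaves the matrix coding and update rule unspecified; below is a minimal Python sketch of a matrix-coded discrete particle swarm for task assignment. The binary assignment matrix, the task-loss function, and the update probabilities are illustrative assumptions, not the paper's actual model.

```python
import random

def random_assignment(n_auvs, n_tasks, rng):
    """Matrix coding: pos[i][j] = 1 iff task j is assigned to AUV i."""
    pos = [[0] * n_tasks for _ in range(n_auvs)]
    for j in range(n_tasks):
        pos[rng.randrange(n_auvs)][j] = 1
    return pos

def task_loss(pos, efficiency, task_cost):
    """Hypothetical loss: each AUV's total task cost scaled by the inverse
    of its efficiency (heterogeneous individuals differ in efficiency)."""
    return sum(
        sum(c * row[j] for j, c in enumerate(task_cost)) / efficiency[i]
        for i, row in enumerate(pos)
    )

def discrete_pso(n_auvs, n_tasks, efficiency, task_cost,
                 n_particles=20, iters=100, seed=0):
    """Discrete PSO: the 'velocity' step reassigns each task column either
    toward the global best (exploitation) or at random (exploration)."""
    rng = random.Random(seed)
    best_pos, best_loss = None, float("inf")
    swarm = [random_assignment(n_auvs, n_tasks, rng) for _ in range(n_particles)]
    for _ in range(iters):
        for k, pos in enumerate(swarm):
            loss = task_loss(pos, efficiency, task_cost)
            if loss < best_loss:
                best_pos, best_loss = [r[:] for r in pos], loss
            new = [r[:] for r in pos]
            for j in range(n_tasks):
                if rng.random() < 0.3:        # pull column toward global best
                    for i in range(n_auvs):
                        new[i][j] = best_pos[i][j]
                elif rng.random() < 0.1:      # random re-assignment
                    for i in range(n_auvs):
                        new[i][j] = 0
                    new[rng.randrange(n_auvs)][j] = 1
            swarm[k] = new
    return best_pos, best_loss
```

Each column of the position matrix always sums to one, so every task stays assigned to exactly one AUV throughout the update.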
To address the problem that limited antenna resources in heterogeneous networks make full Interference Alignment (IA) unrealizable, a partial IA scheme that maximizes the utilization of antenna resources was proposed based on the characteristics of heterogeneous networks. Firstly, a system model based on partial connectivity in a heterogeneous network was built, and the feasibility conditions for the entire system to achieve IA were analyzed. Then, based on the heterogeneity of the network (differences in transmitted power and user stability), users were assigned different priorities and allocated antenna resources accordingly. Finally, with the goal of maximizing the total system rate and the utilization of antenna resources, a partial IA scheme was proposed in which high-priority users achieved full alignment and low-priority users had only their strongest interference removed. In Matlab simulations with limited antenna resources, the proposed scheme increased the total system rate by 10% compared with the traditional IA algorithm, and the received rate of high-priority users was 40% higher than that of low-priority users. The experimental results show that the proposed algorithm makes full use of the limited antenna resources and achieves the maximum total system rate while satisfying the different requirements of users.
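As a rough illustration of the priority idea (not the paper's actual scheme), the sketch below splits users by transmitted power and allocates antennas greedily: high-priority users get enough antennas for full alignment, low-priority users get only enough to null their strongest interferers. The threshold and the counts `full_need`/`min_need` are invented for illustration.

```python
def assign_priorities(tx_power, threshold):
    """Split user indices into high/low priority by transmitted power
    (assumed proxy for the paper's heterogeneity criterion)."""
    high = [u for u, p in enumerate(tx_power) if p >= threshold]
    low = [u for u, p in enumerate(tx_power) if p < threshold]
    return high, low

def allocate_antennas(high, low, total, full_need=4, min_need=2):
    """Greedy allocation: high-priority users first (full alignment assumed
    to need full_need antennas each), then low-priority users share what
    remains (min_need antennas cancel only the strongest interferers)."""
    alloc, remaining = {}, total
    for u in high:
        alloc[u] = min(full_need, remaining)
        remaining -= alloc[u]
    for u in low:
        alloc[u] = min(min_need, remaining)
        remaining -= alloc[u]
    return alloc
```

With a tight antenna budget, the greedy order makes the shortfall fall entirely on low-priority users, mirroring the scheme's intent.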
To address the problems that data stream outliers cannot be handled well, data stream clustering is inefficient, and dynamic changes of a data stream cannot be detected in real time, an evolutionary data stream clustering algorithm integrating affinity propagation and density (I-APDenStream) was proposed. The algorithm adopted the traditional two-stage processing model, namely online and offline clustering. It introduced both a decaying micro-cluster density that represents the dynamic changes of the data stream, with a deletion mechanism for online dynamic maintenance of micro-clusters, and an outlier detection and simplification mechanism for model reconstruction using extended Weighted Affinity Propagation (WAP) clustering. Experimental results on two types of data sets demonstrate that the clustering accuracy of the proposed algorithm remains above 95%, and it also achieves considerable improvements in purity compared with other algorithms. The proposed algorithm can cluster data streams in real time with high quality and high efficiency.
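A minimal sketch of the online side, assuming an exponential decay law and a simple weight threshold for the deletion mechanism (the paper's exact decay density and thresholds are not given in the abstract):

```python
class MicroCluster:
    """Micro-cluster whose weight decays exponentially with time, so stale
    clusters fade and the dynamic change of the stream is reflected."""
    def __init__(self, point, t, lam=0.01):
        self.lam = lam          # decay rate; assumed form 2 ** (-lam * dt)
        self.weight = 1.0
        self.center = list(point)
        self.last_t = t

    def decay(self, t):
        self.weight *= 2 ** (-self.lam * (t - self.last_t))
        self.last_t = t

    def absorb(self, point, t):
        """Fold a new point into the cluster, weighting by decayed mass."""
        self.decay(t)
        w = self.weight
        self.center = [(c * w + x) / (w + 1) for c, x in zip(self.center, point)]
        self.weight = w + 1

def prune(clusters, t, min_weight=0.5):
    """Deletion mechanism: drop micro-clusters whose decayed weight has
    fallen below the threshold (treated as expired outliers)."""
    for mc in clusters:
        mc.decay(t)
    return [mc for mc in clusters if mc.weight >= min_weight]
```

A cluster that keeps absorbing points keeps its weight up; one that stops receiving points is eventually pruned, which is what lets the model track concept drift online.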
Concerning that existing visibility estimation methods based on region growing have the shortcomings of low precision and high computational complexity, a new algorithm was proposed to measure visibility based on the Inflection Point Line (IPL). Firstly, three characteristics of the inflection point line (anisotropy, continuity, and level) were analyzed. Secondly, a new 2-D filter based on these three characteristics was proposed to detect the IPL, improving the accuracy and speed of inflection point detection. Finally, visibility in foggy weather was calculated by combining the visibility model with the detection results of the proposed filter. Compared with the visibility estimation algorithm based on region growing, the proposed algorithm reduced the time cost by 80% and the detection error by 12.2%. The experimental results demonstrate that the proposed algorithm effectively improves detection accuracy while reducing the computational complexity of locating inflection points.
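The abstract does not spell out the 2-D filter; as a toy stand-in, the function below marks, in each image column, the row where the vertical second difference of luminance peaks. This exploits anisotropy (the inflection line is roughly horizontal, so per-column vertical analysis suffices); continuity across neighboring columns is what the real filter would additionally enforce.

```python
def detect_inflection_rows(img):
    """img: 2-D list of luminance values, rows ordered top to bottom.
    For each column, return the row index with the largest-magnitude
    vertical second difference (a crude inflection point detector)."""
    rows = []
    for j in range(len(img[0])):
        col = [img[i][j] for i in range(len(img))]
        d2 = [col[i - 1] - 2 * col[i] + col[i + 1]
              for i in range(1, len(col) - 1)]
        rows.append(1 + max(range(len(d2)), key=lambda i: abs(d2[i])))
    return rows
```

The detected row positions would then feed the visibility model (e.g. a Koschmieder-type relation between the inflection height and the extinction coefficient), which the abstract references but does not detail.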
The emergence of RAMCloud has improved the user experience of OnLine Data-Intensive (OLDI) applications; however, its energy consumption is higher than that of traditional cloud data centers. An energy-efficient strategy for disks under this architecture was put forward to solve this problem. Firstly, the fitness function and roulette wheel selection of a genetic algorithm were introduced to choose energy-saving disks for persistent data backup; secondly, a reasonable buffer size was used to extend the average continuous idle time of disks, so that some of them could be put into standby during their idle periods. Simulation results show that the proposed strategy saves about 12.69% of energy in a given RAMCloud system with 50 servers. The buffer size affects both the energy-saving effect and data availability, so the two must be weighed against each other.
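The disk-selection step can be sketched with standard roulette wheel selection; the fitness values here are assumed to come from some per-disk energy model, which the abstract does not detail.

```python
import random

def roulette_select(disks, fitness, k, seed=None):
    """Pick k distinct disks for persistent backup, each draw with
    probability proportional to its (hypothetical) energy-saving fitness."""
    rng = random.Random(seed)
    pool = list(zip(disks, fitness))
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(f for _, f in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for idx, (d, f) in enumerate(pool):
            acc += f
            if r <= acc:
                chosen.append(d)
                pool.pop(idx)   # sample without replacement
                break
    return chosen
```

Selection pressure stays proportional to fitness while remaining stochastic, so low-fitness disks are occasionally chosen too, which avoids always concentrating backup load on the same few disks.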
Aiming at the problems that the traditional Support Vector Machine (SVM) classifier is sensitive to outliers, produces a large number of Support Vectors (SVs), and yields a non-sparse separating hyperplane parameter, the Truncated hinge loss SVM with Smoothly Clipped Absolute Deviation (SCAD) penalty (SCAD-TSVM) was put forward and used to construct a financial early-warning model. An iterative updating algorithm was also proposed to solve the SCAD-TSVM model. Experiments were carried out on the financial data of A-share manufacturing companies listed on the Shanghai and Shenzhen stock markets. Compared with the T-2 and T-3 models constructed by SVM with L1-norm penalty (L1-SVM), SVM with SCAD penalty (SCAD-SVM), and Truncated hinge loss SVM (TSVM), the T-2 and T-3 models constructed by SCAD-TSVM were the sparsest and the most accurate, and their average prediction accuracies with different numbers of training samples were higher than those of the L1-SVM, SCAD-SVM, and TSVM algorithms.
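For reference, the two building blocks named in the abstract have standard closed forms; the sketch below implements the truncated hinge loss (Wu-Liu form, truncation point s) and the SCAD penalty (Fan-Li form). The parameter defaults are conventional, not the paper's tuned values.

```python
def truncated_hinge(u, s=-1.0):
    """Truncated hinge loss T_s(u) = max(0, 1-u) - max(0, s-u): for u < s the
    loss is capped at 1 - s, so badly misclassified outliers stop dominating
    the fit (this is the source of outlier robustness)."""
    return max(0.0, 1.0 - u) - max(0.0, s - u)

def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty of one coefficient: L1-like near zero (induces sparsity),
    quadratic transition, then constant, so large coefficients are not
    over-shrunk. a = 3.7 is the conventional default."""
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2 * a * lam * b - b * b - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

Combining the capped loss with the SCAD penalty is what yields both fewer support vectors and a sparser hyperplane parameter than the plain hinge loss with an L1 penalty.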
RAMCloud stores data in a log-segment structure. When a large number of small files are stored in RAMCloud, each small file occupies a whole segment, which leads to severe fragmentation inside segments and low memory utilization. To solve this small-file problem, a storage optimization strategy based on file classification was proposed. Firstly, small files were classified into three categories: structurally related, logically related, and independent files. Before uploading, a merging algorithm and a grouping algorithm were used to process these files respectively. Experiments demonstrate that, compared with non-optimized RAMCloud, the proposed strategy improves memory utilization.
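The abstract names three file categories and a merge step but not the exact criteria; the sketch below assumes a shared directory marks structurally related files and a shared extension marks logically related ones, then packs small files into segments first-fit-decreasing. Sizes are in arbitrary units, and both criteria are illustrative assumptions.

```python
def classify(files):
    """Toy split into the abstract's three categories. files: (name, size)
    pairs; criteria (directory / extension) are assumed for illustration."""
    by_dir, rest = {}, []
    for f in files:
        if "/" in f[0]:
            by_dir.setdefault(f[0].rsplit("/", 1)[0], []).append(f)
        else:
            rest.append(f)
    structural = []
    for group in by_dir.values():
        (structural if len(group) > 1 else rest).extend(group)
    by_ext = {}
    for f in rest:
        ext = f[0].rsplit(".", 1)[-1] if "." in f[0] else ""
        by_ext.setdefault(ext, []).append(f)
    logical, independent = [], []
    for group in by_ext.values():
        (logical if len(group) > 1 else independent).extend(group)
    return structural, logical, independent

def merge_into_segments(files, segment_size):
    """First-fit-decreasing merge: pack small files into shared segments so
    a whole segment is no longer wasted on one tiny file."""
    segments = []  # each entry: [used_space, list_of_names]
    for name, size in sorted(files, key=lambda f: -f[1]):
        for seg in segments:
            if seg[0] + size <= segment_size:
                seg[0] += size
                seg[1].append(name)
                break
        else:
            segments.append([size, [name]])
    return segments
```

Packing three files of sizes 5, 4, and 3 into segments of capacity 8 uses two segments instead of the three that one-file-per-segment storage would need, which is the memory-utilization gain the strategy targets.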