With the growing demand for privacy protection, federated learning is receiving widespread attention. However, in federated learning it is difficult for the server to supervise the behaviors of clients, so the existence of lazy clients poses a potential threat to the performance and fairness of federated learning. To identify lazy clients efficiently and accurately, a dual-task proof-of-work method based on backdoor tasks, namely FedBD (FedBackDoor), was proposed. In FedBD, the server assigns additional, easier-to-detect backdoor tasks to the clients participating in federated learning; the clients train the backdoor tasks alongside their original training tasks, and the server supervises client behavior indirectly through the training status of the backdoor tasks. Experimental results show that FedBD has certain advantages over the classic federated averaging algorithm FedAvg and the advanced algorithm GTG-Shapley (Guided Truncation Gradient Shapley) on datasets such as MNIST and CIFAR10. On the CIFAR10 dataset with a lazy-client proportion of 15%, FedBD improves accuracy by more than 10 percentage points over FedAvg and by 2 percentage points over GTG-Shapley. Moreover, the average training time of FedBD is only 11.8% of that of GTG-Shapley, and FedBD identifies lazy clients with an accuracy exceeding 99% when the proportion of lazy clients is 10%. These results show that FedBD can effectively address the problem that lazy clients are difficult to supervise.
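A minimal sketch of the kind of server-side check this idea implies: after aggregation, the server evaluates each returned client model on trigger-stamped samples and flags clients whose backdoor-task accuracy stays near chance. The trigger pattern, threshold, and evaluation callback below are illustrative assumptions, not the exact FedBD procedure.

```python
import numpy as np

def add_trigger(images, value=1.0):
    """Stamp a small square trigger into the corner of each image (toy pattern, assumed)."""
    triggered = np.array(images, dtype=float).copy()
    triggered[:, :3, :3] = value          # 3x3 patch in the top-left corner
    return triggered

def flag_lazy_clients(client_models, backdoor_eval_fn, threshold=0.5):
    """Flag clients whose backdoor-task accuracy suggests they skipped training.

    client_models   : dict mapping client id -> model object
    backdoor_eval_fn: callable(model) -> accuracy on trigger-stamped samples
    threshold       : accuracy below which a client is treated as lazy
                      (illustrative value, not the paper's setting)
    """
    report = {}
    for cid, model in client_models.items():
        acc = backdoor_eval_fn(model)
        report[cid] = {"backdoor_acc": acc, "lazy": acc < threshold}
    return report
```

A client that actually trained both tasks should score well above the threshold on the triggered samples, while a lazy client that returned a stale or random model will not.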
Existing similarity-based trajectory prediction algorithms for moving targets are generally classified according to the spatial-temporal characteristics of the data, which fails to reflect the characteristics of the algorithms themselves. Therefore, a classification method based on algorithm characteristics was proposed. Trajectory similarity algorithms require the distance between two points for their subsequent calculations; however, the commonly used Euclidean Distance (ED) is only applicable to moving targets within a small region. For trajectory prediction of sea targets moving over a large region, a similarity calculation method using geodetic distance instead of ED was proposed. Firstly, the trajectory data were preprocessed and segmented. Then, the discrete Fréchet Distance (FD) was adopted as the similarity measure. Finally, synthetic and real data were used for testing. Experimental results indicate that when sea targets move over a large region, the ED-based algorithm may produce incorrect predictions, while the geodetic distance-based algorithm outputs correct trajectory predictions.
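The contrast between the two distance choices can be made concrete with a small sketch: a discrete Fréchet distance computed by dynamic programming, parameterized by either a planar Euclidean distance or a great-circle distance between latitude/longitude points. The haversine formula below is a standard spherical approximation of the geodetic distance, used here for illustration rather than as the paper's exact geodetic model.

```python
import math

def euclidean(p, q):
    """Planar distance; only reasonable for targets confined to a small region."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geodesic(p, q, radius_m=6371000.0):
    """Great-circle (haversine) distance between (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(a))

def discrete_frechet(P, Q, dist):
    """Discrete Fréchet distance between trajectories P and Q (lists of points)."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    ca[0][0] = dist(P[0], Q[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(ca[x][y] for x, y in ((i - 1, j), (i - 1, j - 1), (i, j - 1))
                       if x >= 0 and y >= 0)
            ca[i][j] = max(best, dist(P[i], Q[j]))
    return ca[n - 1][m - 1]
```

Calling `discrete_frechet(trackA, trackB, geodesic)` instead of passing `euclidean` is the only change needed to move from the small-region to the large-region setting.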
In order to improve the locating accuracy of node localization in Wireless Sensor Networks (WSN), an algorithm combining Particle Swarm Optimization (PSO) and the Shuffled Frog Leaping Algorithm (SFLA), namely the Bi-system Cooperative Optimization (BCO) algorithm, was proposed. By combining the fast convergence of PSO with the high optimization precision of SFLA, the proposed algorithm converges in fewer iterations and achieves higher depth-search accuracy. Simulation experiments indicate that the BCO algorithm is effective. First, when used to solve the test objective functions, the BCO algorithm gets very close to the optimal solution, with better locating accuracy and higher convergence speed. Meanwhile, when the proposed algorithm is used for node localization based on the Received Signal Strength Indicator (RSSI), the absolute distance error between the predicted location and the actual location is less than 0.05 m. Compared with the Improved Particle Swarm Optimization algorithm based on RSSI (IPSO-RSSI), the locating accuracy of the proposed algorithm is improved by at least 10 times.
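As a sketch of the optimization problem such a swarm-based localizer solves: RSSI readings are converted to range estimates with a log-distance path-loss model, and the node position is the point that minimizes the squared residuals against the anchor positions. The path-loss parameters below are assumed values, and the objective is only the fitness function a PSO/SFLA hybrid like BCO would minimize, not the BCO algorithm itself.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5, d0=1.0):
    """Log-distance path-loss model (assumed parameters): RSSI -> range estimate in meters."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10 * n))

def localization_error(position, anchors, ranges):
    """Sum of squared residuals between anchor distances and RSSI-derived ranges.

    This is the fitness a swarm optimizer would minimize over candidate positions.
    """
    position = np.asarray(position, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    dists = np.linalg.norm(anchors - position, axis=1)
    return float(np.sum((dists - np.asarray(ranges, dtype=float)) ** 2))
```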
Focusing on the security analysis of the 3D block cipher, a new meet-in-the-middle attack on this algorithm was proposed. Based on the structure of the 3D algorithm and the differential properties of its S-box, the number of bytes required to construct the multisets in the attack was reduced, and a new 6-round meet-in-the-middle distinguisher was constructed. By extending the distinguisher 2 rounds forward and 3 rounds backward, an 11-round meet-in-the-middle attack on the 3D algorithm was finally achieved. The results show that constructing the distinguisher requires 42 bytes, and the attack has a data complexity of about 2^497 chosen plaintexts, a time complexity of about 2^325.3 11-round 3D encryptions, and a memory complexity of about 2^342 bytes. The new attack shows that the 11-round 3D algorithm is not immune to the meet-in-the-middle attack.
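For readers unfamiliar with the attack family, the time/memory trade-off that every meet-in-the-middle attack exploits can be shown on a deliberately weak toy construction: a table of forward half-computations is matched against backward half-computations. This is only the generic principle on an assumed 8-bit toy cipher; it is not the 3D distinguisher attack described above.

```python
def toy_enc(x, k):
    """One layer of a deliberately weak 8-bit toy cipher (illustration only)."""
    return ((x + k) % 256) ^ 0xA5

def toy_dec(y, k):
    return ((y ^ 0xA5) - k) % 256

def mitm_double_encryption(pairs):
    """Recover (k1, k2) for C = toy_enc(toy_enc(P, k1), k2) by meeting in the middle.

    Costs about 2 * 2^8 cipher evaluations plus a table, instead of 2^16 brute force.
    pairs: known (plaintext, ciphertext) byte pairs; extra pairs filter false matches.
    """
    p0, c0 = pairs[0]
    middle = {}
    for k1 in range(256):                      # forward phase: tabulate middle values
        middle.setdefault(toy_enc(p0, k1), []).append(k1)
    candidates = []
    for k2 in range(256):                      # backward phase: look for matches
        for k1 in middle.get(toy_dec(c0, k2), []):
            if all(toy_enc(toy_enc(p, k1), k2) == c for p, c in pairs[1:]):
                candidates.append((k1, k2))
    return candidates
```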
Spatial index structures and query techniques play an important role in spatial databases. To address the disadvantages of existing methods in approximating and organizing complex spatial objects, a new index structure based on the Minimum Bounding Rectangle (MBR), trapezoid and circle, namely the RTC (Rectangle-Trapezoid-Circle) tree, was proposed. To handle the Nearest Neighbor (NN) query on complex spatial data objects effectively, the NN query algorithm based on the RTC tree (NNRTC) was given. The NNRTC algorithm reduces node traversal and distance calculations by using pruning rules. Considering the influence of barriers on the spatial dataset, the barrier-NN query algorithm based on the RTC tree (BNNRTC) was proposed. The BNNRTC algorithm first performs the query in an ideal space and then judges the query results. To handle the dynamic simple continuous NN chain query, the Simple Continuous NN chain query algorithm based on the RTC tree (SCNNCRTC) was given. The experimental results show that, compared with the R-tree-based query method, the proposed methods can improve efficiency by 60%-80% when dealing with large datasets of complex spatial objects.
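The pruning idea behind such NN queries can be sketched in a few lines: a cheap lower bound on the distance from the query point to anything inside a bounding region lets the search discard entries whose bound already exceeds the best exact distance found. The sketch below uses only the MBR bound over a flat list of entries; the RTC tree additionally uses trapezoid and circle approximations and a hierarchical structure, which are not reproduced here.

```python
import heapq
import math

def mindist(point, mbr):
    """Lower bound on the distance from a point to any object inside an MBR.

    mbr = (xmin, ymin, xmax, ymax); used as the pruning key in the NN search."""
    px, py = point
    dx = max(mbr[0] - px, 0.0, px - mbr[2])
    dy = max(mbr[1] - py, 0.0, py - mbr[3])
    return math.hypot(dx, dy)

def nn_query(point, entries, exact_dist):
    """Best-first NN over (mbr, object) entries: expand the smallest bound first
    and stop once the bound exceeds the best exact distance found so far."""
    heap = [(mindist(point, mbr), i) for i, (mbr, _) in enumerate(entries)]
    heapq.heapify(heap)
    best, best_obj = float("inf"), None
    while heap:
        bound, i = heapq.heappop(heap)
        if bound >= best:              # pruning rule: no closer object can remain
            break
        d = exact_dist(point, entries[i][1])
        if d < best:
            best, best_obj = d, entries[i][1]
    return best_obj, best
```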
To address the difficulty of modeling complex non-linear systems, a new system modeling algorithm based on a Takagi-Sugeno (T-S) fuzzy Radial Basis Function (RBF) neural network optimized by an improved Particle Swarm Optimization (PSO) algorithm was proposed. In this algorithm, the good interpretability of the T-S fuzzy model and the self-learning ability of the RBF neural network were combined to form a T-S fuzzy RBF neural network for system modeling, and the network parameters were optimized by the improved PSO algorithm with dynamic adjustment of the inertia weight, combined with the recursive least squares method. Firstly, the proposed algorithm was used to approximate a non-linear multi-dimensional function: the Mean Square Error (MSE) of the approximation model was 0.00017 and the absolute error was not greater than 0.04, showing high approximation precision. The proposed algorithm was also used to build a dynamic flow soft measurement model and to carry out the related experimental study: the average absolute error of the dynamic flow measurement results was less than 0.15 L/min and the relative error was 1.97%, which meets the measurement requirements well and outperforms the existing algorithms. These simulation and experimental results show that the proposed algorithm has high modeling precision and good adaptability for complex non-linear systems.
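A minimal sketch of PSO with a dynamically adjusted inertia weight, the part of the optimizer the abstract emphasizes: the linearly decreasing schedule below is one common adjustment scheme and is an assumption, as the paper's exact schedule and its coupling with recursive least squares are not reproduced here. The objective `obj` would be the network's modeling error over the training data.

```python
import numpy as np

def pso_minimize(obj, dim, n_particles=30, iters=200,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Basic PSO with a linearly decreasing inertia weight (assumed schedule)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters      # inertia weight decays over time
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```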
The traditional two-step algorithm for under-determined blind source separation has several deficiencies: the value of K is difficult to determine, the algorithm is sensitive to initial values, noise and singular points are difficult to exclude, and the algorithm lacks a theoretical basis. To solve these problems, a new two-step algorithm based on the potential function algorithm and compressive sensing theory was proposed. Firstly, the mixing matrix was estimated by an improved potential function algorithm based on multi-peak particle swarm optimization. Then, after the sensing matrix was constructed from the estimated mixing matrix, a compressive sensing reconstruction algorithm based on Orthogonal Matching Pursuit (OMP) was introduced into the under-determined blind source separation process to realize signal reconstruction. The simulation results show that the highest estimation precision of the mixing matrix reaches 99.13%, and the signal-to-interference ratios of the reconstructed signals all exceed 10 dB, which meets the reconstruction accuracy requirements well and confirms the effectiveness of the proposed algorithm. The algorithm has good universality and high accuracy for under-determined blind source separation of one-dimensional mixed signals.
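The reconstruction step can be illustrated with a textbook OMP sketch: given a sensing matrix built from the estimated mixing matrix, the sparse source representation is recovered greedily. This is the generic OMP algorithm under assumed inputs, not the paper's full two-step pipeline or its sensing-matrix construction.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x with y ≈ A @ x.

    A        : sensing matrix (m x n), assumed built from the estimated mixing matrix
    y        : observed mixture (length m)
    sparsity : number of atoms to select
    """
    m, n = A.shape
    residual = np.asarray(y, dtype=float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```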
To meet the application demand of high-speed scanning and massive data transmission in low-energy X-ray industrial Computed Tomography (CT), a high-speed data acquisition and transmission system for low-energy X-ray industrial CT was designed. The X-CARD 0.2-256G of DT company was selected as the detector. To accommodate high-speed analog-to-digital conversion, a high-speed time-division multiplexing circuit was combined with ping-pong operation for the data cache; a gigabit Ethernet design with a Field Programmable Gate Array (FPGA) as the master chip was adopted to meet the requirements of high-speed transmission of multi-channel data. The experimental results show that the data acquisition speed of the system reaches 1 MHz, the transmission speed reaches 926 Mb/s, and the dynamic range is greater than 5000. The system can effectively shorten the scanning time of low-energy X-ray detection and meets the data transmission requirements of a larger number of channels.
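The ping-pong data-cache idea can be illustrated in software, even though the actual design lives in FPGA logic: while one buffer is being filled by the acquisition side, the other is handed to the transmission side, and the two swap roles whenever the active buffer fills. The buffer depth and threading model below are illustrative assumptions.

```python
import queue
import threading

class PingPongBuffer:
    """Software illustration of ping-pong buffering (the real system implements
    this in FPGA logic, not Python)."""

    def __init__(self, depth):
        self.buffers = [[], []]
        self.depth = depth
        self.active = 0                      # index of the buffer currently being filled
        self.ready = queue.Queue(maxsize=1)  # hands a full buffer to the sender
        self.lock = threading.Lock()

    def write_sample(self, sample):          # called by the acquisition side
        with self.lock:
            buf = self.buffers[self.active]
            buf.append(sample)
            if len(buf) >= self.depth:       # buffer full: swap and hand it over
                self.active ^= 1
                self.buffers[self.active] = []
                self.ready.put(buf)

    def read_block(self):                    # called by the transmission side
        return self.ready.get()
```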
Aiming at the problem of sample labeling in network traffic feature selection, and the deficiency that traditional semi-supervised methods cannot select a strongly correlated feature set, a Semi-supervised Feature Selection based on Extension of Label (SFSEL) algorithm was proposed. The model started from a small number of labeled samples, the labels of unlabeled samples were extended by the K-means algorithm, and then the MDrSVM (Multi-class Doubly regularized Support Vector Machine) algorithm was combined to achieve feature selection on multi-class network data. Comparison experiments with other semi-supervised algorithms, including Spectral, PCFRSC and SEFR, were conducted on the Moore network dataset, where SFSEL achieved higher classification accuracy and recall with fewer selected features. The experimental results show that the proposed algorithm achieves better classification performance by selecting a strongly correlated feature set of network traffic.
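A minimal sketch of the label-extension step only: cluster all samples with K-means and let each cluster inherit the majority label of the labeled samples it contains. The majority-vote rule, fallback, and parameters are assumptions for illustration, and the subsequent MDrSVM-based feature selection is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def extend_labels(X_labeled, y_labeled, X_unlabeled, n_clusters):
    """Extend labels to unlabeled samples via K-means clustering.

    y_labeled is assumed to contain non-negative integer class labels.
    Clusters with no labeled member fall back to the global majority label."""
    X_all = np.vstack([X_labeled, X_unlabeled])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_all)
    labeled_clusters = km.labels_[: len(X_labeled)]
    y_labeled = np.asarray(y_labeled)
    global_majority = int(np.bincount(y_labeled).argmax())
    cluster_label = {}
    for c in range(n_clusters):
        members = y_labeled[labeled_clusters == c]
        cluster_label[c] = int(np.bincount(members).argmax()) if len(members) else global_majority
    y_unlabeled = np.array([cluster_label[c] for c in km.labels_[len(X_labeled):]])
    return y_unlabeled
```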
In recent years, computer image understanding has found wide and profound applications in intelligent transportation, satellite remote sensing, machine vision, medical image analysis, Internet image search, and so on. As an extension of it, holistic image scene understanding is more complex and integrated than the basic image scene understanding task. In this paper, the basic framework of image understanding, the research implications and value, and typical models of holistic image scene understanding were summarized. Four typical holistic scene understanding models were introduced, and their frameworks were compared in detail. Finally, deficiencies of existing research and future directions of holistic image scene understanding were presented, pointing out new insights for further research in this area.