Since the pilot overhead of traditional channel estimation methods in Reconfigurable Intelligent Surface (RIS)-assisted wireless communication systems is excessively high, a block-sparsity-based Orthogonal Matching Pursuit (OMP) channel estimation scheme was proposed. Firstly, according to the millimeter Wave (mmWave) channel model, the cascaded channel matrix was derived and transformed into the Virtual Angular Domain (VAD) to obtain a sparse representation of the cascaded channel. Secondly, by exploiting the sparsity of the cascaded channel, the channel estimation problem was transformed into a sparse matrix recovery problem, and a compressive-sensing-based reconstruction algorithm was adopted to recover the sparse matrix. Finally, the special row-block sparse structure of the channel was analyzed, and the traditional OMP scheme was optimized to further reduce pilot overhead and improve estimation performance. Simulation results show that the Normalized Mean Squared Error (NMSE) of the proposed optimized OMP scheme based on the row-block sparse structure is about 1 dB lower than that of the conventional OMP scheme. Therefore, the proposed channel estimation scheme can effectively reduce pilot overhead and achieve better estimation performance.
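For orientation, the following is a minimal sketch of the classical OMP recovery step that the scheme builds on (not the row-block-optimized variant from the paper), assuming a hypothetical pilot sensing matrix A, received pilots y = A h + n, and a known sparsity level k:

```python
# Minimal OMP sketch for sparse channel recovery.
# A: (hypothetical) pilot sensing matrix; y: received pilots; k: assumed sparsity.
import numpy as np

def omp(A, y, k):
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(k):
        # Pick the dictionary column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-estimate of the coefficients on the current support.
        As = A[:, support]
        coeffs, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coeffs
    x[support] = coeffs
    return x
```

The row-block optimization described in the abstract would select whole rows of the VAD channel matrix per iteration instead of single atoms.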
As traditional flow scheduling methods for data center networks tend to cause network congestion and link load imbalance, a dynamic flow scheduling mechanism based on the Differential Evolution (DE) and Ant Colony Optimization (ACO) algorithms, called DE-ACO, was proposed to optimize elephant flow scheduling in data center networks. Firstly, Software Defined Network (SDN) technology was used to capture real-time network status information and set the optimization objectives of flow scheduling. Then, the DE algorithm was redefined according to these objectives to calculate several available candidate paths, which were used as the initial global pheromone of the ACO algorithm. Finally, the global optimal path was obtained by combining the global network status, and the elephant flows on congested links were rerouted. Experimental results show that, compared with the Equal-Cost Multi-Path routing (ECMP) algorithm and the network flow scheduling algorithm of SDN data center based on ACO (ACO-SDN), the proposed algorithm increases the average bisection bandwidth by 29.42% to 36.26% and 5% to 11.51% respectively in random communication mode, reducing the Maximum Link Utilization (MLU) of the network and achieving better load balancing.
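A toy sketch of the DE-ACO hand-off, assuming the DE stage has already produced candidate paths (stubbed below) that seed the colony's pheromone; the topology, link loads, and reward rule are illustrative, not the paper's exact formulation:

```python
# Candidate paths stand in for the DE stage's output and seed the pheromone;
# ants then reinforce the path with the lowest maximum link utilization (MLU).
import random

candidate_paths = [("s1", "s2", "s4"), ("s1", "s3", "s4")]   # from the DE stage
link_load = {("s1","s2"): 0.7, ("s2","s4"): 0.5, ("s1","s3"): 0.3, ("s3","s4"): 0.4}
pheromone = {p: 1.0 for p in candidate_paths}                 # DE-initialized

def mlu(path):
    return max(link_load[(a, b)] for a, b in zip(path, path[1:]))

for _ in range(50):                        # ant iterations
    total = sum(pheromone.values())
    r, acc, chosen = random.uniform(0, total), 0.0, candidate_paths[-1]
    for p in candidate_paths:              # roulette-wheel path choice
        acc += pheromone[p]
        if r <= acc:
            chosen = p
            break
    pheromone[chosen] += 1.0 - mlu(chosen)  # reward low-MLU paths
    for p in pheromone:                     # pheromone evaporation
        pheromone[p] *= 0.9

best = max(pheromone, key=pheromone.get)    # reroute the elephant flow here
```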
Aiming at the multi-dimensional resource allocation problem in downlink heterogeneous cognitive radio Ultra-Dense Networks (UDNs), an improved genetic algorithm was proposed to jointly optimize user association and resource allocation with the objective of maximizing the throughput of femtocell users. Firstly, preprocessing was performed before running the algorithm to initialize the matrices of each user's reachable base stations and available channels. Secondly, symbol coding was used to encode the matching relationships between users and base stations as well as between users and channels into a two-dimensional chromosome. Thirdly, a dynamic selection strategy combining best-individual replication with roulette-wheel selection was used to speed up the convergence of the population. Finally, to prevent the algorithm from falling into a local optimum, a premature-convergence judgment was added to the mutation stage, so that the connection strategy among base stations, users and channels was obtained within a limited number of iterations. Experimental results show that, with fixed numbers of base stations and channels, the proposed algorithm improves the total user throughput by 7.2% and the cognitive user throughput by 1.2% compared with the genetic algorithm of three-dimensional matching, at lower computational complexity. The proposed algorithm reduces the search space of feasible solutions, and can effectively improve the total throughput of cognitive radio UDNs with lower complexity.
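A minimal sketch of the two-dimensional chromosome encoding described above, assuming illustrative feasibility matrices (the names and sizes are hypothetical): one gene row holds each user's base station match, the other its channel match, both restricted to the preprocessed feasible sets.

```python
# Row 0: user-to-base-station matching; row 1: user-to-channel matching.
import random

n_users = 4
reachable_bs = {0: [0, 1], 1: [1], 2: [0, 2], 3: [1, 2]}  # user -> feasible BSs
available_ch = {0: [0, 1], 1: [1, 2], 2: [0], 3: [2]}     # user -> feasible channels

def random_chromosome():
    return [[random.choice(reachable_bs[u]) for u in range(n_users)],
            [random.choice(available_ch[u]) for u in range(n_users)]]

population = [random_chromosome() for _ in range(20)]      # initial population
```

Restricting genes to the preprocessed feasible sets is what shrinks the search space relative to unconstrained three-dimensional matching.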
With the continuous development of blockchain technology, block transmission delay has become a performance bottleneck limiting the scalability of blockchain systems. Remote Direct Memory Access (RDMA) technology, which supports high-bandwidth and low-delay data transmission, provides a new idea for low-latency block transmission. Therefore, a block catalogue structure for block information sharing was designed based on the characteristics of RDMA primitives, and the basic working process of block transmission was proposed and implemented on this basis. Experimental results show that, compared with the TCP (Transmission Control Protocol) transmission mechanism, the RDMA-based block transmission mechanism reduces the transmission delay between nodes by 44% and the whole-network transmission delay by 24.4% for a block of 1 MB size, and reduces the number of temporary forks in the blockchain by 22.6% on a blockchain of 10 000 nodes. It can be seen that the RDMA-based block transmission mechanism takes advantage of high-speed networks to reduce block transmission latency and the number of temporary forks, thereby improving the scalability of existing blockchain systems.
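A sketch of what a block catalogue entry might carry so that peers can fetch a block body with a one-sided RDMA READ; the field set and the `rdma_read` helper are assumptions standing in for the verbs-level API, not the paper's actual structure:

```python
# A node publishes (hash, remote address, length, rkey) in the catalogue;
# a peer then pulls the block body without involving the publisher's CPU.
from dataclasses import dataclass

@dataclass
class CatalogueEntry:
    block_hash: bytes   # identifies the block
    remote_addr: int    # registered memory address holding the block body
    length: int         # block size in bytes
    rkey: int           # remote access key for the registered region

def fetch_block(entry: CatalogueEntry, rdma_read):
    # rdma_read(addr, length, rkey) -> bytes is a hypothetical wrapper
    # around an RDMA READ verb.
    return rdma_read(entry.remote_addr, entry.length, entry.rkey)
```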
In order to meet the requirements of high reliability and low latency in the 5G network environment while reducing network bandwidth consumption, a Service Function Chain (SFC) deployment method based on node comprehensive importance ranking for traffic and reliability optimization was proposed. Firstly, Virtualized Network Functions (VNFs) were aggregated based on the rate of traffic change, which reduced the number of deployed physical nodes and improved link reliability. Secondly, node comprehensive importance was defined by the degree, reliability, comprehensive delay and link hop count of a node in order to rank the physical nodes. Then, the VNFs were mapped to the underlying physical nodes in turn; at the same time, by restricting the number of links, the "ping-pong effect" was reduced and the traffic was optimized. Finally, the virtual links were mapped through the k-shortest-path algorithm to complete the deployment of the entire SFC. Compared with the original aggregation method, the proposed method improves SFC reliability by 2%, reduces the end-to-end delay of SFC by 22%, reduces the bandwidth overhead by 29%, and increases the average long-term revenue-to-cost ratio by 16%. Experimental results show that the proposed method can effectively improve link reliability, reduce end-to-end delay and bandwidth resource consumption, and achieve a good optimization effect.
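A minimal sketch of the comprehensive importance ranking step, assuming a simple weighted score over the four attributes named above; the weights, sign conventions, and node data are illustrative, not the paper's exact definition:

```python
# Score each physical node from degree, reliability, comprehensive delay and
# link hop count, then rank nodes for sequential VNF mapping.
nodes = {
    "A": {"degree": 4, "reliability": 0.99, "delay": 5.0, "hops": 2},
    "B": {"degree": 2, "reliability": 0.95, "delay": 3.0, "hops": 1},
}

def importance(n, w=(0.3, 0.3, 0.2, 0.2)):
    # Higher degree/reliability raise the score; delay and hop count lower it.
    return (w[0] * n["degree"] + w[1] * n["reliability"]
            - w[2] * n["delay"] - w[3] * n["hops"])

ranked = sorted(nodes, key=lambda k: importance(nodes[k]), reverse=True)
```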
Aiming at the energy efficiency and scene adaptability problems of synchronization topology, a Greedy Synchronization Topology algorithm based on Formal Concept Analysis for traffic-surveillance-based sensor networks (GST-FCA) was proposed. Firstly, the scene adaptability requirements and the energy efficiency model of the synchronization topology in a traffic-surveillance-based sensor network were analyzed. Secondly, correlation analysis was performed on the adjacency features of sensor nodes in the same layer and in adjacent layers by using Formal Concept Analysis (FCA); then Broadcast Tuples (BT) were built and synchronization sets were divided according to the greedy strategy of the maximum number of neighbors. Thirdly, backtracking broadcast was used to improve the broadcast strategy of layer detection in the Timing-synchronization Protocol of Sensor Network (TPSN) algorithm; meanwhile, an upward hosting mechanism was designed to extend the information sharing range of synchronized nodes and further alleviate the local-optimum problem caused by the greedy strategy. Finally, GST-FCA was verified and tested in terms of energy efficiency and scene adaptability. Simulation results show that, compared with algorithms such as TPSN and Linear Estimation of Clock Frequency Offset (LECFO), GST-FCA decreases the synchronization packet overhead by at least 11.54%, 24.59% and 39.16% in the three test scenarios of deployment location, deployment scale and road deployment, respectively. Therefore, GST-FCA can alleviate the local-optimum problem and reduce synchronization packet overhead, and it performs well in energy efficiency while the synchronization topology meets the scene adaptability requirements of the above three scenarios.
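A toy sketch of the greedy broadcast-tuple construction, assuming a small illustrative neighbor graph: the node covering the most still-unsynchronized neighbors is repeatedly chosen as the next broadcaster, which is exactly the kind of choice that can leave a locally optimal topology for the backtracking and hosting mechanisms to repair.

```python
# Greedily pick the broadcaster with the most unsynchronized neighbors and
# group those neighbors into one synchronization set (one broadcast tuple).
neighbors = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4}}

unsynced = set(neighbors)
broadcast_tuples = []                 # (broadcaster, synchronization set)
while unsynced:
    b = max(unsynced, key=lambda n: len(neighbors[n] & unsynced))
    members = neighbors[b] & unsynced
    broadcast_tuples.append((b, members))
    unsynced -= members | {b}
```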
Aiming at the single-link failure problem in the vehicle-road real-time query communication scenario of the Software-Defined Internet of Vehicles (SDIV), a fast link failure recovery method for SDIV was proposed, which considered both link recovery delay and path transmission delay after link recovery. Firstly, the failure recovery delay was modeled, and the optimization goal of minimizing this delay was formulated as a 0-1 integer linear programming problem. Then, the problem was analyzed and two algorithms were proposed for different situations, both of which tried to maximize the reuse of existing calculation results. Specifically, the Path Recovery Algorithm based on Topology Partition (PRA-TP) was proposed for the case where the flow table update delay cannot be ignored compared with the path transmission delay, and the Path Recovery Algorithm based on Single Link Search (PRA-SLS) was proposed for the case where the flow table update delay is negligible because it is far less than the path transmission delay. Experimental results show that, compared with the Dijkstra algorithm, PRA-TP reduces the algorithm calculation delay by 25% and the path recovery delay by 40%, and PRA-SLS reduces the algorithm calculation delay by 60%, realizing fast single-link failure recovery at the vehicle end.
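A simplified sketch of the "reuse existing results" idea behind single-link search: when link (u, v) fails, keep the prefix of the old path up to u and splice in a detour that avoids only the failed link, rather than recomputing the whole route. The graph format and helper names are illustrative, and reachability after the failure is assumed.

```python
import heapq

def dijkstra(graph, src, dst, banned=frozenset()):
    # graph: {node: {neighbor: weight}}; banned: directed links to avoid.
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        d, node, path = heapq.heappop(pq)
        if node == dst:
            return d, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if (node, nxt) not in banned and nxt not in seen:
                heapq.heappush(pq, (d + w, nxt, path + [nxt]))
    return float("inf"), None

def recover(graph, old_path, failed):
    u, v = failed
    i = old_path.index(u)                        # detour starts at u
    _, detour = dijkstra(graph, u, old_path[-1], banned={(u, v), (v, u)})
    return old_path[:i] + detour                 # reuse the prefix before u
```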
Aiming at the problem that noise increases the error probability of transmitted signals in nonlinear digital communication systems, a multivariate communication system based on a discrete Bidirectional Associative Memory (BAM) neural network was proposed. Firstly, appropriate numbers of neurons and memory vectors were selected according to the signals to be transmitted, the weight matrix was calculated, and the BAM neural network was generated. Secondly, the multivariate signals were mapped to initial input vectors with modulation amplitude and continuously fed into the system; the input was iterated through the neural network with Gaussian noise added to each neuron, after which the output was sampled according to the symbol interval, transmitted over a lossless channel, and decoded by the receiver according to the decision rule. Finally, in the field of image processing, the proposed system was used to transmit compressed image data and decode the recovered image. Simulation results show that, for weakly modulated signals with a large symbol interval, the error probability first decreases and then increases as the noise intensity grows, a relatively obvious stochastic resonance phenomenon. At the same time, the error probability is positively correlated with the radix of the signal, and negatively correlated with the signal amplitude, the symbol interval and the number of neurons; under certain conditions, the error probability can reach 0. These results show that the BAM neural network can improve the reliability of a digital communication system through noise. In addition, the similarity of the decoded and restored images shows that moderate noise improves the image restoration effect, extending the application of BAM neural networks and stochastic resonance to image compression coding.
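A minimal discrete BAM sketch under the standard Hebbian construction: the weight matrix is the sum of outer products of bipolar memory pairs, and recall iterates X -> Y -> X with sign decisions, with Gaussian noise injected at each neuron as in the system described above. The memory vectors and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])   # bipolar input memory vectors
Y = np.array([[1, -1], [-1, 1]])                 # associated output vectors
W = X.T @ Y                                      # Hebbian weight matrix

def recall(x, sigma=0.3, iters=10):
    # Bidirectional iteration with per-neuron Gaussian noise.
    for _ in range(iters):
        y = np.sign(x @ W + sigma * rng.standard_normal(W.shape[1]))
        x = np.sign(y @ W.T + sigma * rng.standard_normal(W.shape[0]))
    return x, y

x_hat, y_hat = recall(np.array([1, -1, 1, -1]))
```

Stochastic resonance appears when a moderate `sigma` helps weakly modulated inputs settle into the correct stored pair more often than the noiseless dynamics would.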
Wireless communication network traffic prediction is of great significance to operators for network construction, base station wireless resource management and user experience improvement. However, existing centralized algorithm models face problems of complexity and timeliness, making it difficult to meet the traffic prediction requirements of a whole city. Therefore, a distributed wireless traffic prediction framework under cloud-edge collaboration was proposed to realize traffic prediction per grid base station with low complexity and low communication overhead. On this distributed architecture, a wireless traffic prediction model based on federated learning was proposed: the traffic prediction model of each grid was trained synchronously, the center cloud server used JS (Jensen-Shannon) divergence to select grid models with similar traffic distributions, and the Federated Averaging (FedAvg) algorithm was used to fuse the parameters of those models, so as to improve model generalization while describing regional traffic accurately. In addition, as traffic features differ greatly across areas within a city, a federated training method based on coalitional game was proposed on the basis of the above algorithm: combined with super-additivity criteria, the grids were taken as players of the coalitional game and screened, and the core of the coalitional game and the Shapley value were introduced for profit distribution to ensure the stability of the coalition, thereby improving the prediction accuracy of the model. Experimental results show that, taking Short Message Service (SMS) traffic as an example, compared with grid-independent training, the proposed model decreases the prediction error most significantly in the suburbs, by 26.1% to 28.7%, with decreases of 0.7% to 3.4% in the urban area and 0.8% to 4.7% in the downtown area; compared with grid-centralized training, the proposed model decreases the prediction error in the three regions by 49.8% to 79.1%.
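A sketch of the JS-divergence grouping plus FedAvg fusion performed at the (here hypothetical) center cloud server; the threshold `tau` and the data layout are assumptions:

```python
# Group grids whose traffic distributions are similar under JS divergence,
# then average the model parameters of grids in the same group (FedAvg).
import numpy as np

def js_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def similar_grids(dists, ref, tau=0.05):
    # dists: {grid_id: traffic histogram}; group grids close to grid `ref`.
    return [g for g, d in dists.items() if js_divergence(d, dists[ref]) < tau]

def fedavg(weight_list):
    # weight_list: one list of per-layer arrays per grid model.
    return [np.mean(layer, axis=0) for layer in zip(*weight_list)]
```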
Concerning the dynamic change of link delay, clock timing interference, and uncertainty of timestamp acquisition caused by complex link environments and temperature fluctuations in Industrial Wireless Sensor Networks (IWSNs), a time synchronization method based on the Precision Time Protocol (PTP) was proposed for IWSNs. Firstly, the clock state-space model and observation model were constructed by integrating the clock timing interference and the asymmetric link delay noise of the PTP bidirectional time synchronization process. Secondly, a reverse adaptive Kalman filter algorithm was constructed to remove the noise interference. Thirdly, the rationality of the noise statistical model was evaluated using the normalized innovation ratio of the reverse and forward clock state estimates. Finally, the process noise of the clock state was adjusted dynamically after setting a detection threshold, thereby achieving precise estimation of the clock parameters. Simulation results show that, compared with the classical Kalman filter algorithm and the PTP protocol, the proposed algorithm yields clock offset and skew estimates with smaller and more stable error standard deviations under different clock timing precisions. The reverse adaptive Kalman filter can effectively solve the divergence of the Kalman filter caused by factors such as noise uncertainty, and improve the reliability of time synchronization.
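A minimal sketch of the underlying clock state-space model with a standard (non-adaptive) Kalman step: the state is [offset, skew], propagated over the sync interval, with the two-way PTP exchange providing a noisy offset observation. The noise magnitudes are illustrative placeholders, and the reverse/adaptive mechanism is omitted.

```python
import numpy as np

T = 1.0                                  # sync interval (s)
F = np.array([[1.0, T], [0.0, 1.0]])     # offset grows by skew * T
H = np.array([[1.0, 0.0]])              # only the offset is observed
Q = np.diag([1e-8, 1e-10])               # clock timing (process) noise
R = np.array([[1e-6]])                   # link-delay (measurement) noise

def kf_step(x, P, z):
    # Predict clock state, then correct it with the PTP offset measurement z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    innov = z - H @ x                     # the innovation the adaptive test uses
    return x + K @ innov, (np.eye(2) - K @ H) @ P

x, P = np.zeros(2), np.eye(2)            # initial offset/skew and covariance
```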
In the multipath channel scenario of passive eavesdropping, the EaVEsdropper (Eve) only eavesdrops passively without transmitting any radio signal, so the transmitter (Alice) is unable to obtain any information about Eve, which brings great challenges to secure information transmission. To guarantee secure transmission under the condition that Alice knows the Channel State Information (CSI) of the legitimate receiver (Bob) but not Eve's CSI, a precoding scheme guaranteeing the physical layer security of the legitimate receiver Bob was proposed, improving the security performance of the system by enhancing the quality of Bob's received signals. Firstly, without considering Eve, a precoding scheme achieving the upper bound of Bob's achievable channel capacity was given based on Bob's known CSI, and stable secrecy capacity was obtained by exploiting the difference between the Alice-Bob and Alice-Eve channels. Then, an accurate closed-form expression for Bob's average Bit Error Rate (BER) was derived from Bob's outage probability in a Rayleigh flat fading environment. Simulation results show that the proposed scheme ensures that Bob's channel capacity is always better than Eve's without increasing the complexity of the receiver. At the same time, the proposed scheme effectively improves Bob's BER performance while greatly suppressing Eve's BER performance, and the secrecy capacity is guaranteed even when Eve's location is more favorable than Bob's.
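A minimal sketch of precoding matched to Bob's known CSI in the style of maximum ratio transmission (a standard technique, not necessarily the paper's exact scheme): the transmit vector aligns with h_bob, maximizing Bob's SNR while Eve's independently faded channel gains nothing systematic. Channels are simulated Rayleigh draws.

```python
import numpy as np

rng = np.random.default_rng(1)
nt = 8                                             # transmit antennas at Alice
h_bob = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
h_eve = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)

w = h_bob.conj() / np.linalg.norm(h_bob)           # precoder from Bob's CSI only
snr = 10.0
cap = lambda h: np.log2(1 + snr * abs(h @ w) ** 2) # per-channel capacity

print(cap(h_bob), cap(h_eve))                      # Bob's capacity dominates Eve's
```

Because w is matched to h_bob, Bob's effective gain is the full channel norm, whereas Eve sees only a random projection, which is the channel-difference effect the abstract relies on for a stable secrecy capacity.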
To address the high computational complexity of traditional beamforming methods for multi-user mmWave relay systems, a Singular Value Decomposition (SVD) method based on Deep Learning (DL) was proposed to design hybrid beamforming for the joint optimization of the transmitter, relay and receiver. Firstly, the DL method was used to design the beamforming matrices of the transmitter and relay to maximize the achievable spectral efficiency. Then, the beamforming matrices of the relay and receiver were designed to maximize the equivalent channel gain. Finally, a Minimum Mean Square Error (MMSE) filter was designed at the receiver to eliminate inter-user interference. Theoretical analysis and simulation results show that, compared with Alternating Maximization (AltMax) and the traditional SVD method, the DL-based hybrid beamforming method reduces the computational complexity by 12.5% and 23.44% respectively in the case of high-dimensional channel matrices and many users, improves the spectral efficiency by 2.277% and 21.335% respectively with known Channel State Information (CSI), and improves the spectral efficiency by 11.452% and 43.375% respectively with imperfect CSI.
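For reference, a sketch of the baseline SVD beamforming step that such DL methods learn to approximate: the channel's leading singular vectors give the transmit and receive beamformers, diagonalizing the link into parallel sub-channels. The channel here is simulated, and the DL stage itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
# 4 receive x 8 transmit Rayleigh channel matrix.
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
ns = 2                               # number of data streams
F = Vh.conj().T[:, :ns]              # transmit beamforming matrix
Wr = U[:, :ns]                       # receive combining matrix
H_eff = Wr.conj().T @ H @ F          # ~ diag(s[:ns]): parallel sub-channels
```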
Focusing on the failure of intrusion detection caused by the low width of images captured by traditional Wireless Visual Sensor Network (WVSN) target-barriers, a WVSN β Quality of Monitoring (β-QoM) Target-Barrier coverage Construction (WβTBC) algorithm was proposed to ensure that the captured image width is not less than β. Firstly, a geometric model of the visual sensor β-QoM region was established, and it was proven that the width of an intruder image captured by a target-barrier formed by the intersections of all adjacent visual sensors' β-QoM regions must be greater than or equal to β. Then, based on linear programming modeling of optimal β-QoM target-barrier coverage for WVSNs, it was proven that this coverage problem is NP-hard. Finally, to obtain a suboptimal solution, the heuristic WβTBC algorithm was proposed: a directed graph of the WVSN was constructed according to the counterclockwise β-neighbor relationship between sensors, and the Dijkstra algorithm was used to search for β-QoM target-barriers in the WVSN. Experimental results show that the WβTBC algorithm can construct β-QoM target-barriers effectively, saving about 23.3%, 10.8% and 14.8% of sensor nodes compared with the Spiral Periphery Outer Coverage (SPOC), Spiral Periphery Inner Coverage (SPIC) and Target-Barrier Construction (TBC) algorithms, respectively. In addition, under the condition of meeting intrusion detection requirements, the smaller β is, the higher the success rate of building a β-QoM target-barrier with WβTBC, the fewer the nodes needed to form the barrier, and the longer the working period of the WVSN for β-QoM intrusion detection.
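A simplified sketch of the graph-search step, assuming the counterclockwise β-neighbor relation has already been computed into a directed graph: a minimum-node closed walk back to the start sensor corresponds to a candidate β-QoM target-barrier. The toy graph and unit edge costs are illustrative.

```python
import heapq

beta_neighbors = {1: [2], 2: [3], 3: [4, 1], 4: [1]}  # directed β-neighbor graph

def min_barrier(graph, start):
    # Fewest-node cycle through the β-neighbor graph that returns to `start`.
    pq, seen = [(0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        for nxt in graph.get(node, []):
            if nxt == start:
                return path                     # the barrier closes here
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(pq, (cost + 1, nxt, path + [nxt]))
    return None                                 # no barrier can be formed

print(min_barrier(beta_neighbors, 1))           # e.g. [1, 2, 3]
```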
To optimize the Energy Efficiency (EE) and Spectrum Efficiency (SE) of Decode-and-Forward (DF) full-duplex relay networks, a trade-off method of EE and SE for the DF full-duplex relay network was proposed. Firstly, the EE of the network was optimized with the goal of improving its SE; the optimal relay power was obtained by combining analytical derivation with the Newton-Raphson method, and the Pareto optimal set of the objective function was given. Secondly, a trade-off factor was introduced through the weighted scalarization method to construct a trade-off optimization function of EE and SE, and the multi-objective problem of jointly optimizing EE and SE was transformed into a single-objective energy-spectrum efficiency optimization problem by normalization. At the same time, the performance of EE, SE and the trade-off optimization under different trade-off factors was analyzed. Simulation results show that, at the same data transmission rate, the SE and EE of the proposed method are higher than those of the full-duplex optimal power method and the half-duplex optimal relay-optimal power allocation method. By adjusting the trade-off factor, the optimal trade-off and joint optimization of EE and SE can be achieved.
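A sketch of the weighted-scalarization step: normalized EE and SE are combined through a trade-off factor, and the relay power maximizing the scalar objective is searched numerically (a grid search here, in place of the paper's Newton-Raphson derivation). The EE/SE expressions and gains are simplified placeholders, not the network's actual rate model.

```python
import numpy as np

g_sr, g_rd, pc = 2.0, 1.5, 0.5           # channel gains, circuit power (illustrative)

def se(p):                                # spectral efficiency of the DF relay link
    return 0.5 * np.log2(1 + min(g_sr, g_rd) * p)

def ee(p):                                # energy efficiency: rate per total power
    return se(p) / (p + pc)

ps = np.linspace(0.01, 5.0, 500)          # candidate relay powers
se_n = se(ps[-1])                         # normalization constants
ee_n = max(ee(p) for p in ps)

def tradeoff(p, alpha=0.5):               # alpha balances EE against SE
    return alpha * ee(p) / ee_n + (1 - alpha) * se(p) / se_n

p_opt = max(ps, key=tradeoff)             # power maximizing the scalar objective
```

Sweeping `alpha` from 0 to 1 traces the EE-SE Pareto front, which is how different trade-off factors yield the different operating points analyzed in the abstract.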