In a full-duplex multi-relay cognitive network supported by Simultaneous Wireless Information and Power Transfer (SWIPT), in order to maximize energy-spectrum efficiency, the relay with the maximum energy harvesting was selected for decoding and forwarding, thus forming an energy-spectrum efficiency trade-off optimization problem. The problem was transformed into a convex optimization problem by variable transformation and the concave-convex procedure. When the trade-off factor was 0, the optimization problem was equivalent to maximizing the Spectrum Efficiency (SE); when the trade-off factor was 1, it was equivalent to minimizing the energy consumed by the system. To solve this optimization problem, an improved algorithm that could directly obtain the trade-off factor maximizing Energy Efficiency (EE) was proposed, which jointly optimized the source node transmit power and the power split factor. The proposed algorithm was divided into two steps. First, the power split factor was fixed, and the source node transmit power and trade-off factor that made the EE optimal were obtained. Then, the optimal source node transmit power was fixed, and the optimal power split factor was obtained by using the relationship between energy-spectrum efficiency and the power split factor. Simulation results show that the relay network with the maximum energy harvesting outperforms networks formed by the other relays in both EE and SE. Compared with the method of only optimizing the transmit power, the proposed algorithm increases the EE by more than 63% and the SE by more than 30%; its EE and SE are almost the same as those of the exhaustive method, and the proposed algorithm converges faster.
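As an illustration of the two-step structure described above, the following minimal sketch searches a grid with a placeholder energy-efficiency model; the function ee and all constants are assumptions, not the formulas of the paper.

```python
import numpy as np

# Placeholder EE model: rate / total power. The true objective, trade-off
# factor update and constraint set come from the paper and are not shown.
def ee(ps, rho):
    rate = np.log2(1.0 + 5.0 * ps * rho)   # assumed SE term
    return rate / (ps + 0.1)               # assumed circuit power 0.1 W

def two_step_search(ps_grid, rho_grid):
    # Step 1: fix the power split factor, optimize the source transmit power.
    rho0 = rho_grid[len(rho_grid) // 2]
    ps_star = max(ps_grid, key=lambda p: ee(p, rho0))
    # Step 2: fix the optimal transmit power, optimize the power split factor.
    rho_star = max(rho_grid, key=lambda r: ee(ps_star, r))
    return ps_star, rho_star

ps_opt, rho_opt = two_step_search(np.linspace(0.01, 1.0, 100),
                                  np.linspace(0.05, 0.95, 19))
```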
Considering the low positioning accuracy and strong scene dependence of optimization strategies in the Distance Vector Hop (DV-Hop) localization model, an improved DV-Hop model, Function-corrected Distance Vector Hop (FuncDV-Hop), based on function analysis with coefficients determined by simulation, was presented. First, the average hop distance, distance estimation, and least square error in the DV-Hop model were analyzed, and the following concepts were introduced: undetermined coefficient optimization, step function segmentation experiment, weight function approach using equivalent points, and modified maximum likelihood estimation. Then, multi-scenario comparison experiments were designed with the control variable technique, varying the number of nodes, the proportion of beacon nodes, the communication radius, the number of beacon nodes, and the number of unknown nodes. Finally, the experiment was split into two phases: determining coefficients by simulation and integrated optimization testing. Compared with the original DV-Hop model, the positioning accuracy of the final improved strategy is improved by 23.70%-75.76%, and the average optimization rate is 57.23%. Experimental results show that the optimization rate of the FuncDV-Hop model is up to 50.73%, and compared with DV-Hop models improved by genetic algorithms and neurodynamics, the positioning accuracy of the FuncDV-Hop model is increased by 0.55%-18.77%. The proposed model introduces no extra parameters, does not increase the protocol overhead of Wireless Sensor Networks (WSNs), and effectively improves the positioning accuracy.
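For reference, a minimal sketch of the baseline DV-Hop steps analyzed above (average hop distance, distance estimation, and least-squares positioning); it follows a simplified textbook variant, not the FuncDV-Hop corrections.

```python
import numpy as np

# Baseline DV-Hop sketch: hop counts are assumed given; anchors are beacon
# nodes with known coordinates, and the unknown node is located by
# linearized least squares.

def dv_hop_estimate(anchors, hops_between_anchors, hops_to_unknown):
    # Average hop distance: total inter-anchor distance / total hop count.
    n = len(anchors)
    d = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=2)
    mask = ~np.eye(n, dtype=bool)
    avg_hop = d[mask].sum() / hops_between_anchors[mask].sum()
    # Estimated distances from the unknown node to each anchor.
    est = avg_hop * hops_to_unknown
    # Linearize by subtracting the last anchor's range equation.
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (est[-1] ** 2 - est[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]
```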
As a simplified version of Spatial Modulation (SM), Generalized Space Shift Keying (GSSK) has been widely used in massive Multiple-Input Multiple-Output (MIMO) systems. It can better solve problems of traditional MIMO technology such as Inter-Channel Interference (ICI), Inter-Antenna Synchronization (IAS), and multiple Radio Frequency (RF) links. To address the high computational complexity of the Maximum Likelihood (ML) detection algorithm for GSSK systems, a low-complexity GSSK signal detection algorithm based on Compressed Sensing (CS) theory was proposed by combining the Subspace Pursuit (SP) and ML detection algorithms in CS and presetting a threshold. Firstly, the improved SP algorithm was used to obtain partial Transmit Antenna Combinations (TACs). Secondly, the set of search antennas was shrunk by deleting partial antenna combinations. Finally, the ML algorithm and the preset threshold were used to estimate the TACs. Simulation results show that the computational complexity of the proposed algorithm is significantly lower than that of the ML detection algorithm, while its Bit Error Rate (BER) performance is almost the same, which verifies the effectiveness of the proposed algorithm.
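The detection flow can be pictured with the following sketch, which is only a generic stand-in: a correlation pass ranks antennas, a preset threshold prunes candidate TACs, and ML-style search runs over the survivors; the ranking rule and threshold are assumptions.

```python
import numpy as np
from itertools import combinations

# Illustrative GSSK detection sketch (not the paper's exact algorithm).
def detect_gssk(y, H, n_active, threshold):
    # SP-style correlation step: rank antennas against the received signal.
    corr = np.abs(H.conj().T @ y)
    ranked = np.argsort(corr)[::-1]
    candidates = list(combinations(ranked[: 2 * n_active], n_active))
    # Prune TACs whose residual exceeds the preset threshold, then run ML.
    best, best_metric = None, np.inf
    for tac in candidates:
        Hs = H[:, list(tac)]
        x, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        metric = np.linalg.norm(y - Hs @ x) ** 2
        if metric <= threshold and metric < best_metric:
            best, best_metric = tac, metric
    return best
```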
In the Unmanned Aerial Vehicle (UAV)-assisted and Non-Orthogonal Multiple Access (NOMA)-enabled data collection system, the total energy efficiency of all sensors was maximized by jointly optimizing the three-dimensional placement of the UAVs and the power allocation of the sensors under the ground-air probabilistic channel model and the quality-of-service requirements. To solve the original mixed-integer non-convex programming problem, an energy efficiency optimization mechanism was proposed based on convex optimization theory, deep learning theory and the Harris Hawk Optimization (HHO) algorithm. Under any given three-dimensional placement of the UAVs, the power allocation sub-problem was first equivalently transformed into a convex optimization problem. Then, based on the optimal power allocation strategy, a Deep Neural Network (DNN) was applied to construct the mapping from the positions of the sensors to the three-dimensional placement of the UAVs, and the HHO algorithm was further utilized to train the model parameters of the optimal mapping offline. The trained mechanism only involved several algebraic operations and the solution of a single convex optimization problem. Simulation results show that, compared with the traversal search mechanism based on the particle swarm optimization algorithm, the proposed mechanism reduces the average operation time by 5 orders of magnitude while sacrificing only about 4.73% of the total energy efficiency in the case of 12 sensors.
To address the user cluster partitioning issue in the deployment strategy of Unmanned Aerial Vehicle (UAV) base stations for auxiliary communication in emergency scenarios, a feature-weighted fuzzy clustering algorithm, named Improved FCM, was proposed by considering both the performance of UAV base stations and user experience. Firstly, to tackle the high computational complexity and convergence difficulty of partitioning user clusters under random distribution conditions, a feature-weighted node data projection algorithm based on distance weighting was introduced according to the performance constraints of each UAV base station's signal coverage range and maximum number of served users. Secondly, to ensure effective user partitioning when the same user falls within the effective ranges of multiple clusters, and to maximize UAV base station resource utilization, a value-weighted algorithm based on user location and UAV base station load balancing was proposed. Experimental results demonstrate that the proposed methods meet the service performance constraints of UAV base stations. Additionally, the deployment scheme based on the proposed methods effectively improves the average load rate and coverage ratio of the system, reaching 0.774 and 0.0263 respectively, which are higher than those of GFA (Geometric Fractal Analysis), Sp-C (Spectral Clustering), etc.
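A minimal sketch of a feature-weighted fuzzy C-means iteration is given below, using the generic FCM update equations with per-feature weights w; the paper's distance-weighted projection and load-balancing value weighting are not reproduced.

```python
import numpy as np

# Generic feature-weighted FCM sketch: alternate center and membership
# updates; w scales each feature's contribution to the distance.
def weighted_fcm(X, k, w, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Feature-weighted squared distances from each point to each center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # membership update
    return centers, U
```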
To deal with the co-channel interference in Device-to-Device (D2D) communication-empowered cellular networks, the sum rate of D2D links was maximized through joint channel allocation and power control while satisfying the power constraints and the Quality-of-Service (QoS) requirements of cellular links. In order to efficiently solve the mixed-integer non-convex programming problem corresponding to the above resource allocation, the original problem was transformed into a Markov decision process, and a mechanism based on the Deep Deterministic Policy Gradient (DDPG) algorithm was proposed. Through offline training, the mapping from the channel state information to the optimal resource allocation policy was built directly without solving any optimization problems, so the mechanism could be deployed online. Simulation results show that, compared with the exhaustive search-based mechanism, the proposed mechanism reduces the computation time by 4 orders of magnitude (99.51%) at the cost of only 9.726% performance loss.
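A sketch of the kind of reward such a Markov decision process can use is shown below: the sum rate of D2D links minus a penalty for cellular QoS violations; all gains, the noise power and the penalty weight are assumed values.

```python
import numpy as np

# Illustrative reward: D2D sum rate with a QoS penalty for cellular links.
def reward(p_d2d, g_d2d, g_cell, p_cell, g_cross, noise, sinr_min):
    # SINR of each D2D link sharing a cellular channel.
    sinr_d2d = p_d2d * g_d2d / (p_cell * g_cross + noise)
    # SINR of the cellular link interfered by the D2D transmitter.
    sinr_cell = p_cell * g_cell / (p_d2d * g_cross + noise)
    rate = np.sum(np.log2(1.0 + sinr_d2d))
    penalty = np.sum(sinr_cell < sinr_min) * 10.0   # assumed penalty weight
    return rate - penalty
```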
To address the power resource limitations of wireless sensors in over-the-air computation networks and the spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network focused on wireless information communication, and the secondary network aimed to support over-the-air computation where the sensors utilized signals sent by the base station of the primary network for energy harvesting. Considering the constraints of the Mean Square Error (MSE) of over-the-air computation and the transmit power of each node in the network, and based on the random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of the wireless information communication users. To solve the robust optimization problem effectively, an Alternating Optimization (AO)-Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, called AO-ICSSCA, was proposed, by which the original robust optimization problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the base station in the primary network, the power factors of the sensors, and the fusion beamforming vector of the fusion center in the secondary network were alternately optimized. Simulation results demonstrate that the AO-ICSSCA algorithm achieves superior performance with less computing time than the Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm before improvement.
Orthogonal Time Sequency Multiplexing (OTSM) achieves transmission performance similar to Orthogonal Time Frequency Space (OTFS) modulation with lower complexity, providing a promising solution for future high-speed mobile communication systems that require low-complexity transceivers. To address the insufficient efficiency of existing time-domain Gauss-Seidel (GS) iterative equalization, a secondary signal detection algorithm was proposed. First, low-complexity Linear Minimum Mean Square Error (LMMSE) detection was performed in the time domain, and then the Successive Over-Relaxation (SOR) iterative algorithm was used to further eliminate residual symbol interference. To further optimize convergence efficiency and detection performance, the SOR algorithm was linearly optimized to obtain an Improved SOR (ISOR) algorithm. Simulation results show that, compared with the SOR algorithm, the ISOR algorithm improves detection performance and accelerates convergence at the cost of only a small increase in complexity. Compared with the GS iterative algorithm, the ISOR algorithm achieves a gain of 1.61 dB at a bit error rate of 10^-4 with 16-QAM modulation.
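For concreteness, a minimal SOR sketch for a linear system H x = y of the sort refined after the LMMSE stage is given below; the relaxation factor and iteration count are assumptions, and the ISOR linear optimization is not shown.

```python
import numpy as np

# Classic SOR iteration for H x = y (H square with nonzero diagonal).
def sor_solve(H, y, omega=1.2, iters=20):
    x = np.zeros_like(y, dtype=complex)
    D = np.diag(H)
    for _ in range(iters):
        for i in range(len(y)):
            sigma = H[i] @ x - D[i] * x[i]            # off-diagonal part
            x_gs = (y[i] - sigma) / D[i]              # Gauss-Seidel update
            x[i] = (1 - omega) * x[i] + omega * x_gs  # over-relaxation
    return x
```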
In the Low Earth Orbit (LEO) satellite multi-beam communication scenario, the traditional fixed resource allocation algorithm cannot accommodate the differences in channel capacity requirements of different users. In order to meet user requirements, an optimization model minimizing the supply-demand difference was established that combines channel allocation, bandwidth allocation and power allocation, and Pattern Division Multiple Access (PDMA) technology was introduced to improve the utilization of channel resources. In view of the non-convexity of the model, the optimal resource allocation strategy learned by the Q-learning algorithm was used to allocate a channel capacity suitable for each user, and a reward threshold was introduced to further improve the algorithm, speeding up convergence and minimizing the supply-demand difference at convergence. Simulation results show that the convergence speed of the improved algorithm is about 3.33 times that before improvement; the improved algorithm can satisfy larger user demand, about 14% higher than the Q-learning algorithm before improvement and about 2.14 times that of the traditional fixed algorithm.
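A minimal Q-learning loop with the reward-threshold early stop described above might look as follows; the environment step function and the reward shape (for example, the negative supply-demand gap) are assumptions.

```python
import numpy as np

# Tabular Q-learning with an early stop once the reward clears a threshold.
def q_learning(n_states, n_actions, step, reward_threshold,
               episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(100):                      # max steps per episode
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s2, r = step(s, a)                    # environment transition
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if r >= reward_threshold:             # early stop on a good match
                break
    return Q
```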
Aiming at the Multi-access Edge Computing (MEC) server data transmission requirements of high reliability, low latency and large data volume, a Media Access Control (MAC) scheduling strategy based on conflict-free access, a priority architecture and elastic service technology was proposed for the vehicle edge computing scenario. The proposed strategy was based on centralized coordination of channel access rights by the Road Side Unit (RSU) of the Internet of Vehicles (IoV), which prioritized the link transmission quality between the On Board Unit (OBU) and the MEC server in the vehicle network, so that Vehicle-to-Network (V2N) service data could be transmitted in a timely manner. At the same time, an elastic service approach was adopted for services between local OBUs to enhance the reliability of emergency message transmission under dense vehicle access. First, a queuing analysis model was constructed for the scheduling strategy. Then, embedded Markov chains were established according to the memoryless property of the system state variables at each moment, and the system was analyzed theoretically by the method of probability generating functions to obtain exact analytical expressions of key indicators such as the average queue length, the average waiting latency of the MEC server communication units and OBUs, and the RSU query period. Computer simulation results show that the statistical analysis results are consistent with the theoretical calculations, and that the proposed scheduling strategy can improve the stability and flexibility of the IoV under high load conditions.
Focusing on the failure of intrusion detection resulting from the low captured image width of traditional Wireless Visual Sensor Network (WVSN) target-barriers, a WVSN β Quality of Monitoring (β-QoM) Target-Barrier coverage Construction (WβTBC) algorithm was proposed to ensure that the captured image width is not less than β. Firstly, the geometric model of the visual sensor β-QoM region was established, and it was proven that the width of an intruder image captured by a target-barrier formed by the intersections of all adjacent visual sensors' β-QoM regions must be greater than or equal to β. Then, based on linear programming modeling of optimal β-QoM target-barrier coverage for WVSN, it was proven that this coverage problem is NP-hard. Finally, in order to obtain a suboptimal solution to the problem, the heuristic algorithm WβTBC was proposed. In this algorithm, the directed graph of the WVSN was constructed according to the counterclockwise β-neighbor relationship between sensors, and the Dijkstra algorithm was used to search for β-QoM target-barriers in the WVSN. Experimental results show that the WβTBC algorithm can construct β-QoM target-barriers effectively, and saves about 23.3%, 10.8% and 14.8% of sensor nodes compared with the Spiral Periphery Outer Coverage (SPOC), Spiral Periphery Inner Coverage (SPIC) and Target-Barrier Construction (TBC) algorithms, respectively. In addition, under the condition of meeting intrusion detection requirements, with the WβTBC algorithm, the smaller β is, the higher the success rate of building a β-QoM target-barrier, the fewer nodes are needed to form the barrier, and the longer the working period of the WVSN for β-QoM intrusion detection.
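The barrier search step can be illustrated with a generic Dijkstra sketch over the directed graph of counterclockwise β-neighbors, where unit edge weights make the shortest path use the fewest sensors; the geometric construction of the graph is not shown.

```python
import heapq

# Unit-weight Dijkstra over an adjacency dict {node: [neighbors]}.
def dijkstra(adj, src, dst):
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, []):
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(heap, (d + 1, v))
    # Reconstruct the sensor chain forming the barrier, if one exists.
    path, node = [], dst
    while node in prev or node == src:
        path.append(node)
        if node == src:
            break
        node = prev[node]
    return path[::-1] if path and path[-1] == src else None
```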
Wireless communication network traffic prediction is of great significance to operators in network construction, base station wireless resource management and user experience improvement. However, existing centralized algorithm models face problems of complexity and timeliness, making it difficult to meet the traffic prediction requirements at city scale. Therefore, a distributed wireless traffic prediction framework under cloud-edge collaboration was proposed to realize traffic prediction based on single-grid base stations with low complexity and low communication overhead. Based on the distributed architecture, a wireless traffic prediction model based on federated learning was proposed. Each grid traffic prediction model was trained synchronously; JS (Jensen-Shannon) divergence was used by the central cloud server to select grid traffic models with similar traffic distributions, and the Federated Averaging (FedAvg) algorithm was used to fuse the parameters of these models, so as to improve model generalization while describing regional traffic accurately. In addition, as traffic in different areas within a city is highly differentiated in features, a federated training method based on coalitional game was proposed on the basis of the algorithm. Combined with the super-additivity criterion, the grids were taken as participants in the coalitional game and screened, and the core of the coalitional game and the Shapley value were introduced for profit distribution to ensure the stability of the coalition, thereby improving the accuracy of model prediction. Experimental results show that, taking Short Message Service (SMS) traffic as an example, compared with grid-independent training, the proposed model has the prediction error decreased most significantly in the suburbs, with a decline of 26.1% to 28.7%, while the decline is 0.7% to 3.4% in the urban area and 0.8% to 4.7% in the downtown area. Compared with grid-centralized training, the proposed model decreases the prediction error in the three regions by 49.8% to 79.1%.
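The selection-and-fusion step can be sketched as follows: JS divergence compares two grids' traffic distributions, and FedAvg computes the sample-weighted average of parameters of models whose distributions are close; the similarity threshold is an assumption.

```python
import numpy as np

# Jensen-Shannon divergence between two discrete traffic distributions.
def js_divergence(p, q, eps=1e-12):
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# FedAvg: sample-weighted average of per-layer model parameters.
def fedavg(weight_list, n_samples):
    total = sum(n_samples)
    return [sum(w[k] * n / total for w, n in zip(weight_list, n_samples))
            for k in range(len(weight_list[0]))]

# Fuse only grids whose distribution is similar to grid 0's (0.1 assumed):
# similar = [i for i, p in enumerate(dists) if js_divergence(dists[0], p) < 0.1]
```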
Edge Computing (EC) and Simultaneous Wireless Information and Power Transfer (SWIPT) technologies can improve the performance of traditional networks, but they also increase the difficulty and complexity of system decision-making. System decisions designed by optimization methods often have high computational complexity and can hardly meet the real-time requirements of the system. Therefore, for a Wireless Sensor Network (WSN) assisted by EC and SWIPT, a mathematical model of system energy efficiency optimization was built by jointly considering the beamforming, computation offloading and power control problems in the network. Then, concerning the non-convexity and parameter coupling of this model, a joint optimization method based on deep reinforcement learning was proposed by designing the information exchange process of the system. This method did not need to build an environment model and adopted a reward function instead of a Critic network for action evaluation, which could reduce the decision-making difficulty and improve the real-time performance of the system. Finally, based on the joint optimization method, an Improved Deep Deterministic Policy Gradient (IDDPG) algorithm was designed. Simulation comparisons were made with a variety of optimization algorithms and machine learning algorithms to verify the advantages of the joint optimization method in reducing computational complexity and improving the real-time performance of decision-making.
The privacy security and resource consumption issues in hierarchical federated learning reduce the enthusiasm of participants. To encourage a sufficient number of participants to actively engage in learning tasks and to address the decision-making problem between multiple mobile devices and multiple edge servers, an incentive mechanism based on a multi-leader Stackelberg game was proposed. Firstly, by quantifying the cost-utility of mobile devices and the payment of edge servers, a utility function was constructed and an optimization problem was defined. Then, the interaction among mobile devices was modeled as an evolutionary game, and the interaction among edge servers as a non-cooperative game. To solve the optimal edge server selection and pricing strategies, a Multi-round Iterative Edge Server selection algorithm (MIES) and a Gradient Iterative Pricing Algorithm (GIPA) were proposed: the former was used to solve the evolutionary game equilibrium among mobile devices, and the latter to solve the pricing competition among edge servers. Experimental results show that, compared with the Optimal Pricing Prediction Strategy (OPPS), the Historical Optimal Pricing Strategy (HOPS) and the Random Pricing Strategy (RPS), GIPA can increase the average utility of edge servers by 4.06%, 10.08% and 31.39% respectively.
In order to optimize the Energy Efficiency (EE) and Spectrum Efficiency (SE) of the Decode-and-Forward (DF) full-duplex relay network, an EE-SE trade-off method for DF full-duplex relay networks was proposed. Firstly, the EE of the network was optimized with the goal of improving the SE of the network: the optimal relay power was obtained by combining analytical derivation with the Newton-Raphson method, and the Pareto optimal set of the objective function was given. Secondly, a trade-off factor was introduced through the weighted scalarization method, a trade-off optimization function of EE and SE was constructed, and the multi-objective optimization problem of EE and SE was transformed into a single-objective energy-spectrum efficiency optimization problem by normalization. At the same time, the EE, SE and trade-off performance under different trade-off factors was analyzed. Simulation results show that, at the same data transmission rate, the SE and EE of the proposed method are higher than those of the full-duplex optimal-power method and the half-duplex optimal-relay optimal-power-allocation method. By adjusting the trade-off factor, the optimal trade-off and the joint optimization of EE and SE can be achieved.
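A generic Newton-Raphson sketch for the relay-power step is given below: f is the derivative of a placeholder objective with respect to relay power, so a root of f is a candidate optimal power; the objective is not the paper's expression.

```python
# Newton-Raphson root finding: x_{k+1} = x_k - f(x_k) / f'(x_k).
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example with an assumed concave objective g(p) = log(1+p) - 0.5*p:
# g'(p) = 1/(1+p) - 0.5 and g''(p) = -1/(1+p)**2, so the optimum is p = 1.
p_opt = newton_raphson(lambda p: 1/(1+p) - 0.5, lambda p: -1/(1+p)**2, 1.0)
```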
Since relayed cooperative communications suffer from weak direct-path signals between the transmitter and the receiver and low Signal-to-Noise Ratio (SNR), a Reconfigurable Intelligent Surface (RIS) assisted cooperative Index Modulation (IM) system with Decode-and-Forward (DF) relaying (RIS-DF-IM) was proposed. In RIS-DF-IM, RISs acting as smart Access Points (APs) were adopted as part of the transmitters at the source and relay nodes to perform phase compensation for the reflected channel according to the transmission information, maximizing the receiving antenna SNR, and IM was performed on the multiple receive antennas of the relay and destination nodes to improve the spectral efficiency of the system. At the same time, theoretical union bounds on the Bit Error Rate (BER) of the proposed dual-hop system were derived using the Moment Generating Function (MGF) method. Besides, a Simplified Pre-greedy Maximum Likelihood (SPML) detector was proposed to reduce the computational complexity by decreasing the number of traversed antenna indexes and simplifying the Maximum Likelihood (ML) decoding criterion. Monte Carlo simulation results show that, when the number of RIS elements is 128 and spatial modulation is adopted, the BER performance of RIS-DF-IM is about 10 dB better than that of the cooperative spatial modulation system in which the RIS is not part of the transmitter at the far end, and about 20 dB better than that of the traditional precoded spatial modulation system. Although the SPML detector loses about 1.4 dB in BER performance compared to the ML detector, its computational complexity is reduced by half, achieving an effective balance between BER and complexity.
Variable-length addresses are one of the important research topics in the field of future networks. Aiming at the low efficiency of traditional routing lookup algorithms for variable-length addresses, an efficient routing lookup algorithm for variable-length addresses based on the AVL (Adelson-Velskii and Landis) balanced binary tree and the Bloom filter, namely the AVL-Bloom algorithm, was proposed. Firstly, in view of the flexible and unbounded nature of variable-length addresses, multiple off-chip hash tables were used to separately store route entries with the same number of prefix bits together with their next-hop information, while on-chip Bloom filters were utilized to speed up the search for route prefixes likely to match. Secondly, in order to avoid the multiple hash comparisons that hash-based routing lookup algorithms need when searching for the longest-prefix route, the AVL tree technique was introduced; that is, the Bloom filter and hash table of each group of route prefixes were organized in an AVL tree, so as to optimize the query order of route prefix lengths, reduce the number of hash calculations and thereby decrease the search time. Finally, comparative experiments between the proposed algorithm and traditional routing lookup algorithms such as METrie (Multi-Entrance-Trie) and COBF (Controlled prefix and One-hashing Bloom Filter) were conducted on three different variable-length address datasets. Experimental results show that the search speed of the AVL-Bloom algorithm is significantly faster than those of the METrie and COBF algorithms, with query time reduced by nearly 83% and 64% respectively. At the same time, the AVL-Bloom algorithm maintains stable search performance under large changes in routing table entries, and is suitable for routing lookup and forwarding with variable-length addresses.
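The on-chip membership test can be illustrated with a minimal Bloom filter sketch, one filter per prefix-length group, so that a miss skips the off-chip hash table probe; the sizes and hash construction are assumptions.

```python
import hashlib

# Minimal Bloom filter: k hash probes into a bit array; a negative answer
# is definitive, so the off-chip hash table lookup can be skipped.
class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))

# Usage: one filter per prefix-length group; probe groups in the order
# given by the AVL tree and consult the hash table only on filter hits.
```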
In the rate control algorithms of the new-generation video coding standard H.266/VVC (Versatile Video Coding), a rate-distortion optimization technique with independent coding parameters is adopted. However, the Coding Tree Units (CTUs) within the same frame affect one another in the spatial domain, so there are global coding parameters; at the same time, the CTU-level bit allocation formulas use approximated coding parameters, which reduces rate control accuracy and coding performance. To address these issues, a spatial-domain global optimization algorithm for CTU-level bit allocation, RTE_RC (Rate Control with Recursive Taylor Expansion), was proposed, in which the global coding parameters were approximated by a recursive algorithm. Firstly, a globally optimized bit allocation model in the spatial domain was established. Secondly, a recursive algorithm was used to calculate the global Lagrange multiplier in the CTU-level bit allocation formula. Finally, the bit allocation of coding units was optimized and the coding units were coded. Experimental results show that, under the Low-Delay P-frame (LDP) configuration, compared with the rate control algorithm VTM_RC (the rate control algorithm of the Versatile Test Model), the proposed algorithm decreases the rate control error from 0.46% to 0.02%, saves 2.48 percentage points of bit-rate, and reduces coding time by 3.52%. Therefore, the rate control accuracy and rate-distortion performance are significantly improved by the proposed algorithm.
In order to reduce the high computational complexity of traditional beamforming methods for multi-user millimeter-Wave (mmWave) relay systems, a Singular Value Decomposition (SVD) method based on Deep Learning (DL) was proposed to design the hybrid beamforming of the transmitter, relay and receiver. Firstly, the DL method was used to design the beamforming matrices of the transmitter and relay to maximize the achievable spectral efficiency. Then, the beamforming matrices of the relay and receiver were designed to maximize the equivalent channel gain. Finally, a Minimum Mean Square Error (MMSE) filter was designed at the receiver to eliminate inter-user interference. Theoretical analysis and simulation results show that, compared with Alternating Maximization (AltMax) and the traditional SVD method, the DL-based hybrid beamforming method reduces computational complexity by 12.5% and 23.44% respectively in the case of high-dimensional channel matrices and many users, and improves spectral efficiency by 2.277% and 21.335% respectively with known Channel State Information (CSI), and by 11.452% and 43.375% respectively with imperfect CSI.
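For reference, the classical SVD beamforming step that the DL model learns to approximate is sketched below: the dominant singular vectors of the channel give the precoder and combiner; the dimensions and SNR in the example are arbitrary.

```python
import numpy as np

# SVD beamforming: dominant right/left singular vectors of the channel H
# give the transmit precoder F and receive combiner W.
def svd_beamforming(H, n_streams):
    U, s, Vh = np.linalg.svd(H)
    F = Vh.conj().T[:, :n_streams]     # transmit precoder
    W = U[:, :n_streams]               # receive combiner
    return F, W, s[:n_streams]

# Example: 64x16 channel, 4 streams, equal power; per-stream SE from the
# singular-value gains.
H = (np.random.randn(16, 64) + 1j * np.random.randn(16, 64)) / np.sqrt(2)
F, W, gains = svd_beamforming(H, 4)
snr = 10.0
se = np.sum(np.log2(1 + snr / 4 * gains ** 2))
```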
LoRaWAN, as a wireless communication standard in Low Power Wide Area Networks (LPWANs), provides support for the development of the Internet of Things (IoT). However, limited by the incomplete orthogonality among Spreading Factors (SFs) and the fact that LoRaWAN has no Listen-Before-Transmit (LBT) mechanism, the ALOHA-based transmission scheduling method triggers serious channel conflicts, which greatly reduces the scalability of LoRa (Long Range Radio) networks. Therefore, in order to improve the scalability of LoRa networks, a Non-Persistent Carrier Sense Multiple Access (NP-CSMA) mechanism was proposed to replace the ALOHA medium access control mechanism in LoRaWAN. The channel access time of each node with the same SF in a LoRa network was coordinated by LBT, and signals with different SFs were transmitted in parallel, thus reducing same-SF interference and avoiding inter-SF interference in the common channel. To analyze the impact of NP-CSMA on the scalability of LoRa networks, LoRa networks constructed with LoRaWAN and NP-CSMA were compared by theoretical analysis and NS3 simulation. Experimental results show that NP-CSMA achieves 58.09% higher theoretical Packet Delivery Rate (PDR) than LoRaWAN under the same conditions at a network communication load rate of 1. In terms of channel utilization, NP-CSMA increases saturated channel utilization by 214.9% and accommodates 60.0% more nodes than LoRaWAN. In addition, the average latency of NP-CSMA is shorter than that of confirmed LoRaWAN at network traffic load rates below 1.7, and the additional energy consumption of maintaining the CAD (Channel Activity Detection) mode is 1.0 mJ to 1.3 mJ and 2.5 mJ to 5.1 mJ lower than that required by LoRaWAN to receive confirmation messages from the gateway when the spreading factor is 7 and 10 respectively. These results fully demonstrate that NP-CSMA can improve LoRa network scalability effectively.
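The non-persistent access rule can be pictured with the following sketch (generic textbook behavior, not the paper's NS3 model): sense the channel before sending, and on a busy channel back off for a random time and sense again.

```python
import random

# Non-persistent CSMA: channel_busy and send are caller-supplied stubs
# (here standing in for CAD sensing and radio transmission).
def np_csma_send(channel_busy, send, now, max_backoff=0.1):
    t = now
    while channel_busy(t):                   # CAD-based listen-before-talk
        t += random.uniform(0, max_backoff)  # non-persistent random backoff
    send(t)                                  # channel idle: transmit
    return t
```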
Since the pilot overhead of traditional channel estimation methods in Reconfigurable Intelligent Surface (RIS)-assisted wireless communication systems is excessively high, a block-sparsity based Orthogonal Matching Pursuit (OMP) channel estimation scheme was proposed. Firstly, according to the millimeter Wave (mmWave) channel model, the cascaded channel matrix was derived and transformed into the Virtual Angular Domain (VAD) to obtain a sparse representation of the cascaded channel. Secondly, by utilizing the sparsity of the cascaded channel, the channel estimation problem was transformed into a sparse matrix recovery problem, and a reconstruction algorithm based on compressive sensing was adopted to recover the sparse matrix. Finally, the special row-block sparse structure of the channel was analyzed, and the traditional OMP scheme was optimized to further reduce pilot overhead and improve estimation performance. Simulation results show that the Normalized Mean Squared Error (NMSE) of the proposed optimized OMP scheme based on the row-block sparse structure is about 1 dB lower than that of the conventional OMP scheme. Therefore, the proposed channel estimation scheme can effectively reduce pilot overhead and obtain better estimation performance.
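The sparse recovery step can be illustrated with a standard OMP sketch for y = A h + n, where h is sparse in the virtual angular domain and A is the pilot measurement matrix; the row-block optimization of the proposed scheme is not reproduced.

```python
import numpy as np

# Standard OMP: greedily grow the support, re-fit by least squares.
def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Select the column most correlated with the residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-estimate on the current support.
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ x_s
    h = np.zeros(A.shape[1], dtype=complex)
    h[support] = x_s
    return h
```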
Concerning the dynamic changes of link delay, clock timing interference, and the uncertainty of timestamp acquisition caused by complex link environments and temperature fluctuations in Industrial Wireless Sensor Networks (IWSNs), a time synchronization method based on the Precision Time Protocol (PTP) was proposed for IWSNs. Firstly, the clock state space model and observation model were constructed by integrating the clock timing interference and asymmetric link delay noise in the PTP bidirectional time synchronization process. Secondly, a reverse adaptive Kalman filter algorithm was constructed to remove the noise interference. Thirdly, the rationality of the noise statistical model was evaluated using the normalized innovation ratio of the reverse and forward clock state estimates. Finally, the process noise of the clock state was dynamically adjusted after setting a detection threshold, thereby achieving precise estimation of the clock parameters. Simulation results show that, compared with the classical Kalman filter algorithm and the PTP protocol, the proposed algorithm estimates clock offset and skew with smaller and more stable error standard deviations under different clock timing precisions. The reverse adaptive Kalman filter can effectively solve the problem of Kalman filter divergence caused by factors such as noise uncertainty, and improves the reliability of time synchronization.
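A minimal forward Kalman filter sketch for this clock model is shown below: the state is [offset, skew], the offset grows by skew over each synchronization interval, and the PTP exchange yields a noisy offset observation; the noise levels and the reverse adaptive mechanism are not from the paper.

```python
import numpy as np

# Forward Kalman filter over a two-state clock model (offset, skew).
def kalman_clock(observations, tau, q=1e-6, r=1e-4):
    F = np.array([[1.0, tau], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                 # observe offset only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    track = []
    for z in observations:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        innov = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ innov).ravel()
        P = (np.eye(2) - K @ H) @ P            # update
        track.append(x.copy())
    return np.array(track)
```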
In order to meet the requirements of high reliability and low latency in the 5G network environment while reducing network bandwidth resource consumption, a Service Function Chain (SFC) deployment method based on node comprehensive importance ranking for traffic and reliability optimization was proposed. Firstly, Virtualized Network Functions (VNFs) were aggregated based on the rate of traffic change, which reduced the number of deployed physical nodes and improved link reliability. Secondly, node comprehensive importance was defined by the degree, reliability, comprehensive delay and link hop count of each node in order to rank the physical nodes, and the VNFs were mapped to the underlying physical nodes in turn; at the same time, by restricting the number of links, the “ping-pong effect” was reduced and the traffic was optimized. Finally, the virtual links were mapped by the k-shortest-path algorithm to complete the deployment of the entire SFC. Compared with the original aggregation method, the proposed method improves SFC reliability by 2%, reduces the end-to-end delay of SFC by 22% and the bandwidth overhead by 29%, and increases the average long-term revenue-to-cost ratio by 16%. Experimental results show that the proposed method can effectively improve link reliability, reduce end-to-end delay and bandwidth resource consumption, and achieve a good optimization effect.
Aiming at the energy efficiency and scene adaptability problems of synchronization topology, a Greedy Synchronization Topology algorithm based on Formal Concept Analysis for traffic-surveillance sensor networks (GST-FCA) was proposed. Firstly, the scene adaptability requirements and the energy efficiency model of the synchronization topology in a traffic-surveillance sensor network were analyzed. Secondly, correlation analysis was performed on the adjacency features of sensor nodes in the same layer and in adjacent layers by using Formal Concept Analysis (FCA); afterwards, Broadcast Tuples (BTs) were built and synchronization sets were divided according to the greedy strategy with the maximum number of neighbors. Thirdly, backtracking broadcast was used to improve the broadcast strategy of layer detection in the Timing-synchronization Protocol of Sensor Network (TPSN) algorithm; meanwhile, an upward hosting mechanism was designed to extend the information sharing range of synchronous nodes and further alleviate the locally optimal solution problem caused by the greedy strategy. Finally, GST-FCA was verified and tested in terms of energy efficiency and scene adaptability. Simulation results show that, compared with algorithms such as TPSN and Linear Estimation of Clock Frequency Offset (LECFO), GST-FCA decreases the synchronization packet overhead by at least 11.54%, 24.59% and 39.16% in the three test scenarios of deployment location, deployment scale and road deployment. Therefore, GST-FCA can alleviate the locally optimal solution problem and reduce synchronization packet overhead, and it performs excellently in energy efficiency while meeting the scene adaptability requirements of the above three scenarios.
With the continuous development of blockchain technology, block transmission delay has become a performance bottleneck for the scalability of blockchain systems. Remote Direct Memory Access (RDMA) technology, which supports high-bandwidth and low-delay data transmission, provides a new approach to low-latency block transmission. Therefore, a block catalogue structure for block information sharing was designed based on the characteristics of RDMA primitives, and the basic working process of block transmission was proposed and implemented on this basis. Experimental results show that, compared with the TCP (Transmission Control Protocol) transmission mechanism, the RDMA-based block transmission mechanism reduces the transmission delay between nodes by 44%, the whole-network transmission delay by 24.4% for a block of 1 MB size, and the number of temporary forks appearing in the blockchain by 22.6% in a blockchain of 10 000 nodes. It can be seen that the RDMA-based block transmission mechanism takes advantage of high-speed networks, reduces block transmission latency and the number of temporary forks, and thereby improves the scalability of existing blockchain systems.
Aiming at the single-link failure problem in the vehicle-road real-time query communication scenario of Software-Defined Internet of Vehicles (SDIV), a fast link failure recovery method for SDIV was proposed, which considered both the link recovery delay and the path transmission delay after recovery. Firstly, the failure recovery delay was modeled, and the optimization goal of minimizing the delay was transformed into a 0-1 integer linear programming problem. Then, this problem was analyzed, and two algorithms were proposed for different situations, both of which tried to maximize the reuse of existing calculation results: a Path Recovery Algorithm based on Topology Partition (PRA-TP), for when the flow table update delay cannot be ignored relative to the path transmission delay, and a Path Recovery Algorithm based on Single Link Search (PRA-SLS), for when the flow table update delay is negligible because it is far less than the path transmission delay. Experimental results show that, compared with the Dijkstra algorithm, PRA-TP reduces the algorithm calculation delay by 25% and the path recovery delay by 40%, and PRA-SLS reduces the algorithm calculation delay by 60%, realizing fast single-link failure recovery at the vehicle end.
Aiming at the problem that noise increases the error probability of transmitted signals in nonlinear digital communication systems, a multivariate communication system based on a discrete Bidirectional Associative Memory (BAM) neural network was proposed. Firstly, an appropriate number of neurons and memory vectors were selected according to the signals to be transmitted, the weight matrix was calculated, and the BAM neural network was generated. Secondly, the multivariate signals were mapped to initial input vectors with modulation amplitude and continuously input into the system; the input was iterated through the neural network with Gaussian noise added to each neuron, the output was sampled according to the code element interval and transmitted over the lossless channel, and the receiver decoded the decision according to the decision rule. Finally, in the field of image processing, the proposed system was used to transmit compressed image data and decode the recovered images. Simulation results show that, for weakly modulated signals with a large code element interval, the error probability first decreases and then increases as the noise intensity grows, a clear stochastic resonance phenomenon. Moreover, the error probability is positively correlated with the radix of the signal, and negatively correlated with the signal amplitude, the code element interval and the number of neurons; under certain conditions, the error probability can reach 0. These results show that the BAM neural network can improve the reliability of digital communication systems through noise. In addition, the similarity of the images restored by decoding shows that moderate noise improves the image restoration effect, extending the application of BAM neural networks and stochastic resonance to image compression coding.
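A minimal discrete BAM sketch is given below: the weight matrix is the sum of outer products of bipolar pattern pairs, and recall iterates between the two layers until stable; the per-neuron noise injection and modulation mapping are omitted.

```python
import numpy as np

# Discrete BAM: Hebbian weight matrix from bipolar (+1/-1) pattern pairs.
def bam_train(X, Y):
    # X: (n_pairs, n_x), Y: (n_pairs, n_y), entries in {-1, +1}.
    return X.T @ Y

def bam_recall(W, x, iters=10):
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(iters):
        y = sign(x @ W)          # forward pass to the Y layer
        x_new = sign(W @ y)      # backward pass to the X layer
        if np.array_equal(x_new, x):
            break                # stable pair reached
        x = x_new
    return x, y
```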
Since the traditional flow scheduling method for data center networks is prone to causing network congestion and link load imbalance, a dynamic flow scheduling mechanism based on the Differential Evolution (DE) and Ant Colony Optimization (ACO) algorithms (DE-ACO) was proposed to optimize elephant flow scheduling in data center networks. Firstly, Software-Defined Networking (SDN) technology was used to capture real-time network status information and set the optimization objectives of flow scheduling. Then, the DE algorithm was redefined by the optimization objectives, and several available candidate paths were calculated and used as the initialized global pheromone of the ACO algorithm. Finally, the global optimal path was obtained by combining the global network status, and the elephant flows on congested links were rerouted. Experimental results show that, compared with the Equal-Cost Multi-Path routing (ECMP) algorithm and the ACO-based network flow scheduling algorithm for SDN data centers (ACO-SDN), the proposed algorithm increases the average bisection bandwidth by 29.42% to 36.26% and 5% to 11.51% respectively in random communication mode, reducing the Maximum Link Utilization (MLU) of the network and achieving better network load balancing.
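The DE stage that seeds the ACO pheromone can be sketched generically as one mutation-crossover-selection step over a real-coded population; the path encoding and fitness function are placeholders.

```python
import numpy as np

# One generic DE/rand/1/bin step: donor from three random individuals,
# binomial crossover with the parent, greedy selection on fitness.
def de_step(pop, fitness, F=0.5, CR=0.9, rng=np.random.default_rng(0)):
    n, dim = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        donor = a + F * (b - c)                       # mutation
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True                # ensure one gene crosses
        trial = np.where(mask, donor, pop[i])         # crossover
        if fitness(trial) < fitness(pop[i]):          # greedy selection
            new_pop[i] = trial
    return new_pop
```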
Aiming at the multi-dimensional resource allocation problem in downlink heterogeneous cognitive radio Ultra-Dense Networks (UDNs), an improved genetic algorithm was proposed to jointly optimize user association and resource allocation with the objective of maximizing the throughput of femtocell users. Firstly, preprocessing was performed before running the algorithm to initialize the matrices of each user's reachable base stations and available channels. Secondly, symbol coding was used to encode the matching relationships between users and base stations as well as between users and channels into a two-dimensional chromosome. Thirdly, dynamic best-replication plus roulette selection was used as the selection operator to speed up the convergence of the population. Finally, in order to keep the algorithm from falling into a local optimum, a premature-convergence judgment was added to the mutation stage, so that the connection strategy among base stations, users and channels was obtained within a limited number of iterations. Experimental results show that, with fixed numbers of base stations and channels, the proposed algorithm improves the total user throughput by 7.2% and the cognitive user throughput by 1.2% compared with the genetic algorithm with three-dimensional matching, with lower computational complexity. The proposed algorithm reduces the search space of feasible solutions, and can effectively improve the total throughput of cognitive radio UDNs with lower complexity.