In recent years, many deep-learning-based low-light image enhancement methods have been inspired by Retinex theory: the illumination map is first estimated to adjust brightness, and the reflectance is then restored to achieve low-light enhancement. Therefore, by analyzing the similarity between the reflection map of a low-light scene and that of a reference scene, a low-light image enhancement Network guided by Reflection Prior map (RP-Net) was proposed. Firstly, a similar reflection map was generated by decomposition in Lab color space, and a Reflection Prior feature Adaptive Extractor (RPAE) was designed to re-encode and filter the guiding features from the similar reflection map at different scales of the backbone network. Then, the guiding information was injected into the backbone network through the designed Reflection Prior feature-Guided attention Block (RPGB). In addition, to overcome the limitations of the traditional pixel-by-pixel L1 loss, a frequency-domain harmonic loss function was designed from the perspective of frequency-domain analysis, so as to optimize the enhancement effect from the global spectral distribution. Experimental results on the LOLv1, LOLv2 and LSRW datasets show that the proposed method is superior to the existing mainstream methods in Structural Similarity (SSIM), achieves Peak Signal-to-Noise Ratio (PSNR) gains of 1.29 dB and 2.08 dB over Retinexformer and SAFNet (Spatial And Frequency Network) on the LOLv2-syn and LSRW datasets respectively, and performs well in balancing color fidelity and enhancement effect.
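A frequency-domain loss of the kind described above can be sketched as follows (a minimal NumPy illustration assuming an L1 penalty on the amplitude and phase spectra blended by a weight `alpha`; the paper's exact harmonic formulation and weighting are assumptions here):

```python
import numpy as np

def frequency_domain_loss(pred, target, alpha=0.5):
    """L1 distance between the amplitude and phase spectra of two images,
    blended by alpha. A minimal sketch of a frequency-domain loss; the
    paper's exact harmonic formulation and weighting are assumptions."""
    fp = np.fft.fft2(pred)
    ft = np.fft.fft2(target)
    amp_term = np.abs(np.abs(fp) - np.abs(ft)).mean()
    pha_term = np.abs(np.angle(fp) - np.angle(ft)).mean()
    return alpha * amp_term + (1 - alpha) * pha_term

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
same = frequency_domain_loss(img, img)           # identical images
shifted = frequency_domain_loss(img, img + 0.1)  # brightness offset
```

Because the penalty is computed over the full spectrum, errors concentrated in any frequency band contribute to the loss, unlike a purely pixel-wise L1.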
Current deep image steganography methods based on image-in-image concealment face challenges in practical privacy protection and secure communication applications due to the insufficient security of stego images and distortion in recovered secret images. To address these issues, an image-in-image steganography method based on Conditional Generative Adversarial Network and Convolutional Block Attention Module (CBAM-CGAN) was proposed. Firstly, a hybrid attention module was introduced into the generator network to enable comprehensive learning of image features in both channel and spatial dimensions, thereby enhancing the visual quality of stego images. Secondly, residual connections were employed to reduce the feature loss of secret images during network learning, and noise-free extraction of secret images was achieved through adversarial training between the extractor and the discriminator. Finally, adversarial training between the generator and the steganalyzer was implemented to improve the security of stego images. Experimental results on public datasets including COCO demonstrate that compared with the steganography method StegGAN, the proposed method achieves Peak Signal-to-Noise Ratio (PSNR) improvements of 4.37 dB and 4.71 dB for stego and decrypted images, respectively, along with Structure Similarity Index Measure (SSIM) enhancements of 9.16% and 6.46%, respectively. In terms of security, the proposed method decreases the detection Accuracy (Acc) of the steganalyzer Ye-Net by 9.35 percentage points and increases its False Negative Rate (FNR) by 12.01 percentage points. It can be seen that the proposed method ensures stego image security while achieving high-quality secret image recovery.
The existing few-shot object detection models suffer from low sensitivity to the feature parameters of new categories and difficulty in accurately distinguishing category-related from category-unrelated parameters, leading to unclear feature boundaries and category confusion. To address these issues, a Few-Shot Object Detection algorithm based on new-category Feature Enhancement and Metric Mechanism (FEMM-FSOD) was proposed. Firstly, a Cross-Domain Parameter perception Module (CDPM) was introduced to improve the neck network, reconstructing the re-weighting operations of channel and spatial features, and dilated convolution was combined with cross-stage information transfer and feature fusion to provide rich gradient information guidance and enhance the sensitivity to new-category parameters. Meanwhile, an Integrated Correlated Multi-Feature module (ICMF) was constructed before Region of Interest Pooling (RoI Pooling) to establish correlations between features and dynamically optimize the fusion of relevant features, thereby enhancing salient features. The introduction of CDPM and ICMF effectively enhanced the feature representation of new categories, alleviating feature boundary ambiguity. Additionally, to further reduce category confusion, an orthogonal loss function based on the metric mechanism, Coherence-Separation Loss (CohSep Loss), was introduced in the detection head to achieve intra-class feature aggregation and inter-class feature separation by measuring feature vector similarity.
Experimental results show that compared to the baseline algorithm TFA (Two-stage Fine-tuning Approach), on the PASCAL VOC dataset, the proposed algorithm improves the mAP50 (mean Average Precision (mAP) of new categories at a threshold of 0.50) averaged over the 15 few-shot instance-number settings by 5.3 percentage points; on the COCO dataset, it improves the mAP (mAP of new categories at thresholds from 0.50 to 0.95) under the 10-shot and 30-shot settings by 3.6 and 5.2 percentage points, respectively, realizing higher accuracy in few-shot object detection.
Aiming at the limitations of the existing sentiment classification models in deep sentiment understanding, the unidirectional constraints of traditional attention mechanisms, and the class imbalance problem in Natural Language Processing (NLP), a sentiment classification model M-BCA (Multi-scale BERT features with Bidirectional Cross Attention) was proposed, which integrates multi-scale BERT (Bidirectional Encoder Representations from Transformers) features and a bidirectional cross attention mechanism. Firstly, multi-scale features were extracted from BERT's lower, middle, and upper layers to capture the surface information, syntactic information, and deep semantic information of sentence texts. Secondly, a three-channel Gated Recurrent Unit (GRU) was utilized to further extract deep semantic features, thereby enhancing the model's understanding of text. Finally, in order to promote interaction and learning between features of different scales, a bidirectional cross attention mechanism was introduced. Additionally, to address the imbalanced data issue, a data augmentation strategy was designed, and a mixed loss function was adopted to optimize the model's learning for minority-class samples. Experimental results indicate that M-BCA achieves excellent performance in fine-grained sentiment classification tasks and performs significantly better than most baseline models on imbalanced multi-class sentiment datasets. Moreover, M-BCA has outstanding performance in classifying minority-class samples, particularly on the NLPCC 2014 and Online_Shopping_10_Cats datasets, where its Macro-Recall for minority classes surpasses that of all comparison models. It can be seen that this model achieves remarkable performance improvements in fine-grained sentiment classification tasks and is suitable for handling imbalanced datasets.
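The bidirectional cross attention idea can be sketched as follows (a minimal NumPy illustration in which two feature sequences attend to each other; the learned projection matrices and the dimensions used in M-BCA are omitted, and the shapes here are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product attention of one feature sequence over another."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

def bidirectional_cross_attention(a, b):
    """Each scale attends to the other, so information flows both ways."""
    return cross_attention(a, b), cross_attention(b, a)

a = np.random.default_rng(0).normal(size=(4, 8))  # e.g. lower-layer features
b = np.random.default_rng(1).normal(size=(6, 8))  # e.g. upper-layer features
a_enriched, b_enriched = bidirectional_cross_attention(a, b)
```

Each output row is a convex combination of the other sequence's feature vectors, which is what lets the two scales exchange information symmetrically.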
Facing resource wastage and performance challenges in cloud platform resource scheduling, especially the low accuracy of cloud resource prediction caused by the difficulty of manually selecting hyperparameters for Long Short-Term Memory (LSTM) network models, a combined prediction model optimized by the Transit Search (TS) algorithm, named TS-ARIMA-LSTM, was proposed. The combined model integrates the AutoRegressive Integrated Moving Average (ARIMA) model with the LSTM model. Firstly, the TS algorithm was used to optimize the hyperparameters of the LSTM model, including the neuron counts of three layers and the transmission delays. Then, the optimized LSTM model was used for preliminary prediction, and the ARIMA model was used to correct the errors of the LSTM predictions. Finally, the prediction results of the ARIMA and LSTM models were combined to obtain the final prediction value. Experimental results on the public Alibaba Cloud dataset Cluster-trace-v2018 show that the proposed model improves the prediction accuracy significantly compared to the traditional single prediction models ARIMA and LSTM, as well as the combined prediction model ARIMA-LSTM. Specifically, compared to ARIMA-LSTM, the best-performing baseline model, the proposed model decreases the Mean Square Error (MSE) by 49.72%, the Root Mean Square Error (RMSE) by 29.24%, and the Mean Absolute Error (MAE) by 33.94%. It can be seen that the proposed model demonstrates high prediction accuracy in cloud resource prediction, offering a new pathway for improving cloud platform task scheduling strategies.
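The residual-correction structure of such a combined model can be sketched as follows (a minimal NumPy illustration: a moving-average predictor stands in for the tuned LSTM, and a least-squares AR(1) fit on its residuals stands in for the ARIMA correction stage; both stand-ins are assumptions, not the paper's models):

```python
import numpy as np

def moving_average_forecast(series, window=3):
    """Stand-in for the LSTM stage: predict y[t] from the mean of the
    previous `window` values (illustrative only)."""
    return np.array([series[t - window:t].mean()
                     for t in range(window, len(series))])

def ar1_residual_correction(residuals):
    """Minimal ARIMA-like stage: fit r[t] = c + phi * r[t-1] by least
    squares and return the predicted residuals."""
    x, y = residuals[:-1], residuals[1:]
    design = np.column_stack([np.ones_like(x), x])
    (c, phi), *_ = np.linalg.lstsq(design, y, rcond=None)
    return c + phi * x

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))   # toy resource-usage trace
base = moving_average_forecast(series)     # preliminary prediction
truth = series[3:]
resid = truth - base
corr = ar1_residual_correction(resid)
combined = base[1:] + corr                 # corrected final prediction
mse_base = float(np.mean((truth[1:] - base[1:]) ** 2))
mse_combined = float(np.mean((truth[1:] - combined) ** 2))
```

Because the correction stage is fit by least squares on the residual series, the in-sample error of the combined prediction can never exceed that of the base predictor alone, which is the rationale for stacking the two models.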
Identifying Drug-Target Interactions (DTI) is a crucial step in drug repurposing and novel drug discovery. Currently, many sequence-based computational methods have been widely used for DTI prediction. However, previous sequence-based studies typically focus solely on the sequence itself for feature extraction, neglecting heterogeneous information networks such as drug-drug interaction networks and drug-target interaction networks. Therefore, a novel method for DTI prediction based on sequence and multi-view networks, namely SMN-DTI (prediction of Drug-Target Interactions based on Sequence and Multi-view Networks), was proposed. In this method, a Variational AutoEncoder (VAE) was used to learn the embedding matrices of drug SMILES (Simplified Molecular-Input Line-Entry System) strings and target amino acid sequences. Subsequently, a Heterogeneous graph Attention Network (HAN) with a two-level attention mechanism was used to aggregate information from the different neighbors of drugs or targets in the networks from both node and semantic perspectives, obtaining the final embeddings. Two benchmark datasets widely used for DTI prediction, Hetero-seq-A and Hetero-seq-B, were used to evaluate SMN-DTI against the baseline methods. The results show that SMN-DTI achieves the best performance in Area Under the receiver operating Characteristic curve (AUC) and Area Under the Precision-Recall curve (AUPR) under three different positive-to-negative sample ratios. It can be seen that SMN-DTI outperforms the current mainstream advanced prediction methods.
To address the low detection accuracy and high missed detection rate caused by the narrow elongated, multi-scale, and long-range-dependent characteristics of pavement defect morphology, a pavement defect detection algorithm based on an improved YOLOv8_n with enhanced morphological perception was proposed. Firstly, an Edge-Enhancement Focus Module (EEFM) was introduced in the backbone fusion stage, in which a strip pooling kernel was used to capture directional and position-aware information, thereby enhancing edge details in deep features and improving the representation of elongated features. Secondly, a Dual Chain Feature Redistribution Pyramid Network (DCFRPN) was designed to reconstruct the fusion scheme, providing multi-scale features with extensive perception and rich localization information and thereby improving the fusion of multi-scale defects. Additionally, a Morphological Aware Task Interaction Detection Head (MATIDH) was constructed to enhance the task interaction between classification and localization, adjusting the data representation dynamically and integrating multi-scale strip convolutions to optimize the classification and regression of elongated defects. Finally, a PWIoU (Penalized Weighted Intersection over Union) loss function was proposed to allocate gradient gains dynamically to prediction boxes of different qualities, thereby optimizing bounding box regression. Experimental results show that on the RDD2022 dataset, compared to YOLOv8_n, the proposed algorithm improves precision and recall by 3.5 and 2.3 percentage points, respectively, and increases the mean Average Precision (mAP) at 50% Intersection over Union (IoU) by 3.2 percentage points, verifying the effectiveness of the proposed algorithm.
To optimize Text-to-SQL generation performance based on a heterogeneous graph encoder, the SELSQL model was proposed. Firstly, an end-to-end learning framework was employed, and the Poincaré distance metric in hyperbolic space was used instead of the Euclidean distance metric to optimize the semantically enhanced schema linking graph constructed from the pre-trained language model using probe technology. Secondly, K-head weighted cosine similarity and a graph regularization method were used to learn the similarity metric graph, so that the initial schema linking graph was iteratively optimized during training. Finally, the improved Relational Graph ATtention network (RGAT) graph encoder and a multi-head attention mechanism were used to encode the joint semantic schema linking graph of the two modules, and Structured Query Language (SQL) statement decoding was performed using a grammar-based neural semantic decoder and a predefined structured language. Experimental results on the Spider dataset show that, when using the ELECTRA-large pre-training model, the accuracy of the SELSQL model is 2.5 percentage points higher than that of the best baseline model, showing a notable improvement in generating complex SQL statements.
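The Poincaré distance substituted for the Euclidean metric has a closed form on the open unit ball; a minimal NumPy sketch follows (how SELSQL scales or clips its embeddings to stay inside the ball is not reproduced here):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-12):
    """Geodesic distance between points of the open unit ball under the
    Poincare metric: d(u, v) = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    duv = float(np.dot(u - v, u - v))
    denom = (1.0 - float(np.dot(u, u))) * (1.0 - float(np.dot(v, v)))
    return float(np.arccosh(1.0 + 2.0 * duv / (denom + eps)))

u = np.array([0.1, 0.2])
v = np.array([-0.3, 0.4])
d_uv = poincare_distance(u, v)
```

Distances blow up as points approach the boundary of the ball, which is what lets hyperbolic space separate hierarchical (tree-like) schema structures more sharply than Euclidean distance.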
Current cache side-channel attack detection technologies mainly target a single attack mode; the detection methods that handle two or three attacks are limited and cannot cover them fully. In addition, although the detection accuracy for a single attack is high, the accuracy decreases and false positives arise easily as the number of attacks increases. To detect cache side-channel attacks effectively, a multi-object cache side-channel attack detection model based on machine learning was proposed, in which Hardware Performance Counters (HPCs) were utilized to collect the features of various cache side-channel attacks. Firstly, feature analysis was conducted on the various cache side-channel attack modes, key features were selected, and datasets were collected. Then, a detection model was trained independently for each attack mode. Finally, during detection, test data was input into the multiple models in parallel, and the detection results of the models were combined to ascertain the presence of any cache side-channel attack. Experimental results show that the proposed model reaches high accuracies of 99.91%, 98.69% and 99.54% respectively when detecting three cache side-channel attacks: Flush+Reload, Flush+Flush and Prime+Probe. Even when multiple attacks exist at the same time, the various attack modes can be accurately identified.
Automatic inspection of concrete bridge health based on wall-climbing robots is an effective way to promote intelligent bridge management and maintenance, and reasonable path planning is particularly important for the robot to obtain comprehensive detection data. Aiming at the practical engineering problems of the weight limitation of the wall-climbing robot's power supply and the difficulty of energy replenishment during inspection, the inspection scenarios of bridge components such as main beams and high piers were fully considered, the energy consumption index was taken as the objective function for performance evaluation and optimization, corresponding constraint conditions were established, and a full-coverage path planning evaluation model was proposed. An Improved Grey Wolf Optimization (IGWO) algorithm was proposed to solve the problem that the traditional Grey Wolf Optimization (GWO) algorithm is prone to falling into local optima. In IGWO, K-Means clustering was used to initialize the grey wolf population, addressing the difficulty of maintaining a relatively uniform distribution in the search space; a nonlinear convergence factor was used to improve the local exploitation ability and global search performance of the algorithm; and, borrowing the individual-best idea of particle swarm optimization, the position updating formula was improved to enhance the solving ability of the algorithm. Simulation and comparison results show that IGWO has better stability than GWO, Differential Evolution (DE) and Genetic Algorithm (GA): IGWO reduces energy consumption by 10.2% to 16.7%, decreases the number of iterations by 19.3% to 36.9% and the solving time by 12.8% to 32.3%, reduces the path repetition rate by 0.23 to 1.91 percentage points, and shortens the path length by 1.6% to 11.0%.
With the rise of deep neural networks, Deep Metric Learning (DML) has attracted widespread attention. To gain a deeper understanding of DML, firstly, the limitations of traditional metric learning methods were organized and analyzed. Secondly, DML was discussed in terms of three types: sample-pair-based, proxy-based, and classification-based methods. For the sample-pair-based type, divergence methods, ranking methods, and methods based on Generative Adversarial Network (GAN) were introduced in detail. The proxy-based type was mainly discussed in terms of proxy samples and proxy categories. For the classification-based type, cross-modal metric learning, intra-class and inter-class margin problems, hypergraph classification, and combinations with other methods (such as reinforcement learning-based and adversarial learning-based methods) were discussed. Thirdly, various metrics for evaluating the performance of DML were introduced, and the applications of DML in different tasks, including face recognition, image retrieval, and person re-identification, were summarized and compared. Finally, the challenges faced by DML were discussed and some possible solution strategies were proposed.
Nested entities pose a challenge to the task of joint entity-relation extraction. The existing joint extraction models generate a large number of negative examples and have high complexity when dealing with nested entities; in addition, the interference of nested entities with triplet prediction is not considered by these models. To solve these problems, a forest-based joint entity-relation extraction method was proposed, named EF2LTF (Entity Forest to Layering Triple Forest). In EF2LTF, a two-stage joint training framework was adopted. Firstly, through the generation of an entity forest, the different entities within specific nested entities were identified flexibly. Then, the identified nested entities and their hierarchical structures were combined to generate a hierarchical triplet forest. Experimental results on four benchmark datasets show that EF2LTF outperforms methods such as the joint entity and relation extraction model with Set Prediction Network (SPN), the span-based joint extraction model SpERT (Span-based Entity and Relation Transformer), and DyGIE++ (Dynamic Graph Information Extraction ++) in F1 score. It is verified that the proposed method not only enhances the recognition of nested entities, but also improves the ability to distinguish nested entities when constructing triplets, thereby improving the joint extraction performance for entities and relations.
In visible-infrared cross-modal person re-identification, modality differences lead to low identification accuracy. Therefore, a dual-stream-structure-based cross-modal person re-identification relation network, named IVRNBDS (Infrared and Visible Relation Network Based on Dual-stream Structure), was proposed. Firstly, the dual-stream structure was used to extract the features of visible-light and infrared person images respectively. Then, the feature map of each person image was divided horizontally into six segments to model both the relationships between the local features of each segment and those of the other segments, and the relationship between the core features and the average features of the person. Finally, in the loss function design, the Hetero-Center triplet Loss (HC Loss) was introduced to relax the strict constraints of the ordinary triplet loss, so that image features of different modalities could be better mapped into the same feature space. Experimental results on the public datasets SYSU-MM01 (SunYat-Sen University MultiModal re-identification) and RegDB (Dongguk Body-based person Recognition) show that although the computational cost of IVRNBDS is slightly higher than those of mainstream cross-modal person re-identification algorithms, the proposed network improves Rank-1 (similarity Rank 1) and mAP (mean Average Precision) over those algorithms, increasing the recognition accuracy of cross-modal person re-identification.
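The center-based relaxation behind HC Loss can be sketched as follows (a simplified NumPy illustration that constrains per-identity modality centers instead of every sample triplet; the margin value and the exact mining scheme of the paper are assumptions):

```python
import numpy as np

def hetero_center_triplet_loss(feats_vis, feats_ir, labels, margin=0.3):
    """Hinge loss on per-identity modality centers: pull the visible and
    infrared centers of the same identity together while pushing away the
    nearest center of any other identity (simplified sketch of HC Loss)."""
    ids = np.unique(labels)
    centers_vis = np.stack([feats_vis[labels == i].mean(axis=0) for i in ids])
    centers_ir = np.stack([feats_ir[labels == i].mean(axis=0) for i in ids])
    loss = 0.0
    for k in range(len(ids)):
        pos = np.linalg.norm(centers_vis[k] - centers_ir[k])
        neg = min(np.linalg.norm(centers_vis[k] - centers_ir[j])
                  for j in range(len(ids)) if j != k)
        loss += max(0.0, margin + pos - neg)
    return loss / len(ids)

# well-separated identities with aligned modality centers -> zero loss
fv = np.array([[0.0, 0.0], [0.0, 0.0], [10.0, 10.0], [10.0, 10.0]])
fi = fv.copy()
aligned_loss = hetero_center_triplet_loss(fv, fi, np.array([0, 0, 1, 1]))
```

Constraining centers rather than all sample pairs is what relaxes the ordinary triplet loss: individual samples may drift as long as each identity's two modality centers stay close.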
Aiming at the differences in health system resilience among urban areas and the random evolution of demand for emergency medical supplies, a multi-stage dynamic allocation model for emergency medical supplies based on resilience assessment was proposed. Firstly, by combining the entropy method and the K-means algorithm, a resilience assessment system and a classification method for regional health systems were established. Secondly, the random evolution of the demand state was modeled as a Markov process, and triangular fuzzy numbers were used to handle fuzzy demand, thereby constructing a multi-stage dynamic allocation model of emergency medical supplies. Finally, the proposed model was solved by the binary Artificial Bee Colony (ABC) algorithm, and the effectiveness of the model was analyzed and verified through a real-world example. Experimental results show that the proposed model can realize dynamic allocation of supplies to accommodate demand changes and prioritize areas with weak resilience, reflecting the fairness and efficiency requirements of emergency management.
Focusing on the issues that the existing hashing-based image retrieval methods have weak expression ability, slow training speed, low retrieval precision, and difficulty in adapting to large-scale image retrieval, an image retrieval method based on Deep Residual network and Iterative Quantitative Hashing (DRITQH) was proposed. Firstly, the deep residual network was used to perform multiple non-linear transformations on the image data, extracting image features and obtaining high-dimensional feature vectors with semantic information. Then, Principal Component Analysis (PCA) was used to reduce the dimensionality of the high-dimensional image features. At the same time, to minimize the quantization error and obtain the best projection matrix, iterative quantization was used to binarize the generated feature vectors: the rotation matrix was updated and the data were mapped onto a zero-centered binary hypercube. Finally, the optimal binary hash codes, used for image retrieval in Hamming space, were obtained through hash learning. Experimental results show that the retrieval precisions of DRITQH for four hash codes of different lengths on the NUS-WIDE dataset are 0.789, 0.831, 0.838 and 0.846 respectively, which are 0.5, 3.8, 3.7 and 4.2 percentage points higher than those of Improved Deep Hashing Network (IDHN), and the average encoding time of the proposed method is 1 717 μs less than that of IDHN. Therefore, DRITQH reduces the impact of quantization errors, improves training speed, and achieves higher retrieval performance in large-scale image retrieval.
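The iterative quantization step alternates between snapping the rotated data to binary codes and re-fitting the rotation as an orthogonal Procrustes problem; a minimal NumPy sketch on zero-centered, PCA-reduced data (the iteration count and random initialization here are illustrative choices):

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Iterative quantization: alternately binarize the rotated data and
    re-fit the rotation via SVD (orthogonal Procrustes), reducing the
    quantization error ||B - VR||_F.
    V: zero-centered, PCA-reduced data (n_samples x n_bits)."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.normal(size=(c, c)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                  # fix R, update binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)   # fix B, update rotation
        R = U @ Wt
    return R, np.sign(V @ R)

rng = np.random.default_rng(1)
V = rng.normal(size=(200, 8))
V -= V.mean(axis=0)        # zero-center, as iterative quantization requires
R, codes = itq_rotation(V)
```

Each alternation cannot increase the quantization objective, so the rotated data settles ever closer to the vertices of the zero-centered binary hypercube mentioned above.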
Deep neural networks are vulnerable to adversarial example attacks: adversarial examples cause misclassification by adding human-imperceptible perturbations to original images, posing a security threat to deep neural networks. Therefore, before deploying deep neural networks, adversarial attacks are an important means of evaluating model robustness. However, under the black-box setting, the attack success rate of adversarial examples needs to be improved; that is, the transferability of adversarial examples needs to be increased. To address this issue, an adversarial example generation method based on image flipping transform, namely FT-MI-FGSM (Flipping Transformation Momentum Iterative Fast Gradient Sign Method), was proposed. Firstly, from the perspective of data augmentation, the original input image was randomly flipped in each iteration of the adversarial example generation process. Then, the gradient was calculated on the transformed image. Finally, adversarial examples were generated based on this gradient, alleviating the overfitting of the generation process and improving the transferability of the adversarial examples. In addition, the method of attacking ensemble models was used to further enhance transferability. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the proposed method: compared with I-FGSM (Iterative Fast Gradient Sign Method) and MI-FGSM (Momentum I-FGSM), FT-MI-FGSM improves the average black-box attack success rate against adversarially trained networks by 26.0 and 8.4 percentage points, respectively, under the ensemble-model attack setting.
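The iteration structure can be sketched as follows (a minimal NumPy illustration in which a toy linear loss stands in for the network so the gradient is analytic; the flip probability, step sizes, and momentum factor are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def ft_mi_fgsm(x0, W, eps=0.1, alpha=0.01, mu=1.0, n_iter=10, seed=0):
    """Momentum iterative FGSM with a random horizontal-flip transform.
    The 'model' here is a toy linear loss L(x) = sum(W * x), so gradients
    are analytic; a real attack would backpropagate through a network."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    g = np.zeros_like(x0)
    for _ in range(n_iter):
        # randomly flip the input; the gradient of L(flip(x)) w.r.t. x
        # is the flipped weight map
        grad = W[:, ::-1] if rng.random() < 0.5 else W
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x = x + alpha * np.sign(g)
        x = np.clip(x, x0 - eps, x0 + eps)  # stay inside the eps-ball
        x = np.clip(x, 0.0, 1.0)            # keep a valid image
    return x

rng = np.random.default_rng(0)
x0 = rng.random((4, 4))       # toy "image" in [0, 1]
W = rng.normal(size=(4, 4))   # toy model weights
adv = ft_mi_fgsm(x0, W)
```

Randomizing the transform at every step means the accumulated gradient is averaged over input variants, which is the data-augmentation view of why overfitting to one surrogate model is reduced.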
The existing image super-resolution reconstruction methods suffer from texture distortion and blurred details in generated images. To address these problems, a new image super-resolution reconstruction network based on a multi-channel attention mechanism was proposed. Firstly, in the texture extraction module of the proposed network, a multi-channel attention mechanism was designed to realize cross-channel information interaction by combining one-dimensional convolution, so as to focus on important feature information. Then, in the texture recovery module, dense residual blocks were introduced to recover as many high-frequency texture details as possible, improving model performance and generating high-quality reconstructed images. In this way, the visual effects of reconstructed images were improved effectively. Besides, results on the benchmark dataset CUFED5 show that the proposed network achieves improvements of 1.76 dB in Peak Signal-to-Noise Ratio (PSNR) and 0.062 in Structural SIMilarity (SSIM) compared with the classic Super-Resolution using Convolutional Neural Network (SRCNN) method. Experimental results show that the proposed network increases the accuracy of texture migration and effectively improves the quality of generated images.
Aiming at the low segmentation precision caused by the insufficient image feature extraction ability of the existing fractional-order nonlinear models, an instance segmentation model based on a fractional-order network and Reinforcement Learning (RL) was proposed to generate high-quality contour curves of target instances in images. The model consists of two layers: 1) the first layer was a two-dimensional fractional-order nonlinear network, in which the chaotic synchronization method was utilized to obtain the basic characteristics of the pixels in the image, and the preliminary segmentation result was acquired through coupling and connection according to the similarity among pixels; 2) in the second layer, instance segmentation was formulated as a Markov Decision Process (MDP) based on the idea of RL, and the action-state pairs, reward functions and strategies of the modeling process were designed to extract the region structure and category information of the image. Finally, the pixel features and preliminary segmentation result obtained from the first layer were combined with the region structure and category information obtained from the second layer for instance segmentation. Experimental results on the Pascal VOC2007 and Pascal VOC2012 datasets show that, compared with the existing fractional-order nonlinear models, the proposed model improves the Average Precision (AP) by at least 15 percentage points, verifying that the sequential-decision-based instance segmentation model can not only obtain the class information of the target objects in the image, but also further enhance the extraction of contour details and fine-grained information.
Deep learning models with convolution structures have poor generalization performance in few-shot learning scenarios. Therefore, taking AlexNet and ResNet as examples, a derivative-free performance optimization method for convolution-structured pre-trained models in few-shot learning was proposed. Firstly, the sample data were modulated to generate series data from non-series data based on causal intervention, and the pre-trained model was pruned directly based on the co-integration test from the perspective of data distribution stability. Then, based on the Capital Asset Pricing Model (CAPM) and optimal transport theory, forward learning without gradient propagation was carried out in the intermediate output process of the pre-trained model, and a new structure was constructed, thereby generating representation vectors with clear inter-class distinguishability in the distribution space. Finally, the generated effective features were adaptively weighted based on the self-attention mechanism and aggregated in the fully connected layer to generate embedding vectors with weak correlation. Experimental results indicate that the proposed method increases the Top-1 accuracies of the AlexNet and ResNet pre-trained models on 100 classes of images in the ImageNet 2012 dataset from 58.82% and 78.51% to 68.50% and 85.72%, respectively. Therefore, the proposed method can effectively improve the performance of convolution-structured pre-trained models with few-shot training data.
Differentiable ARchiTecture Search (DARTS) can design neural network architectures efficiently and automatically. However, there is a performance gap between the construction of its super network and the design of its derivation strategy. To solve this problem, a differentiable neural architecture search algorithm constrained in an optimized search space was proposed. Firstly, the training process of the super network was analyzed using the architecture parameters associated with the candidate operations as quantitative indicators, and it was found that the invalid candidate operation none occupied the maximum architecture-parameter weight in the derived architecture, causing the architectures obtained by the algorithm to perform poorly; to address this, an optimized search space was proposed. Then, the difference between the DARTS super network and the derived architecture was analyzed, the architecture entropy was defined based on the architecture parameters, and this entropy was used as a constraint on the DARTS objective function, so as to encourage the super network to narrow its gap with the derivation strategy. Finally, experiments were conducted on the CIFAR-10 dataset. The results show that the architecture searched by the proposed algorithm achieves 97.17% classification accuracy, outperforming the comparison algorithms comprehensively in accuracy, parameter quantity and search time, which verifies the effectiveness of the proposed algorithm.
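An architecture entropy of this kind can be sketched as follows (a minimal NumPy illustration: the mean entropy of the softmax distributions over candidate operations on each edge; how the paper weights this term in the objective is an assumption not reproduced here):

```python
import numpy as np

def architecture_entropy(alpha):
    """Mean entropy of the softmax distributions over candidate operations
    on each edge of the super network; adding this term to the DARTS
    objective penalizes indecisive (high-entropy) architecture parameters.
    alpha: (n_edges x n_candidate_ops) architecture parameters."""
    e = np.exp(alpha - alpha.max(axis=-1, keepdims=True))  # stable softmax
    p = e / e.sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

uniform_h = architecture_entropy(np.zeros((3, 5)))               # undecided edges
peaked_h = architecture_entropy(np.array([[10.0, 0, 0, 0, 0]]))  # decided edge
```

Entropy is maximal when every candidate operation is weighted equally and near zero once one operation dominates, so minimizing it pushes the super network toward the discrete architecture that the derivation strategy will eventually select.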
Before an emergency occurs, hospitals need to maintain a certain amount of emergency resource redundancy. Aiming at the problem of optimizing the configuration of hospital emergency resource redundancy under emergencies, firstly, based on utility theory, the emergency resource redundancy was defined and classified by analyzing its utility performance, and a utility function conforming to the law of diminishing marginal utility was determined. Secondly, a redundancy configuration model of hospital emergency resources maximizing total utility was established, with the upper limit of emergency resource storage and the lower limit of emergency rationality as the constraints of the model. Finally, a combination of particle swarm optimization and sequential quadratic programming was used to solve the model. Through case analysis, four optimization schemes for hospital emergency resource redundancy were obtained, and the degree to which different hospital emergency levels demand emergency resource redundancy was summarized. The research shows that with the proposed configuration optimization model, emergency rescue in hospitals under emergencies can be carried out effectively, and the utilization efficiency of hospital emergency resources can be improved.
To solve the problem that the high dimensionality of the descriptor decreases the matching speed of the Scale Invariant Feature Transform (SIFT) algorithm, an improved SIFT algorithm was proposed. With the feature point as the center, a circular rotation-invariant structure was used to construct the feature descriptor in a circular neighborhood of approximately the original size around the feature point, and the neighborhood was divided into several sub-rings. When the image is rotated, the pixels within each sub-ring remain relatively constant and only their positions change; therefore, the accumulated gradient values within each ring were sorted to generate a rotation-invariant feature descriptor. In this way, the complexity of the algorithm was reduced, and the descriptor dimensions were cut from 128 to 48. Experimental results show that the improved algorithm raises the rotating registration repetition rate to more than 85%. Compared with the original SIFT algorithm, the average matching registration rate increases by 5%, and the average image registration time is reduced by about 30% under image rotation, zoom and illumination changes, demonstrating the effectiveness of the improved SIFT algorithm.
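The ring-based invariance argument can be demonstrated with a minimal NumPy sketch (raw intensity sums stand in for the sorted gradient accumulations of the improved descriptor; the patch size and ring count are illustrative assumptions):

```python
import numpy as np

def ring_descriptor(patch, n_rings=4):
    """Describe a square patch by per-ring sums around its center. A pixel's
    distance to the center is unchanged by rotation, so each pixel stays in
    its ring and the descriptor is rotation-invariant. (Simplified sketch:
    intensity sums stand in for sorted gradient accumulations.)"""
    n = patch.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)   # distance to patch center
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    return np.array([patch[(r >= edges[k]) & (r < edges[k + 1])].sum()
                     for k in range(n_rings)])

patch = np.random.default_rng(0).random((9, 9))
d_orig = ring_descriptor(patch)
d_rot = ring_descriptor(np.rot90(patch))   # 90-degree rotation
```

Because each ring aggregates the same set of pixels regardless of orientation, the descriptor vector is unchanged by rotation, which is why far fewer dimensions suffice than in the orientation-binned 128-dimensional SIFT descriptor.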
In order to reduce the complexity of signal reconstruction algorithms and reconstruct signals with unknown sparsity, a new algorithm named One Projection Subspace Pursuit (OPSP) was proposed. Firstly, the upper and lower bounds of the signal's sparsity were determined based on the Restricted Isometry Property (RIP), and the sparsity was set as the integer midpoint of these bounds. Secondly, within the framework of Subspace Pursuit (SP), the projection of the observation onto the support set in each iteration was removed, decreasing the computational complexity of the algorithm. Furthermore, the reconstruction rate of the whole signal was used as the index of reconstruction performance. Simulation results show that compared with the traditional SP algorithm, the proposed algorithm can reconstruct signals of unknown sparsity with less time and a higher reconstruction rate, and is effective for signal reconstruction.
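The SP framework that OPSP builds on can be sketched as follows (a minimal NumPy implementation of classical Subspace Pursuit; OPSP's modifications, dropping one projection per iteration and estimating the sparsity as the midpoint of RIP-derived bounds, are described above but not reproduced here):

```python
import numpy as np

def subspace_pursuit(A, y, k, n_iter=20):
    """Classical Subspace Pursuit for y = Ax with sparsity level k:
    iteratively merge the current support with the k columns most
    correlated to the residual, then keep the k strongest coefficients."""
    support = np.argsort(np.abs(A.T @ y))[-k:]          # initial support
    for _ in range(n_iter):
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                     # current residual
        if np.linalg.norm(r) < 1e-10:
            break
        # merge old support with the k columns most correlated to the residual
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-k:])
        b, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(b))[-k:]]      # keep k largest
    x = np.zeros(A.shape[1])
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 100, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x_true = np.zeros(n)
x_true[[7, 42, 91]] = [1.5, -2.0, 1.0]     # 3-sparse signal
x_hat = subspace_pursuit(A, A @ x_true, k)
```

The per-iteration least-squares projections are the dominant cost here, which is why removing one of them, as OPSP does, lowers the complexity of each iteration.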
In order to reduce the influence of low bandwidth and high latency on the Media Access Control (MAC) layer in Underwater Acoustic Sensor Networks (UWASNs), an Evolutionary Game Theory based MAC (EGT-MAC) protocol was proposed. In EGT-MAC, each sensor node adopted two strategies: spatial multiplexing and temporal multiplexing. Through the replicator dynamics equation, each strategy converged to an evolutionarily stable strategy and reached a stable evolutionary equilibrium, improving the channel utilization rate and data transmission efficiency so as to optimize MAC protocol performance. Simulation results show that EGT-MAC can improve both the network throughput and the data packet transmission rate.