To address the large scale variation caused by the different distances between the monitoring camera and the crowd in crowd analysis tasks, a crowd counting algorithm with multi-scale fusion based on the normal inverse Gamma distribution, named MSF (Multi-Scale Fusion crowd counting) algorithm, was proposed. Firstly, common features were extracted with a traditional backbone, and pedestrian information at different scales was obtained with a multi-scale information extraction module. Secondly, each scale network contained a crowd density estimation module and an uncertainty estimation module for evaluating the reliability of the prediction results at that scale. Finally, more accurate density regression results were obtained in the multi-scale prediction fusion module by dynamically fusing the multi-scale predictions according to their reliability. Experimental results show that after extending the existing Congested Scene Recognition Network (CSRNet) with multi-scale trusted fusion, the Mean Absolute Error (MAE) and Mean Squared Error (MSE) of crowd counting on the UCF-QNRF dataset decrease significantly by 4.43% and 1.37% respectively, which verifies the rationality and effectiveness of the MSF algorithm. In addition, unlike existing methods, the MSF algorithm can not only predict the crowd density but also provide the reliability of the prediction at deployment time, so that warnings can be issued in time for areas where the prediction is inaccurate, reducing the risk of wrong predictions in subsequent analysis tasks.
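The reliability-based fusion step can be illustrated with a minimal inverse-uncertainty weighting sketch (the function names are illustrative, not from the paper; the actual MSF fusion derives per-scale uncertainty from the normal inverse-Gamma parameters):

```python
import numpy as np

def fuse_scales(densities, uncertainties):
    """Fuse per-scale density maps, weighting each scale by the
    inverse of its predicted uncertainty (higher uncertainty ->
    smaller weight). densities, uncertainties: lists of HxW arrays."""
    d = np.stack(densities)               # (S, H, W)
    u = np.stack(uncertainties)           # (S, H, W), all > 0
    w = 1.0 / u
    w = w / w.sum(axis=0, keepdims=True)  # normalize over scales
    return (w * d).sum(axis=0)

# toy example: two scales, the second twice as confident everywhere,
# so its prediction dominates the fused density map
d1 = np.full((2, 2), 1.0)
d2 = np.full((2, 2), 4.0)
fused = fuse_scales([d1, d2],
                    [np.full((2, 2), 2.0), np.full((2, 2), 1.0)])
# weights become 1/3 and 2/3, so every fused pixel is 3.0
```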
To address the fact that existing information hiding algorithms based on neural style transfer cannot embed color images, a color image information hiding algorithm based on the style transfer process was proposed. Firstly, the feature extraction capability of Convolutional Neural Network (CNN) was utilized to extract the semantic information of the carrier image, the style information of the style image, and the feature information of the color secret image, respectively. Then, the semantic content of the images and the different styles were fused together. Finally, the embedding of the color secret image was completed by the decoder while performing the style transfer of the carrier image. Experimental results show that the proposed algorithm can effectively integrate the secret image into the generated stylized image, making the embedding of secret information indistinguishable from the style change. While maintaining the security of the algorithm, the proposed algorithm increases the hiding capacity to 24 bpp (bits per pixel), and achieves average Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) values of 25.29 dB and 0.85 respectively, thereby solving the color image embedding problem effectively.
The variable-length address is one of the important research topics in the field of future networks. Aiming at the low efficiency of traditional routing lookup algorithms on variable-length addresses, an efficient routing lookup algorithm for variable-length addresses based on the balanced binary tree — the AVL (Adelson-Velskii and Landis) tree — and the Bloom filter, namely the AVL-Bloom algorithm, was proposed. Firstly, in view of the flexible and unbounded nature of variable-length addresses, multiple off-chip hash tables were used to separately store route entries with the same number of prefix bits together with their next-hop information, while an on-chip Bloom filter was utilized to speed up the search for route prefixes that were likely to match. Secondly, to solve the problem that hash-based routing lookup algorithms need multiple hash probes when searching for the longest-prefix route, AVL tree technology was introduced: the Bloom filter and hash table of each group of route prefixes were organized in an AVL tree, so as to optimize the query order over prefix lengths, reduce the number of hash calculations, and thereby decrease the search time. Finally, comparative experiments with traditional routing lookup algorithms such as METrie (Multi-Entrance-Trie) and COBF (Controlled prefix and One-hashing Bloom Filter) were conducted on three different variable-length address datasets. Experimental results show that the search speed of the AVL-Bloom algorithm is significantly faster than those of the METrie and COBF algorithms, with the query time reduced by nearly 83% and 64% respectively. At the same time, the AVL-Bloom algorithm maintains stable search performance under large changes in the routing table entries, and is suitable for routing lookup and forwarding with variable-length addresses.
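The per-length grouping with a Bloom filter guarding each off-chip hash table can be sketched as follows (a minimal illustration with illustrative names; a sorted list of prefix lengths stands in for the paper's AVL-tree ordering, and addresses are bit strings):

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _hashes(self, s):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{s}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, s):
        for h in self._hashes(s):
            self.bits |= 1 << h
    def might_contain(self, s):
        return all(self.bits >> h & 1 for h in self._hashes(s))

class PrefixRouter:
    """One (Bloom filter, hash table) pair per prefix length."""
    def __init__(self):
        self.groups = {}  # length -> (Bloom, {prefix: next_hop})
    def insert(self, prefix, next_hop):
        bloom, table = self.groups.setdefault(len(prefix), (Bloom(), {}))
        bloom.add(prefix)
        table[prefix] = next_hop
    def lookup(self, addr):
        # longest-prefix match: try lengths from longest to shortest,
        # consulting the on-chip Bloom filter before touching the
        # (conceptually off-chip) hash table
        for length in sorted(self.groups, reverse=True):
            if length > len(addr):
                continue
            p = addr[:length]
            bloom, table = self.groups[length]
            if bloom.might_contain(p) and p in table:
                return table[p]
        return None

r = PrefixRouter()
r.insert("1010", "A")
r.insert("101000", "B")
hop = r.lookup("10100011")  # matches the longer prefix "101000"
```

The Bloom filter only filters out definite misses; a positive answer still requires the hash-table probe, which is exactly why reducing the number of lengths probed (the AVL-tree ordering in the paper) pays off.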
Focusing on the difficulty of acquiring new modalities and the large differences in the benefit they bring, a method for dynamically evaluating the benefit of modality augmentation was proposed. Firstly, the intermediate feature representations and the prediction results before and after modality fusion were obtained through a multimodal fusion network. Then, the confidences before and after fusion were obtained by applying the True Class Probability (TCP) of the two prediction results to confidence estimation. Finally, the difference between the two confidences was calculated and used as the benefit brought to the sample by the new modality. Extensive experiments were conducted on commonly used multimodal datasets and real medical datasets such as The Cancer Genome Atlas (TCGA). The experimental results on the TCGA dataset show that compared with the random benefit evaluation method and the Maximum Class Probability (MCP) based method, the proposed method improves the accuracy by 1.73 to 4.93 and 0.43 to 4.76 percentage points respectively, and the Effective Sample Rate (ESR) by 2.72 to 11.26 and 1.08 to 25.97 percentage points respectively. It can be seen that the proposed method can effectively evaluate the benefit of acquiring new modalities for different samples, and has a certain degree of interpretability.
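The TCP-difference idea can be sketched in a few lines (a toy illustration assuming softmax outputs and a known ground-truth label; the paper estimates TCP with a confidence network rather than using the label directly at test time):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tcp(probs, true_label):
    """True Class Probability: the probability mass the model
    assigns to the ground-truth class."""
    return probs[true_label]

# logits of one sample before and after fusing the new modality
before = softmax(np.array([2.0, 1.0, 0.5]))
after = softmax(np.array([3.5, 0.5, 0.2]))

# benefit of the new modality for this sample: positive means the
# fused prediction is more confident in the true class (label 0)
benefit = tcp(after, 0) - tcp(before, 0)
```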
In view of the vascular pleomorphism on transverse sections and the sampling imbalance during detection, an improved Libra Region-Convolutional Neural Network (R-CNN) cerebral arterial stenosis detection algorithm was proposed to detect internal carotid artery and vertebral artery stenosis in Computed Tomography Angiography (CTA) images. Firstly, ResNet50 was used as the backbone network of Libra R-CNN, Deformable Convolutional Network (DCN) was introduced into stages 3, 4 and 5 of the backbone network, and offsets were learned to extract the morphological features of blood vessels on different transverse sections. Secondly, the feature maps extracted from the backbone network were input into a Balanced Feature Pyramid (BFP) with a Non-local Neural Network (Non-local NN) introduced for deeper feature fusion. Finally, the fused feature maps were input into a cascade detector, and the final detection result was optimized by increasing the Intersection-over-Union (IoU) threshold. Experimental results show that, compared with the Libra R-CNN algorithm, the improved Libra R-CNN detection algorithm improves AP, AP50, AP75 and APS by 4.3, 1.3, 6.9 and 4.0 percentage points respectively on the cerebral artery CTA dataset, and by 6.6, 3.6, 13.0 and 6.4 percentage points respectively on the public CT dataset of colon polyps. By adding DCN, Non-local NN and the cascade detector to the backbone of the Libra R-CNN algorithm, features are fused further to learn the semantic information of the cerebral artery structure, making the detection of stenotic areas more accurate, and the improved algorithm generalizes to different detection tasks.
Existing reversible data hiding algorithms in the encrypted domain suffer from poor fault tolerance and disaster resistance of the ciphertext images carrying secret data: once these images are attacked or damaged, the original image cannot be reconstructed and the secret data cannot be extracted. To solve these problems, a new reversible data hiding algorithm in the encrypted domain based on secret image sharing was proposed, and its application scenarios in the cloud environment were analyzed. Firstly, the encrypted image was divided into n different ciphertext images carrying secret data with the same size. Secondly, during the division, the random quantities in the Lagrange interpolation polynomial were taken as redundant information, and a mapping relationship between the secret data and each polynomial coefficient was established. Finally, the reversible embedding of the secret data was realized by modifying the built-in parameters of the encryption process. When any k ciphertext images carrying secret data were collected, the original image could be fully recovered and the secret data could be extracted. Experimental results show that the proposed algorithm has low computational complexity, large embedding capacity and complete reversibility. The maximum embedding rate of the proposed algorithm is 4 bits per pixel (bpp) in the (3,4) threshold scheme and 6 bpp in the (4,4) threshold scheme. The proposed algorithm gives full play to the disaster recovery characteristic of the secret sharing scheme. Without reducing the security of secret sharing, it enhances the fault tolerance and disaster resistance of the ciphertext images carrying secret data, improves the embedding capacity and the disaster recovery ability in the cloud environment, and ensures the security of the carrier image and the secret data.
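The underlying (k, n) threshold sharing and Lagrange reconstruction can be sketched as follows (a minimal Shamir-style sketch over a small prime field for a single byte; the paper additionally maps secret data onto the random polynomial coefficients, which is not shown here):

```python
import random

P = 257  # prime field, large enough for one byte per share value

def split(secret, k, n):
    """(k, n) threshold sharing: build a random degree-(k-1)
    polynomial with the secret as constant term and evaluate it
    at x = 1..n to produce n shares."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from
    any k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123, k=3, n=4)           # (3,4) threshold scheme
recovered = reconstruct(shares[:3])     # any 3 of 4 shares suffice
```

Fewer than k shares reveal nothing about the secret, which is what gives the scheme its disaster-recovery property: up to n - k carrier images may be lost or corrupted.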
In practical classification tasks such as image annotation and disease diagnosis, there is usually a hierarchical structural relationship between the classes in the label space of data with high-dimensional features. Many hierarchical feature selection algorithms have been proposed for different practical tasks, but they ignore the unknown and uncertain nature of the feature space. To solve these problems, an online streaming feature selection algorithm based on ReliefF for hierarchical classification learning, named OH_ReliefF, was presented. Firstly, the hierarchical relationship between classes was incorporated into the ReliefF algorithm to define a new method, HF_ReliefF, for calculating feature weights on hierarchical data. Then, important features were dynamically selected based on the ability of features to classify the decision attributes. Finally, dynamic redundancy analysis of the features was performed based on the independence between features. Experimental results show that, compared with five advanced online streaming feature selection algorithms, the proposed algorithm achieves better results in all evaluation metrics of the K-Nearest Neighbor (KNN) classifier and the Lagrangian Support Vector Machine (LSVM) classifier, with at least 7 percentage points improvement in accuracy.
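The ReliefF weighting that HF_ReliefF builds on can be sketched in simplified form (one nearest hit/miss per sample, flat labels; the paper's version additionally weights misses by the hierarchical distance between classes, which is not shown here):

```python
import numpy as np

def relieff_weights(X, y):
    """Simplified ReliefF: reward features that separate a sample
    from its nearest miss (different class) and penalize features
    that differ from its nearest hit (same class)."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distances
        dist[i] = np.inf                     # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.argmin(np.where(same, dist, np.inf))
        miss = np.argmin(np.where(diff, dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

# feature 0 separates the two classes, feature 1 is constant noise
X = np.array([[0.0, 0.5], [0.1, 0.5], [1.0, 0.5], [0.9, 0.5]])
y = np.array([0, 0, 1, 1])
w = relieff_weights(X, y)  # w[0] ends up clearly above w[1]
```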
The existing Total Variation (TV) method for image denoising is not ideal: it fails to preserve image edges and texture details well. A new image denoising method based on the rational-order differential was proposed. First, the advantages and disadvantages of the existing TV and fractional differential denoising methods were discussed in detail. Then, by combining the TV model with fractional differential theory, the new denoising method was obtained, and a rational-order differential mask in eight directions was constructed. Experimental results demonstrate that, compared with the existing denoising methods, the Signal-to-Noise Ratio (SNR) is increased by about 2%, and the method effectively retains the respective advantages of the integer-order and fractional-order differential methods. It significantly enhances the high-frequency components of the image while effectively preserving texture details, making it a superior image denoising method that is also effective for edge detection.
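An eight-direction fractional-differential mask of this kind is commonly built from the leading Grünwald-Letnikov coefficients; the sketch below is one such construction (illustrative only, assuming three coefficients per direction on a 5x5 stencil, not the paper's exact mask):

```python
import numpy as np

def gl_coeffs(v, n=3):
    """First n Grünwald-Letnikov coefficients of a v-order
    differential: 1, -v, v(v-1)/2, ... via the usual recurrence."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return c

def eight_direction_mask(v):
    """5x5 mask applying the first three G-L coefficients along the
    eight compass directions from the center pixel; the center term
    is shared by all eight directions."""
    c = gl_coeffs(v, 3)
    m = np.zeros((5, 5))
    m[2, 2] = 8 * c[0]
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        m[2 + dy, 2 + dx] = c[1]          # first neighbor on the ray
        m[2 + 2 * dy, 2 + 2 * dx] = c[2]  # second neighbor on the ray
    return m

mask = eight_direction_mask(0.5)  # e.g. order v = 1/2
```

Convolving the image with such a mask amplifies high-frequency detail more gently than an integer-order derivative, which is the property the rational-order method exploits.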
As one of the means to achieve calibration, remote calibration aims to determine, through modern information technology, the relationship between the measured value provided by the standard measuring instrument and the indicated value of the instrument to be calibrated; the technology is therefore of important research significance for the digital transformation of calibration. Compared with traditional calibration, remote calibration has advantages such as high calibration efficiency, low calibration cost and low dependence on calibration personnel, which is in line with the current development trend from traditional measurement to measurement in the digital age. To further understand how modern information technology is used to achieve remote calibration, firstly, the current research progress of remote calibration technology was surveyed, and the realization methods of remote calibration were summarized, that is, the methods with and without the standard measuring instrument on site. Then, example applications of the two types of realization methods were listed in calibration areas such as pressure, temperature, electrical energy and time. Finally, the problems of remote calibration technology were summarized, and future development trends toward digital measurement and smart measurement were put forward.