Aiming at the problem of partially focused images caused by improper focusing between far and near fields of view in digital photography, a multi-focus image fusion Network with Cascade fusion and enhanced reconstruction (CasNet) was proposed. Firstly, a cascade sampling module was constructed to compute and merge the residuals of feature maps sampled at different depths, so that focused features at different scales were used efficiently. Secondly, a lightweight multi-head self-attention mechanism was improved to perform dimension-wise residual calculation on the feature maps, enhancing the image features and giving the feature maps a better distribution across different dimensions. Thirdly, stacked convolutional channel attention was used to complete feature reconstruction. Finally, interval convolution was used for up- and down-sampling, so as to retain more original image features. Experimental results demonstrate that, compared with popular methods such as SESF-Fuse (Spatially Enhanced Spatial Frequency-based Fusion) and U2Fusion (Unified Unsupervised Fusion network), CasNet achieves better results in metrics such as Average Gradient (AG) and Gray-Level Difference (GLD) on the multi-focus image benchmark test sets Lytro, MFFW, grayscale, and MFI-WHU.
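The residual idea behind the cascade sampling module and interval sampling can be illustrated with a 1-D toy sketch. This is not CasNet itself (the network uses learned convolutions and attention); the function names and the plain striding used here are illustrative assumptions only.

```python
# 1-D toy of residual merging across sampling scales (illustrative only;
# CasNet uses learned interval convolutions, not plain striding).

def interval_downsample(x, stride=2):
    """Keep every `stride`-th sample, preserving original values."""
    return x[::stride]

def interval_upsample(x, stride=2):
    """Repeat each sample `stride` times to restore the length."""
    return [v for v in x for _ in range(stride)]

def cascade_residual(x):
    """Merge a feature map with the residual against its resampled copy."""
    coarse = interval_upsample(interval_downsample(x))
    residual = [a - b for a, b in zip(x, coarse)]   # detail lost by sampling
    return [b + r for b, r in zip(coarse, residual)]  # merge the residual back

x = [1.0, 3.0, 2.0, 6.0, 4.0, 5.0]
assert cascade_residual(x) == x   # the residual carries exactly the lost detail
```

The point of the sketch: the residual between a feature map and its resampled copy captures the fine-scale detail that down-sampling discards, so merging residuals from different depths lets all scales contribute to the fused result.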
Private data is vulnerable to attacks on its confidentiality, integrity, and freshness. To resolve this problem, a secure data aggregation algorithm based on a homomorphic Hash function, called HPDA (High-Efficiency Privacy Preserving Data Aggregation), was proposed. Firstly, a homomorphic encryption scheme was used to preserve data privacy. Secondly, a homomorphic Hash function was adopted to verify the integrity and freshness of the aggregated data. Finally, the communication overhead of the system was reduced by an improved ID transmission mechanism. Theoretical analysis and simulation results show that HPDA can effectively preserve data confidentiality, check data integrity, and satisfy data freshness, while incurring low communication overhead.
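The integrity check enabled by a homomorphic hash can be sketched minimally as follows. This is not the HPDA protocol itself; the parameters `P` and `G` and the toy readings are illustrative assumptions. The key property is that the hash of a sum equals the modular product of the individual hashes.

```python
# Toy homomorphic hash: h(m) = g^m mod p, so h(m1) * h(m2) mod p == h(m1 + m2).
# Illustrative parameters only -- not the scheme's actual key material.

P = 2**127 - 1          # a Mersenne prime, chosen only for illustration
G = 3                   # public generator (assumed)

def h(m: int) -> int:
    """Homomorphic hash: multiplicative over addition of messages."""
    return pow(G, m, P)

# Each sensor reports its reading together with the hash of that reading.
readings = [17, 42, 5]
hashes = [h(m) for m in readings]

# The aggregator sums readings; the hashes combine by modular product.
aggregate = sum(readings)
combined_hash = 1
for v in hashes:
    combined_hash = combined_hash * v % P

# The sink verifies the integrity of the aggregate against the combined hash.
assert combined_hash == h(aggregate)
```

In the full scheme the readings themselves travel encrypted; the sketch only shows why aggregation commutes with the hash, which is what lets the sink check the aggregate without unpacking per-node data.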
Parallel computation can be applied to validating the topology of polygons stored in the simple feature model. This paper designed and implemented a parallel algorithm for validating the topology of such polygons. Based on the characteristics of topology validation, the algorithm modified the master-slave strategy and generated threads in the master processor to achieve task parallelism, so that the time spent computing and writing topology errors was hidden. MPI and PThread were combined to exploit both processes and threads. Land use data of five cities in Jiangsu, China was used to evaluate the performance of the algorithm. The tests show that the parallel algorithm validates the topology of massive polygons stored in the simple feature model correctly and efficiently, and its speedup is 20% higher than that of the master-slave strategy.
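The latency-hiding idea, overlapping error output with ongoing validation by spawning a writer thread in the master, can be sketched in plain Python (the paper uses MPI and PThread in C; the check and names here are simplified stand-ins).

```python
# Sketch of hiding error-writing time behind computation: the master hands
# detected errors to a writer thread so checking and output overlap.
# The validation rule is a deliberately trivial stand-in.

import queue
import threading

def validate(polygon):
    """Stand-in topology check: a ring is invalid if it is not closed."""
    return None if polygon[0] == polygon[-1] else ("not closed", polygon)

def writer(q, sink):
    """Consume detected errors and 'write' them while checks continue."""
    while True:
        err = q.get()
        if err is None:          # sentinel: no more errors will arrive
            break
        sink.append(err)

errors = queue.Queue()
written = []
t = threading.Thread(target=writer, args=(errors, written))
t.start()

polygons = [[(0, 0), (1, 0), (1, 1), (0, 0)],
            [(0, 0), (2, 0), (2, 2)]]         # second ring is open
for p in polygons:                            # master keeps validating...
    err = validate(p)
    if err:
        errors.put(err)                       # ...while the writer drains
errors.put(None)
t.join()
assert len(written) == 1
```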
Many existing image classification algorithms cannot handle large-scale image data. An approach based on MapReduce was proposed to accelerate the classification of massive images, with the whole classification process restructured to fit the MapReduce programming model. First, Scale Invariant Feature Transform (SIFT) features were extracted via MapReduce and then converted to sparse vectors by sparse coding to obtain the sparse features of the images. MapReduce was also used for the distributed training of a random forest, on the basis of which large-scale image classification was performed in parallel. The MapReduce-based algorithm was evaluated on a Hadoop cluster. The experimental results show that the proposed approach can classify images concurrently on the Hadoop cluster with a good speedup.
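The map/reduce shape of such a pipeline can be shown in miniature. The paper runs on Hadoop with real SIFT features; here the "feature extractor" and all names are stand-in assumptions, and plain Python replaces the cluster.

```python
# Minimal map/reduce skeleton of the pipeline (illustrative stand-in for
# the Hadoop jobs; the map phase would emit real SIFT/sparse features).

from collections import Counter
from functools import reduce

def map_phase(image):
    """Stand-in for SIFT + sparse coding: emit (feature_id, 1) pairs."""
    return [(f, 1) for f in image["features"]]

def reduce_phase(acc, part):
    """Combine partial counts, as a reducer would after the shuffle."""
    acc.update(part)
    return acc

images = [{"features": ["edge", "blob"]},
          {"features": ["edge", "corner"]}]

# On a cluster the map calls run in parallel across input splits.
pairs = [p for img in images for p in map_phase(img)]
counts = reduce(reduce_phase, (Counter(dict([p])) for p in pairs), Counter())
assert counts["edge"] == 2
```

The same split applies to the forest: each mapper trains trees on its data shard and the reducer concatenates the trees into one ensemble, which is what makes the training embarrassingly parallel.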
To fill the gap in digital watermarking technology for 2D vector animation, this paper proposed a blind watermarking scheme that makes full use of both the vector and the timing characteristics of the medium. The scheme adopted the color values of elements that change between adjacent frames of the vector animation as the embedding target, and used the Least Significant Bit (LSB) algorithm for embedding and extraction, embedding multiple groups of watermarks into the vector animation. Finally, the accurate watermark was obtained by cross-verifying the extracted watermark groups. Theoretical analysis and experimental results show that the scheme is easy to implement, robust, and capable of tamper-proofing. Moreover, the vector animation can still be played in real time during watermark embedding and extraction.
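The LSB embed/extract step and the idea of recovering the watermark from multiple embedded groups can be sketched as follows. Frame and element handling is simplified away, and the majority vote is one plausible reading of "verifying the extracted multiple group watermarks"; treat the details as assumptions.

```python
# Toy LSB watermarking on 8-bit color values, with majority voting over
# several embedded copies (illustrative; frames/elements are omitted).

def embed(colors, bits):
    """Set the least significant bit of each color value to a watermark bit."""
    return [(c & ~1) | b for c, b in zip(colors, bits)]

def extract(colors, n):
    """Read the watermark back from the LSBs."""
    return [c & 1 for c in colors[:n]]

def majority(copies):
    """Vote bitwise across multiple extracted groups to resist tampering."""
    return [1 if sum(col) * 2 > len(copies) else 0 for col in zip(*copies)]

watermark = [1, 0, 1, 1]
colors = [200, 37, 118, 64]
stego = embed(colors, watermark)
assert extract(stego, 4) == watermark

# Even if one copy is tampered with, the vote still recovers the watermark.
copies = [extract(stego, 4), [0, 0, 1, 1], extract(stego, 4)]
assert majority(copies) == watermark
```

Because LSB changes alter each color value by at most 1, the visual impact on the animation is negligible, which is also why playback can continue in real time.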
To meet the application demands of high-speed scanning and massive data transmission in low-energy X-ray industrial Computed Tomography (CT), a high-speed data acquisition and transmission system for low-energy X-ray industrial CT was designed. The X-CARD 0.2-256G of DT company was selected as the detector. To accommodate high-speed analog-to-digital conversion, a high-speed time-division multiplexing circuit was combined with ping-pong operation of the data cache; a gigabit Ethernet interface was designed with a Field Programmable Gate Array (FPGA) as the master chip, so as to meet the requirements of high-speed transmission of multi-channel data. The experimental results show that the sampling rate of the data acquisition system reaches 1 MHz, the transmission rate reaches 926 Mb/s, and the dynamic range is greater than 5000. The system can effectively shorten the scanning time of low-energy X-ray detection and meets the data transmission requirements of a larger number of channels.
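The ping-pong operation of the data cache can be illustrated with a small software analogue: two buffers alternate between being filled by acquisition and drained by transmission, so the two stages overlap instead of blocking each other. This is a pure-Python stand-in for the FPGA logic, with all names assumed.

```python
# Sketch of ping-pong buffering: while one buffer is filled by the ADC
# front end, the other is streamed out; then the roles swap.

def acquire(n, start):
    """Stand-in for the ADC front end: produce n samples."""
    return list(range(start, start + n))

buffers = [[], []]
active = 0                 # index of the buffer currently being filled
sent = []

for block in range(4):
    buffers[active] = acquire(4, block * 4)    # fill the active buffer
    # ...meanwhile the other buffer would be streamed out over Ethernet:
    sent.extend(buffers[1 - active])
    active = 1 - active                        # swap roles ("ping-pong")
sent.extend(buffers[1 - active])               # flush the final buffer

assert sent == list(range(16))                 # nothing lost, nothing stalled
```

On the FPGA the fill and drain run in genuinely parallel hardware, which is what lets a 1 MHz acquisition rate feed a 926 Mb/s Ethernet link without dropping samples.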