Aiming at the problem of partially focused images caused by improper focusing on near and far fields during digital photography, a multi-focus image fusion Network with Cascade fusion and enhanced reconstruction (CasNet) was proposed. Firstly, a cascade sampling module was constructed to compute and merge the residuals of feature maps sampled at different depths, so that focused features at different scales were used efficiently. Secondly, an improved lightweight multi-head self-attention mechanism performed dimension-wise residual calculation on the feature maps, enhancing the image features and giving the feature maps a better distribution across dimensions. Thirdly, stacked convolutional channel attention was used to complete feature reconstruction. Finally, interval convolution was used for up- and down-sampling, so as to retain more features of the original images. Experimental results demonstrate that, compared with popular methods such as SESF-Fuse (Spatially Enhanced Spatial Frequency-based Fusion) and U2Fusion (Unified Unsupervised Fusion network), CasNet achieves better results in metrics such as Average Gradient (AG) and Gray-Level Difference (GLD) on the multi-focus image benchmark test sets Lytro, MFFW, grayscale, and MFI-WHU.
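The abstract above gives no implementation details; the following PyTorch sketch illustrates one possible reading of the cascade sampling idea, in which feature maps are sampled to several depths with strided convolutions and the residuals between adjacent depths are merged back into the input. The module name CascadeSampling, the number of depths, and all layer choices are illustrative assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeSampling(nn.Module):
    """Illustrative cascade sampling block: feature maps are down-sampled to
    several depths, the residual between each depth and its down-sampled
    successor is computed, and the residuals are merged back into the input."""
    def __init__(self, channels: int, depths: int = 3):
        super().__init__()
        # Strided convolutions stand in for the paper's interval-convolution sampling.
        self.down = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
            for _ in range(depths)
        ])
        self.fuse = nn.Conv2d(channels * depths, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        size = x.shape[-2:]
        residuals, cur = [], x
        for conv in self.down:
            deeper = conv(cur)                       # sample one depth further
            up = F.interpolate(deeper, size=cur.shape[-2:], mode="bilinear",
                               align_corners=False)
            residuals.append(F.interpolate(cur - up, size=size, mode="bilinear",
                                           align_corners=False))
            cur = deeper
        # Merge the residuals from all depths and add them back to the input.
        return x + self.fuse(torch.cat(residuals, dim=1))

x = torch.randn(1, 32, 64, 64)
print(CascadeSampling(32)(x).shape)                  # torch.Size([1, 32, 64, 64])
```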
The prediction quality of simple linear models for time series forecasting often surpasses that of deep models such as Transformers. However, on datasets with a large number of channels, deep models, particularly Multi-Layer Perceptrons (MLPs), can outperform simple linear models. Aiming at the differences in the error power spectrum between simple linear models and MLPs in time series forecasting, a high-frequency enhanced time series forecasting model based on multi-layer perceptron, HiFNet (High-Frequency Network), was proposed. Firstly, the fitting capability of MLPs within low-frequency bands was utilized. Then, an Adaptive Series Decomposition (ASD) module and a grouped linear layer were adopted to address the overfitting of MLPs in high-frequency bands and the failure of the channel independence strategy to handle channel redundancy effectively, thereby enhancing the robustness of MLPs in high-frequency bands. Finally, HiFNet was evaluated on standard datasets from the fields of meteorology, power, and transportation. The results demonstrate that the Mean Squared Error (MSE) of HiFNet is reduced by up to 23.6%, 10.0%, 35.1%, and 6.5% compared to those of NLinear, RLinear, SegRNN (Segment Recurrent Neural Network), and PatchTST (Patch Time Series Transformer), respectively. At the same time, the grouped linear layer alleviates the impact of channel redundancy by learning channel-related low-rank representations.
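As a rough illustration of the grouped linear layer mentioned above, the PyTorch sketch below splits the channels into groups and lets every channel in a group share one linear map from the look-back window to the forecast horizon, a middle ground between full channel independence and a single shared map. The class name GroupedLinear, the grouping scheme, and all sizes are assumptions, since the abstract gives no implementation details.

```python
import torch
import torch.nn as nn

class GroupedLinear(nn.Module):
    """Illustrative grouped linear forecasting layer: channels are split into
    groups, and each group shares one linear projection over the time axis."""
    def __init__(self, seq_len: int, pred_len: int, n_channels: int, n_groups: int):
        super().__init__()
        assert n_channels % n_groups == 0, "channels must divide evenly into groups"
        self.n_groups = n_groups
        self.proj = nn.ModuleList([nn.Linear(seq_len, pred_len) for _ in range(n_groups)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_channels) -> (batch, pred_len, n_channels)
        chunks = torch.chunk(x, self.n_groups, dim=-1)
        out = [proj(c.transpose(1, 2)).transpose(1, 2)
               for proj, c in zip(self.proj, chunks)]
        return torch.cat(out, dim=-1)

x = torch.randn(8, 96, 21)                           # 96-step look-back, 21 channels
y = GroupedLinear(seq_len=96, pred_len=24, n_channels=21, n_groups=3)(x)
print(y.shape)                                       # torch.Size([8, 24, 21])
```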
Parallel computing methods were applied to validating the topology of polygons stored in the simple feature model, and a parallel algorithm for this task was designed and implemented. Based on the characteristics of topology validation, the algorithm modified the master-slave strategy and spawned threads on the master processor to achieve task parallelism, so that the time spent computing and writing topology errors was hidden. MPI and PThread were combined to realize hybrid process-thread parallelism. The land use data of 5 cities in Jiangsu, China was used to evaluate the performance of the algorithm. The tests show that the parallel algorithm validates the topology of massive polygons stored in the simple feature model correctly and efficiently, and its speedup is 20% higher than that of the master-slave strategy.
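The following Python sketch, using mpi4py and the standard threading module, illustrates the kind of modified master-slave scheme described above: the master process distributes polygon partitions to worker processes, while a writer thread on the master records received topology errors concurrently so that writing overlaps with computation. The partitioning, the topology check itself, and all names are placeholders, not the paper's MPI/PThread implementation.

```python
"""Run with at least two processes, e.g. `mpiexec -n 4 python topo_check.py`."""
import queue
import threading
from mpi4py import MPI

TAG_TASK, TAG_RESULT, TAG_STOP = 1, 2, 3

def check_topology(partition):
    # Placeholder for the real polygon topology validation (overlaps, gaps, ...).
    return [f"error found in partition {partition}"]

def writer(q):
    # Runs in a thread on the master: writes errors while workers keep computing.
    while True:
        errors = q.get()
        if errors is None:
            break
        with open("topology_errors.txt", "a") as f:
            f.writelines(e + "\n" for e in errors)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    partitions = list(range(10))                     # placeholder task list
    q = queue.Queue()
    t = threading.Thread(target=writer, args=(q,))
    t.start()
    for i, p in enumerate(partitions):               # hand out tasks round-robin
        comm.send(p, dest=1 + i % (size - 1), tag=TAG_TASK)
    for _ in partitions:                             # collect one result per task
        q.put(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT))
    for w in range(1, size):
        comm.send(None, dest=w, tag=TAG_STOP)
    q.put(None)                                      # stop the writer thread
    t.join()
else:
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, status=status)
        if status.Get_tag() == TAG_STOP:
            break
        comm.send(check_topology(task), dest=0, tag=TAG_RESULT)
```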
Many existing image classification algorithms cannot handle big image data. A new approach for accelerating big image classification based on MapReduce was proposed. The whole image classification process was restructured to fit the MapReduce programming model. First, Scale Invariant Feature Transform (SIFT) features were extracted with MapReduce and then converted into sparse vectors by sparse coding to obtain the sparse features of the images. MapReduce was also used for distributed training of a random forest, on the basis of which big image classification was parallelized. The MapReduce-based algorithm was evaluated on a Hadoop cluster. The experimental results show that the proposed approach can classify images in parallel on the Hadoop cluster with a good speedup.
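A minimal, purely local Python sketch of the MapReduce-style data flow is given below: the map phase stands in for SIFT extraction and sparse coding of each image, and the reduce phase stands in for training part of the random forest on each partition. All function names, the partition count, and the toy feature are assumptions used only to show how the work would be split, not the paper's actual Hadoop implementation.

```python
from collections import defaultdict
import random

def map_extract(image_id, pixels):
    # Placeholder for SIFT extraction + sparse coding of one image.
    feature = [sum(pixels) / len(pixels), max(pixels), min(pixels)]
    partition = hash(image_id) % 3              # 3 reducers, chosen arbitrarily
    return partition, (feature, image_id)

def reduce_train(partition, records):
    # Placeholder for training part of the random forest on one partition.
    return f"trees trained on partition {partition} with {len(records)} samples"

images = {f"img{i}": [random.random() for _ in range(16)] for i in range(20)}

# Map phase: emit (partition_key, value) pairs for every image.
shuffled = defaultdict(list)
for image_id, pixels in images.items():
    key, value = map_extract(image_id, pixels)
    shuffled[key].append(value)

# Reduce phase: each reducer trains its share of the forest.
forest = [reduce_train(k, v) for k, v in sorted(shuffled.items())]
print(forest)
```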