Panoramic images suffer from severe geometric distortions due to their unique projection format. Existing 2D image super-resolution networks fail to account for these distortion characteristics, making them unsuitable for super-resolution reconstruction of panoramic images. Unlike 2D super-resolution networks, panoramic image super-resolution models must attend to the feature differences across different latitude regions and address issues such as insufficient feature capture at different scales and insufficient learning of contextual information. To address the above issues, an Information Compensation-based Panoramic image Super-resolution reconstruction network (ICPSnet) was proposed. Firstly, based on the geometric characteristics of panoramic images, a position awareness mechanism was introduced to compute a position weight for each pixel along the latitude direction, thereby enhancing the model's attention to different latitude regions. Secondly, to address insufficient feature extraction at diverse scales, a Cross-Scale Collaborative Attention (CSCA) module was designed, which utilized a multi-kernel convolutional attention mechanism with different receptive fields to obtain rich cross-scale features. Additionally, to improve the quality of the reconstructed images, an Information Compensation (IC) block was designed to enhance the network's ability to learn contextual information by improving Atrous Spatial Pyramid Pooling (ASPP). Experimental results on two benchmark datasets, ODI-SR and SUN360, show that at upscaling factors of 4 and 8, ICPSnet improves the Weighted-to-Spherically-uniform Peak Signal-to-Noise Ratio (WS-PSNR) by 0.14 dB and 0.64 dB, and by 0.25 dB and 0.26 dB, respectively, compared with the current state-of-the-art OSRT (Omnidirectional image Super-Resolution Transformer).
It can be seen that, compared with other networks, ICPSnet achieves superior visual performance, with the reconstructed images better representing the texture details of high-latitude regions.
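The WS-PSNR metric reported above weights each pixel's error by the cosine of its latitude, compensating for the oversampling of polar regions in the equirectangular projection; this is the same latitude dependence the position awareness mechanism exploits. A minimal sketch of the metric (an illustration of the weighting, not ICPSnet itself):

```python
import numpy as np

def ws_psnr(ref, dist, peak=255.0):
    """Weighted-to-Spherically-uniform PSNR for equirectangular images.

    Each row is weighted by the cosine of its latitude, so errors in the
    heavily oversampled high-latitude (polar) rows count for less than
    errors near the equator.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    se = (ref - dist) ** 2
    if se.ndim == 3:                       # average squared error over channels
        se = se.mean(axis=2)
    h, w = se.shape
    # latitude weight per row: row centre mapped to [-pi/2, pi/2]
    weights = np.cos((np.arange(h) + 0.5 - h / 2) * np.pi / h)
    wmse = (weights[:, None] * se).sum() / (weights.sum() * w)
    return 10.0 * np.log10(peak ** 2 / wmse)
```

With a uniform error the metric reduces to ordinary PSNR, while the same error confined to a polar row is penalized less than at the equator.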
In Natural Language Processing (NLP) tasks, Aspect Sentiment Triplet Extraction (ASTE) aims to identify the relationships among aspect terms, opinion terms, and sentiment polarities in text, serving as a crucial step toward fine-grained sentiment analysis. Among current mainstream methods, end-to-end models generally suffer from insufficient understanding of linguistic features and poor handling of the sparsity of sentiment expressions, limiting their accuracy and robustness, while pipeline models are prone to error propagation. To address these issues, an ASTE model with Multi-View Linguistic Features and Sentiment Lexicon (MVLF-SL) was proposed. In this model, multi-view linguistic features were utilized to enhance the model's ability to understand context and implicit semantics, and additional prior sentiment knowledge was provided by a sentiment lexicon. Firstly, a Graph Convolutional Network (GCN) was used to represent the multi-view linguistic features and obtain enhanced linguistic features. Secondly, a dynamic fusion strategy was adopted to integrate the enhanced linguistic features with the sentiment lexicon. Thirdly, a multi-layer GCN was employed to enhance the feature representations of aspect and opinion terms by incorporating adjacency relations and node features. Finally, a Boundary-Driven Table-Filling (BDTF) method, improved with a Biaffine Attention (BA) mechanism, was used to decode and extract the triplets. Experimental results on four subsets (14res, 14lap, 15res, and 16res) of the ASTE-DATA-V2 dataset show that, compared with the BDTF model, MVLF-SL improves the F1-score by 0.57, 2.08, 2.20, and 1.74 percentage points, respectively. It can be seen that the proposed model achieves better ASTE performance by fully utilizing linguistic features and external sentiment knowledge.
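The biaffine attention used in the decoding step scores every (aspect, opinion) pair with a bilinear term plus a linear term over the pair. A generic NumPy sketch of such a scorer (the tensor shapes and parameter names here are illustrative assumptions, not MVLF-SL's exact layer):

```python
import numpy as np

def biaffine_scores(h_asp, h_op, U, W, b):
    """Generic biaffine attention scoring for word-pair relation tables.

    h_asp : (n, d)      aspect-term representations
    h_op  : (m, d)      opinion-term representations
    U     : (d, c, d)   bilinear tensor, one d x d slice per relation class
    W     : (c, 2*d)    linear term over the concatenated pair
    b     : (c,)        per-class bias
    Returns an (n, m, c) tensor of pairwise relation scores.
    """
    d = h_asp.shape[1]
    # bilinear term: score[i, j, k] = h_asp[i] @ U[:, k, :] @ h_op[j]
    bilinear = np.einsum('id,dce,je->ijc', h_asp, U, h_op)
    # linear term: W[k] @ concat(h_asp[i], h_op[j]), split into two matmuls
    linear = (h_asp @ W[:, :d].T)[:, None, :] + (h_op @ W[:, d:].T)[None, :, :]
    return bilinear + linear + b
```

The (n, m, c) output is exactly the shape a table-filling decoder consumes: one score per candidate span pair per relation class.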
Magnetic Resonance Imaging (MRI) is widely used in the diagnosis of complex diseases because of its non-invasiveness and good soft-tissue contrast. Because MRI acquisition is slow, it is currently accelerated mostly by highly undersampling the Magnetic Resonance (MR) signals in k-space. However, representative algorithms often produce blurred details when reconstructing highly undersampled MR images. Therefore, a highly undersampled MR image reconstruction algorithm based on Residual Graph Convolutional Neural nETwork (RGCNET) was proposed. Firstly, auto-encoding technology and a Graph Convolutional neural Network (GCN) were used to build the generator. Secondly, the undersampled image was input into the feature extraction (encoder) network to extract bottom-layer features. Thirdly, high-level features of the MR images were extracted by the GCN block. Fourthly, an initial reconstructed image was generated by the decoder network. Finally, the final high-resolution reconstructed image was obtained through a dynamic game between the generator and the discriminator. Test results on the FastMRI dataset show that at sampling rates of 10%, 20%, 30%, 40% and 50%, compared with the spatial orthogonal attention mechanism based MRI reconstruction algorithm SOGAN (Spatial Orthogonal attention Generative Adversarial Network), the proposed algorithm reduces the Normalized Root Mean Square Error (NRMSE) by 3.5%, 26.6%, 23.9%, 13.3% and 14.3%, increases the Peak Signal-to-Noise Ratio (PSNR) by 1.2%, 8.7%, 6.9%, 2.9% and 3.2%, and increases the Structural SIMilarity (SSIM) by 0.8%, 2.9%, 1.5%, 0.5% and 0.5%, respectively. Subjective observation also shows that the proposed algorithm preserves more details and produces more realistic visual effects.
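The undersampled input that such reconstruction networks start from is obtained by masking k-space and taking the zero-filled inverse FFT, and NRMSE is the error metric quoted above. A sketch under assumed conventions (a random Cartesian column mask that keeps the low-frequency centre, and NRMSE normalized by the intensity range; the paper's actual sampling pattern is not specified in the abstract):

```python
import numpy as np

def undersample_kspace(image, rate, rng):
    """Retrospectively undersample a 2D MR image in k-space (illustrative).

    Keeps a block of low-frequency (centre) columns plus randomly chosen
    outer columns, then returns the zero-filled inverse FFT: the blurred,
    aliased image a reconstruction network receives as input.
    """
    h, w = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))
    keep = max(1, int(rate * w))                       # columns to retain
    centre = np.arange(w // 2 - keep // 4, w // 2 + keep // 4)
    rest = rng.choice(np.setdiff1d(np.arange(w), centre),
                      keep - len(centre), replace=False)
    mask = np.zeros(w, dtype=bool)
    mask[centre] = True
    mask[rest] = True
    zero_filled = np.fft.ifft2(np.fft.ifftshift(k * mask[None, :])).real
    return zero_filled, mask

def nrmse(ref, rec):
    """RMSE normalized by the reference intensity range (one common convention)."""
    return np.sqrt(np.mean((ref - rec) ** 2)) / (ref.max() - ref.min())
```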
To address the low segmentation accuracy of traditional saliency detection algorithms and the strong dependence of deep learning-based saliency detection algorithms on pixel-level manual annotations, an unsupervised salient object detection algorithm based on graph cut refinement and differentiable clustering was proposed. In the algorithm, a coarse-to-fine idea was adopted to achieve accurate salient object detection using only the characteristics of a single image. Firstly, the Frequency-tuned algorithm was used to obtain a coarse saliency map from the color and brightness of the image itself. Then, candidate regions of the salient object were obtained by binarization according to the image's statistical characteristics combined with the center-prior hypothesis. After that, the GrabCut algorithm for single-image graph cut was used to finely segment the salient object. Finally, to overcome imprecise detection when the background is very similar to the object, an unsupervised differentiable clustering algorithm with good boundary segmentation performance was introduced to further optimize the saliency map. Experimental results show that, compared with seven existing algorithms, the optimized saliency map obtained by the proposed algorithm is closer to the ground truth, achieving a Mean Absolute Error (MAE) of 14.3% and 23.4% on the ECSSD and SOD datasets, respectively.
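The Frequency-tuned step above computes saliency as the per-pixel distance between the image's mean and a Gaussian-blurred version of the image. A single-channel sketch (the original method operates in Lab color space; grayscale and a fixed 5-tap blur kernel are simplifying assumptions to keep the example self-contained):

```python
import numpy as np

def ft_saliency(image):
    """Frequency-tuned saliency on one channel (coarse map, values in [0, 1]).

    Saliency is the absolute difference between the global mean intensity
    and a Gaussian-smoothed image, so large regions deviating from the
    average appearance stand out.
    """
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    kernel /= kernel.sum()
    # separable Gaussian blur: rows, then columns
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, image)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, blurred)
    sal = np.abs(image.mean() - blurred)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Binarizing this map (e.g. at twice its mean, as in the original method) yields the candidate foreground regions that seed the GrabCut refinement.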
In order to solve problems in traditional gait detection algorithms, such as over-simplified information, low accuracy, and susceptibility to local optima, a gait detection algorithm for exoskeleton robots called Support Vector Machine optimized by Improved Whale Optimization Algorithm (IWOA-SVM) was proposed. The selection, crossover and mutation operations of the Genetic Algorithm (GA) were introduced into the Whale Optimization Algorithm (WOA) to optimize the penalty factor and kernel parameters of the Support Vector Machine (SVM), and classification models were then established by the SVM with optimized parameters, expanding the search scope and reducing the probability of falling into local optima. Firstly, gait data were collected using hybrid sensing technology: combining plantar pressure sensors with knee and hip joint angle sensors, motion data of the exoskeleton robot were acquired as the input of the gait detection system. Then, the gait phases were divided and labelled according to the threshold method. Finally, the plantar pressure signals were combined with the hip and knee angle signals as input, and gait detection was realized by the IWOA-SVM algorithm. Simulation experiments on six standard test functions demonstrate that the Improved Whale Optimization Algorithm (IWOA) is superior to GA, the Particle Swarm Optimization (PSO) algorithm and WOA in robustness, optimization accuracy and convergence speed. Analysis of the gait detection results of different wearers shows an accuracy of up to 98.8%, verifying the feasibility and practicability of the proposed algorithm for the new generation of exoskeleton robots. Compared with Support Vector Machine optimized by Genetic Algorithm (GA-SVM), Support Vector Machine optimized by Particle Swarm Optimization (PSO-SVM) and Support Vector Machine optimized by Whale Optimization Algorithm (WOA-SVM), the proposed algorithm improves the gait detection accuracy by 5.33%, 2.70% and 1.44% respectively.
The experimental results show that the proposed algorithm can effectively detect the gait of the exoskeleton robot and thereby realize its precise control and stable walking.
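The optimizer underlying IWOA-SVM is the Whale Optimization Algorithm, which alternates between encircling the current best solution, a logarithmic spiral update, and random exploration. A minimal sketch of plain WOA minimizing a standard test function (the sphere function); the GA-style selection, crossover and mutation that distinguish the paper's IWOA are deliberately omitted, and all hyperparameters here are illustrative:

```python
import numpy as np

def woa_minimize(f, dim, bounds, pop=20, iters=150, seed=0):
    """Minimal Whale Optimization Algorithm (plain WOA, not the paper's IWOA)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))                # whale positions
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                      # linearly decreasing coefficient
        for i in range(pop):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):              # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                  # explore: move toward a random whale
                    rand = X[rng.integers(pop)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                      # logarithmic spiral around the best
                l = rng.uniform(-1, 1, dim)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)
```

In IWOA-SVM, `f` would be the cross-validation error of an SVM as a function of its penalty factor and kernel parameter, so the returned `best` directly gives the optimized SVM hyperparameters.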