Panoramic video super-resolution network combining spherical alignment and adaptive geometric correction
Xiaolei CHEN, Zhiwei ZHENG, Xue HUANG, Zhenbin QU
Journal of Computer Applications    2026, 46 (2): 528-535.   DOI: 10.11772/j.issn.1001-9081.2025030311

Traditional Video Super-Resolution (VSR) methods are ineffective at handling the geometric distortion caused by equirectangular projection when processing panoramic videos, and have deficiencies in inter-frame alignment and feature fusion, which results in poor reconstruction quality. To further improve the super-resolution reconstruction quality of panoramic videos, a panoramic video super-resolution network combining spherical alignment and adaptive geometric correction, named 360GeoVSR, was proposed. In this network, accurate alignment and efficient fusion of inter-frame features were achieved through a Spherical Alignment Module (SAM) and a Geometric Fusion Block (GFB). In SAM, spatial transformation and deformable convolution were combined to address global and local geometric distortions. In GFB, feature alignment was corrected dynamically by an embedded Adaptive Geometric Correction (AGC) submodule, and multi-frame information was fused to capture complex inter-frame relationships. The results of subjective and objective comparison experiments on the ODV360Extended panoramic video dataset show that 360GeoVSR outperforms five representative super-resolution methods, including BasicVSR++ and VRT (Video Restoration Transformer), in both objective metrics and subjective visual effects, verifying its effectiveness.
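The distortion referred to above is a property of the projection itself: equirectangular frames stretch content near the poles, so spherical-aware pipelines commonly weight each pixel by the cosine of its latitude (as in the widely used WS-PSNR metric). The sketch below only illustrates that projection property; it is not code from the paper, and the function name is a placeholder.

```python
import numpy as np

def equirect_weights(height: int, width: int) -> np.ndarray:
    """Per-pixel cos(latitude) weights for an equirectangular frame.

    Rows near the poles are stretched by the projection, so a
    spherical-aware metric or loss down-weights them by cos(latitude).
    """
    # Latitude of each row centre, from +pi/2 (top) to -pi/2 (bottom).
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    w = np.cos(lat)                               # shape (H,)
    return np.repeat(w[:, None], width, axis=1)   # broadcast to (H, W)

w = equirect_weights(4, 8)
```

Rows at mid-latitudes receive weights near 1, while rows close to the poles are discounted, which is why pole regions contribute little to spherical quality scores.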

Semi-EM algorithm for solving Gamma mixture model of multimodal probability distribution
Jiaqi CHEN, Yulin HE, Yingchao CHENG, Zhexue HUANG
Journal of Computer Applications    2025, 45 (7): 2153-2161.   DOI: 10.11772/j.issn.1001-9081.2024070942

The Expectation-Maximization (EM) algorithm plays an important role in parameter estimation for mixture models. However, the existing EM algorithms for solving Gamma Mixture Model (GaMM) parameters have limitations: low-quality parameter estimation caused by approximate calculations, and inefficient computation caused by extensive numerical calculation. To address these limitations and fully exploit the multimodal nature of data, a Semi-EM algorithm was proposed to solve GaMM for estimating multimodal probability distributions. Firstly, the spatial distribution characteristics of the data were explored by clustering, thereby initializing the GaMM parameters so that a more precise characterization of the data's multimodality was obtained. Secondly, within the framework of the EM algorithm, a customized heuristic strategy was employed to address the difficulty of parameter updating caused by the absence of closed-form update expressions: the shape parameters of GaMM were updated with this strategy so as to increase the log-likelihood gradually, while the remaining parameters were updated in closed form. A series of experiments was conducted to validate the feasibility, rationality, and effectiveness of the proposed Semi-EM algorithm. Experimental results demonstrate that the Semi-EM algorithm outperforms the four comparison algorithms in estimating multimodal probability distributions accurately. Specifically, the Semi-EM algorithm has lower error metrics and higher log-likelihood values, indicating that it provides more accurate parameter estimates and thus a more precise representation of the multimodal nature of the data.
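The split between closed-form and heuristic updates can be sketched as follows. This is a minimal stand-in, not the paper's algorithm: given the shape parameters, the mixing weights and scales have closed-form M-step updates, while each shape (which has no closed-form maximizer) is nudged by a small trial step that is kept only if the log-likelihood does not decrease, a crude substitute for the paper's heuristic. All names, initializations, and step sizes are illustrative.

```python
import numpy as np
from scipy.stats import gamma

def semi_em_gamma(x, k=2, iters=20, seed=0):
    """Minimal Semi-EM sketch for a k-component Gamma mixture."""
    rng = np.random.default_rng(seed)
    n = len(x)
    w = np.full(k, 1.0 / k)              # mixing weights
    shape = rng.uniform(1.0, 3.0, k)     # shape parameters (no closed-form MLE)
    scale = np.full(k, x.mean())         # scale parameters

    def loglik():
        dens = w * gamma.pdf(x[:, None], a=shape, scale=scale)
        return np.log(dens.sum(axis=1) + 1e-300).sum()

    for _ in range(iters):
        # E-step: responsibility r[i, j] of component j for sample i.
        dens = w * gamma.pdf(x[:, None], a=shape, scale=scale)
        r = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        nk = r.sum(axis=0)
        # Closed-form M-step for weights and scales, given the shapes.
        w = nk / n
        scale = (r * x[:, None]).sum(axis=0) / (shape * nk)
        # Heuristic shape update: try small multiplicative steps and
        # keep one only if the log-likelihood does not decrease.
        base = loglik()
        for j in range(k):
            cur = shape[j]
            for cand in (cur * 1.05, cur * 0.95):
                shape[j] = cand
                new = loglik()
                if new >= base:
                    base = new
                    break
                shape[j] = cur
    return w, shape, scale

rng = np.random.default_rng(1)
x = np.concatenate([rng.gamma(2.0, 1.0, 300), rng.gamma(9.0, 0.5, 300)])
w, a, theta = semi_em_gamma(x, k=2)
```

The accept-only-if-not-worse rule is what makes the scheme "semi": the closed-form updates are exact EM steps, while the shape step merely guarantees monotone log-likelihood rather than an exact maximization.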

Labeling certainty enhancement-oriented positive and unlabeled learning algorithm
Yulin HE, Peng HE, Zhexue HUANG, Weicheng XIE, Fournier-Viger PHILIPPE
Journal of Computer Applications    2025, 45 (7): 2101-2112.   DOI: 10.11772/j.issn.1001-9081.2024070953

Positive and Unlabeled Learning (PUL) is used to train classifiers whose performance is acceptable for practical applications when negative samples are unknown, by utilizing a few known positive samples and many unlabeled samples. The existing PUL algorithms share a common flaw: high uncertainty in labeling unlabeled samples, which leads to inaccurate classification boundaries learned by the classifier and limits the classifier's generalization ability on new data. To solve this issue, an unlabeled sample Labeling Certainty Enhancement-oriented PUL (LCE-PUL) algorithm was proposed. Firstly, reliable positive samples were selected on the basis of the similarity between the posterior probability mean on the validation set and the center point of the positive sample set, and the labeling process was refined gradually through iterations, so as to increase the accuracy of preliminary category judgments of unlabeled samples, thereby improving the certainty of labeling unlabeled samples. Secondly, these reliable positive samples were merged with the original positive sample set to form a new positive sample set, and were removed from the unlabeled sample set. Thirdly, the new unlabeled sample set was traversed, and reliable positive samples were selected again based on the similarity between each sample and its multiple neighboring points, so as to further improve the inference of potential labels, thereby reducing mislabeling and enhancing labeling certainty. Finally, the positive sample set was updated, and the unselected unlabeled samples were treated as negative samples. The feasibility, rationality, and effectiveness of the LCE-PUL algorithm were validated on representative datasets. As the number of iterations increases, the training of the LCE-PUL algorithm converges.
When the proportion of positive samples is 40%, 35%, and 30%, the test accuracy of the classifier constructed by the LCE-PUL algorithm is improved by at most 5.8, 8.8, and 7.6 percentage points, respectively, compared with five representative algorithms, including the Biased Support Vector Machine based on a specific cost function (Biased-SVM) algorithm, the Dijkstra-based Label Propagation for PUL (LP-PUL) algorithm, and the PUL by Label Propagation (PU-LP) algorithm. Experimental results show that LCE-PUL is an effective machine learning algorithm for handling PUL problems.
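One round of the reliable-positive selection idea can be sketched as below. This is a drastically simplified illustration, not the paper's procedure: a candidate is promoted when it lies close to the center of the positive set and its nearest positive neighbors agree. The distance-based score, the neighbor count `k`, and the `top_frac` threshold are all assumptions made for the example.

```python
import numpy as np

def promote_reliable_positives(X_pos, X_unl, k=5, top_frac=0.2):
    """One simplified round of reliable-positive selection.

    Score each unlabeled sample by (a) distance to the centre of the
    positive set and (b) mean distance to its k closest positives;
    promote the lowest-scoring fraction into the positive set.
    """
    centre = X_pos.mean(axis=0)
    d_centre = np.linalg.norm(X_unl - centre, axis=1)
    # Pairwise distances from every unlabeled sample to every positive.
    d_all = np.linalg.norm(X_unl[:, None, :] - X_pos[None, :, :], axis=2)
    d_knn = np.sort(d_all, axis=1)[:, :k].mean(axis=1)
    score = d_centre + d_knn                    # lower = more reliable
    n_take = max(1, int(top_frac * len(X_unl)))
    idx = np.argsort(score)[:n_take]
    new_pos = np.vstack([X_pos, X_unl[idx]])    # merge promoted samples
    rest = np.delete(X_unl, idx, axis=0)        # shrink the unlabeled set
    return new_pos, rest

rng = np.random.default_rng(0)
pos = rng.normal(0.0, 0.5, (30, 2))                       # known positives
unl = np.vstack([rng.normal(0.0, 0.5, (30, 2)),           # hidden positives
                 rng.normal(5.0, 0.5, (30, 2))])          # hidden negatives
new_pos, rest = promote_reliable_positives(pos, unl)
```

Iterating this promote-and-shrink step, as the abstract describes, progressively firms up the boundary; whatever remains unpromoted at the end is treated as negative.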

Subspace Gaussian mixture model clustering ensemble algorithm based on maximum mean discrepancy
Yulin HE, Xu LI, Yingting HE, Laizhong CUI, Zhexue HUANG
Journal of Computer Applications    2025, 45 (6): 1712-1723.   DOI: 10.11772/j.issn.1001-9081.2024070943

To address the limited capability and parameter sensitivity of Gaussian Mixture Model (GMM) clustering algorithms in processing large-scale high-dimensional data, a Subspace GMM Clustering Ensemble (SGMM-CE) algorithm based on Maximum Mean Discrepancy (MMD) was proposed. Firstly, Random Sample Partition (RSP) was performed on the original large-scale high-dimensional dataset to obtain multiple data subsets, thereby reducing the size of the clustering problem from the perspective of sample size. Secondly, subspace learning was performed in the high-dimensional feature space corresponding to each data subset, taking into account the influence of features on the optimal number of GMM components, so that multiple low-dimensional feature subspaces were obtained for each high-dimensional feature space; GMM clustering was then conducted on each subspace to obtain a series of heterogeneous GMMs. Thirdly, the GMM clustering results of different subspaces from the same data subset were relabeled and merged on the basis of the proposed Average Shared Affiliation Probability (ASAP). Finally, the extended Subspace MMD (SubMMD) was used as a criterion to measure the distributional consistency between two clusters in the clustering results of different data subsets, and the clustering results of these subsets were relabeled and merged accordingly, thereby obtaining the final clustering ensemble result for the original dataset. Extensive experiments were conducted to validate the effectiveness of the SGMM-CE algorithm. Experimental results show that, compared with the best-performing comparison algorithm, the Meta-CLustering Algorithm (MCLA), the SGMM-CE algorithm increases the Normalized Mutual Information (NMI), Clustering Accuracy (CA), and Adjusted Rand Index (ARI) values by 19%, 20%, and 52%, respectively, on the given clustering datasets.
Besides, the feasibility and rationality results show that the SGMM-CE algorithm converges with respect to its parameters and is time-efficient, demonstrating that it can handle large-scale high-dimensional data clustering problems effectively.
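The distributional-consistency criterion rests on the standard MMD statistic: two clusters whose samples follow similar distributions have a small MMD and can be merged. The sketch below is the plain biased squared MMD with an RBF kernel, shown only as the building block that SubMMD extends; the kernel bandwidth and function name are illustrative, not from the paper.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased squared Maximum Mean Discrepancy with an RBF kernel.

    Small values indicate the two sample sets follow similar
    distributions; zero means the empirical kernel embeddings match.
    """
    def k(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, (80, 2))   # cluster from distribution P
B = rng.normal(0.0, 1.0, (80, 2))   # another cluster from P
C = rng.normal(3.0, 1.0, (80, 2))   # cluster from a shifted distribution
```

In the ensemble step, clusters from different data subsets would be compared pairwise with such a statistic and merged when the value falls below a threshold.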

Distributed observation point classifier for big data with random sample partition
Xu LI, Yulin HE, Laizhong CUI, Zhexue HUANG, Fournier-Viger PHILIPPE
Journal of Computer Applications    2024, 44 (6): 1727-1733.   DOI: 10.11772/j.issn.1001-9081.2023060847

The Observation Point Classifier (OPC) is a supervised learning model that transforms a multi-dimensional linearly inseparable problem in the original data space into a one-dimensional linearly separable problem in a projective distance space, and it is good at high-dimensional data classification. To alleviate the high training complexity when applying OPC to big data classification problems, a Random Sample Partition (RSP)-based Distributed OPC (DOPC) for big data was designed under the Spark framework. First, RSP data blocks were generated and transformed into Resilient Distributed Datasets (RDDs) in the distributed computation environment. Second, a set of OPCs was trained collaboratively on the RSP data blocks with high Spark parallelizability. Finally, the different OPCs were fused into a DOPC to predict the final labels of unknown samples. Experiments on eight big datasets were conducted to validate the feasibility, rationality, and effectiveness of the designed DOPC. Experimental results show that a DOPC trained on multiple computation nodes achieves higher testing accuracy than an OPC trained on a single computation node, with less time consumption; meanwhile, compared with the RSP-model-based Neural Network (NN), Decision Tree (DT), Naive Bayesian (NB), and K-Nearest Neighbor (KNN) classifiers under the Spark framework, DOPC obtains stronger generalization capability. These testing performances demonstrate that DOPC is an effective and resource-efficient supervised learning algorithm for handling big data classification problems.
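The projective distance transformation at the heart of OPC can be illustrated with a toy example. The single observation point at the origin and the ring-versus-cluster data below are purely illustrative (the actual method selects its observation points differently); the point is that a pattern that is linearly inseparable in the 2-D plane becomes separable by a single threshold on the distance axis.

```python
import numpy as np

def projective_distances(X, obs_points):
    """Map samples into a projective distance space: feature j of each
    sample is its Euclidean distance to observation point j."""
    return np.linalg.norm(X[:, None, :] - obs_points[None, :, :], axis=2)

rng = np.random.default_rng(0)
inner = rng.normal(0.0, 0.3, (50, 2))          # blob at the origin
t = rng.uniform(0.0, 2 * np.pi, 50)            # ring surrounding the blob
ring = 3.0 * np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0.0, 0.1, (50, 2))

obs = np.zeros((1, 2))                         # one observation point (illustrative)
d_inner = projective_distances(inner, obs)
d_ring = projective_distances(ring, obs)
```

No straight line in the plane separates the blob from the surrounding ring, yet in the distance space a threshold of about 2 splits the classes perfectly, which is the "multi-dimensional inseparable to one-dimensional separable" transformation the abstract describes.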
