VPNet: fatty liver ultrasound image classification method inspired by ventral pathway
Danni DING, Bo PENG, Xi WU
Journal of Computer Applications    2025, 45 (2): 662-669.   DOI: 10.11772/j.issn.1001-9081.2024020185

Given the crucial role of the ventral pathway in visual information processing, a fatty liver classification method inspired by the ventral pathway was developed. By integrating Convolutional Neural Network (CNN) with a biological visual cognition model, the hierarchical information processing from the primary visual cortex (V1) to the Inferior Temporal cortex (IT cortex) was simulated, resulting in a new neural network architecture named VPNet (Ventral Pathway Network). In addition, inspired by the non-Classical Receptive Field (nCRF) inhibition mechanism in biological vision, which aids background noise suppression, this mechanism was simulated to address the speckle noise in ultrasound images, thereby enhancing the feature recognition capability of the model. VPNet achieved an accuracy of 88.37% in identifying four degrees of fatty liver variation on the self-made dataset, and achieved the best performance of 100% accuracy, sensitivity, and specificity in two-category fatty liver diagnosis on the public dataset. The experimental results show that, compared with ResNet101-SVM, the best-performing method in existing research on the public dataset, VPNet improves the accuracy by 11.63 and 0.7 percentage points on the self-made dataset and the public dataset respectively, which demonstrates the effectiveness of the proposed method in the diagnosis of fatty liver diseases.
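As an illustration of the nCRF-inspired surround suppression described above, the following minimal Python sketch (not the authors' VPNet implementation; the module name, kernel size, and inhibition weight are assumptions) shows one way a surround-inhibition block could be placed after an early convolution stage, assuming PyTorch is available.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SurroundInhibition(nn.Module):
    """Subtract a weighted, blurred (surround) response from the center response."""
    def __init__(self, channels, surround_size=7, alpha=0.3):
        super().__init__()
        self.alpha = alpha
        self.channels = channels
        # Fixed averaging kernel approximating the non-classical surround.
        kernel = torch.ones(channels, 1, surround_size, surround_size)
        kernel /= surround_size * surround_size
        self.register_buffer("surround_kernel", kernel)

    def forward(self, x):
        pad = self.surround_kernel.shape[-1] // 2
        surround = F.conv2d(x, self.surround_kernel, padding=pad,
                            groups=self.channels)
        # Center response minus weighted surround response, clipped at zero.
        return F.relu(x - self.alpha * surround)

# Hypothetical early stage of a small CNN on a grayscale ultrasound image.
stage = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2),   # V1-like oriented features
    nn.ReLU(),
    SurroundInhibition(16),                       # nCRF-style suppression
    nn.MaxPool2d(2),
)
features = stage(torch.randn(1, 1, 224, 224))     # -> (1, 16, 112, 112)

Subtracting a weighted local-average response from the center response attenuates a roughly uniform speckle background while preserving localized texture contrasts.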

Top-k high average utility sequential pattern mining algorithm under one-off condition
Keshuai YANG, Youxi WU, Meng GENG, Jingyu LIU, Yan LI
Journal of Computer Applications    2024, 44 (2): 477-484.   DOI: 10.11772/j.issn.1001-9081.2023030268

To address the issue that traditional Sequential Pattern Mining (SPM) does not consider pattern repetition and ignores the effects of utility (unit price or profit) and pattern length on user interest, a Top-k One-off high average Utility sequential Pattern mining (TOUP) algorithm was proposed. The TOUP algorithm mainly includes two core steps: average utility calculation and candidate pattern generation. Firstly, a CSP (Calculation Support of Pattern) algorithm based on the occurrence position of each item and the item repetition relation array was proposed to calculate pattern support, thereby achieving rapid calculation of the average utility of patterns. Secondly, candidate patterns were generated by itemset extension and sequence extension, and a maximum average utility upper bound was proposed; based on this upper bound, candidate patterns were pruned effectively. Experimental results on five real datasets and one synthetic dataset show that, compared with the TOUP-dfs and HAOP-ms algorithms, the TOUP algorithm reduces the number of candidate patterns by 38.5% to 99.8% and 0.9% to 77.6% respectively, and decreases the running time by 33.6% to 97.1% and 57.9% to 97.2% respectively. Therefore, TOUP performs better and can mine patterns of interest to users more efficiently.
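To make the one-off condition and the average utility concrete, the following Python sketch (an illustration only, not the TOUP/CSP algorithm; the greedy matching and the utility table are assumptions) counts occurrences in which each sequence position is used at most once and averages the utility of the matched items over the pattern length.

def one_off_occurrences(sequence, pattern):
    """Greedily find non-reusing (one-off) occurrences of pattern in sequence."""
    used = [False] * len(sequence)
    occurrences = []
    while True:
        occ, start = [], 0
        for item in pattern:
            pos = next((i for i in range(start, len(sequence))
                        if not used[i] and sequence[i] == item), None)
            if pos is None:
                return occurrences
            occ.append(pos)
            start = pos + 1
        for i in occ:          # one-off: each position serves one occurrence
            used[i] = True
        occurrences.append(occ)

def average_utility(sequence, pattern, utility):
    """Total utility of all matched items divided by the pattern length."""
    occs = one_off_occurrences(sequence, pattern)
    total = sum(utility[sequence[i]] for occ in occs for i in occ)
    return total / len(pattern) if pattern else 0.0

# Hypothetical example: item utilities stand in for unit price or profit.
seq = list("abcabcab")
print(one_off_occurrences(seq, list("ab")))                        # [[0, 1], [3, 4], [6, 7]]
print(average_utility(seq, list("ab"), {"a": 3, "b": 2, "c": 1}))  # 7.5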

Contrast order-preserving pattern mining algorithm
Yufei MENG, Youxi WU, Zhen WANG, Yan LI
Journal of Computer Applications    2023, 43 (12): 3740-3746.   DOI: 10.11772/j.issn.1001-9081.2022121828

Aiming at the problem that existing contrast sequential pattern mining methods mainly focus on character sequence datasets and are difficult to apply to time series datasets, a new Contrast Order-preserving Pattern Mining (COPM) algorithm was proposed. Firstly, in the candidate pattern generation stage, a pattern fusion strategy was used to reduce the number of candidate patterns. Then, in the pattern support calculation stage, the support of a super-pattern was calculated using the matching results of its sub-patterns. Finally, a dynamic pruning strategy based on the minimum support threshold was designed to further prune the candidate patterns effectively. Experimental results show that on six real time series datasets, the memory consumption of the COPM algorithm is at least 52.1% lower than that of the COPM-o (COPM-original) algorithm, 36.8% lower than that of the COPM-e (COPM-enumeration) algorithm, and 63.6% lower than that of the COPM-p (COPM-prune) algorithm. At the same time, the running time of the COPM algorithm is at least 30.3% lower than that of COPM-o, 8.8% lower than that of COPM-e, and 41.2% lower than that of COPM-p. Therefore, in terms of performance, the COPM algorithm is superior to the COPM-o, COPM-e, and COPM-p algorithms. The experimental results verify that the COPM algorithm can effectively mine contrast order-preserving patterns to find the differences between different classes of time series datasets.
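The following Python sketch illustrates the underlying notions of an order-preserving (relative-order) pattern, its support in a time series, and a simple contrast score between two classes; it is an illustration only and does not reproduce the pattern fusion or pruning strategies of COPM.

def relative_order(window):
    """Rank of each element, e.g. (2.1, 3.5, 1.0) -> (1, 2, 0)."""
    order = sorted(range(len(window)), key=lambda i: window[i])
    ranks = [0] * len(window)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return tuple(ranks)

def support(series, pattern):
    """Fraction of sliding windows whose rank pattern equals `pattern`."""
    m = len(pattern)
    windows = [series[i:i + m] for i in range(len(series) - m + 1)]
    if not windows:
        return 0.0
    return sum(relative_order(w) == pattern for w in windows) / len(windows)

def contrast(pattern, class_a, class_b):
    """Average support in class A minus average support in class B."""
    sup_a = sum(support(s, pattern) for s in class_a) / len(class_a)
    sup_b = sum(support(s, pattern) for s in class_b) / len(class_b)
    return sup_a - sup_b

# Hypothetical example: the rising pattern (0, 1, 2) contrasted between
# increasing and decreasing series.
up = [[1, 2, 3, 4, 5], [2, 3, 5, 7, 8]]
down = [[5, 4, 3, 2, 1], [9, 7, 4, 2, 1]]
print(contrast((0, 1, 2), up, down))   # 1.0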

Fast failure recovery method based on local redundant hybrid code
Jingyu LIU, Qiuxia NIU, Xiaoyan LI, Qiaoshuo SHI, Youxi WU
Journal of Computer Applications    2022, 42 (4): 1244-1252.   DOI: 10.11772/j.issn.1001-9081.2021111917

The parity blocks of a Maximum-Distance-Separable (MDS) code are all global parity blocks, so the length of the reconstruction chain increases as the storage system expands and the reconstruction performance gradually decreases. Aiming at these problems, a new type of Non-Maximum-Distance-Separable (Non-MDS) code, the local redundant hybrid code Code-LM(sc), was proposed. Firstly, two types of local parity blocks, the horizontal parity block within a strip-set and the horizontal-diagonal parity block, were added to each strip-set to reduce the length of the reconstruction chain, and the parity layout of the local redundant hybrid code was designed. Then, four reconstruction formulations for lost data blocks were designed according to the generation rules of the parity blocks and the common blocks shared by the reconstruction chains of different data blocks. Finally, double-disk failures were divided into three situations depending on the distance between the strip-sets where the failed disks are located, and the corresponding reconstruction methods were designed. Theoretical analysis and experimental results show that, at the same storage scale, compared with RDP (Row-Diagonal Parity), the reconstruction time of Code-LM(sc) for single-disk failure and double-disk failure can be reduced by 84% and 77% respectively; compared with V2-Code, the reconstruction time of Code-LM(sc) for single-disk failure and double-disk failure can be reduced by 67% and 73% respectively. Therefore, the local redundant hybrid code can support fast recovery from disk failures and improve the reliability of the storage system.
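As background for the parity blocks mentioned above, the following Python sketch (an illustration only, not the Code-LM(sc) layout; the block sizes and strip structure are assumptions) shows how a horizontal XOR parity block is generated for a strip and how a single lost data block is rebuilt from the surviving blocks and the parity.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def horizontal_parity(strip):
    """Parity block of one strip: XOR of all its data blocks."""
    return xor_blocks(strip)

def recover(strip, parity, lost_index):
    """Rebuild the block at lost_index from the survivors and the parity block."""
    survivors = [b for i, b in enumerate(strip) if i != lost_index]
    return xor_blocks(survivors + [parity])

# Hypothetical 3-block strip of 4-byte blocks.
strip = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
p = horizontal_parity(strip)
assert recover(strip, p, lost_index=1) == strip[1]

Because such a parity block is local to a strip-set, the reconstruction chain stays short even when the storage system grows, which is the motivation for the hybrid layout described in the abstract.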

Image super-resolution algorithm based on improved sparse coding
SHENG Shuai, CAO Liping, HUANG Zengxi, WU Pengfei
Journal of Computer Applications    2014, 34 (2): 562-566.  
The traditional Super-Resolution (SR) algorithm based on sparse dictionary pairs suffers from slow training, poor dictionary quality, and low feature matching accuracy. In view of these disadvantages, a super-resolution algorithm based on improved sparse coding was proposed. In this algorithm, a Morphological Component Analysis (MCA) method with an adaptive threshold was used to extract image features, and the Principal Component Analysis (PCA) algorithm was employed to reduce the dimensionality of the training sets. In this way, the effectiveness of feature extraction was improved, the dictionary training time was shortened, and over-fitting was reduced. An improved sparse K-Singular Value Decomposition (K-SVD) algorithm was adopted to train the low-resolution dictionary, and the super-resolution dictionary was solved by utilizing the overlapping relation, which enhanced the effectiveness and adaptability of the dictionary while greatly increasing the training speed. By reconstructing color images in the Lab color space, the degradation of reconstructed image quality that may be caused by the correlation between color channels was avoided. Compared with traditional methods, the proposed approach obtains better high-resolution images with higher computational efficiency.
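As a generic illustration of the reconstruction side of dictionary-based super-resolution (not the paper's MCA/PCA/improved K-SVD training pipeline), the following Python sketch sparse-codes a low-resolution feature over a low-resolution dictionary and synthesizes the high-resolution patch with the same coefficients; it assumes numpy and scikit-learn are available, and the coupled dictionaries here are random placeholders.

import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_patch(lr_feature, D_low, D_high, n_nonzero=3):
    """Sparse-code an LR feature over D_low, then synthesize an HR patch
    with the same coefficients over the coupled dictionary D_high."""
    alpha = orthogonal_mp(D_low, lr_feature, n_nonzero_coefs=n_nonzero)
    return D_high @ alpha

# Toy example: 64-d LR features, 256-d HR patches, 512 dictionary atoms.
rng = np.random.default_rng(0)
D_low = rng.standard_normal((64, 512))
D_low /= np.linalg.norm(D_low, axis=0)      # unit-norm atoms for OMP
D_high = rng.standard_normal((256, 512))
lr_feature = rng.standard_normal(64)
hr_patch = reconstruct_patch(lr_feature, D_low, D_high)
print(hr_patch.shape)                       # (256,)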
Application of three-dimensional medical image registration algorithm in image-guided radiotherapy
WU Qian, JIA Jing, CAO Ruifen, PEI Xi, WU Aidong, WU Yichan, FDS Team
Journal of Computer Applications    2013, 33 (09): 2675-2678.   DOI: 10.11772/j.issn.1001-9081.2013.09.2675
To achieve accurate patient positioning in image-guided radiotherapy, an improved Demons deformable registration method was developed. The FDK algorithm was adopted to reconstruct Cone Beam CT (CBCT), and the reconstruction result was visualized by a volume rendering method with the Visualization ToolKit (VTK). Based on the Insight segmentation and registration ToolKit (ITK), the Demons algorithm was implemented by incorporating the gradient information of the fixed image and the floating image through the concept of the symmetric gradient, and a new formula of the Demons force was derived. Registration experiments were carried out on both single-modality and multi-modality medical images. The results show that the improved Demons algorithm achieves faster convergence and higher precision than the original Demons algorithm, which indicates that the Demons algorithm based on the symmetric gradient is more suitable for the registration of CBCT reconstruction images and CT planning images in image-guided radiotherapy.
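The paper builds its implementation on ITK/VTK; as a stand-alone illustration of a single Demons update driven by a symmetric gradient, the following numpy sketch (smoothing sigma and epsilon are illustrative values, not the paper's settings) averages the gradients of the fixed and floating images and regularizes the displacement update with Gaussian smoothing.

import numpy as np
from scipy.ndimage import gaussian_filter

def demons_step(fixed, moving, sigma=1.0, eps=1e-6):
    """Return one smoothed displacement update (one component per axis)."""
    diff = moving - fixed
    grads = [(gf + gm) / 2.0                       # symmetric gradient
             for gf, gm in zip(np.gradient(fixed), np.gradient(moving))]
    norm2 = sum(g * g for g in grads) + diff * diff + eps
    # Classical Demons force with the averaged gradient, then regularized.
    return [gaussian_filter(diff * g / norm2, sigma) for g in grads]

# Toy 2D example: a Gaussian blob shifted by a few pixels.
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
moving = np.exp(-((x - 35) ** 2 + (y - 32) ** 2) / 50.0)
updates = demons_step(fixed, moving)
print([u.shape for u in updates])   # [(64, 64), (64, 64)]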
Optimal hyperplane modification of support vector machine based on Fisher within-class scatter
YANG Ting, MENG Xiangru, WEN Xiangxi, WU Wen
Journal of Computer Applications    2013, 33 (09): 2553-2556.   DOI: 10.11772/j.issn.1001-9081.2013.09.2553
The generalization ability of the Support Vector Machine (SVM) declines when the training data have an imbalanced class distribution. To solve this problem, a method for modifying the optimal hyperplane based on the average scatter ratio derived from the Fisher within-class scatter was proposed. The normal vector of the optimal hyperplane was obtained after SVM training, and the Fisher within-class scatter was introduced to evaluate the distribution of the two classes. On this basis, the optimal hyperplane was modified by the ratio of the average within-class scatter, which was computed according to the number of samples in each class. Experimental results on benchmark datasets show that the proposed method improves the classification accuracy of the class with fewer training samples, thereby improving the generalization of SVM.
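The exact modification rule is the paper's contribution; the following Python sketch is only one possible interpretation, in which the within-class scatter of each class is measured along the normal vector of a trained linear SVM and the bias is shifted by a placeholder amount proportional to the scatter imbalance (labels are assumed to be -1/+1; the function name, scale parameter, and shift formula are hypothetical).

import numpy as np
from sklearn.svm import SVC

def modified_decision(X_train, y_train, X_test, scale=0.5):
    """Linear SVM whose bias is shifted according to the per-class scatter
    along the normal vector (placeholder rule, labels assumed to be -1/+1)."""
    clf = SVC(kernel="linear").fit(X_train, y_train)
    w = clf.coef_.ravel()
    b = float(clf.intercept_[0])
    proj = X_train @ w                      # projections onto the normal vector
    s_pos = proj[y_train == 1].var()        # within-class scatter, class +1
    s_neg = proj[y_train == -1].var()       # within-class scatter, class -1
    # Placeholder shift: move the boundary toward the tighter class,
    # away from the class with the larger scatter.
    shift = scale * (s_pos - s_neg) / (s_pos + s_neg)
    return np.sign(X_test @ w + b + shift)

# Toy imbalanced data: 40 samples of class -1, 8 samples of class +1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (40, 2)), rng.normal(1.0, 0.3, (8, 2))])
y = np.array([-1] * 40 + [1] * 8)
print(modified_decision(X, y, np.array([[0.0, 0.0]])))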