Cervical cell nucleus image segmentation based on multi-scale guided filtering
Xinyao LINGHU, Yan CHEN, Pengcheng ZHANG, Yi LIU, Zhiguo GUI, Wei ZHAO, Zhanhao DONG
Journal of Computer Applications    2025, 45 (4): 1333-1339.   DOI: 10.11772/j.issn.1001-9081.2024040546

To address the lack of contextual information linkage and the inaccurate, low-precision segmentation of cervical cell nucleus images, a segmentation network named DGU-Net (Dense-Guided-UNet) was proposed, built on an improved U-net combined with dense blocks and a U-shaped convolutional multi-scale guided filtering module, enabling more complete and accurate segmentation of cervical cell nucleus images. Firstly, the U-net encoder-decoder model was used as the backbone of the network to extract image features. Secondly, dense block modules were introduced to connect features between different layers, realizing the transmission of contextual information and thereby enhancing the feature extraction ability of the model. Meanwhile, the multi-scale guided filtering module was inserted after each downsampling and before each upsampling step to inject salient edge details from a grayscale guidance image, enhancing image details and edge information. Finally, a side output layer was added to each decoder path, and all output feature maps were fused and averaged, combining feature information of different scales and levels to increase the accuracy and completeness of the results. Experiments were conducted on the Herlev dataset, and the proposed network was compared with three deep learning models: U-net, Progressive Growing of U-net+ (PGU-net+), and Lightweight Feature Attention Network (LFANet). Results show that compared with PGU-net+, DGU-Net increases the accuracy by 70.06%; compared with LFANet, DGU-Net increases the Intersection-over-Union (IoU) by 6.75%. DGU-Net thus handles edge detail more accurately and generally outperforms the comparison models on segmentation metrics.
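To make the dense-connection and side-output ideas concrete, here is a minimal PyTorch sketch; the layer widths, growth rate and the omission of the guided-filtering module are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all preceding feature maps,
    passing contextual information across layers (DenseNet-style)."""
    def __init__(self, in_ch, growth=12, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            ch += growth

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

def fuse_side_outputs(side_outputs):
    """Upsample the per-scale side outputs to a common size and average them,
    fusing predictions from different decoder levels into the final mask."""
    target = side_outputs[0].shape[-2:]
    ups = [nn.functional.interpolate(s, size=target, mode="bilinear",
                                     align_corners=False)
           for s in side_outputs]
    return torch.stack(ups, dim=0).mean(dim=0)
```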

Low-dose CT image reconstruction based on low-rank and total variation joint regularization
Yu LIU, Pengcheng ZHANG, Liyuan ZHANG, Yi LIU, Zhiguo GUI, Xueyi ZHANG, Chenyifei ZHU, Haowei TANG
Journal of Computer Applications    2025, 45 (3): 978-987.   DOI: 10.11772/j.issn.1001-9081.2024040478

The Total Variation (TV) minimization method easily leads to over-smoothing and block artifacts in Low-Dose Computed Tomography (LDCT) image reconstruction. To address this, an LDCT reconstruction method based on joint low-rank and TV regularization was proposed to improve the visual quality of LDCT reconstructed images. Firstly, an image reconstruction model with joint low-rank and TV regularization was established, so that more accurate and natural reconstruction results could be obtained in theory. Secondly, a low-rank prior with the non-local self-similarity property was introduced to overcome the limitations of using the TV minimization method alone. Finally, the Chambolle-Pock (CP) algorithm was used to optimize and solve the model, improving the solution efficiency and ensuring that the model is solved effectively. The effectiveness of the proposed method was verified under three different LDCT scanning conditions. Experimental results on the Mayo dataset show that, compared with the PWLS-LDMM (Penalized Weighted Least-Squares based on Low-Dimensional Manifold) method, the NOWNUNM (NOnlocal Weighted NUclear Norm Minimization) method and the CP method, the proposed method increases the Visual Information Fidelity (VIF) by 28.39%, 8.30% and 2.93% at 25% dose, by 29.96%, 13.83% and 4.53% at 15% dose, and by 30.22%, 17.10% and 7.66% at 10% dose, respectively. The proposed method therefore retains more detailed texture while removing noise and streak artifacts, confirming its better capability of suppressing noise artifacts.
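For reference, below is a minimal NumPy sketch of the CP solver for the plain TV-denoising subproblem min_x 0.5*||x-b||^2 + lam*TV(x); the paper's joint low-rank term and the CT forward projector are omitted, and the step sizes follow the standard tau*sigma*||grad||^2 <= 1 rule:

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient (Neumann boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def cp_tv_denoise(b, lam=0.1, n_iter=200):
    """Chambolle-Pock iterations for min_x 0.5||x-b||^2 + lam*TV(x)."""
    tau = sigma = 1.0 / np.sqrt(8.0)      # ||grad||^2 <= 8 for this stencil
    x = b.copy(); x_bar = b.copy()
    px = np.zeros_like(b); py = np.zeros_like(b)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)              # dual ascent step
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px, py = px / norm, py / norm     # project onto the lam-ball
        x_old = x                         # primal prox of the data term
        x = (x + tau * div(px, py) + tau * b) / (1.0 + tau)
        x_bar = 2 * x - x_old             # over-relaxation (theta = 1)
    return x
```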

Efficient fine-tuning method of large language models for test case generation
Peng CAO, Guangqi WEN, Jinzhu YANG, Gang CHEN, Xinyi LIU, Xuechun JI
Journal of Computer Applications    2025, 45 (3): 725-731.   DOI: 10.11772/j.issn.1001-9081.2024111598

Data-driven automated generation of unit test cases suffers from low coverage and poor readability, and struggles to meet the growing demand for testing. Recently, Large Language Models (LLMs) have shown great potential in code generation tasks, but differences in the functional and coding styles of code data expose LLMs to catastrophic forgetting and resource constraints. To address these problems, a transfer learning scheme that fine-tunes coding style and functional style simultaneously was proposed, yielding an efficient fine-tuning method for LLMs generating unit test cases. Firstly, widely used instruction datasets were adopted to align the LLM with instructions, the instruction sets were divided by task type, and weight increments carrying task-specific features were extracted and stored. Secondly, an adaptive style extraction module was designed to handle diverse coding styles, with noise-resistant learning and coding-style backtracking learning inside the module. Finally, the functional and coding-style increments were jointly trained on the target domain, realizing efficient adaptation and fine-tuning on target domains with limited resources. Experimental results of test case generation on the SF110 Corpus of Classes dataset indicate that the proposed method outperforms the compared approaches. Relative to the mainstream code generation LLMs Codex, Code Llama and DeepSeek-Coder, the compilation rate increases by 0.8%, 43.5% and 33.8%, branch coverage by 3.1%, 1.0% and 17.2%, and line coverage by 4.1%, 6.5% and 15.5%, respectively, verifying the superiority of the proposed method in code generation tasks.
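A hedged sketch of the weight-increment idea: extract a task-specific delta after fine-tuning, store it, and later compose several deltas on the target domain. This is task-arithmetic-style logic under our own naming, not the paper's exact procedure:

```python
import torch

def extract_increment(base_state, tuned_state):
    """Store only the task-specific weight increment (delta) produced by a
    fine-tuning run, e.g. one delta for coding style, one for functionality."""
    return {k: tuned_state[k] - base_state[k] for k in base_state}

def apply_increments(base_state, increments, weights):
    """Compose the base model with several stored increments, so the
    coding-style and functional-style deltas can be jointly adapted on the
    target domain without retraining the full model."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for inc, w in zip(increments, weights):
        for k in merged:
            merged[k] += w * inc[k]
    return merged
```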

Architecture design of data fusion pipeline for unmanned systems
Yi LIU, Guoli YANG, Qibin ZHENG, Xiang LI, Yangsen ZHOU, Depeng CHEN
Journal of Computer Applications    2024, 44 (8): 2536-2543.   DOI: 10.11772/j.issn.1001-9081.2023081184

Sensors are the basis on which unmanned systems perform intelligent actions. Fusing multi-sensor data can enhance the intelligent perception and autonomous decision-making capabilities of unmanned systems and improve their reliability and robustness. Data fusion for unmanned systems faces many challenges: diverse sensor types, heterogeneous data formats, real-time requirements for fusion and analysis, and complex, fast-evolving algorithm models. Traditional approaches, whether developing fusion models through front-end customization or relying on back-end fusion platforms, are difficult to apply in these settings. Therefore, a pipeline platform for data fusion was proposed. The platform supports automatic data transformation, flexible algorithm combination, dynamic model configuration and rapid functional iteration, enabling dynamic, quick construction of data fusion models and providing information services for different tasks. Based on an analysis of the data fusion process and techniques, the pipeline framework and its key functions and components were characterized, the key technologies urgently requiring breakthroughs were analyzed, the operating mode and a practical case of the framework were presented, and research directions for future development were pointed out.
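One way to realize the flexible-combination and dynamic-configuration requirements is a stage-based pipeline object; the sketch below is an illustrative design of our own, with all names assumed:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Stage:
    """A named, swappable step (e.g. transform, align, fuse)."""
    name: str
    run: Callable[[Any], Any]

class FusionPipeline:
    """Composable pipeline: stages can be replaced or reordered at
    configuration time, supporting rapid iteration of fusion models."""
    def __init__(self, stages):
        self.stages = list(stages)

    def replace(self, name, run):
        """Dynamically reconfigure one stage without rebuilding the rest."""
        self.stages = [Stage(s.name, run) if s.name == name else s
                       for s in self.stages]

    def __call__(self, data):
        for s in self.stages:
            data = s.run(data)
        return data
```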

Fast adversarial training method based on random noise and adaptive step size
Jinfu WU, Yi LIU
Journal of Computer Applications    2024, 44 (6): 1807-1815.   DOI: 10.11772/j.issn.1001-9081.2023060774

Adversarial Training (AT) and its variants have been proven to be the most effective methods for defending against adversarial attacks. However, generating adversarial examples requires extensive computation, resulting in low training efficiency and limited feasibility. Fast AT (Fast-AT) replaces multi-step adversarial attacks with single-step attacks to accelerate training, but its robustness is much lower than that of multi-step AT and it is susceptible to Catastrophic Overfitting (CO). To address these issues, a Fast-AT method based on random noise and an adaptive step size was proposed. Firstly, in each iteration of adversarial example generation, random noise was added to the original input images for data augmentation. Then, the gradients of each adversarial example were accumulated during training, and the step size of the adversarial examples was adjusted adaptively based on this gradient information. Finally, adversarial attacks were performed according to the perturbation step size and gradient information to generate adversarial examples for model training. Under various adversarial attacks on the CIFAR-10 and CIFAR-100 datasets, the proposed method achieved at least a 0.35 percentage point improvement in robust accuracy over N-FGSM (Noise Fast Gradient Sign Method). The experimental results demonstrate that the proposed method avoids the CO issue in Fast-AT and enhances the robustness of deep learning models.
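A minimal PyTorch sketch of one training step; the concrete `alpha` scaling rule is our assumption of what "adjusted adaptively based on the gradient information" could look like, not the paper's formula:

```python
import torch

def fast_at_step(model, loss_fn, x, y, eps, grad_hist, alpha_base=1.25):
    """Single-step attack with random-noise initialization and a step size
    scaled by the accumulated gradient magnitude. `grad_hist` is a running
    per-sample gradient history (assumes a fixed batch ordering)."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random-noise init
    delta.requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    grad_hist += grad.abs()                           # accumulate gradients
    alpha = alpha_base * eps / (1.0 + grad_hist.mean())   # assumed rule
    x_adv = x + torch.clamp(delta.detach() + alpha * grad.sign(), -eps, eps)
    return x_adv.clamp(0, 1), grad_hist               # train on x_adv
```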

Iterative denoising network based on total variation regularization term expansion
Ruifeng HOU, Pengcheng ZHANG, Liyuan ZHANG, Zhiguo GUI, Yi LIU, Haowen ZHANG, Shubin WANG
Journal of Computer Applications    2024, 44 (3): 916-921.   DOI: 10.11772/j.issn.1001-9081.2023030376

To overcome the poor interpretability and unstable training of neural networks, a denoising network based on Total Variation (TV) regularization and optimized by the Chambolle-Pock (CP) algorithm, CPTV-Net, was proposed to solve the denoising problem of Low-Dose Computed Tomography (LDCT) images. Firstly, a TV constraint term was introduced into the L1-regularized model to preserve the structural information of the image. Secondly, the CP algorithm was used to solve the denoising model and derive concrete iterative steps, ensuring the convergence of the algorithm. Finally, shallow Convolutional Neural Networks (CNNs) were used to learn the iterative formulas of the primal and dual variables of the linear operations, so that the network computed the solution of the model with parameters optimized on training data. Experimental results on simulated and real LDCT datasets show that, compared with five advanced denoising methods including REDCNN (Residual Encoder-Decoder Convolutional Neural Network) and TED-Net (Transformer Encoder-decoder Dilation Network), CPTV-Net achieves the best Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) and Visual Information Fidelity (VIF) values, and generates LDCT images with significant denoising and the best detail preservation.
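The unrolling idea can be sketched as one learned CP stage in PyTorch, where shallow CNNs stand in for the hand-derived primal and dual updates; the channel counts and the exact inputs to each subnetwork are assumptions:

```python
import torch
import torch.nn as nn

class UnrolledCPStage(nn.Module):
    """One unrolled Chambolle-Pock iteration: the closed-form primal and
    dual updates are replaced by shallow CNNs (learned proximal steps)."""
    def __init__(self, ch=32):
        super().__init__()
        self.dual_net = nn.Sequential(          # learned dual update
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))
        self.primal_net = nn.Sequential(        # learned primal update
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x, y, b):
        # b is the noisy observation; x, y the primal and dual estimates.
        y = y + self.dual_net(torch.cat([y, x], dim=1))
        x = x + self.primal_net(torch.cat([x, y, b], dim=1))
        return x, y
```

Stacking a few such stages and training end-to-end yields the unrolled network; each stage stays interpretable as one solver iteration.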

Fast adversarial training method based on data augmentation and label noise
Yifei SONG, Yi LIU
Journal of Computer Applications    2024, 44 (12): 3798-3807.   DOI: 10.11772/j.issn.1001-9081.2023121835

Adversarial Training (AT) is an effective defense for protecting classification models against adversarial attacks, but the high computational cost of generating strong adversarial samples during training can add substantial training time. To overcome this limitation, Fast Adversarial Training (FAT) based on single-step attacks has been explored. Previous work improves FAT from different perspectives, such as sample initialization, loss regularization and training strategies, yet Catastrophic Overfitting (CO) is still encountered under large perturbation budgets. Therefore, an FAT method based on data augmentation and label noise was proposed. Firstly, multiple image transformations were applied to the original samples and random noise was introduced for data augmentation. Secondly, a small amount of label noise was injected. Thirdly, the augmented data were used to generate adversarial samples for model training. Finally, the label noise rate was adjusted adaptively according to adversarial robustness test results. Comprehensive experimental results on the CIFAR-10 and CIFAR-100 datasets show that, compared with the FGSM-MEP (Fast Gradient Sign Method with prior from the Momentum of all Previous Epoch) method, the proposed method improves AA (AutoAttack) accuracy by 4.63 and 5.38 percentage points on the two datasets under a large perturbation budget. The experimental results demonstrate that the proposed method effectively handles catastrophic overfitting under large perturbation budgets and significantly enhances the adversarial robustness of the model.
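A hedged sketch of the label-noise side of the method; the rule in `adapt_noise_rate` is our assumption of what "adjusted adaptively according to the robustness test results" could look like:

```python
import torch

def inject_label_noise(y, n_classes, rate):
    """Randomly reassign a small fraction of labels; mild label noise can
    counteract catastrophic overfitting in single-step AT."""
    mask = torch.rand_like(y, dtype=torch.float) < rate
    noisy = torch.randint_like(y, n_classes)
    return torch.where(mask, noisy, y)

def adapt_noise_rate(rate, robust_acc, prev_robust_acc,
                     step=0.01, lo=0.0, hi=0.2):
    """Assumed adaptation rule: raise the noise rate when robustness on a
    held-out attack drops (a CO warning sign), lower it otherwise."""
    rate = rate + step if robust_acc < prev_robust_acc else rate - step
    return min(max(rate, lo), hi)
```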

Image super-resolution reconstruction method based on iterative feedback and attention mechanism
Min LIANG, Jiayi LIU, Jie LI
Journal of Computer Applications    2023, 43 (7): 2280-2287.   DOI: 10.11772/j.issn.1001-9081.2022060877

Image super-resolution reconstruction struggles to recover high-frequency information because of the weak dependency modeled between low-resolution and high-resolution images and the lack of ordering when feature maps are reconstructed. To address this, a single-image super-resolution method based on iterative feedback and an attention mechanism was proposed. Firstly, high- and low-frequency information in the image was separated by a frequency decomposition block and processed separately, so that the network focused on the extracted high-frequency details, strengthening the restoration of image details. Secondly, a channel-wise attention mechanism shifted the reconstruction focus to feature channels carrying effective features, improving the network's ability to extract feature map information. Thirdly, iterative feedback was adopted to raise the quality of the restored image through repeated comparison and reconstruction. Finally, the output image was generated by a reconstruction block. The proposed method outperforms mainstream super-resolution methods in the 2×, 4× and 8× experiments on the Set5, Set14, BSD100, Urban100 and Manga109 benchmark datasets. In the 8× experiments on Manga109, it improves the Peak Signal-to-Noise Ratio (PSNR) by about 3.01 dB and 2.32 dB on average over traditional interpolation and the Super-Resolution Convolutional Neural Network (SRCNN), respectively. Experimental results show that the proposed method reduces errors in the reconstruction process and effectively reconstructs finer high-resolution images.
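The channel-wise attention component can be sketched as a squeeze-and-excitation style block in PyTorch; the reduction ratio is an assumed hyperparameter, and the frequency-decomposition and feedback wiring are omitted:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Per-channel gating: global pooling summarizes each channel, a small
    bottleneck predicts its weight, and the input is rescaled so that
    channels carrying effective (high-frequency) features dominate."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(x)   # reweight channels, keep spatial layout
```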

News recommendation method with knowledge graph and differential privacy
Li’e WANG, Xiaocong LI, Hongyi LIU
Journal of Computer Applications    2022, 42 (5): 1339-1346.   DOI: 10.11772/j.issn.1001-9081.2021030527

Existing recommendation methods that combine knowledge graphs with privacy protection cannot effectively balance the noise of Differential Privacy (DP) against recommender performance. To solve this problem, a News Recommendation method with Knowledge Graph and Privacy protection (KGPNRec) was proposed. Firstly, a multi-channel Knowledge-aware Convolutional Neural Network (KCNN) model was adopted to merge the multi-dimensional feature vectors of news titles, entities and entity contexts from the knowledge graph, improving recommendation accuracy. Secondly, based on an attention mechanism, noise of different magnitudes was added to the feature vectors according to their sensitivities, reducing the impact of noise on data analysis. Then, uniform Laplace noise was added to the weighted user feature vectors to ensure the security of user data. Finally, experimental analysis was conducted on real news datasets. Experimental results show that, compared with baselines such as the Privacy-Preserving Multi-Task recommendation Framework (PPMTF) and the recommendation method based on Deep Knowledge-aware Network (DKN), KGPNRec protects user privacy while maintaining prediction performance. For example, on the Bing News dataset, the Area Under Curve (AUC), accuracy and F1-score of the proposed method improve by 0.019, 0.034 and 0.034 respectively over PPMTF.
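A minimal sketch of sensitivity-scaled Laplace noise, assuming attention weights with the same shape as the feature vector; a real DP deployment would additionally need formal sensitivity analysis and privacy-budget accounting:

```python
import numpy as np

def add_calibrated_laplace(features, attention, epsilon, sensitivity=1.0):
    """Add Laplace noise whose per-dimension scale is shaped by an attention
    weight: dimensions judged more sensitive receive proportionally more
    noise, limiting the accuracy loss for a given privacy budget epsilon."""
    scale = sensitivity * attention / epsilon        # per-dimension scale
    return features + np.random.laplace(0.0, scale, size=features.shape)
```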

Personal event detection method based on text mining in social media
Rui XIAO, Mingyi LIU, Zhiying TU, Zhongjie WANG
Journal of Computer Applications    2022, 42 (11): 3513-3519.   DOI: 10.11772/j.issn.1001-9081.2022010106

Users’ social media posts contain their past personal experiences and latent life patterns, and studying these patterns is valuable for predicting users’ future behavior and making personalized recommendations. Using collected Weibo data, 11 types of events were defined, and a three-stage pipeline system was proposed to detect personal events, applying BERT (Bidirectional Encoder Representations from Transformers) pre-trained models in each stage: BERT+BiLSTM+Attention, BERT+FullConnect and BERT+BiLSTM+CRF. From each Weibo post, the system extracted whether the text contains a defined event, the types of the contained events, and the elements of each event, namely Subject (the event’s subject), Object (event element), Time (when the event occurred), Place (where it occurred) and Tense (the tense of the event), so as to explore the evolution of a user’s personal event timeline and predict personal events. Comparative experiments were conducted against classification algorithms such as logistic regression, naive Bayes, random forest and decision tree on a collected dataset of real users’ Weibo posts. Experimental results show that the BERT+BiLSTM+Attention, BERT+FullConnect and BERT+BiLSTM+CRF methods used in the three stages achieve the highest F1-scores, verifying the effectiveness of the proposed methods. Finally, a personal event timeline was built and visualized from the extracted events with time information.
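The three-stage control flow can be sketched as follows; the three callables are stand-ins for the trained BERT+BiLSTM+Attention, BERT+FullConnect and BERT+BiLSTM+CRF models, whose internals are not reproduced here:

```python
def detect_personal_events(text, has_event_clf, event_type_clf, element_tagger):
    """Three-stage pipeline:
    (1) does the post contain a defined event at all?
    (2) which of the 11 event types does it contain?
    (3) tag the Subject / Object / Time / Place / Tense spans."""
    if not has_event_clf(text):            # stage 1: binary filter
        return None
    event_types = event_type_clf(text)     # stage 2: event typing
    elements = element_tagger(text)        # stage 3: sequence labeling
    return {"types": event_types, "elements": elements}
```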

Survey on imbalanced multi-class classification algorithms
Mengmeng LI, Yi LIU, Gengsong LI, Qibin ZHENG, Wei QIN, Xiaoguang REN
Journal of Computer Applications    2022, 42 (11): 3307-3321.   DOI: 10.11772/j.issn.1001-9081.2021122060

Imbalanced data classification is an important research topic in machine learning, but most existing imbalanced classification algorithms focus on binary classification, and studies of imbalanced multi-class classification remain relatively few. Datasets in practical applications, however, usually have multiple classes with imbalanced distributions, and the diversity of classes further increases the difficulty of imbalanced classification, making the multi-class problem an urgent research topic. The imbalanced multi-class classification algorithms proposed in recent years were reviewed. According to whether a decomposition strategy is adopted, they were divided into decomposition methods and ad-hoc methods. By decomposition strategy, the decomposition methods were further divided into two frameworks: One Vs. One (OVO) and One Vs. All (OVA); by the technology used, the ad-hoc methods were divided into data-level methods, algorithm-level methods, cost-sensitive methods, ensemble methods and deep network-based methods. The advantages and disadvantages of these methods and their representative algorithms were described systematically, the evaluation indicators for imbalanced multi-class classification were summarized, the performance of the representative methods was analyzed in depth through experiments, and future development directions of imbalanced multi-class classification were discussed.
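As a concrete illustration of the two decomposition frameworks, scikit-learn ships wrappers for both; the toy data and base learner below are arbitrary choices:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

# Imbalanced 4-class toy data (class priors skewed on purpose).
X, y = make_classification(n_samples=2000, n_classes=4, n_informative=8,
                           weights=[0.7, 0.2, 0.07, 0.03], random_state=0)

base = LogisticRegression(max_iter=1000, class_weight="balanced")
ovo = OneVsOneClassifier(base).fit(X, y)   # one binary problem per class pair
ova = OneVsRestClassifier(base).fit(X, y)  # one binary problem per class
```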

Optimization model of hospital emergency resource redundancy configuration under emergencies
Zhiyuan WAN, Qinming LIU, Chunming YE, Wenyi LIU
Journal of Computer Applications    2020, 40 (2): 584-588.   DOI: 10.11772/j.issn.1001-9081.2019071235

Before an emergency occurs, hospitals need to maintain a certain redundancy of emergency resources. For the problem of optimizing the configuration of hospital emergency resource redundancy under emergencies, firstly, based on utility theory and an analysis of the utility behavior of hospital emergency resource redundancy, the redundancy was defined and classified, and a utility function obeying the law of diminishing marginal utility was determined. Secondly, a redundancy configuration model maximizing total utility was established, with the upper limit of emergency resource storage and the lower limit of emergency rationality as constraints. Finally, a combination of particle swarm optimization and sequential quadratic programming was used to solve the model. Through case analysis, four optimization schemes for the hospital's emergency resource redundancy were obtained, and the degree to which the hospital emergency level demands resource redundancy was summarized. The research shows that, with the optimization model, emergency rescue in hospitals under emergencies can be carried out effectively and the utilization efficiency of hospital emergency resources can be improved.
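A minimal sketch of the utility-maximization model, using scipy's SLSQP as a stand-in for the paper's particle-swarm plus sequential-quadratic-programming combination; the log utility form, coefficients and limits are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def total_utility(x, a):
    """Concave (diminishing-marginal) utility per resource class; the log
    form stands in for the paper's utility function."""
    return np.sum(a * np.log1p(x))

a = np.array([3.0, 2.0, 1.5, 1.0])     # assumed utility coefficients
upper = np.array([50, 40, 30, 20])     # storage upper limits per resource
budget = 80.0                          # total redundancy that can be held

res = minimize(lambda x: -total_utility(x, a),   # maximize => minimize -U
               x0=np.full(4, 10.0),
               method="SLSQP",
               bounds=[(0.0, u) for u in upper],
               constraints=[{"type": "ineq",
                             "fun": lambda x: budget - x.sum()}])
print(res.x)   # optimal redundancy allocation per resource class
```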

Vision-based gesture recognition method and its implementation on digital signal processor
ZHANG Yi, LIU Yuran, LUO Yuan
Journal of Computer Applications    2014, 34 (3): 833-836.   DOI: 10.11772/j.issn.1001-9081.2014.03.0833

Existing gesture recognition algorithms perform inefficiently on embedded devices because of their high complexity. A shape-feature-based algorithm using mostly fixed-point arithmetic was proposed, which applied a maximum inscribed circle algorithm and a circle cutting algorithm to obtain the features. The method extracted the palm center by finding the largest circle inside the palm, and extracted the fingertips by drawing circles along the hand contour. Gestures were then classified and recognized according to the number of fingers, their orientation and the position of the palm. The algorithm was improved and ported to a Digital Signal Processor (DSP). Experimental results show that the method adapts to different hands of different people and suits DSP implementation well. Compared with other shape-based algorithms, the average recognition rate increases by 1.6% to 8.6%, and processing speed increases by 2%. The proposed method therefore facilitates the implementation of embedded gesture recognition systems and lays a foundation for them.
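The palm-center step (largest circle inside the hand) maps naturally onto a distance transform; a short OpenCV sketch, assuming a binary hand mask is already available:

```python
import cv2
import numpy as np

def palm_center(mask):
    """Find the largest circle inside the hand mask: the distance-transform
    maximum is the palm center and its value the palm radius. The integer
    image operations keep this fixed-point friendly, which suits a DSP port."""
    dist = cv2.distanceTransform(mask.astype(np.uint8), cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)
    return center, radius
```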

Image resampling tampering detection based on further resampling
LIU Yi, LIU Yongben
Journal of Computer Applications    2014, 34 (3): 815-819.   DOI: 10.11772/j.issn.1001-9081.2014.03.0815

Resampling is a typical operation in image forgery, and most existing resampling detection algorithms for JPEG images are neither robust nor efficient at accurately estimating the zoom factor. An image resampling detection algorithm based on further resampling was therefore proposed. First, a JPEG-compressed image was resampled again with a scaling factor less than 1, to reduce the effects of the JPEG compression applied when the file was saved. Then the periodicity of the second derivative of a resampled signal was exploited to detect the resampling operation. Experimental results show that the proposed algorithm is robust to JPEG compression; the true zoom factor can be estimated accurately, which is useful for detecting resampling when a composite image is assembled from original images resampled with different scaling factors.
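A hedged NumPy/SciPy sketch of the detection pipeline: resample again with a factor below 1, then inspect the spectrum of the second derivative for the periodic peaks a resampled signal exhibits; the peak-picking threshold is left out:

```python
import numpy as np
from scipy.ndimage import zoom

def resampling_spectrum(img, pre_scale=0.8):
    """Further-resample with a factor < 1 to weaken JPEG blocking, then
    return the normalized spectrum of the row-wise second derivative;
    strong peaks away from DC suggest a prior resampling operation."""
    small = zoom(img.astype(float), pre_scale, order=1)  # further resampling
    d2 = np.diff(small, n=2, axis=1)                     # 2nd derivative
    spec = np.abs(np.fft.fft(np.mean(np.abs(d2), axis=0)))
    return spec / spec.max()
```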

Electrooculogram assisted electromyography human-machine interface system based on multi-class support vector machine
ZHANG Yi, LIU Rui, LUO Yuan
Journal of Computer Applications    2014, 34 (11): 3357-3360.   DOI: 10.11772/j.issn.1001-9081.2014.11.3353

To address the low recognition rate of Electromyography (EMG) control systems, a new Human-Computer Interaction (HCI) system based on Electrooculogram (EOG)-assisted EMG was designed and implemented. Feature vectors of EOG and EMG were extracted by a threshold method and an improved wavelet transform respectively, and then combined. The fused features were classified by a multi-class Support Vector Machine (SVM), and different control commands were generated according to the recognition results. Experimental results show that, compared with a single-modality EMG control system, the new system offers better operability and stability with a higher recognition rate.
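A minimal sketch of the fusion-then-classify step with scikit-learn; the feature extraction itself is omitted, and SVC handles the multi-class case via one-vs-one internally:

```python
import numpy as np
from sklearn.svm import SVC

def fuse_and_classify(eog_feats, emg_feats, labels):
    """Concatenate the EOG (threshold-based) and EMG (wavelet-based) feature
    vectors per trial and train a multi-class SVM on the fused vectors."""
    X = np.hstack([eog_feats, emg_feats])   # one fused row per trial
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```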

Hand gesture recognition based on bag of features and support vector machine
ZHANG Qiu-yu, WANG Dao-dong, ZHANG Mo-yi, LIU Jing-man
Journal of Computer Applications    2012, 32 (12): 3392-3396.   DOI: 10.3724/SP.J.1087.2012.03392
Because of approximate skin-color information and complex backgrounds, hand gesture segmentation rarely yields a precise gesture contour, which degrades subsequent recognition rates and real-time interaction. Therefore, a gesture recognition method based on BOF-SVM (Bag Of Features-Support Vector Machine) was proposed. First, local invariant features of the gesture images were extracted by the Scale Invariant Feature Transform (SIFT) algorithm. A visual codebook was then generated from the local feature vectors (SIFT descriptors) by K-means clustering, and the visual codes of every image were quantized against the codebook, yielding a fixed-dimensional feature vector for each gesture image that was used to train a multi-class SVM classifier. The method only requires framing the gesture region rather than segmenting the gesture accurately. Experimental results indicate that the average recognition rate of the nine interactive hand gestures reaches 92.1%, and the method is robust, efficient and adaptable to environmental changes.
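The BOF-SVM pipeline can be sketched with OpenCV and scikit-learn; the codebook size and kernel are assumed hyperparameters, every training image is assumed to yield SIFT descriptors, and images with no detectable keypoints get an all-zero histogram:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def bof_histogram(img_gray, codebook):
    """Quantize an image's SIFT descriptors against the visual codebook and
    return a fixed-length visual-word histogram."""
    _, desc = sift.detectAndCompute(img_gray, None)
    if desc is None:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc.astype(np.float32))
    return np.bincount(words, minlength=codebook.n_clusters).astype(float)

def train_bof_svm(train_grays, labels, k=200):
    """Build a K-means codebook over all training descriptors, then train an
    SVM on the per-image histograms (the BOF-SVM scheme described above)."""
    all_desc = np.vstack([sift.detectAndCompute(g, None)[1]
                          for g in train_grays])
    codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)
    X = np.array([bof_histogram(g, codebook) for g in train_grays])
    return codebook, SVC(kernel="rbf").fit(X, labels)
```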
Objective quality evaluation of image fusion based on visual attention mechanism and regional structural similarity
REN Xian-yi, LIU Xiu-jian, HU Tao, ZHANG Ji-hong
Journal of Computer Applications    2011, 31 (11): 3022-3026.   DOI: 10.3724/SP.J.1087.2011.03022
To address the low consistency between objective and subjective evaluations of image fusion, and considering the features of the Human Visual System (HVS), a new metric for evaluating fused image quality based on the Visual Attention Mechanism (VAM) and regional structural similarity was proposed. The metric used the global saliency obtained by VAM together with local salient information to estimate how well the salient information in the source images is represented in the composite image. Since human eyes are more sensitive to regions, higher weights were given to regions with high saliency in the source images, and the quality of the fused image was evaluated by computing the weighted regional structural similarity between the fused image and the source images over all regions. Correlation analysis between the objective measure and subjective evaluation demonstrates that the new metric agrees with human subjective evaluation better than traditional objective measurements and the widely used EFQI.
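A simplified stand-in for the metric using scikit-image's SSIM map, weighting each location's similarity by a per-source saliency map; the block-based region partition of the original metric is approximated here by the sliding-window SSIM map:

```python
import numpy as np
from skimage.metrics import structural_similarity

def saliency_weighted_ssim(fused, src_a, src_b, sal_a, sal_b, win=7):
    """Saliency-weighted structural similarity between a fused image and its
    two sources; saliency maps are assumed to share the images' shape."""
    _, map_a = structural_similarity(src_a, fused, win_size=win, full=True,
                                     data_range=src_a.max() - src_a.min())
    _, map_b = structural_similarity(src_b, fused, win_size=win, full=True,
                                     data_range=src_b.max() - src_b.min())
    w_a, w_b = sal_a + 1e-8, sal_b + 1e-8   # avoid an all-zero weight field
    return float((w_a * map_a + w_b * map_b).sum() / (w_a + w_b).sum())
```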
Rough-set based approach to solve the inference conflict in qualitative probabilistic network
Shuang-xian LIU, Wei-yi LIU, Yue-kun YUE
Journal of Computer Applications   
Qualitative Probabilistic Networks (QPNs) are qualitative abstractions of Bayesian networks that replace conditional probability parameters with qualitative influences on directed edges. Efficient algorithms exist for QPN reasoning, but because of the high level of abstraction, inference may produce unresolved trade-offs (i.e., conflicts). To avoid such conflicts, a rough-set-theory based approach was proposed. While constructing the QPN, the attribute association degrees between node pairs were calculated using rough set theory, and these degrees were adopted as weights to resolve conflicts during QPN inference. The QPN reasoning algorithm was improved accordingly by incorporating the attribute association degrees. With this method, the efficiency of QPN inference is preserved while inference conflicts are effectively resolved.
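The weighted sign-combination idea can be shown in a few lines; the resolution rule below is our reading of "association degrees adopted as weights", not the paper's exact formula:

```python
def combine_influences(signs_and_weights):
    """Resolve a qualitative trade-off: instead of returning the ambiguous
    sign '?', weight each incoming qualitative influence (+1 / -1) by its
    rough-set attribute association degree and take the sign of the sum."""
    total = sum(sign * weight for sign, weight in signs_and_weights)
    if total > 0:
        return "+"
    if total < 0:
        return "-"
    return "?"   # still unresolved when the weighted influences cancel

# e.g. a '+' influence with association 0.8 vs. a '-' with 0.3 resolves to '+'
print(combine_influences([(+1, 0.8), (-1, 0.3)]))
```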
Robust two-layer Adaboost classifier for precise eye location
Yi LIU, Wei-guo GONG, Wei-hong LI
Journal of Computer Applications   
A two-layer classifier for eye detection was proposed: a double-eye layer and a single-eye layer were trained and cascaded into a strong detector. The two-layer classifier was more robust to illumination variation than the YCbCr-space eye-map algorithm, and it kept the same detection rate as a conventionally trained Adaboost eye classifier with a much lower false detection rate. The relationships among the number of stages, the number of training samples and the false detection rate were analyzed to facilitate the training procedure.
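A sketch of the cascade logic with stand-in classifier callables; the Adaboost training itself is not reproduced, and the windows are assumed to be NumPy-style arrays:

```python
def cascade_detect(window, double_eye_clf, single_eye_clf):
    """Two-layer cascade: a double-eye-region layer first rejects most
    non-eye windows cheaply, then a single-eye layer confirms each candidate,
    lowering the false detection rate at the same hit rate."""
    if not double_eye_clf(window):       # layer 1: eye-pair region gate
        return []
    return [w for w in split_into_halves(window) if single_eye_clf(w)]

def split_into_halves(window):
    """Split an eye-pair window into left/right single-eye candidates."""
    h, w = window.shape[:2]
    return [window[:, : w // 2], window[:, w // 2 :]]
```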