Complex query-based question-answering model integrating bidirectional sequence embeddings
Hao LIANG, Shaojie QIAO
Journal of Computer Applications    2026, 46 (4): 1096-1103.   DOI: 10.11772/j.issn.1001-9081.2025040497

Traditional Knowledge Graph (KG) embedding methods mainly focus on link prediction for simple triples, and their “head entity-relation-tail entity” modeling paradigm has significant limitations in handling conjunctive queries that contain multiple unknown variables. To address this issue, a complex query-based question-answering model integrating Bidirectional Sequence Embedding (BSE) was proposed. Firstly, a query encoder was constructed on the basis of a bidirectional Transformer architecture to convert the query structure into a serialized representation. Secondly, positional encoding was utilized to preserve graph structure information. Thirdly, the deep semantic associations among all elements in the query graph were modeled dynamically through the Additive Attention Mechanism (AAM). Finally, global information interaction across nodes was realized, which effectively addressed the shortcomings of traditional methods in modeling long-distance dependencies. Experiments were conducted on different benchmark datasets to verify the performance advantages of the BSE model. The experimental results show that on the WN18RR-PATHS dataset, the BSE model achieves a 53.01% improvement in the Mean Reciprocal Rank (MRR) metric compared with GQE-DistMult-MP; on the EDUKG dataset, the BSE model outperforms GQE-Bilinear with a 6.09% increase in the Area Under the Curve (AUC) metric. In summary, the proposed model can be applied to query-based question-answering in different fields, and has high scalability and application value.
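The MRR metric reported above is the average of the reciprocal rank at which the correct entity appears in each query's ranked answer list. A minimal sketch (the ranks below are illustrative, not from the paper):

```python
def mean_reciprocal_rank(ranks):
    """Mean Reciprocal Rank: average of 1/rank of the first correct answer per query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Ranks of the correct entity for three hypothetical queries.
print(mean_reciprocal_rank([1, 2, 4]))  # (1 + 0.5 + 0.25) / 3 ≈ 0.583
```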

Novel message passing network for neural Boolean satisfiability problem solver
Yonghao LIANG, Jinlong LI
Journal of Computer Applications    2025, 45 (9): 2934-2940.   DOI: 10.11772/j.issn.1001-9081.2024091362

To optimize the structure of the Message Passing Neural Network (MPNN), reduce the number of iterations in the solving process, and improve the performance of end-to-end neural Boolean SATisfiability problem (SAT) solvers, a More and Deeper Message Passing Network (MDMPN) was proposed. In this network, to pass more messages, an overall message passing module was introduced, thereby realizing the transmission of additional overall messages from literal nodes to clause nodes during each message passing iteration. At the same time, to pass deeper messages, a message jumping module was incorporated to realize the transmission of messages from literal nodes to their second-order neighbors. To assess the performance and generalizability of MDMPN, it was applied to the state-of-the-art neural SAT solver QuerySAT and the basic neural SAT solver NeuroSAT. Experimental results on the dataset of difficult random 3-SAT problems show that, on difficult 3-SAT problems with 600 variables and an iteration upper limit of 212, QuerySAT with MDMPN outperforms the standard QuerySAT with an accuracy improvement of 46.12 percentage points, and NeuroSAT with MDMPN outperforms the standard NeuroSAT with an accuracy improvement of 35.69 percentage points.
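The literal-to-clause message passing mentioned above operates on the bipartite literal-clause graph of a CNF formula. A toy sketch of one such step with scalar stand-in embeddings and sum aggregation (real solvers such as NeuroSAT use learned vector messages; the values and aggregation here are illustrative assumptions):

```python
# Toy CNF: (x1 OR NOT x2) AND (x2 OR x3). Literals are keyed as (var, polarity).
clauses = [[(1, True), (2, False)], [(2, True), (3, True)]]

# Scalar "embeddings" for each literal (a real solver uses learned vectors).
literal_msg = {(1, True): 0.5, (2, False): -0.25, (2, True): 0.125, (3, True): 0.75}

def clause_update(clauses, literal_msg):
    """One literal-to-clause message passing step: each clause aggregates (sums)
    the messages of its literals, as in NeuroSAT-style bipartite updates."""
    return [sum(literal_msg[l] for l in clause) for clause in clauses]

print(clause_update(clauses, literal_msg))  # [0.25, 0.875]
```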

Blind period estimation of PN sequence for multipath tamed spread spectrum signal
YANG Qiang, ZHANG Tianqi, ZHAO Liang
Journal of Computer Applications    2017, 37 (7): 1837-1842.   DOI: 10.11772/j.issn.1001-9081.2017.07.1837
To estimate the pseudo code period of a multipath tamed spread spectrum signal, a blind estimation method based on power spectrum reprocessing was proposed. Firstly, the general single-path tamed spread spectrum signal model was extended to a multipath model. Then, the primary power spectrum of the signal was calculated on the basis of the tamed spread spectrum signal model in the multipath environment. Next, the obtained primary power spectrum was used as the input signal to calculate the secondary power spectrum, and theoretical analysis showed that the peak spectral lines of the secondary power spectrum appear at integer multiples of the pseudo code period. Finally, the pseudo code period of the tamed spread spectrum signal could be estimated by detecting the spacing between the peak spectral lines. In comparison experiments with the time-domain correlation method, when the correct rate of pseudo code period estimation was 100%, the Signal-to-Noise Ratio (SNR) of the proposed method was improved by about 1 dB and 2 dB for pseudo code sequence lengths of 127 bits and 255 bits respectively, and the average number of accumulations of the proposed method was smaller under the same conditions. The experimental results show that the proposed method not only has lower computational complexity, but also improves the estimation correct rate.
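The secondary power spectrum idea above can be illustrated on a clean periodic signal: the power spectrum of a period-P signal is a comb, and taking the power spectrum of that comb yields peaks at multiples of P. A minimal sketch, assuming a noiseless random chip sequence as a stand-in for the tamed spread spectrum signal:

```python
import numpy as np

rng = np.random.default_rng(0)
P, R = 32, 16                       # chip-sequence period and number of repeats
chips = rng.choice([-1.0, 1.0], P)  # stand-in PN sequence (not a real tamed-SS signal)
x = np.tile(chips, R)               # periodic baseband signal, N = P * R samples

S1 = np.abs(np.fft.fft(x)) ** 2     # primary power spectrum: comb with spacing N/P
S2 = np.abs(np.fft.fft(S1)) ** 2    # secondary power spectrum: peaks at multiples of P

# The first dominant non-zero peak gives the period estimate.
est = int(np.argmax(S2[1 : len(x) // 2])) + 1
print(est)  # 32
```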
W-POS language model and its selecting and matching algorithms
QIU Yunfei, LIU Shixing, WEI Haichao, SHAO Liangshan
Journal of Computer Applications    2015, 35 (8): 2210-2214.   DOI: 10.11772/j.issn.1001-9081.2015.08.2210

The n-grams language model aims to train classifiers with text features composed of word combinations. However, it contains many redundant words, and a large amount of sparse data is generated when n-grams are matched against or quantified on test data, which badly degrades classification precision and limits its application. Therefore, an improved language model named W-POS (Word-Parts of Speech) was proposed based on the n-grams language model. After word segmentation, parts of speech were used to replace the words that rarely appeared or were redundant, so that the W-POS language model was composed of both words and parts of speech. The selection rules, selecting algorithm and matching algorithm of the W-POS language model were also put forward. The experimental results on the Fudan University Chinese Corpus and 20Newsgroups show that the W-POS language model not only inherits the advantages of n-grams, including reducing the number of features and carrying partial semantics, but also overcomes the shortcomings of producing large amounts of sparse data and containing redundant words. The experiments also verify the effectiveness and feasibility of the selecting and matching algorithms.
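The core W-POS substitution step can be sketched as follows: rare words are replaced by their part-of-speech tags while frequent words are kept. The frequency threshold and tagset below are illustrative assumptions, not the paper's settings:

```python
from collections import Counter

def to_wpos(tagged_tokens, min_count=2):
    """Replace rare words with their part-of-speech tag, keeping frequent words."""
    counts = Counter(w for w, _ in tagged_tokens)
    return [w if counts[w] >= min_count else pos for w, pos in tagged_tokens]

tokens = [("market", "NN"), ("rises", "VB"), ("market", "NN"), ("sharply", "RB")]
print(to_wpos(tokens))  # ['market', 'VB', 'market', 'RB']
```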

Face recognition algorithm based on cluster-sparse of active appearance model
FEI Bowen, LIU Wanjun, SHAO Liangshan, LIU Daqian, SUN Hu
Journal of Computer Applications    2015, 35 (7): 2051-2055.   DOI: 10.11772/j.issn.1001-9081.2015.07.2051

The recognition accuracy of the traditional Sparse Representation Classification (SRC) algorithm is relatively low under the interference of complex non-face components, large training sample sets and high similarity between training samples. To solve these problems, a novel face recognition algorithm based on Cluster-Sparse of Active Appearance Model (CS-AAM) was proposed. Firstly, the Active Appearance Model (AAM) was used to rapidly and accurately locate facial feature points and obtain the main information of the face. Secondly, K-means clustering was run on the training sample set; the images with high similarity were assigned to the same category, and the clustering centers were calculated. Then, the centers were used as atoms to construct an over-complete dictionary for sparse decomposition. Finally, face images were classified and recognized by computing sparse coefficients and reconstruction residuals. Face images with different numbers of samples and different dimensions from the ORL and Extended Yale B face databases were tested to compare CS-AAM with Nearest Neighbor (NN), Support Vector Machine (SVM), Sparse Representation Classification (SRC), and Collaborative Representation Classification (CRC). The recognition rate of the CS-AAM algorithm is higher than those of the other algorithms with the same samples or the same dimensions. Under the same dimensions, the recognition rate of CS-AAM is 95.2% with 210 selected samples on the ORL face database, and 96.8% with 600 selected samples on the Extended Yale B face database. The experimental results demonstrate that the proposed method has a higher recognition accuracy.
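The reconstruction-residual decision rule shared by SRC-style methods can be sketched with per-class least squares: a sample is assigned to the class whose dictionary reconstructs it with the smallest residual. This is a simplified stand-in (the real CS-AAM uses sparse coefficients over cluster-center atoms, and the toy dictionaries below are assumptions):

```python
import numpy as np

def classify_by_residual(dictionaries, y):
    """Assign y to the class whose dictionary reconstructs it with the
    smallest least-squares residual (SRC-style decision rule)."""
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

# Two toy 3-D classes with one atom each; y lies close to class 1's span.
D0 = np.array([[1.0], [0.0], [0.0]])
D1 = np.array([[0.0], [1.0], [1.0]])
y = np.array([0.1, 2.0, 2.0])
print(classify_by_residual([D0, D1], y))  # 1
```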

Feature transfer weighting algorithm based on distribution and term frequency-inverse class frequency
QIU Yunfei, LIU Shixing, LIN Mingming, SHAO Liangshan
Journal of Computer Applications    2015, 35 (6): 1643-1648.   DOI: 10.11772/j.issn.1001-9081.2015.06.1643

Traditional machine learning faces a problem: when the training data and test data no longer obey the same distribution, a classifier trained on the training data cannot classify the test data accurately. To solve this problem, according to the transfer learning principle, the intersection features of the source domain and target domain were weighted by an improved distribution similarity, while semantic similarity and Term Frequency-Inverse Class Frequency (TF-ICF) were used to weight the non-intersection features in the source domain. A large amount of labeled source domain data and a small amount of labeled target domain data were used to quickly obtain the features required for building a text classifier. The experimental results on the text dataset 20Newsgroups and the non-text dataset UCI show that the feature transfer weighting algorithm based on distribution and TF-ICF can transfer and weight features rapidly while guaranteeing precision.
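TF-ICF weights a term by its frequency and the inverse of the number of classes it appears in, analogous to TF-IDF at the class level. A minimal sketch of one common formulation (the exact variant and any smoothing used in the paper are assumptions):

```python
import math

def tf_icf(tf, class_freq, num_classes):
    """TF-ICF weight: term frequency times log of
    (number of classes / number of classes containing the term)."""
    return tf * math.log(num_classes / class_freq)

# A term appearing 5 times, present in 2 of 20 classes:
print(tf_icf(5, 2, 20))  # 5 * ln(10) ≈ 11.51
```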

Flame recognition algorithm based on Codebook in video
SHAO Liangshan, GUO Yachan
Journal of Computer Applications    2015, 35 (5): 1483-1487.   DOI: 10.11772/j.issn.1001-9081.2015.05.1483

To improve the accuracy of flame recognition in video, a flame recognition algorithm based on Codebook was proposed. The algorithm combined the static and dynamic features of flame, innovatively applied the YUV color space in the Codebook background model to detect flame regions, and updated the background regularly. Firstly, the algorithm extracted frames from the video and used the linear relation between the R, G, B components as the color model to obtain the flame color candidate area. Secondly, taking advantage of the YUV color space, the images were transformed from RGB format to YUV format, and the dynamic flame-colored foreground was extracted with background learning and background subtraction using the Codebook background model. Finally, a Back Propagation (BP) neural network was trained with feature vectors such as flame area change rate, flame area overlap rate and flame centroid displacement, and the trained BP neural network was used to judge flame in video. The recognition accuracy of the proposed algorithm in complex video scenes was above 96% for videos with fixed camera position and direction. The experimental results show that compared with three state-of-the-art detection algorithms, the proposed algorithm has higher accuracy and a lower misrecognition rate.
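The flame-color candidate step can be sketched with a common RGB heuristic: flame pixels typically satisfy R ≥ G ≥ B with a bright red channel. The paper's exact linear relation and threshold are not given here, so the rule and values below are illustrative assumptions:

```python
def flame_color_candidate(r, g, b, r_min=150):
    """Common RGB flame-color heuristic: R >= G >= B with a bright red channel.
    Threshold r_min is an illustrative assumption."""
    return r >= g >= b and r >= r_min

print(flame_color_candidate(220, 160, 40))  # True: typical flame pixel
print(flame_color_candidate(60, 120, 200))  # False: sky-like pixel
```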

Consumption sentiment classification based on two-dimensional coordinate mapping method
LIN Mingming, QIU Yunfei, SHAO Liangshan
Journal of Computer Applications    2014, 34 (9): 2571-2576.   DOI: 10.11772/j.issn.1001-9081.2014.09.2571

Aiming at sentiment classification of Chinese consumption comments, a corpus-based two-dimensional coordinate mapping method for sentiment classification was constructed. According to the characteristics of the Chinese language, firstly, a more pertinent corpus-based searching method was proposed. Secondly, rules for extracting Chinese subjective phrases were defined. Thirdly, an algorithm for choosing the optimal seed words of a specific field was constructed. Finally, the two-dimensional coordinate mapping algorithm was constructed, which mapped a comment into two-dimensional Cartesian coordinates by calculating its coordinate values and decided its semantic orientation. Experiments were conducted on 1200 comments on milk (half positive and half negative) from Amazon. In the experiments, the word “henhao-lou” was chosen as the optimal seed word by the optimal seed word choosing algorithm, and the sentiment orientation of each comment was then decided according to the two-dimensional coordinate mapping algorithm. The average F-measure of the proposed algorithm reached more than 85%. The results show that the proposed algorithm can effectively classify the sentiment of Chinese consumption comments.

Microblog bursty topic detection based on topic tree
QIU Yunfei, GUO Milun, SHAO Liangshan
Journal of Computer Applications    2014, 34 (8): 2332-2335.   DOI: 10.11772/j.issn.1001-9081.2014.08.2332

A topic tree detection method based on the Latent Dirichlet Allocation (LDA) model was put forward to solve the problems of nonstandard terms, randomness, uncertainty of reference and the large number of network terms in microblog texts, which traditional detection methods cannot handle. Relevant microblogs were reorganized into a topic tree by increasing information entropy in Natural Language Processing (NLP), combined with the design idea that the Dirichlet prior experience values α and β vary with the topic number; then the contribution statistics of every word in the text were obtained using the dual probability statistical method specific to this model. Thus, interference information was disposed of in advance, and the influence of garbage data on topic detection was excluded. Using this contribution as the parameter value of an improved Vector Space Model (VSM), bursty topics were extracted by calculating the similarity between texts, so as to improve the detection precision of bursty topics. The proposed detection method was evaluated from two aspects: comparison of F values and manual detection. The experimental data show that the algorithm not only can detect bursty topics, but also improves the precision by about 3% and 7% respectively compared with the HowNet model and the TF-IDF (Term Frequency-Inverse Document Frequency) algorithm, and it accords better with human logical judgment than the traditional methods.

Multivariate linear regression forecasting model based on MapReduce
DAI Liang, XU Hongke, CHEN Ting, QIAN Chao, LIANG Dianpeng
Journal of Computer Applications    2014, 34 (7): 1862-1866.   DOI: 10.11772/j.issn.1001-9081.2014.07.1862

Considering that the traditional multivariate linear regression method suffers from long processing time and limited memory, a parallel multivariate linear regression forecasting model based on MapReduce was designed for time-series sample data. The model was composed of three MapReduce processes, which were used respectively to solve the eigenvectors and standard orthogonal vectors of the cross product matrix composed of historical data, to forecast the future parameters of the eigenvalue and eigenvector matrices, and to estimate the regression parameters at the next moment. Experiments were designed and implemented to validate the effectiveness of the proposed parallel multivariate linear regression forecasting model. The experimental results show that the multivariate linear regression prediction model based on MapReduce has good speedup and scaleup, and is suitable for the analysis and forecasting of large data.
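The map/reduce decomposition of linear regression rests on the fact that the normal-equation statistics XᵀX and Xᵀy are sums of per-partition contributions, so mappers can emit partial cross products and a reducer can sum and solve. A minimal single-machine sketch of that pattern (not the paper's three-stage eigendecomposition pipeline; the data below are synthetic):

```python
import numpy as np

def map_stats(X_part, y_part):
    """Map step: each partition emits its partial cross products."""
    return X_part.T @ X_part, X_part.T @ y_part

def reduce_stats(stats):
    """Reduce step: sum partial XtX and Xty, then solve the normal equations."""
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true
# Split rows into "mapper" partitions, then reduce:
parts = [map_stats(X[i : i + 25], y[i : i + 25]) for i in range(0, 100, 25)]
print(reduce_stats(parts))  # ≈ [2.0, -1.0, 0.5]
```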

Optimized data association algorithm based on visual simultaneous localization and mapping
ZHAO Liang, CHEN Min, LI Hongchen
Journal of Computer Applications    2014, 34 (2): 576-579.  
The scale of data association increases as the map grows, which is one of the major reasons for the poor real-time performance of a robot in the process of Simultaneous Localization And Mapping (SLAM). In the visual SLAM system, the SIFT (Scale Invariant Feature Transform) algorithm was used to extract natural landmarks. Two improvements were introduced to improve the real-time performance of data association: firstly, an "interest region" was extracted; secondly, the physical locations of the current landmarks were taken into account. The experimental results indicate that this improved method is reliable, and its capability of reducing computational complexity is obvious.
Establishment and application of consumption sentiment ontology library based on three-dimensional coordinate
QIU Yunfei, LIN Mingming, SHAO Liangshan
Journal of Computer Applications    2013, 33 (09): 2540-2545.   DOI: 10.11772/j.issn.1001-9081.2013.09.2540
Since positive comments may contain comments that are not truly satisfied, a method which can truly reflect the sentiment of consumers was constructed in order to remove such not-truly-satisfied comments from the positive comments. Research oriented to consumption sentiment shows that the consumption sentiment vocabulary should be extracted first. According to the consumption sentiment features, consumption sentiment was classified into seven classes and twenty-five subclasses, and a three-dimensional coordinate model was established. Afterwards, Protégé was used to build a consumption sentiment ontology library so that the consumption sentiment could be automatically classified by the three-dimensional coordinate vocabulary classification algorithm. Moreover, the consumption sentiment judging algorithm was applied to automatically judge consumer comments based on the completed library. Finally, compared with the positive comment ratio of Taobao, the F-measure reached more than 95%. The method can eliminate not-truly-satisfied comments from positive comments and reflect consumers' real emotions.
Method of no-reference quality assessment for blurred infrared image
DU Shaobo, ZHANG Chong, WANG Chao, LIANG Xiaobin, SUN Shibao
Journal of Computer Applications    2013, 33 (08): 2306-2309.  
Image quality assessment aims to give a reasonable evaluation of the quality of image processing algorithms, and No-Reference (NR) quality evaluation methods are applied in many situations where the original reference image is unavailable. Structural analysis of infrared images shows that the uncertainty of the image is fuzzy rather than random. Therefore, the concept of fuzzy entropy was introduced into the quality assessment of infrared images, and a no-reference quality assessment method for blurred infrared images was proposed. Comparisons and analysis of the algorithm's performance were given from the following aspects: efficiency, consistency and accuracy. The simulation results show that this method has the characteristics of low computational complexity, fast operation speed and consistency between subjective and objective evaluations, and its overall performance is better than assessment based on Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
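Fuzzy entropy measures how far pixel memberships are from crisp 0/1 values; blurred images, with many mid-gray pixels, score higher than sharp ones. A sketch of one common formulation (the paper's exact membership function is not specified, so the normalization below is an assumption):

```python
import numpy as np

def fuzzy_entropy(img):
    """Fuzzy entropy of a grayscale image: memberships mu in (0,1) from
    normalized intensities, Shannon-style fuzziness averaged per pixel."""
    mu = np.clip(img.astype(float) / 255.0, 1e-12, 1 - 1e-12)
    s = -mu * np.log2(mu) - (1 - mu) * np.log2(1 - mu)
    return float(s.mean())

sharp = np.array([[0, 255], [255, 0]])        # crisp pixels -> low fuzziness
blurred = np.array([[128, 127], [128, 127]])  # mid-gray pixels -> high fuzziness
print(fuzzy_entropy(sharp) < fuzzy_entropy(blurred))  # True
```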
Routing algorithm for wireless sensor networks based on improved method of cluster heads selection
YAO Guangshun, WEN Weiming, ZHANG Yongding, DONG Zaixiu, ZHAO Liang
Journal of Computer Applications    2013, 33 (04): 908-911.   DOI: 10.3724/SP.J.1087.2013.00908
To alleviate the energy hole in wireless sensor networks caused by the energy overconsumption of cluster heads, an improved algorithm was put forward, which improved both the selection and the replacement of cluster heads. During cluster head selection, the algorithm divided the network into unequal clusters and selected the nodes with the most residual energy as cluster heads, and the cluster heads recorded the changes of nodes' energy. During cluster head replacement, the cluster heads adopted a local replacement strategy and appointed the node with the most residual energy as the next cluster head. In this way, the algorithm improved the energy efficiency of cluster heads and balanced the energy consumption among them. Finally, a simulation experiment was carried out, and the experimental results show that the improved algorithm can effectively improve network performance and prolong the network life cycle.
Comparative analysis on three ultra wideband chip design methods
Liang ZHAO, Liang JIN, Zhong-heng JI, Jin-yu CHEN, Shuang-ping LIU
Journal of Computer Applications    2011, 31 (07): 1971-1975.   DOI: 10.3724/SP.J.1087.2011.01971
At present, in terms of carrier mode, the chip design methods of ultra wideband systems mainly involve no-carrier ultra wideband, single-carrier ultra wideband and multi-carrier ultra wideband. Although the three chip design methods are relatively mature, some technical difficulties still exist, and none of them has achieved absolute advantage or extensive application. Through research on the related technologies and chip design examples, a comparative analysis was made of the three ultra wideband chip design methods in terms of system complexity, peak-to-average power ratio, overall system power consumption, resistance to frequency selective fading, carrier synchronization, symbol synchronization, and spreading gain. The conclusions may provide useful reference for the selection of ultra wideband chip design methods in different application scenarios.
WH-CoT: 6W2H-based chain-of-thought prompting framework on large language models
Mengke CHEN, Yun BIAN, Yunhao LIANG, Haiquan WANG
Journal of Computer Applications    0, (): 1-6.   DOI: 10.11772/j.issn.1001-9081.2024050667

Concerning the limitations of Chain-of-Thought (CoT) prompting technology, such as insufficient integration of human strategies and poor performance on small-scale Large Language Models (LLMs), a CoT prompting framework based on the 6W2H (Why, What, Which, When, Where, Who, How, How much) problem decomposition strategy, WH-CoT (6W2H Chain-of-Thought), was proposed. Firstly, the task dataset was clustered, sampled and divided into training and test datasets by using the Sentence-BERT model. Then, in the training dataset, all samples were subjected to element extraction, problem decomposition, answer paragraph construction, and answer generation to form the CoT, thereby constructing a task-specific corpus. Finally, during the reasoning stage, demonstration samples were extracted adaptively from the corpus and added to the prompts, allowing the model to combine the prompts to generate answers to test questions. For the Qwen-turbo model on the arithmetic reasoning task, the average accuracy of WH-CoT is improved by 3.35 and 4.27 percentage points respectively compared with those of the mainstream Zero-Shot-CoT and Manual-CoT; on the multi-hop reasoning task, compared with Zero-Shot-CoT and Manual-CoT, WH-CoT increases the total performance improvement ratio on EM (Exact Match ratio) by 36 and 111 percentage points respectively. In addition, for the Qwen-14B-Chat and Qwen-7B-Chat models, the total performance improvement ratios of WH-CoT are higher than those of Zero-Shot-CoT and Manual-CoT on both EM and F1. It can be seen that by further integrating human strategies with machine intelligence, WH-CoT can effectively improve the reasoning performance of LLMs of different sizes on both arithmetic reasoning and multi-hop reasoning tasks.
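The EM metric used above scores the fraction of model answers that exactly match the reference. A minimal sketch; the normalization (strip and lowercase) is an illustrative choice, not necessarily the paper's:

```python
def exact_match(predictions, references):
    """Exact Match (EM): fraction of predictions identical to the reference
    after simple normalization (strip whitespace, lowercase)."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match(["Paris", " rome "], ["paris", "Berlin"]))  # 0.5
```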
