Shadow detection method based on hybrid attention model
TAN Daoqiang, ZENG Cheng, QIAO Jinxia, ZHANG Jun
Journal of Computer Applications    2021, 41 (7): 2076-2081.   DOI: 10.11772/j.issn.1001-9081.2020081308
Shadow regions in an image introduce uncertainty about the image content and hinder other computer vision tasks, so shadow detection is often performed as a pre-processing step for computer vision algorithms. However, most existing shadow detection algorithms use a multi-level network structure, which makes model training difficult; and although some algorithms with a single-layer network structure have been proposed, they focus only on local shadows and ignore the relations between shadows. To solve this problem, a shadow detection algorithm based on a hybrid attention model was proposed to improve the accuracy and robustness of shadow detection. Firstly, the pre-trained deep network ResNeXt101 was used as the front-end feature extraction network to extract the basic features of the image. Secondly, a bidirectional pyramid structure was used for feature fusion from shallow to deep and from deep to shallow, and an information compensation mechanism was proposed to reduce the loss of deep semantic information. Thirdly, a hybrid attention model combining spatial attention and channel attention was proposed for feature fusion, so as to capture the differences between shadow and non-shadow regions. Finally, the prediction results of the two directions were merged to obtain the final shadow detection result. Comparison experiments were conducted on the public datasets SBU and UCF. The results show that, compared with the DSC (Direction-aware Spatial Context) algorithm, the Balance Error Rate (BER) of the proposed algorithm is reduced by 30% and 11% respectively, demonstrating that the proposed method better suppresses false shadow detections and preserves shadow details.
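As a rough illustration of how channel attention and spatial attention can be combined on a feature map, here is a minimal NumPy sketch. The function name `hybrid_attention` and the pooling-plus-sigmoid weighting are illustrative assumptions; the paper's actual module involves learned convolutional parameters, which are omitted here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_attention(feat):
    """Apply channel attention followed by spatial attention to a
    C x H x W feature map (simplified, parameter-free sketch)."""
    c, h, w = feat.shape
    # Channel attention: per-channel global average pooling -> weights in (0, 1).
    chan_desc = feat.reshape(c, -1).mean(axis=1)      # (C,)
    chan_w = sigmoid(chan_desc)
    feat = feat * chan_w[:, None, None]
    # Spatial attention: cross-channel mean -> per-pixel weights in (0, 1).
    spat_desc = feat.mean(axis=0)                     # (H, W)
    spat_w = sigmoid(spat_desc)
    return feat * spat_w[None, :, :]
```

Because both attention maps are sigmoid-weighted, the output re-scales each feature value without changing the map's shape.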
Patent text classification based on ALBERT and bidirectional gated recurrent unit
WEN Chaodong, ZENG Cheng, REN Junwei, ZHANG Yan
Journal of Computer Applications    2021, 41 (2): 407-412.   DOI: 10.11772/j.issn.1001-9081.2020050730
With the rapid increase in the number of patent applications, the demand for automatic classification of patent text is growing. Most existing patent text classification algorithms use methods such as Word2vec and Global Vectors (GloVe) to obtain the word vector representation of the text, discarding much of the word position information and failing to express the complete semantics of the text. To solve these problems, a multi-level patent text classification model named ALBERT-BiGRU was proposed by combining ALBERT (A Lite BERT) with a Bidirectional Gated Recurrent Unit (BiGRU). In this model, dynamic word vectors pre-trained by ALBERT were used in place of the static word vectors trained by traditional methods such as Word2vec, improving the representational ability of the word vectors. Then a BiGRU neural network was used for training, preserving as far as possible the semantic associations between long-distance words in the patent text. In experiments on the patent text dataset published by the State Information Center, the accuracy of ALBERT-BiGRU was higher than that of Word2vec-BiGRU and GloVe-BiGRU by 9.1 and 10.9 percentage points respectively at the department level of the patent texts, and by 9.5 and 11.2 percentage points respectively at the big-class level. The experimental results show that ALBERT-BiGRU can effectively improve the classification of patent texts at different levels.
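The BiGRU stage described above can be sketched in NumPy: a forward and a backward GRU pass over the sequence of word vectors, with the two final states concatenated into a document representation. The names `gru_cell` and `bigru_encode` are hypothetical, and a real system would use a deep-learning framework with trained weights rather than these random matrices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Wr, Wh):
    """One GRU step; each weight matrix acts on the concatenation [h, x]."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)                               # update gate
    r = sigmoid(Wr @ hx)                               # reset gate
    h_cand = np.tanh(Wh @ np.concatenate([r * h, x]))  # candidate state
    return (1.0 - z) * h + z * h_cand

def bigru_encode(seq, params_fwd, params_bwd, hidden):
    """Run a forward and a backward GRU over a sequence of word vectors
    and concatenate the two final hidden states."""
    hf = np.zeros(hidden)
    hb = np.zeros(hidden)
    for x in seq:
        hf = gru_cell(x, hf, *params_fwd)
    for x in reversed(seq):
        hb = gru_cell(x, hb, *params_bwd)
    return np.concatenate([hf, hb])
```

The concatenated vector would then be fed to a softmax classifier over the patent categories.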
Housing recommendation method based on user network embedding
LIU Tong, ZENG Cheng, HE Peng
Journal of Computer Applications    2019, 39 (11): 3398-3402.   DOI: 10.11772/j.issn.1001-9081.2019040721
With the rapid development of the hotel industry, online hotel reservation systems have become popular. How to let users quickly find the housing they need among massive housing information is the problem a reservation system has to solve. To address user cold start and data sparsity in housing recommendation, a User Network Embedding Recommendation (UNER) method based on network embedding was proposed. Firstly, two kinds of user networks were constructed from the users' historical behavior data and tag information in the system. Then the networks were mapped into a low-dimensional vector space by network embedding to obtain vector representations of the user nodes, and a user similarity matrix was computed from the user vectors. Finally, housing recommendations were made for each user according to this matrix. The experimental data come from the hotel reservation system of "Shuidongxiangshe" in Guizhou. The experimental results show that, compared with the user-based collaborative filtering algorithm, the proposed method improves the comprehensive evaluation index (F1) by 20 percentage points and the Mean Average Precision (MAP) by 11 percentage points, reflecting its superiority.
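The last two steps, computing a user similarity matrix from embedding vectors and recommending housings liked by similar users, can be sketched as follows. The embedding matrix `U` is assumed to come from a prior network-embedding step (e.g. a DeepWalk-style method), and the similarity-weighted voting scheme is one plausible reading of "recommendation according to the matrix", not the paper's exact formula.

```python
import numpy as np

def cosine_sim_matrix(U):
    """Pairwise cosine similarity between user embedding vectors
    (one row per user)."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    Un = U / np.clip(norms, 1e-12, None)
    return Un @ Un.T

def recommend(user, sims, history, k=3):
    """Score each housing by similarity-weighted votes from other users'
    booking histories (rows of 0/1), then return top-k unseen housings."""
    w = sims[user].astype(float).copy()
    w[user] = 0.0                        # exclude the user themself
    scores = w @ history                 # one score per housing
    scores[history[user] > 0] = -np.inf  # drop already-booked housings
    return np.argsort(-scores)[:k]
```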
Semi-supervised adaptive multi-view embedding method for feature dimension reduction
SUN Shengzi, WAN Yuan, ZENG Cheng
Journal of Computer Applications    2018, 38 (12): 3391-3398.   DOI: 10.11772/j.issn.1001-9081.2018051050
Most semi-supervised multi-view feature dimension reduction methods do not take into account the differences in feature projections among views, and cannot avoid the effects of noise and other unrelated features because they place no sparsity constraint on the low-dimensional matrix obtained after dimension reduction. To solve these two problems, a Semi-Supervised Adaptive Multi-View Embedding method for feature dimension reduction (SS-AMVE) was proposed. Firstly, the projection was extended from a single embedding matrix shared across views to a separate matrix per view, and a global structure preservation term was introduced. Then the unlabeled data were embedded and projected by an unsupervised method, while the labeled data were linearly projected using the class discrimination information. Finally, the two kinds of projections were mapped into a unified low-dimensional space, and a combined weight matrix was used to preserve the global structure, largely eliminating the effects of noise and unrelated factors. The experimental results show that the clustering accuracy of the proposed method is improved by about 9% on average, and that the method better preserves the correlation of features between views and captures more features with discriminative information.
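The structural idea of "one projection matrix per view, merged into a unified low-dimensional space with per-view weights" can be shown in a toy sketch. Everything here is an assumption for illustration: the projection matrices and weights would in practice be learned by the paper's objective (structure preservation, label terms, sparsity), which is omitted.

```python
import numpy as np

def fuse_views(views, projections, weights):
    """Project each view (n x d_v matrix) with its own projection matrix
    (d_v x d_out) and merge the results with one weight per view."""
    assert len(views) == len(projections) == len(weights)
    fused = sum(w * (X @ P) for X, P, w in zip(views, projections, weights))
    return fused / sum(weights)
```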
Eyeball control accuracy improvement method based on digital image processing
YAN Desai, ZENG Cheng
Journal of Computer Applications    2018, 38 (10): 3013-3016.   DOI: 10.11772/j.issn.1001-9081.2018040778
To improve the accuracy of eyeball control of a screen and enable high-accuracy operation of mobile phones or computers, an eyeball control accuracy improvement method based on digital image processing was proposed. It relies on two observations: the point of gaze on the screen and its image point on the retina determine a line through the center of the pupil, and the luminous contour of the screen reflects on the eyeball as a rectangular outline. The position of the pupil center relative to this rectangular contour therefore gives the specific position of the gaze point on the screen. Real-time video of the eyeball was captured with a high-definition camera, and digital image processing was applied to each frame to obtain the screen coordinates of the gaze point. The coordinates calculated for each frame were output to the mouse cursor so that it tracks the eye's focus, and the position was sent via wireless technology to a controlled device with a screen to achieve eyeball control. Simulation results show that the average accuracy of eye control with the proposed mapping method reaches 0.7 degrees.
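The core mapping, from the pupil center's position inside the rectangular screen reflection to screen pixel coordinates, can be illustrated with a simple linear interpolation. This is a simplified sketch under the assumption of an axis-aligned reflection given by its top-left and bottom-right corners; the paper extracts these points per video frame with image processing.

```python
def gaze_to_screen(pupil, rect_tl, rect_br, screen_w, screen_h):
    """Map the pupil centre, expressed in image coordinates relative to
    the rectangular screen reflection on the cornea, linearly onto
    screen pixel coordinates."""
    px, py = pupil
    x0, y0 = rect_tl
    x1, y1 = rect_br
    u = (px - x0) / (x1 - x0)   # normalised horizontal position in [0, 1]
    v = (py - y0) / (y1 - y0)   # normalised vertical position in [0, 1]
    return u * screen_w, v * screen_h
```

For example, a pupil centre exactly in the middle of the reflected rectangle maps to the centre of the screen.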
Selection of training data for cross-project defect prediction
WANG Xing, HE Peng, CHEN Dan, ZENG Cheng
Journal of Computer Applications    2016, 36 (11): 3165-3169.   DOI: 10.11772/j.issn.1001-9081.2016.11.3165
Cross-Project Defect Prediction (CPDP), which uses data from other projects to predict defects in a target project, provides a new perspective on the shortage of training data encountered in traditional defect prediction. Because the quality of the cross-project training data directly affects prediction performance, data more similar to the target project should be given priority. To analyze the impact of different similarity measures on the selection of training data for CPDP, experiments were performed on 34 datasets from the PROMISE repository. The results show that the quality of training data selected by different similarity measures varies, and that cosine similarity and the correlation coefficient achieve better performance overall, with an improvement rate of up to 6.7%. Considering the defect rate of the target project, cosine similarity appears more suitable when the defect rate is above 0.25.
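A minimal sketch of similarity-based training data selection: rank candidate source projects by cosine similarity between project-level characterizations. Summarizing each project by the mean of its module metric vectors is one simple choice made here for illustration; the paper compares several similarity measures and may characterize projects differently.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_sources(target, candidates):
    """Rank candidate source projects (each an n_modules x n_metrics
    array) by cosine similarity of their mean metric vector to the
    target project's, most similar first."""
    t = target.mean(axis=0)
    scored = [(cosine(c.mean(axis=0), t), i) for i, c in enumerate(candidates)]
    return [i for _, i in sorted(scored, reverse=True)]
```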
Improvement of WordNet application programming interface and its application in Mashup services discovery
ZENG Cheng, TANG Yong, ZHU Zilong, LI Bing
Journal of Computer Applications    2015, 35 (11): 3182-3186.   DOI: 10.11772/j.issn.1001-9081.2015.11.3182
The traditional WordNet Application Programming Interface (API) is based on file operations, so every API call into the WordNet library incurs serious time costs during text analysis and similarity calculation. Therefore, an improved WordNet API was proposed. The semantic network of WordNet concepts was constructed in computer memory, and several APIs convenient for similarity calculation were added. These improvements accelerate the tracking of relations between concepts and the calculation of text similarity. The solution was applied to the process of Mashup service discovery. The experimental results show that the improved API effectively improves the query efficiency and the recall of Mashup service discovery.
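The idea of keeping the concept network in memory so that relation tracking and similarity become cheap graph operations can be sketched as follows. The tiny `HYPERNYMS` dictionary is a stand-in assumption: real code would load the full WordNet files into such a structure once at start-up, and the path-based similarity shown is one standard formulation, not necessarily the exact measure the improved API exposes.

```python
from collections import deque

# Toy in-memory hypernym graph (word -> list of hypernyms).
HYPERNYMS = {
    "dog": ["canine"], "cat": ["feline"],
    "canine": ["carnivore"], "feline": ["carnivore"],
    "carnivore": ["animal"], "animal": [],
}

def depth_to(ancestor, word):
    """Shortest hypernym-path length from word up to ancestor,
    or None if unreachable."""
    seen, q = {word}, deque([(word, 0)])
    while q:
        w, d = q.popleft()
        if w == ancestor:
            return d
        for h in HYPERNYMS.get(w, []):
            if h not in seen:
                seen.add(h)
                q.append((h, d + 1))
    return None

def path_similarity(a, b):
    """1 / (1 + shortest path between a and b through a common ancestor)."""
    best = None
    for anc in HYPERNYMS:
        da, db = depth_to(anc, a), depth_to(anc, b)
        if da is not None and db is not None:
            d = da + db
            best = d if best is None else min(best, d)
    return 0.0 if best is None else 1.0 / (1.0 + best)
```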
Improvement of Boyer-Moore string matching algorithm
HAN Guanghui, ZENG Cheng
Journal of Computer Applications    2014, 34 (3): 865-868.   DOI: 10.11772/j.issn.1001-9081.2014.03.0865

A new variant of the Boyer-Moore (BM) algorithm was proposed on the basis of an analysis of the BM algorithm. The basic idea of the improvement is to build the match heuristic (i.e. the good-suffix rule) in the preprocessing phase for the expanded pattern Pa, where P is the pattern and a is an arbitrary character of the alphabet. This both increases the length of the matched suffix and subsumes Sunday's occurrence heuristic (i.e. the bad-character rule), so a larger shift distance of the scanning window is obtained. Theoretical analysis shows that the improved algorithm has linear time complexity even in the worst case, sublinear behavior in the average case, and space complexity of O(m(σ+1)). The experimental results also show that its practical performance is significantly improved, especially in the case of a small alphabet.

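For context, here is the Sunday occurrence heuristic that the proposed Pa good-suffix extension subsumes, as a plain Python search routine. This is the baseline technique, not the paper's extended algorithm; the good-suffix table for Pa itself is not reproduced.

```python
def sunday_search(text, pat):
    """Sunday's quick-search variant of BM: after each window test,
    shift by the occurrence heuristic of the character just past the
    window. Returns all match positions."""
    m, n = len(pat), len(text)
    # Shift for each character = distance from its rightmost occurrence
    # in the pattern to the position just past the window.
    shift = {c: m - i for i, c in enumerate(pat)}
    out, i = [], 0
    while i + m <= n:
        if text[i:i + m] == pat:
            out.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)  # m + 1 if char not in pattern
    return out
```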
Research on function shift in Boyer-Moore algorithm
HAN Guanghui, ZENG Cheng
Journal of Computer Applications    2013, 33 (08): 2379-2382.  
For research on and improvement of the Boyer-Moore (BM) algorithm and its variants, it is necessary to establish a rigorous formal theory of the shift function in BM and of its construction algorithms. A precise formal definition of shift was given. Then characteristic sets of the pattern suffixes and their minimum-value function were introduced, and the construction of shift was described through these characteristic sets, thereby strictly establishing the theoretical basis of shift and of its construction algorithms. Finally, a new construction algorithm for shift was presented, based on the construction theorem of shift and an iterative method for computing the minimum-value function. The algorithm is proved to have linear time and space complexity. Theoretical analysis and computational results show that the algorithm is simpler and has lower computational complexity than several existing algorithms, making it more suitable for hardware implementation.
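To make the object of study concrete, the classic (strong) good-suffix shift table can be computed directly from its definition. This quadratic version is written for transparency against the formal definition, not for speed; linear-time constructions such as the one the paper proposes compute the same table.

```python
def good_suffix_shift(pat):
    """Strong good-suffix shift table, by direct definition.
    shift[i] = smallest s > 0 such that the suffix pat[i+1:] reoccurs
    s positions to the left (where it fits), and the character before
    that reoccurrence differs from pat[i] (where it exists)."""
    m = len(pat)
    shift = [0] * m
    for i in range(m):                 # mismatch position i
        for s in range(1, m + 1):      # candidate shift
            ok = True
            for j in range(i + 1, m):  # matched suffix must recur
                if j - s >= 0 and pat[j - s] != pat[j]:
                    ok = False
                    break
            # Strong rule: the realigned character must differ from pat[i].
            if ok and i - s >= 0 and pat[i - s] == pat[i]:
                ok = False
            if ok:
                shift[i] = s
                break
    return shift
```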
Multi-source automatic annotation for deep Web
CUI Xiao-Jun, PENG Zhi-Yong, ZENG Cheng
Journal of Computer Applications   
A large number of data sources on the World Wide Web are hidden behind form-like interfaces, which interact with a hidden back-end database to answer users' queries. The results returned by such Web databases seldom carry proper annotations, so meaningful labels need to be assigned to them. A framework of multi-source automatic annotation was proposed that uses multiple annotators to annotate the results from different aspects; in particular, a search-engine-based annotator constructs validation queries and posts them to a search engine, then finds the most appropriate terms to annotate the data units by calculating the similarities between terms and instances. The information needed for annotation can be acquired automatically, without the support of a domain ontology. Experiments on four real-world domains indicate that the proposed approach is highly effective.
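The term-selection step, choosing the candidate label most similar to a data unit's instances, can be sketched with a simple similarity. Jaccard overlap of word sets is used here purely as a stand-in assumption; the paper computes term-instance similarities from search-engine query results, which are not reproduced.

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_label(candidates, instances):
    """Return the candidate term with the highest average similarity
    to the data unit's instance values."""
    def score(term):
        return sum(jaccard(term, ins) for ins in instances) / len(instances)
    return max(candidates, key=score)
```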