Multi-focus image fusion network with cascade fusion and enhanced reconstruction
Benchen YANG, Haoran LI, Haibo JIN
Journal of Computer Applications    2025, 45 (2): 594-600.   DOI: 10.11772/j.issn.1001-9081.2024030302

Aiming at the problem of partially focused images caused by improper focusing of near and far fields during digital image capture, a multi-focus image fusion Network with Cascade fusion and enhanced reconstruction (CasNet) was proposed. Firstly, a cascade sampling module was constructed to calculate and merge the residuals of feature maps sampled at different depths, so as to make efficient use of focused features at different scales. Secondly, a lightweight multi-head self-attention mechanism was improved to perform dimensional residual calculation on feature maps, enhancing image features and giving the feature maps a better distribution across dimensions. Thirdly, stacked convolutional channel attention was used to complete feature reconstruction. Finally, interval convolution was used for up- and down-sampling, so as to retain more original image features. Experimental results demonstrate that CasNet achieves better results in metrics such as Average Gradient (AG) and Gray-Level Difference (GLD) on the multi-focus benchmark sets Lytro, MFFW, grayscale, and MFI-WHU than popular methods such as SESF-Fuse (Spatially Enhanced Spatial Frequency-based Fusion) and U2Fusion (Unified Unsupervised Fusion network).
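The cascade sampling idea above can be sketched numerically. The following is a minimal, assumption-heavy reading: the function names and the residual-merging rule are illustrative, not taken from the paper; only the use of strided ("interval") sampling follows the abstract.

```python
import numpy as np

def interval_down(x):
    """Interval (strided) down-sampling: keep every other pixel, no interpolation."""
    return x[::2, ::2]

def interval_up(x, shape):
    """Interval up-sampling: scatter the kept pixels back onto a zero grid."""
    out = np.zeros(shape, dtype=x.dtype)
    out[::2, ::2] = x
    return out

def cascade_residuals(feat, depth=3):
    """Hypothetical cascade sampling module: at each depth the feature map is
    down- then up-sampled, and the residual (detail lost by sampling) is
    projected back to full resolution and merged into the output."""
    merged = feat.astype(float).copy()
    x = feat.astype(float)
    for _ in range(depth):
        down = interval_down(x)
        residual = x - interval_up(down, x.shape)
        r = residual
        while r.shape != feat.shape:           # project back to full resolution
            r = interval_up(r, (r.shape[0] * 2, r.shape[1] * 2))
        merged += r
        x = down
    return merged
```

The sketch assumes power-of-two feature-map sizes so that strided sampling nests cleanly.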

Relay computation and dynamic diversion of computing-intensive large flow data
LIAO Jia, CHEN Yang, BAO Qiulan, LIAO Xuehua, ZHU Zhousen
Journal of Computer Applications    2021, 41 (9): 2646-2651.   DOI: 10.11772/j.issn.1001-9081.2020111725
To address problems such as the slow computation of large-flow data and the high computational pressure on the server, a relay computation and dynamic diversion model for computing-intensive large-flow data was proposed. Firstly, in a distributed environment, in-memory data storage technology was used to determine the computation amount and complexity level of each task. At the same time, the nodes were sorted by resource capacity, and tasks were dynamically allocated to different nodes for parallel computing. Meanwhile, computation tasks were decomposed in a relay processing mode, so as to guarantee the performance and accuracy requirements of complex high-flow computing tasks. Analysis and comparison show that multiple nodes run faster than a single node once the data volume exceeds the 10 000 level. When the model is applied in practice, it not only reduces running time in high-concurrency scenarios but also saves computing resources.
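The abstract does not give the exact allocation rule, but the capacity-aware dynamic allocation step can be sketched as a greedy scheduler: sort tasks by computation amount and always place the next task on the node with the lowest load relative to its capacity. All names here are illustrative assumptions.

```python
import heapq

def dynamic_diversion(tasks, node_capacities):
    """Greedy sketch of dynamic diversion: largest tasks first, each assigned
    to the node whose relative load (load / capacity) is currently lowest."""
    heap = [(0.0, nid) for nid in range(len(node_capacities))]
    heapq.heapify(heap)                       # (relative_load, node_id)
    assignment = {nid: [] for nid in range(len(node_capacities))}
    for amount in sorted(tasks, reverse=True):
        load, nid = heapq.heappop(heap)
        assignment[nid].append(amount)
        # a node with larger capacity absorbs more work before its load rises
        heapq.heappush(heap, (load + amount / node_capacities[nid], nid))
    return assignment
```

For example, with tasks `[5, 3, 8, 1]` and capacities `[2, 1]`, the faster node receives the two largest tasks.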
Incidence trend prediction of hand-foot-mouth disease based on long short-term memory neural network
MA Tingting, JI Tianjiao, YANG Guanyu, CHEN Yang, XU Wenbo, LIU Hongtu
Journal of Computer Applications    2021, 41 (1): 265-269.   DOI: 10.11772/j.issn.1001-9081.2020060936
In order to solve the problems of traditional Hand-Foot-Mouth Disease (HFMD) incidence trend prediction algorithms, such as low prediction accuracy, failure to incorporate other influencing factors and short prediction horizons, a long-term prediction method using meteorological factors and a Long Short-Term Memory (LSTM) network was proposed. First, a sliding window was used to convert the incidence sequence into inputs and outputs for the network. Then, the LSTM network was used for data modeling and prediction, with iterative prediction applied to obtain long-term results. Finally, temperature and humidity variables were added to the network to compare their impact on the prediction results. Experimental results show that adding meteorological factors improves the prediction accuracy of the model. The proposed model has a Mean Absolute Error (MAE) of 74.9 on the Jinan dataset and 427.7 on the Guangzhou dataset. Compared with the commonly used Seasonal Autoregressive Integrated Moving Average (SARIMA) model and Support Vector Regression (SVR) model, the proposed model achieves higher prediction accuracy, which proves that it is an effective method for predicting the incidence trend of HFMD.
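The sliding-window conversion and iterative long-term prediction described above can be sketched as follows; the LSTM itself is abstracted behind `model` (any callable mapping a window to the next value), since the network details are not given in the abstract.

```python
import numpy as np

def make_windows(series, width):
    """Slide a window over the incidence series: each window of `width`
    past values is an input, the next value is the target."""
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

def iterative_forecast(model, history, width, steps):
    """Long-term prediction by iteration: each prediction is appended to the
    input window used for the next step."""
    window = list(history[-width:])
    preds = []
    for _ in range(steps):
        nxt = model(np.array(window))
        preds.append(nxt)
        window = window[1:] + [nxt]
    return preds
```

With a toy "model" that extrapolates the last value plus one, a linearly rising series keeps rising step by step.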
Prediction of indoor thermal comfort level of high-speed railway station based on deep forest
CHEN Yanru, ZHANG Tujingwa, DU Qian, RAN Maoliang, WANG Hongjun
Journal of Computer Applications    2021, 41 (1): 258-264.   DOI: 10.11772/j.issn.1001-9081.2020060888
Since semi-closed, semi-open spaces such as high-speed railway stations make the indoor thermal comfort level difficult to predict, a Deep Forest (DF)-based deep learning method was proposed to predict it scientifically. Firstly, the heat exchange environment of a high-speed railway station was modeled based on a field survey and the EnergyPlus platform. Secondly, 8 influence factors, such as passenger density, the operating number of multi-evaporator air conditioners and their set temperatures, were presented, and 424 operating conditions were designed to obtain massive data. Finally, DF was used to learn the relationship between thermal comfort and the influence factors in order to predict the indoor thermal comfort level of the station. Deep Neural Network (DNN) and Support Vector Machine (SVM) were used as comparison algorithms for verification. Experimental results show that, among the three models, DF performs best in terms of prediction accuracy and weighted-F1, with prediction accuracy ranging from 98.11% to 99.76%. Therefore, DF can effectively predict the indoor thermal comfort level of high-speed railway stations.
Routing protocol optimized for data transmission delay in wireless sensor networks
REN Xiuli, CHEN Yang
Journal of Computer Applications    2020, 40 (1): 196-201.   DOI: 10.11772/j.issn.1001-9081.2019060987
Concerning the serious packet loss and high end-to-end delay in wireless sensor networks, a Routing Protocol Optimized for Data Transmission Delay (RPODTD) was proposed. Firstly, according to the data transmission results, channel detection conditions were classified, and the effective detection ratio and transmission efficiency were introduced as evaluation indexes for nodes. Then, the queuing delay of a data packet was estimated as the difference between actual delay and theoretical delay. Finally, maximum and minimum queuing delay thresholds were given, and whether to change the transmission path was judged by the interval the queuing delay falls in. In simulation experiments on OMNeT++, compared with the link quality and delay based Composite Load Balancing routing protocol (ComLoB) and the Congestion Avoidance multipath routing protocol based on the Routing Protocol for Low-power and lossy networks (CA-RPL), RPODTD reduced the average end-to-end delay by 78.87% and 51.81%, the packet loss rate by 40.71% and 68.43%, and the node mortality rate by 25.42% and 44.62%, respectively. The simulation results show that RPODTD can effectively reduce the end-to-end delay, decrease the packet loss rate and extend the network life cycle.
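The delay-estimation and threshold decision above reduces to a few lines; this is a hedged sketch of the decision rule only, with the threshold semantics and the "monitor" middle band assumed rather than taken from the paper.

```python
def queuing_delay(actual_delay, theoretical_delay):
    """Estimate queuing delay as the gap between the measured end-to-end
    delay and the theoretical (queue-free) delay."""
    return max(0.0, actual_delay - theoretical_delay)

def should_switch_path(q_delay, q_min, q_max):
    """Decide a path change from the interval the queuing delay falls in:
    below q_min keep the path, above q_max switch, in between keep watching
    (q_min and q_max are assumed protocol parameters)."""
    if q_delay <= q_min:
        return "keep"
    if q_delay >= q_max:
        return "switch"
    return "monitor"
```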
Residents' travel origin and destination identification method based on naive Bayes classification
ZHAO Guanghua, LAI Jianhui, CHEN Yanyan, SUN Haodong, ZHANG Ye
Journal of Computer Applications    2020, 40 (1): 36-42.   DOI: 10.11772/j.issn.1001-9081.2019061076
Mobile signaling data is characterized by low positioning accuracy, large time intervals and "ping-pong switching" between cells. To identify residents' travel Origin and Destination (OD) from mobile location data, a method based on Naive Bayesian Classification (NBC) was proposed. Firstly, according to the distance between residence and workplace, one month of travel log data from 80 volunteers was classified statistically, and the conditional probability distributions of the moving and staying states were obtained. Then, feature parameters representing a user's moving and staying states were established, including angular separation and minimum covering circle diameter. Finally, the probabilities of the moving and staying states were calculated according to NBC theory, and sequences of more than two consecutive moving states were clustered into travel OD. Analysis of Xiamen mobile location data indicates that the per-capita travel time obtained by the proposed method has a Mean Absolute Percentage Error (MAPE) of 7.79%, a high precision, and that the identified travel OD reflects real travel patterns well.
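A tiny Gaussian naive Bayes over the two features named in the abstract (angular separation, minimum covering circle diameter) illustrates the moving/staying decision; the per-class Gaussians here stand in for the empirically fitted conditional distributions, and all numbers below are made up.

```python
import math

class MoveStayNBC:
    """Minimal Gaussian naive Bayes classifier for moving vs. staying."""

    def fit(self, X, labels):
        self.stats = {}
        for c in set(labels):
            rows = [x for x, l in zip(X, labels) if l == c]
            feats = [self._mean_var([r[j] for r in rows]) for j in range(len(X[0]))]
            self.stats[c] = (len(rows) / len(X), feats)   # (prior, per-feature stats)
        return self

    @staticmethod
    def _mean_var(vals):
        m = sum(vals) / len(vals)
        v = sum((x - m) ** 2 for x in vals) / len(vals) + 1e-6  # avoid zero variance
        return m, v

    def predict(self, x):
        def log_posterior(c):
            prior, feats = self.stats[c]
            lp = math.log(prior)
            for xi, (m, v) in zip(x, feats):
                lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            return lp
        return max(self.stats, key=log_posterior)
```

Moving records tend to have large angular separation and covering-circle diameter; staying records small ones, so the classifier separates the two.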
Multi-dimensional text clustering with user behavior characteristics
LI Wanying, HUANG Ruizhang, DING Zhiyuan, CHEN Yanping, XU Liyang
Journal of Computer Applications    2018, 38 (11): 3127-3131.   DOI: 10.11772/j.issn.1001-9081.2018041357
Traditional multi-dimensional text clustering generally extracts features from text content but seldom considers the interaction information between users and text data, such as likes, forwards, reviews, concerns and references. Moreover, it mainly combines multiple spatial dimensions linearly and fails to consider the relationships between attributes within each dimension. In order to make effective use of text-related user behavior information, a Multi-dimensional Text Clustering model with User Behavior Characteristics (MTCUBC) was proposed. Following the principle that the similarity between texts should be consistent across different spaces, user behavior information was used as constraints to adjust the similarity obtained from text content clustering, and the distance between texts was then improved by metric learning, thereby improving the clustering effect. Extensive experiments verify that the proposed MTCUBC model is effective and show obvious advantages over linearly combined multi-dimensional clustering on high-dimensional sparse data.
Multi-source text topic mining model based on Dirichlet multinomial allocation model
XU Liyang, HUANG Ruizhang, CHEN Yanping, QIAN Zhisen, LI Wanying
Journal of Computer Applications    2018, 38 (11): 3094-3099.   DOI: 10.11772/j.issn.1001-9081.2018041359
With the rapid increase of text data sources, topic mining for multi-source text data has become a research focus of text mining. Since traditional topic models are mainly oriented to a single source, they have many limitations when applied directly to multiple sources. Therefore, a multi-source topic model based on the Dirichlet Multinomial Allocation model (DMA), named MSDMA (Multi-Source Dirichlet Multinomial Allocation), was proposed, considering the differences in topic-word distributions between sources and the nonparametric clustering quality of DMA. Its main contributions are as follows: 1) it takes the characteristics of each source into account when modeling topics, and can learn source-specific word distributions of topic k; 2) it improves topic discovery for high-noise, low-information sources through knowledge sharing; 3) it learns the number of topics within each source automatically, without requiring it to be specified in advance. Experimental results on a simulated dataset and two real datasets indicate that the proposed model extracts topic information more effectively and efficiently than state-of-the-art topic models.
Night-time vehicle detection based on Gaussian mixture model and AdaBoost
CHEN Yan, YAN Teng, SONG Junfang, SONG Huansheng
Journal of Computer Applications    2018, 38 (1): 260-263.   DOI: 10.11772/j.issn.1001-9081.2017071763
Focusing on the relatively low accuracy of night-time vehicle detection, a method was proposed that accurately detects night-time vehicles by constructing a Gaussian Mixture Model (GMM) for the geometric relationship of headlights and an AdaBoost (Adaptive Boosting) classifier trained on inversely projected vehicle samples. Firstly, the inverse projection plane was set according to the spatial position of the headlights in the traffic scene, and the headlight areas were roughly located by image preprocessing. Secondly, the geometric relationship of the headlights was used to construct the GMM on the inversely projected images, and the headlights were initially matched. Finally, vehicles were detected with the AdaBoost classifier trained on inversely projected vehicle samples. Compared with an AdaBoost classifier on the original image, the proposed method increased the detection rate by 1.93%, and decreased the omission ratio by 17.83% and the false detection rate by 27.61%; compared with the D-S (Dempster-Shafer) evidence theory method, it increased the detection rate by 2.03%, and decreased the omission ratio by 7.58% and the false detection rate by 47.51%. The proposed method effectively improves detection accuracy, reduces the interference of ground reflections and shadows, and satisfies the reliability and accuracy requirements of night-time vehicle detection in traffic scenes.
Survey on construction of measurement matrices in compressive sensing
WANG Qiang, ZHANG Peilin, WANG Huaiguang, YANG Wangcan, CHEN Yanlong
Journal of Computer Applications    2017, 37 (1): 188-196.   DOI: 10.11772/j.issn.1001-9081.2017.01.0188
The construction of measurement matrices in compressive sensing varies widely and is developing constantly. In order to sort out the research results and identify development trends, the process of measurement matrix construction was introduced systematically. Firstly, compared with traditional signal acquisition theory, the advantages of compressive sensing in resource utilization and storage space were expounded. Secondly, within the framework of compressive sensing and focusing on four aspects, namely the construction principle, the generation method, the structure design of the measurement matrix and the optimization method, the construction of measurement matrices was summarized, and the advantages of different principles, generation methods and structures were introduced in detail. Finally, based on these results, development directions for measurement matrices were discussed.
Advances in automatic image annotation
LIU Mengdi, CHEN Yanli, CHEN Lei
Journal of Computer Applications    2016, 36 (8): 2274-2281.   DOI: 10.11772/j.issn.1001-9081.2016.08.2274
Existing image annotation algorithms can be roughly divided into four categories: semantics-based methods, probability-based methods, matrix decomposition based methods and graph learning based methods. Representative algorithms of each category were introduced, and their problem models and characteristics were analyzed. Then the main optimization methods of these algorithms were summarized, and the common image datasets and evaluation metrics were introduced. Finally, the main open problems of automatic image annotation were pointed out, together with possible solutions. The analysis shows that making full use of the complementary advantages of current algorithms, or drawing on multiple disciplines, may yield more efficient automatic image annotation algorithms.
Combination of improved diffusion and bilateral filtering for low-dose CT reconstruction
ZHANG Pengcheng, ZHANG Quan, ZHANG Fang, CHEN Yan, HAN Jianning, HAO Huiyan, GUI Zhiguo
Journal of Computer Applications    2016, 36 (4): 1100-1105.   DOI: 10.11772/j.issn.1001-9081.2016.04.1100
A Median Prior (MP) reconstruction algorithm combining nonlocal means fuzzy diffusion and extended-neighborhood bilateral filtering was proposed to reduce the streak artifacts in low-dose Computed Tomography (CT) reconstruction. In the new algorithm, the nonlocal means fuzzy diffusion method was first used to improve the median prior in the Maximum A Posteriori (MAP) reconstruction algorithm, which reduced the noise in the reconstructed image; then, bilateral filtering based on the extended neighborhood was applied to preserve the edges and details of the reconstructed image and improve the Signal-to-Noise Ratio (SNR). The Shepp-Logan model and a thorax phantom were used to test the effectiveness of the proposed algorithm. Compared with the Filtered Back Projection (FBP), Median Root Prior (MRP), NonLocal Mean MP (NLMMP) and NonLocal Mean Bilateral Filter MP (NLMBFMP) algorithms, the proposed method obtained the smallest Normalized Mean Square Distance (NMSD) and Mean Absolute Error (MAE) and the highest SNR (10.20 dB and 15.51 dB respectively) on the two test images. The experimental results show that the proposed algorithm can reduce noise while keeping the edges and details of the image, alleviating the degradation of low-dose CT images and yielding images of higher SNR and quality.
Strength model of user relationship based on latent regression
HAN Zhongming, TAN Xusheng, CHEN Yan, YANG Weijie
Journal of Computer Applications    2016, 36 (2): 336-341.   DOI: 10.11772/j.issn.1001-9081.2016.02.0336
To effectively measure the strength of directed relationships among users in a social network, a smoothed model for computing user interaction strength based on directed interaction frequency was proposed. Furthermore, taking user interaction strength as the dependent variable and user relationship strength as a latent variable, a latent regression model was constructed, and an Expectation-Maximization (EM) algorithm for estimating its parameters was given. Comprehensive experiments on two datasets extracted from Renren and Sina Weibo examined best-friend selection and strength ranking. On the Renren dataset, the TOP-10 best friends chosen by the proposed model were compared with manual annotation: the mean Normalized Discounted Cumulative Gain (NDCG) of the model was 69.48% and the mean of the Mean Average Precision (MAP) was 66.3%, both significantly improved; on the Sina Weibo dataset, the spread of infection from nodes with higher relationship strength was 80% larger than from other nodes. The experimental results show that the proposed model can effectively measure user relationship strength.
Lightweight privacy-preserving data aggregation algorithm
CHEN Yanli FU Chunjuan XU Jian YANG Geng
Journal of Computer Applications    2014, 34 (8): 2336-2341.   DOI: 10.11772/j.issn.1001-9081.2014.08.2336

Private data is vulnerable to attacks on its confidentiality, integrity and freshness. To resolve this problem, a secure data aggregation algorithm based on a homomorphic Hash function, called HPDA (High-efficiency Privacy-preserving Data Aggregation), was proposed. Firstly, it used a homomorphic encryption scheme to preserve data privacy. Secondly, it adopted a homomorphic Hash function to verify the integrity and freshness of aggregated data. Finally, it reduced the communication overhead of the system by an improved ID transmission mechanism. Theoretical analysis and simulation results show that HPDA can effectively preserve data confidentiality, check data integrity, satisfy data freshness requirements, and introduce only low communication overhead.
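The integrity check rests on an additively homomorphic hash: the hash of a sum equals the product of the hashes. The toy construction below, h(m) = g^m mod p, has that property and illustrates the verification step only; the parameters are small illustrative constants, not secure, and the real HPDA construction is more involved.

```python
# Toy additively homomorphic hash: h(m1 + m2) == h(m1) * h(m2) mod p.
P = 1_000_003          # prime modulus (toy size, not secure)
G = 5                  # public base

def hhash(m):
    return pow(G, m, P)

def verify_aggregate(agg_sum, leaf_hashes):
    """The sink recomputes h(sum) and checks it against the product of the
    hashes forwarded with the individual (encrypted) sensor readings."""
    product = 1
    for h in leaf_hashes:
        product = (product * h) % P
    return hhash(agg_sum) == product
```

If an aggregator tampers with the sum, the recomputed hash no longer matches the product of the leaf hashes.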

Parallel algorithm of polygon topology validation for simple feature model
REN Yibin CHEN Zhenjie LI Feixue ZHOU Chen YANG Liyun
Journal of Computer Applications    2014, 34 (7): 1852-1856.   DOI: 10.11772/j.issn.1001-9081.2014.07.1852

A parallel algorithm for validating the topology of polygons stored in the simple feature model was designed and implemented. The algorithm modified the master-slave strategy according to the characteristics of topology validation and generated threads in the master processor to implement task parallelism, so that the time spent computing and writing topology errors was hidden. MPI and PThread were combined to exploit both processes and threads. Land use data of 5 cities in Jiangsu, China, was used to evaluate the algorithm. The tests show that the parallel algorithm validates the topology of massive polygons stored in the simple feature model correctly and efficiently, and its speedup is 20% higher than that of the plain master-slave strategy.

MapReduce-based image classification approach
WEI Han ZHANG Xueqing CHEN Yang
Journal of Computer Applications    2014, 34 (6): 1600-1603.   DOI: 10.11772/j.issn.1001-9081.2014.06.1600

Many existing image classification algorithms cannot handle large-scale image data. A new approach was proposed to accelerate large-scale image classification based on MapReduce. The whole image classification process was restructured to fit the MapReduce programming model. First, Scale Invariant Feature Transform (SIFT) features were extracted by MapReduce and then converted into sparse vectors using sparse coding to obtain the sparse features of the images. MapReduce was also used for distributed training of a random forest, on the basis of which large-scale image classification was parallelized. The algorithm was evaluated on a Hadoop cluster. The experimental results show that the proposed approach can classify images in parallel on a Hadoop cluster with a good speedup rate.
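The map/reduce decomposition of the classification stage can be sketched in plain Python: map emits one vote per (image, tree) pair, reduce groups votes by image and takes the majority, as a random forest does. The feature extraction and real Hadoop plumbing are abstracted away, and all names are illustrative.

```python
from collections import Counter

def map_phase(images, trees):
    """Map: each record emits (image_id, tree_vote) pairs, a stand-in for the
    per-split feature extraction plus per-tree prediction."""
    for img_id, features in images:
        for tree in trees:
            yield img_id, tree(features)

def reduce_phase(pairs):
    """Reduce: group votes by image id and take the majority class."""
    votes = {}
    for img_id, label in pairs:
        votes.setdefault(img_id, Counter())[label] += 1
    return {img_id: c.most_common(1)[0][0] for img_id, c in votes.items()}
```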

Digital watermarking scheme of vector animation based on least significant bit algorithm and changed elements
WANG Tao LI Fudan XU Chao CHEN Yan
Journal of Computer Applications    2014, 34 (5): 1304-1308.   DOI: 10.11772/j.issn.1001-9081.2014.05.1304

To fill the gap in digital watermarking for 2D vector animation, a blind watermarking scheme making full use of vector and timing characteristics was proposed. The scheme took the color values of changed elements in adjacent frames of the vector animation as the embedding target and used the Least Significant Bit (LSB) algorithm for embedding and extraction, embedding multiple groups of watermarks into the animation. The accurate watermark was then obtained by cross-verifying the extracted watermark groups. Theoretical analysis and experimental results show that the scheme is easy to implement and robust, and can also realize tamper-proofing. Moreover, the vector animation can be played in real time during watermark embedding and extraction.
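The LSB embedding/extraction step itself is simple; a minimal sketch over a list of 8-bit color channel values (the mapping from animation elements to these values is assumed, not from the paper):

```python
def embed_bit(color, bit):
    """Write one watermark bit into the least significant bit of an 8-bit
    color channel value; the visible change is at most 1 level."""
    return (color & ~1) | bit

def embed_watermark(colors, bits):
    return [embed_bit(c, b) for c, b in zip(colors, bits)]

def extract_watermark(colors, n):
    """Blind extraction: read back the low bit of the first n values."""
    return [c & 1 for c in colors[:n]]
```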

High-speed data acquisition and transmission system for low-energy X-ray industrial CT
YANG Lei GAO Fuqiang LI Ling CHEN Yan LI Ren
Journal of Computer Applications    2014, 34 (11): 3361-3364.   DOI: 10.11772/j.issn.1001-9081.2014.11.3361

To meet the demand for high-speed scanning and massive data transmission in low-energy X-ray industrial Computed Tomography (CT), a high-speed data acquisition and transmission system was designed. The X-CARD 0.2-256G detector of DT company was selected. To achieve high-speed analog-to-digital conversion, a high-speed time-division multiplexing circuit was combined with ping-pong operation for the data cache, and a gigabit Ethernet interface was designed with a Field Programmable Gate Array (FPGA) as the master chip, so as to meet the requirements of high-speed multi-channel data transmission. The experimental results show that the acquisition rate of the system reaches 1 MHz, the transmission speed reaches 926 Mb/s, and the dynamic range is greater than 5000. The system can effectively shorten the scanning time of low-energy X-ray detection and meets the data transmission requirements of a larger number of channels.

Real-time simulation for 3D-dressing of random clothes and human body
CHEN Yan XUE Yuan YANG Ruoyu
Journal of Computer Applications    2014, 34 (1): 124-128.   DOI: 10.11772/j.issn.1001-9081.2014.01.0124
Research on clothing simulation has attracted increasing attention, but flexibility, realism, real-time performance and integrity are difficult to unify. Therefore, a new dressing simulation system was designed for the automatic fitting of arbitrary human bodies and clothes. First, Non-Uniform Rational B-Spline (NURBS) surfaces were used to build a deformable body model. Then, particles were reconstructed from the 3DMAX model and multiple types of springs were created to model arbitrary cloth. Finally, a Verlet integrator was adopted for the dressing simulation, together with a new simplification algorithm for cloth models and a new method for judging whether a point lies inside a triangle. The results show that the proposed body and cloth modeling approach guarantees the diversity of dressing effects, and the model simplification and interior-point judgment increase simulation performance by about 30%, ensuring real-time quality.
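The Verlet integration at the core of such cloth simulators is compact enough to sketch. Position Verlet derives the new position from the current and previous positions, so per-particle velocity never needs to be stored; this is a generic sketch of the integrator, not the paper's full solver.

```python
def verlet_step(pos, prev_pos, accel, dt):
    """One position-Verlet step for a single coordinate of a cloth particle:
    x_{t+dt} = 2*x_t - x_{t-dt} + a*dt^2."""
    return 2 * pos - prev_pos + accel * dt * dt
```

In a cloth simulator this step is applied per particle per axis, with `accel` accumulating gravity and spring forces; constraints are then enforced by nudging particle pairs back toward their rest lengths.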
Video surveillance system-based motion-adaptive de-interlacing algorithm
NIE Miao LI Ying SHI Lizhuo JIANG Jiachen YAN Yachao
Journal of Computer Applications    2013, 33 (10): 2922-2925.  
A high-performance motion-adaptive de-interlacing algorithm was proposed based on an analysis of the advantages and disadvantages of traditional de-interlacing algorithms for video surveillance systems. The algorithm divided the picture into static and motion regions according to the motion state of the interpolation points, using 4-field motion detection capable of detecting spatially periodic moving patterns. Field insertion was exploited to interpolate the static region, while a modified edge-adaptive interpolation algorithm, with added horizontal edge detection and more consistent edge direction estimation, was used for the motion region. The proposed algorithm was implemented on a DSP for experimental verification. The results show that the algorithm improves Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), restrains saw-tooth artifacts, interline flicker, motion ghosting and other adverse effects, and obtains better visual effects.
New inverted index storage scheme for Chinese search engine
MA Jian ZHANG Taihong CHEN Yanhong
Journal of Computer Applications    2013, 33 (07): 2031-2036.   DOI: 10.11772/j.issn.1001-9081.2013.07.2031
After analyzing the inverted index structure and access mode of the open source search engine ASPSeek, an abstract definition of the inverted index was given. To solve the difficulty of updating the inverted index in ASPSeek and the efficiency issues caused by accessing it directly through the operating system's file cache, and considering the characteristics of 1.25 million Chinese agricultural Web pages, a new blocked inverted index storage scheme with a buffer mechanism based on the CLOCK replacement algorithm was proposed. The experimental results show that the new scheme is more efficient than ASPSeek whether the buffer system is disabled or enabled. With the buffer enabled and 160 thousand Chinese terms or 50 thousand high-frequency Chinese terms as the test set, the retrieval time of the new scheme tended to a constant after one million accesses; even with all 827 309 terms as the test set, the retrieval time began to converge after two million accesses.
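The CLOCK replacement algorithm behind the buffer mechanism can be sketched briefly: each buffered block carries a reference bit; on a miss the clock hand sweeps, clearing bits, and evicts the first block whose bit is already 0. This is the generic algorithm, not the paper's specific buffer layout.

```python
class ClockBuffer:
    """Minimal CLOCK buffer for index blocks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []          # list of [key, value, ref_bit]
        self.hand = 0

    def get(self, key):
        for frame in self.frames:
            if frame[0] == key:
                frame[2] = 1      # hit: grant a second chance
                return frame[1]
        return None

    def put(self, key, value):
        if self.get(key) is not None:
            return
        if len(self.frames) < self.capacity:
            self.frames.append([key, value, 1])
            return
        while True:               # sweep until an evictable frame is found
            frame = self.frames[self.hand]
            if frame[2] == 0:
                self.frames[self.hand] = [key, value, 1]
                self.hand = (self.hand + 1) % self.capacity
                return
            frame[2] = 0          # clear the bit and move on
            self.hand = (self.hand + 1) % self.capacity
```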
Enhanced clustering algorithm based on fuzzy C-means and support vector machine
HU Lei NIU Qinzhou CHEN Yan
Journal of Computer Applications    2013, 33 (04): 991-993.   DOI: 10.3724/SP.J.1087.2013.00991
To improve the accuracy and efficiency of clustering, an enhanced algorithm based on Fuzzy C-Means (FCM) and Support Vector Machine (SVM) was proposed. The data were first clustered into c classes by FCM and then classified in detail by SVM, with a cascade SVM model based on a fully binary decision tree constructed to enhance the clustering. To solve the imbalance introduced when constructing new features, divisions within the data set were used to eliminate the adverse effect. Several related algorithms were compared on the Iris data set. The experimental results show that the algorithm improves precision, saves system resources and enhances clustering efficiency.
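The FCM stage of such a pipeline alternates membership and centroid updates; a plain NumPy sketch of standard FCM follows (the SVM refinement stage is not reproduced, and the parameter defaults are illustrative).

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: returns (memberships U, centroids).
    U[i, k] is the degree to which sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # rows sum to 1
    for _ in range(iters):
        W = U ** m                                    # fuzzified weights
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                # closer => larger membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centroids
```

Hardening each sample to its highest-membership cluster (`U.argmax(axis=1)`) gives the coarse c-way partition that the cascade SVM would then refine.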
Travel route identification method of subway passengers based on mobile phone location data
LAI Jianhui CHEN Yanyan ZHONG Yuan WU Decang YUAN Yifang
Journal of Computer Applications    2013, 33 (02): 583-586.   DOI: 10.3724/SP.J.1087.2013.00583
Route choice deduced from traditional theory often deviates greatly from actual routes in a complex rail transit network. Signaling data were collected from passengers' mobile phones in the rail wireless communication network, and from these data a travel route identification algorithm based on normal location updates was proposed. Meanwhile, to deal with missing data, a repair algorithm was put forward that uses a user's other signaling data and the K shortest paths to deduce the actual travel route, checking route validity to obtain the final result. Finally, a typical application in the Beijing rail transit network was selected to validate the algorithm. The application results show that the algorithm describes travelers' actual travel behavior well.
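The K-shortest-paths candidate step can be sketched naively for a metro-sized network: enumerate all simple paths by depth-first search and keep the k shortest. Real implementations use Yen's algorithm, but exhaustive enumeration is adequate at this scale and shows the candidate-route idea; the graph below is invented.

```python
def k_shortest_paths(graph, src, dst, k):
    """Enumerate simple paths by DFS and keep the k shortest.
    `graph` maps a station to a list of (neighbor, travel_time) pairs."""
    paths = []

    def dfs(node, path, cost):
        if node == dst:
            paths.append((cost, path))
            return
        for nxt, w in graph.get(node, []):
            if nxt not in path:               # simple paths only
                dfs(nxt, path + [nxt], cost + w)

    dfs(src, [src], 0)
    return sorted(paths)[:k]
```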
Related Articles | Metrics
Visibility estimation on road based on lane detection and image inflection
SONG Hong-jun, CHEN Yang-zhou, GAO Yuan-yuan
Journal of Computer Applications    2012, 32 (12): 3397-3403.   DOI: 10.3724/SP.J.1087.2012.03397
Abstract971)      PDF (1112KB)(765)       Save
Traditional visibility meters are expensive and their sampling is spatially limited, while some existing video-based measurement methods require artificial markers and have poor stability. To solve these problems, a new algorithm for weather recognition and traffic visibility estimation with a fixed camera was proposed based on lane detection and image inflection. Different from previous research, the traffic model added a homogeneous fog factor to the traffic scene. The algorithm consists of three steps. Firstly, the scene activity map was calculated, and the region for identification was extracted with an Area Search Algorithm (ASA) combined with texture features; the current weather is judged to be foggy if the pixel brightness from top to bottom of the extracted region changes in a hyperbolic fashion, and the inflection point of the brightness curve in this region was calculated at the same time. Secondly, the traffic lane was detected with a retractable window algorithm, the lane's endpoints were extracted, and the fixed camera was calibrated. Finally, according to the visibility definition of the International Meteorological Organization, traffic scene visibility was calculated based on a monocular camera model and a model of light propagation in foggy weather. Experiments on visibility estimation for three different scenes show that the algorithm is consistent with human observation, with an accuracy rate of up to 86% and an inspection error within 20 m.
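Two numerical pieces of such a pipeline can be sketched: locating the inflection point of the vertical brightness profile as the sign change of its second difference, and converting an extinction coefficient k into meteorological visibility via Koschmieder's law with the standard 5% contrast threshold (V = -ln 0.05 / k, approximately 3.912/k). The clean S-shaped profile and the absence of smoothing are simplifying assumptions; real profiles need denoising first.

```python
import numpy as np

def inflection_row(profile):
    """Row index where the second difference of the vertical brightness
    profile changes sign, i.e. the inflection point of the curve."""
    d2 = np.diff(np.asarray(profile, float), 2)     # discrete curvature
    flips = np.where(np.diff(np.sign(d2)) != 0)[0]
    return int(flips[0]) + 1 if len(flips) else None

def visibility(k):
    """Koschmieder's law with the 5% contrast threshold of the
    meteorological visibility definition: V = -ln(0.05)/k ~ 3.912/k."""
    return 3.912 / k
```

On a synthetic sigmoid brightness curve, the detected row coincides with the analytic inflection point, and k = 0.03912 m^-1 maps to a 100 m visibility.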
Related Articles | Metrics
Network coding based reliable data transmission policy in wireless sensor network
CHEN Zhuo, CHEN Yang, FENG Da-quan
Journal of Computer Applications    2012, 32 (11): 3102-3106.   DOI: 10.3724/SP.J.1087.2012.03102
Abstract1194)      PDF (853KB)(465)       Save
With reference to network coding theory, a reliable data transmission policy named MGrowth Codes was proposed for wireless sensor network environments. Through a gradient-based routing design, all data converge to the sink node (Sink). In addition, the policy can use already-received encoded packets to decode other encoded packets, which further enhances data recoverability. Network simulations show that MGrowth Codes can effectively increase the throughput of a wireless sensor network and improve the reliability of data transmission.
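The decode-with-decoded-packets idea is the classic peeling decoder used by Growth-Codes-style schemes: a degree-1 packet reveals a source symbol, which is then XOR-peeled out of the remaining packets, possibly exposing new degree-1 packets. A minimal sketch over XOR-encoded integer symbols (the packet format and symbol values are illustrative assumptions, not the paper's protocol):

```python
def peel_decode(packets):
    """packets: list of (iterable_of_source_ids, xor_of_their_values).
    Iteratively recover source symbols by peeling known symbols out of
    encoded packets -- encoded packets help decode other encoded packets."""
    pkts = [[set(ids), val] for ids, val in packets]
    known = {}
    changed = True
    while changed:
        changed = False
        for p in pkts:
            for s in list(p[0]):
                if s in known:            # peel an already-recovered symbol
                    p[0].discard(s)
                    p[1] ^= known[s]
            if len(p[0]) == 1:            # degree-1 packet: symbol revealed
                (s,) = p[0]
                if s not in known:
                    known[s] = p[1]
                    changed = True
    return known
```

Decoding succeeds whenever the peeling chain never stalls; in Growth Codes the packet degree distribution is tuned to keep that chain alive as data accumulates at the sink.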
Reference | Related Articles | Metrics
Side information interpolation algorithm based on spatio-temporal correlations at decoder
WANG Feng-qin, CHEN Xiao-lei, CHEN Yan
Journal of Computer Applications    2012, 32 (08): 2324-2327.   DOI: 10.3724/SP.J.1087.2012.02324
Abstract1016)      PDF (686KB)(417)       Save
In Wyner-Ziv video coding, the coding efficiency depends mainly on the quality of the side information. However, poor motion vectors obtained from motion estimation at the decoder degrade the quality of the side information. To improve the performance of Wyner-Ziv video coding, a side information interpolation algorithm was proposed that applies multiple macroblock partition modes to bi-directional motion estimation. The Sum of Bilateral Absolute Differences (SBAD) was used to measure the temporal continuity of a motion vector, the Boundary Absolute Difference (BAD) was used to measure its spatial continuity, and a spatio-temporal matching criterion was applied to find the optimal motion vector. The simulation results show that the proposed algorithm improves the Peak Signal-to-Noise Ratio (PSNR) of the side information by up to 1.41 dB while reducing the coding bit rate.
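Bi-directional motion estimation rests on block matching under a sum-of-absolute-differences criterion; a minimal single-direction SAD matcher is sketched below. The block size, search range, and frame layout are assumptions, and the paper's SBAD/BAD continuity measures and multiple partition modes are not reproduced.

```python
import numpy as np

def sad_block_match(ref, cur, top, left, bs=8, sr=4):
    """Return the motion vector (dy, dx) minimizing the Sum of Absolute
    Differences between a bs x bs block of `cur` and candidate blocks of
    `ref` within a +/- sr search range."""
    block = cur[top:top + bs, left:left + bs].astype(np.int64)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                              # candidate off-frame
            sad = np.abs(block - ref[y:y + bs, x:x + bs].astype(np.int64)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, int(best_sad)
```

A bi-directional estimator runs this against both the previous and next key frames and interpolates the in-between block along the recovered vector.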
Reference | Related Articles | Metrics
Dynamic taint analysis based on virtual technology
CHEN Yan-ling, ZHAO Jing
Journal of Computer Applications    2011, 31 (09): 2367-2372.   DOI: 10.3724/SP.J.1087.2011.02367
Abstract1333)      PDF (951KB)(573)       Save
Current taint analysis tools do not record taint information accurately. To solve this problem, dynamic taint analysis based on virtualization technology was studied and implemented. A virtualization-based dynamic taint analysis framework was designed, and two taint signature models, based on Hook technology and Hash-traversal technology, were given for memory taint and hard disk taint respectively. A taint propagation strategy was put forward according to instruction type, classified by the Intel and AMD instruction encoding formats, and a taint record strategy based on instruction filtering was given to solve the problem of redundant records. The experimental results prove that the proposed method is effective and can be well applied to test case generation and vulnerability detection in fuzz testing.
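A taint propagation strategy keyed on instruction type can be sketched abstractly: data-movement and arithmetic instructions spread taint from sources to destination, while loading a constant clears it. The three-tuple instruction format and the tiny opcode set below are illustrative assumptions, far simpler than real Intel/AMD encodings (e.g. `xor eax, eax` as a clearing idiom is ignored here).

```python
def propagate_taint(instrs, tainted):
    """instrs: (op, dst, srcs) triples in execution order. Moves and
    arithmetic propagate taint from any tainted source to the destination;
    loading an immediate constant clears the destination's taint."""
    tainted = set(tainted)
    for op, dst, srcs in instrs:
        if op == "mov_imm":
            tainted.discard(dst)               # constant overwrite: untainted
        elif op in ("mov", "add", "sub", "xor"):
            if any(s in tainted for s in srcs):
                tainted.add(dst)               # taint flows to the result
            else:
                tainted.discard(dst)           # clean sources overwrite taint
    return tainted
```

An instruction-filtering record strategy would then log only the instructions that changed the taint set, which is what removes the redundant records the abstract mentions.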
Related Articles | Metrics
Blind watermarking algorithm for 2D vector map
Xiao-guang CHEN, Yan LI
Journal of Computer Applications    2011, 31 (08): 2174-2177.   DOI: 10.3724/SP.J.1087.2011.02174
Abstract1320)      PDF (610KB)(882)       Save
Vector digital watermarking is one of the most important means of copyright protection for graphics and vector maps. A blind watermarking method for 2D vector maps was discussed. First, the entire vector map was traversed to determine the tolerance dynamically; then the classical Douglas-Peucker algorithm was used to extract the feature nodes of the map; finally, the watermark was embedded into the feature nodes within the tolerance range. The watermark could be extracted by inverting the embedding procedure. Under attacks including random point addition, random point deletion, compression, and cropping, the correlation coefficients between the original watermark bits and those extracted from the attacked watermarked maps were calculated. The experimental results show that the proposed method is highly robust.
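The feature-node extraction step relies on the classical Douglas-Peucker algorithm, which keeps a vertex whenever its distance to the chord of its segment exceeds the tolerance. A compact recursive sketch follows; the watermark embedding itself (perturbing these nodes within the tolerance range) is not shown.

```python
import math

def _dist(p, a, b):
    """Perpendicular distance from point p to the infinite line through a, b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    return abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / math.dist(a, b)

def douglas_peucker(points, tol):
    """Feature nodes kept when simplifying a polyline with tolerance tol."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    # farthest interior vertex from the chord a-b
    i, dmax = max(((i, _dist(p, a, b)) for i, p in enumerate(points[1:-1], 1)),
                  key=lambda t: t[1])
    if dmax <= tol:
        return [a, b]                          # all interior vertices dropped
    return douglas_peucker(points[:i + 1], tol)[:-1] + douglas_peucker(points[i:], tol)
```

Because the same tolerance governs both node extraction and the size of the embedding perturbation, the kept nodes survive the simplification-style attacks the abstract tests against.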
Reference | Related Articles | Metrics
Active intelligent parking guidance system
Long-fei WANG, Hong CHEN, Yang LI, Hai-peng SHAO
Journal of Computer Applications    2011, 31 (04): 1141-1144.   DOI: 10.3724/SP.J.1087.2011.01141
Abstract1302)      PDF (652KB)(646)       Save
Based on the intrinsic spatial distribution, temporal distribution, and high dynamics of parking activities, a negotiation approach was introduced into the design of an intelligent parking guidance system. An IEEE FIPA-compliant multi-Agent system, the Active negotiation-based Intelligent Parking Guidance System (AIPGIS), was proposed, and its architecture, operation mechanism, negotiation algorithms, and characteristics were analyzed. The AIPGIS can implement effective sharing of urban traffic state information and strengthen the coordination and decision-making capacities of the active Agents.
Related Articles | Metrics
Key technologies for deploying a high availability redundant cluster system and its dependability analysis
HAO Li-rui, XUE Hong-ye, CHEN Yan
Journal of Computer Applications   
Abstract1388)      PDF (599KB)(916)       Save
Some key technologies for deploying a high availability cluster system were discussed. An adaptive fault-tolerance method based on a task table was presented, and the design idea of the adaptive fault-tolerance algorithm and its implementation were described in detail. Finally, a dependability model based on Petri nets was presented. The dependability analysis shows that the cluster system achieves high availability and can be used in critical application fields.
Related Articles | Metrics