Analysis and improvement of certificateless signature scheme
ZHAO Hong, YU Shuhan, HAN Yanyan, LI Zhaobin
Journal of Computer Applications    2023, 43 (1): 147-153.   DOI: 10.11772/j.issn.1001-9081.2021111919
For the nine certificateless signature schemes proposed by Tang et al. (TANG Y L, WANG F F, YE Q, et al. Improved provably secure certificateless signature scheme. Journal of Beijing University of Posts and Telecommunications, 2016, 39(1): 112-116), firstly, the linearized equation analysis method was used, and a linear relationship was found between the public keys in all the schemes. This defect was exploited to mount a signature forgery attack on every scheme. Secondly, in order to break the linear relationship between the public keys, the schemes were improved by modifying the parameters of the hash function, and the security of the improved scheme was proved under the random oracle model. Thirdly, a public key construction format for certificateless signature schemes was proposed; a signature scheme constructed in this format cannot be attacked by adversaries using public key replacement. Finally, the efficiency of the improved scheme was compared with that of existing certificateless signature schemes through simulation. Experimental results show that the improved scheme improves security without reducing computational efficiency.
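As an illustration of the linearized-equation idea, the toy sketch below stacks public keys as coefficient vectors over GF(p) and tests for rank deficiency, which is what exposes an exploitable linear relationship; the vectors and field are illustrative stand-ins, not the keys of the attacked schemes.

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p) via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % p), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, p)          # modular inverse (Python 3.8+)
        rows[rank] = [v * inv % p for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % p:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

p = 97
pubkeys = [[3, 5, 7], [6, 10, 14], [1, 2, 4]]   # toy vectors; row 2 = 2 * row 1 (mod p)
print(rank_mod_p(pubkeys, p))                   # 2 < 3 => a linear relation exists
```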
Chinese description of image content based on fusion of image feature attention and adaptive attention
ZHAO Hong, KONG Dongyi
Journal of Computer Applications    2021, 41 (9): 2496-2503.   DOI: 10.11772/j.issn.1001-9081.2020111829
Aiming at the problem that existing attention-based Chinese image captioning models cannot focus on the key content without weakening or missing attention information, a Chinese description model of image content based on the fusion of image feature attention and adaptive attention was proposed. The model used an encoder-decoder structure. Firstly, image features were extracted in the encoder network, and the attention information of all feature regions of the image was extracted by image feature attention. Then, the decoder network decoded the image features with attention weights to generate hidden information, ensuring that the attention information was neither weakened nor missed. Finally, the visual sentinel module of adaptive attention was used to focus again on the key content in the image features, so that the main content of the image could be extracted more accurately. The models were evaluated with several metrics including BLEU, METEOR, ROUGE-L and CIDEr. Compared with image description models based only on adaptive attention or only on image feature attention, the proposed model improved the CIDEr value by 10.1% and 7.8% respectively. Meanwhile, compared with the baseline Neural Image Caption (NIC) model and the Bottom-Up and Top-Down (BUTD) attention based image description model, it increased the CIDEr value by 10.9% and 12.1% respectively. Experimental results show that the image understanding ability of the proposed model is effectively improved, and its score on every metric is better than those of the comparison models.
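A minimal PyTorch sketch of the adaptive-attention step described above, in which a visual sentinel weight decides how much the mixed context relies on region features; the dimensions, layer names, and the assumption that feature and hidden dimensions match are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAttention(nn.Module):
    """Attend over K region features plus a visual sentinel
    (assumes feature dim == decoder hidden dim for simplicity)."""
    def __init__(self, dim, att_dim):
        super().__init__()
        self.feat_proj = nn.Linear(dim, att_dim)
        self.hid_proj = nn.Linear(dim, att_dim)
        self.sent_proj = nn.Linear(dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, feats, hidden, sentinel):
        # feats: (B, K, dim); hidden, sentinel: (B, dim)
        h = self.hid_proj(hidden)                                               # (B, att_dim)
        z = self.score(torch.tanh(self.feat_proj(feats) + h.unsqueeze(1))).squeeze(-1)  # (B, K)
        s = self.score(torch.tanh(self.sent_proj(sentinel) + h))               # (B, 1)
        alpha = F.softmax(torch.cat([z, s], dim=1), dim=1)                     # (B, K+1)
        ctx = (alpha[:, :-1].unsqueeze(-1) * feats).sum(1)                     # visual context
        beta = alpha[:, -1:]                                                   # reliance on sentinel
        return beta * sentinel + (1 - beta) * ctx                              # mixed context

att = AdaptiveAttention(dim=512, att_dim=256)
out = att(torch.randn(2, 49, 512), torch.randn(2, 512), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 512])
```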
Design and implementation of SIFT algorithm for UAV remote sensing image based on DSP platform
SUN Peng, XIAO Jing, ZHAO Haimeng, LIU Fan, YAN Lei, ZHAO Hongying
Journal of Computer Applications    2020, 40 (4): 1237-1242.   DOI: 10.11772/j.issn.1001-9081.2019091689
To satisfy the requirement of real-time, rapid on-site processing of the Scale-Invariant Feature Transform (SIFT) algorithm for remote sensing images of large-scale Unmanned Aerial Vehicle (UAV) networks, an implementation scheme was proposed that uses the hardware multiplier of the Digital Signal Processor (DSP) kernel to perform the multiplication of single-precision floating-point pixel data. Firstly, according to the data input and output characteristics of the hardware multiplier in the DSP kernel, the image data structure and the image functions of the SIFT algorithm were reconstructed so that the hardware multiplier could carry out the single-precision floating-point pixel multiplications of the SIFT algorithm. Secondly, software pipelining was adopted to rearrange the iterative computation, enhancing the parallel computing ability of the algorithm. Finally, the dynamic data produced during computation were transferred to the Double Data Rate 3 synchronous dynamic random access memory (DDR3) to enlarge the storage space available to the algorithm. Experimental results show that the SIFT algorithm on the DSP platform achieves high-precision and fast processing of 1 000×750 UAV remote sensing images, and the scheme satisfies the requirement of real-time, rapid on-site processing of SIFT for UAV remote sensing images.
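For reference, a host-side stand-in for the accelerated stage: OpenCV's SIFT on a 1 000×750 frame, timed on a PC. The DSP implementation restructures this same computation for the hardware multiplier and software pipelining; the random frame below is a placeholder for a real UAV image.

```python
import time
import cv2
import numpy as np

img = np.random.randint(0, 256, (750, 1000), dtype=np.uint8)  # stand-in UAV frame
sift = cv2.SIFT_create()
t0 = time.perf_counter()
kps, desc = sift.detectAndCompute(img, None)
print(f"{len(kps)} keypoints in {time.perf_counter() - t0:.3f} s")
```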
Human activity recognition based on improved particle swarm optimization-support vector machine and context-awareness
WANG Yang, ZHAO Hongdong
Journal of Computer Applications    2020, 40 (3): 665-671.   DOI: 10.11772/j.issn.1001-9081.2019091551
Concerning the low accuracy of human activity recognition, a recognition method combining Support Vector Machine (SVM) with context-awareness (the logical or statistical model of human motion state transitions) was proposed to identify six types of human activities (walking, going upstairs, going downstairs, sitting, standing, lying). The method exploited the logical relationships existing between human activity samples. Firstly, the SVM model was optimized by the Improved Particle Swarm Optimization (IPSO) algorithm. Then, the optimized SVM was used to classify the human activities. Finally, context-awareness was used to correct misrecognized results. Experimental results show that the classification accuracy of the proposed method reaches 94.2% on the Human Activity Recognition Using Smartphones (HARUS) dataset of the University of California, Irvine (UCI), which is higher than that of traditional classification methods based on pattern recognition.
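A minimal sketch of the PSO-tuned SVM stage (plain PSO rather than the paper's IPSO variant), searching (C, gamma) by cross-validation on a stand-in dataset; the search ranges, swarm settings and data are illustrative assumptions. The context-awareness correction step would then post-process the predicted label sequence against the allowed state transitions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

def fitness(p):                       # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
pos = rng.uniform([-1, -4], [3, 0], size=(12, 2))   # swarm positions
vel = np.zeros_like(pos)
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(15):
    gbest = pbest[pfit.argmax()]
    r1, r2 = rng.random((2, *pos.shape))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -4], [3, 0])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pfit
    pbest[improved], pfit[improved] = pos[improved], fit[improved]
print("best (log10 C, log10 gamma):", pbest[pfit.argmax()], "CV acc:", pfit.max())
```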
Indoor robot simultaneous localization and mapping based on RGB-D image
ZHAO Hong, LIU Xiangdong, YANG Yongjuan
Journal of Computer Applications    2020, 40 (12): 3637-3643.   DOI: 10.11772/j.issn.1001-9081.2020040518
Simultaneous Localization and Mapping (SLAM) is a key technology for robots to realize autonomous navigation in unknown environments. Aiming at the poor real-time performance and low accuracy of commonly used RGB-Depth (RGB-D) SLAM systems, a new RGB-D SLAM system was proposed. Firstly, the Oriented FAST and Rotated BRIEF (ORB) algorithm was used to detect image feature points, the extracted feature points were homogenized with a quadtree-based strategy, and a Bag of Words (BoW) model was used for feature matching. Then, in the camera pose initialization stage, Perspective-n-Point (PnP) and nonlinear optimization methods were combined to provide back-end optimization with an initial value closer to the optimum. In back-end optimization, Bundle Adjustment (BA) was used to refine the initial camera pose iteratively to obtain its optimal value. Finally, according to the correspondence between the camera pose and the point cloud of each frame, all point cloud data were registered into one coordinate system to obtain a dense point cloud map of the scene, and the point cloud map was compressed recursively with an octree to obtain a 3D map for robot navigation. On the TUM RGB-D dataset, the proposed system was compared with the RGB-D SLAMv2 and ORB-SLAM2 systems. Experimental results show that the proposed RGB-D SLAM system has better overall real-time performance and accuracy.
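A minimal sketch of the pose-initialization step with OpenCV: ORB features are detected, and 3D-2D correspondences (placeholder arrays here, with TUM-like intrinsics) are fed to solvePnPRansac to obtain the initial pose that BA later refines.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
kps, desc = orb.detectAndCompute(img, None)

K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])  # TUM-like intrinsics
pts3d = np.random.rand(50, 3).astype(np.float64) * 3 + [0, 0, 1]  # map points (placeholder)
rvec0 = np.zeros(3); tvec0 = np.zeros(3)
pts2d, _ = cv2.projectPoints(pts3d, rvec0, tvec0, K, None)       # their 2D observations
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
print("pose found:", ok, "inliers:", 0 if inliers is None else len(inliers))
```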
Detection of negative emotion burst topic in microblog text stream
LI Yanhong, ZHAO Hongwei, WANG Suge, LI Deyu
Journal of Computer Applications    2020, 40 (12): 3458-3464.   DOI: 10.11772/j.issn.1001-9081.2020060880
Finding negative emotion burst topics in time from the massive and noisy microblog text stream is essential for emergency response and handling. However, traditional burst topic detection methods often ignore the differences between negative emotion burst topics and non-negative ones. Therefore, a Negative Emotion Burst Topic Detection (NE-BTD) algorithm for microblog text streams was proposed. Firstly, the accelerations of keyword pairs in microblogs and the change rate of negative emotion intensity were used as the basis for judging negative emotion topics. Secondly, the speeds of burst word pairs were used to determine the window range of negative emotion burst topics. Finally, a Gibbs Sampling Dirichlet Multinomial Mixture model (GSDMM) clustering algorithm was used to obtain the topic structures of the negative emotion burst topics in the window. In the experiments, the proposed NE-BTD algorithm was compared with an existing Emotion-Based Method of Topic Detection (EBM-TD) algorithm. The results show that NE-BTD is at least 20% higher in accuracy and recall than EBM-TD, and detects negative emotion burst topics at least 40 minutes earlier.
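A minimal sketch of the burst signal: keyword pairs are counted per time window and their growth ("acceleration") across consecutive windows is thresholded; the toy documents, the threshold, and the omitted emotion-intensity term are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

windows = [
    [["traffic", "accident", "angry"], ["angry", "accident"]],       # window t-1
    [["accident", "angry", "sad"], ["angry", "accident", "shame"]],  # window t
]

def pair_counts(docs):
    c = Counter()
    for words in docs:
        c.update(combinations(sorted(set(words)), 2))   # co-occurring keyword pairs
    return c

prev, cur = pair_counts(windows[0]), pair_counts(windows[1])
accel = {p: cur[p] - prev.get(p, 0) for p in cur}       # growth across windows
burst = [p for p, a in accel.items() if a >= 1]         # toy burst threshold
print(sorted(burst, key=lambda p: -accel[p])[:3])
```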
Environment sound recognition based on lightweight deep neural network
YANG Lei, ZHAO Hongdong
Journal of Computer Applications    2020, 40 (11): 3172-3177.   DOI: 10.11772/j.issn.1001-9081.2020030433
Existing Convolutional Neural Network (CNN) models contain a large number of redundant parameters. To address this problem, two lightweight network models named Fnet1 and Fnet2 were proposed based on the Fire module, the core structure of SqueezeNet. Then, in view of the distributed data collection and processing characteristics of mobile terminals, a new network model named FnetDNN was proposed based on Fnet2, integrating Fnet2 with a Deep Neural Network (DNN) according to Dempster-Shafer (D-S) evidence theory. Firstly, a neural network with four convolutional layers, named Cnet, was used as the benchmark, with Mel Frequency Cepstral Coefficients (MFCC) as the input feature. Fnet1, Fnet2 and Cnet were analyzed in terms of network structure, computation cost, number of convolution kernel parameters and recognition accuracy. Results showed that Fnet1 used only 10.3% of the parameters of Cnet while achieving a recognition accuracy of 86.7%. Secondly, MFCC and the global feature vector were input into the FnetDNN model, which improved the recognition accuracy to 94.4%. Experimental results indicate that the proposed Fnet models can compress redundant parameters and be integrated with other networks, showing that the models are extensible.
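A minimal PyTorch sketch of the SqueezeNet Fire module on which Fnet1 and Fnet2 are built: a 1×1 "squeeze" layer followed by parallel 1×1 and 3×3 "expand" layers whose outputs are concatenated; the channel sizes and MFCC-shaped input are illustrative.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))   # bottleneck reduces channels first
        return torch.cat([self.relu(self.expand1(s)),
                          self.relu(self.expand3(s))], dim=1)

x = torch.randn(1, 64, 40, 101)      # e.g. a batch of MFCC feature maps
print(Fire(64, 16, 64)(x).shape)     # -> torch.Size([1, 128, 40, 101])
```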
Text sentiment analysis based on serial hybrid model of bi-directional long short-term memory and convolutional neural network
ZHAO Hong, WANG Le, WANG Weijie
Journal of Computer Applications    2020, 40 (1): 16-22.   DOI: 10.11772/j.issn.1001-9081.2019060968
Aiming at the problems of low accuracy, poor real-time performance and insufficient feature extraction in existing text sentiment analysis methods, a serial hybrid model of a Bi-directional Long Short-Term Memory neural network and a Convolutional Neural Network (BiLSTM-CNN) was constructed. Firstly, context information was extracted from the text by the Bi-directional Long Short-Term Memory (BiLSTM) neural network. Then, local semantic features were extracted from the context information by the Convolutional Neural Network (CNN). Finally, the sentiment tendency of the text was obtained by Softmax. Compared with single models such as CNN, Long Short-Term Memory (LSTM) and BiLSTM, the proposed model increases the comprehensive evaluation index F1 by 2.02, 1.18 and 0.85 percentage points respectively; compared with hybrid models such as the serial LSTM-CNN model and the parallel feature-fusion BiLSTM-CNN model, it improves F1 by 1.86 and 0.76 percentage points respectively. The experimental results show that the serial BiLSTM-CNN hybrid model has great value in practical applications.
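A minimal PyTorch sketch of the serial structure: the BiLSTM encodes context, a 1D convolution over its outputs extracts local semantic features, and a linear layer produces class logits (softmax at inference); the vocabulary, sizes and pooling choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTM_CNN(nn.Module):
    def __init__(self, vocab=10000, emb=128, hid=64, n_cls=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hid, 100, kernel_size=3, padding=1)
        self.fc = nn.Linear(100, n_cls)

    def forward(self, tokens):                        # tokens: (B, T)
        h, _ = self.bilstm(self.emb(tokens))          # context: (B, T, 2*hid)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # local features: (B, 100, T)
        return self.fc(c.max(dim=2).values)           # logits; softmax at inference

print(BiLSTM_CNN()(torch.randint(0, 10000, (4, 50))).shape)  # torch.Size([4, 2])
```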
Fast malicious domain name detection algorithm based on lexical features
ZHAO Hong, CHANG Zhaobin, WANG Le
Journal of Computer Applications    2019, 39 (1): 227-231.   DOI: 10.11772/j.issn.1001-9081.2018051118
Aiming at the problem that malicious domain name attacks occur frequently on the Internet and existing detection methods lack real-time performance, a fast malicious domain name detection algorithm based on lexical features was proposed. According to the characteristics of malicious domain names, all domain names under test were first normalized by length and assigned weights. Then a clustering algorithm was used to divide the domain names into several groups, the priority of each group was calculated by an improved heap sort algorithm according to the sum of weights in the group, and the edit distance between each domain name in each group and the domain names on the blacklist was calculated in turn. Finally, malicious domain names were quickly determined according to the edit distance. Experimental results show that, compared with a detection algorithm using only domain name semantics and one using only domain name lexical features, the proposed algorithm increases accuracy by 1.7% and 2.5% respectively and detection rate by 13.9% and 6.8% respectively. The proposed algorithm has higher accuracy and better real-time performance.
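A minimal sketch of the core loop under toy assumptions for the grouping and weights: groups are ordered by a max-heap on their weight sums, and each member is compared with the blacklist by edit distance.

```python
import heapq

def edit_distance(a, b):
    """Levenshtein distance with a rolling row."""
    d = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return d[-1]

blacklist = ["paypal.com", "google.com"]
groups = {10: ["paypa1.com"], 11: ["qoogle.com", "example.org"]}   # toy length buckets
heap = [(-sum(len(d) for d in ds), ds) for ds in groups.values()]  # weight = length sum (toy)
heapq.heapify(heap)
while heap:
    _, ds = heapq.heappop(heap)        # highest-priority group first
    for d in ds:
        if any(edit_distance(d, b) <= 2 for b in blacklist):
            print("suspicious:", d)
```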
List-wise matrix factorization algorithm with combination of item popularity
ZHOU Ruihuan, ZHAO Hongyu
Journal of Computer Applications    2018, 38 (7): 1877-1881.   DOI: 10.11772/j.issn.1001-9081.2017123066
To address the inconsistency of the modified Singular Value Decomposition (SVD++) algorithm's rating rule between the model training and prediction stages, and the equal Top-1 ranking probabilities that the List-wise Matrix Factorization (ListRank-MF) algorithm assigns to large numbers of items with the same rating, a list-wise matrix factorization algorithm combined with item popularity was proposed. Firstly, the current item to be rated was removed from the set of items the user had rated in the rating rule. Secondly, item popularity was used to improve the Top-1 ranking probability. Then the stochastic gradient descent algorithm was used to solve the objective function and make Top-N recommendations. Based on the modified SVD++ rating rule, the proposed algorithm was compared on the MovieLens and Netflix datasets with SVD++ algorithms whose objective functions are point-wise and list-wise. Compared with the list-wise SVD++ algorithm, the Normalized Discounted Cumulative Gain (NDCG) of Top-N recommendation accuracy increased by 5%-8% on the MovieLens datasets and about 1% on the Netflix datasets. The experimental results show that the proposed algorithm can effectively improve Top-N recommendation accuracy.
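A minimal sketch of the popularity-weighted Top-1 ranking probability: a plain softmax over ratings gives tied items identical probabilities, while weighting by item popularity separates them; the exact weighting form below is an illustrative assumption, not the paper's formula.

```python
import numpy as np

ratings = np.array([4.0, 4.0, 3.0])     # many items share the same rating
popularity = np.array([120, 15, 60])    # how often each item was rated

def top1_prob(r, pop=None):
    w = np.exp(r - r.max())             # softmax numerator (Top-1 probability)
    if pop is not None:
        w = w * pop / pop.sum()         # popularity breaks rating ties
    return w / w.sum()

print(top1_prob(ratings))               # ties -> identical probabilities
print(top1_prob(ratings, popularity))   # popularity separates the tied items
```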
Disparity map generation technology based on convolutional neural network
ZHU Junpeng, ZHAO Hongli, YANG Haitao
Journal of Computer Applications    2018, 38 (1): 255-259.   DOI: 10.11772/j.issn.1001-9081.2017071659
Focusing on issues such as high cost, long time consumption and background holes in disparity maps in naked-eye 3D applications, a learning and prediction algorithm based on the Convolutional Neural Network (CNN) was introduced. Firstly, the mapping rules of a dataset were learned by training on it. Secondly, a depth map with continuous depth values was obtained by feeding the left view into the CNN for feature extraction and prediction. Finally, the right view was produced by superimposing the stereo pairs obtained from the predicted depth map and the original image. Simulation results show that the pixel-wise reconstruction error of the proposed algorithm is 12.82% and 10.52% lower than that of the 3D horizontal disparity algorithm and the depth image-based rendering algorithm respectively; moreover, the problems of background holes and background adhesion are greatly alleviated. The experimental results show that CNN can improve the image quality of disparity maps.
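A minimal sketch of the final synthesis step: each left-view pixel is shifted horizontally by its predicted disparity to form the right view, with unfilled positions left as holes; the unit disparity map below stands in for the CNN prediction.

```python
import numpy as np

h, w = 4, 8
left = np.arange(h * w, dtype=float).reshape(h, w)   # stand-in left view
disparity = np.ones((h, w), dtype=int)               # stand-in CNN depth prediction

right = np.zeros_like(left)                          # zeros mark the holes
for y in range(h):
    for x in range(w):
        xr = x - disparity[y, x]                     # shift pixel into the right view
        if 0 <= xr < w:
            right[y, xr] = left[y, x]
print(right)
```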
Fingerprint matching indoor localization algorithm based on dynamic time warping distance for Wi-Fi network
ZHANG Mingyang, CHEN Jian, WEN Yingyou, ZHAO Hong, WANG Yugang
Journal of Computer Applications    2017, 37 (6): 1550-1554.   DOI: 10.11772/j.issn.1001-9081.2017.06.1550
Focusing on the low accuracy of regular fingerprint matching indoor localization algorithms for Wi-Fi networks under signal fluctuation or jamming, a fingerprint matching indoor localization algorithm based on Dynamic Time Warping (DTW) similarity was proposed. Firstly, the Wi-Fi signal characteristics in the localization area were converted into time-series fingerprints according to the sampling sequence, and the similarity between the locating data and the sampling data was obtained by computing the DTW distance of the Wi-Fi signal fingerprints. Then, according to the structural characteristics of the sampling area, fingerprint sampling was divided into three basic dynamic-path sampling methods. Finally, the accuracy and completeness of the fingerprint feature information were increased by combining multiple dynamic-path sampling methods, which improved the accuracy and precision of fingerprint matching. Extensive experimental results show that, compared with the instantaneous fingerprint matching indoor localization algorithm, within a location error of 3 m the cumulative error frequency of the proposed algorithm is 10% higher for uniform motion and 13% higher for variable-speed motion within the routing area, and 9% higher for crossed curvilinear motion and 3% higher for S-type curvilinear motion within the open area. The proposed algorithm can effectively improve the accuracy and precision of fingerprint matching in real indoor localization applications.
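A minimal sketch of the DTW distance used to compare a sequence measured while locating with a sampled fingerprint sequence; the RSSI series below are illustrative.

```python
import numpy as np

def dtw(a, b):
    """Classic O(nm) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

sampled = [-52, -55, -60, -63, -61]   # fingerprint sampled along a path (dBm)
located = [-53, -54, -59, -64]        # sequence measured while locating
print(dtw(located, sampled))
```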
Hadoop adaptive task scheduling algorithm based on computation capacity difference between node sets
ZHU Jie, LI Wenrui, WANG Jiangping, ZHAO Hong
Journal of Computer Applications    2016, 36 (4): 918-922.   DOI: 10.11772/j.issn.1001-9081.2016.04.0918
Aiming at the fixed task progress proportions and passive selection of slow tasks in the speculative task execution algorithm for heterogeneous clusters, an adaptive task scheduling algorithm based on the computation capacity difference between node sets was proposed. The computation capacity difference between node sets was quantified so that tasks were scheduled over fast and slow node sets, and dynamic feedback of node and task speeds was used to update the slow node set in time, improving resource utilization and task parallelism. Within the two node sets, task progress proportions were adjusted dynamically to improve the accuracy of slow task identification, and fast nodes were selected to execute backup tasks for slow tasks through substitute execution, improving task execution efficiency. Experimental results showed that, compared with the Longest Approximate Time to End (LATE) algorithm, the proposed algorithm reduced running time by 5.21%, 20.51% and 23.86% respectively on a short job set, a mixed-type job set, and a mixed-type job set with node performance degradation, and significantly reduced the number of initiated backup tasks. The proposed algorithm adapts tasks to node differences and effectively improves overall job execution efficiency while reducing slow backup tasks.
Space coordinate transformation algorithm for built-in accelerometer data of smartphone
ZHAO Hong, GUO Lilu
Journal of Computer Applications    2016, 36 (2): 301-306.   DOI: 10.11772/j.issn.1001-9081.2016.02.0301
Because the coordinate system of a smartphone's built-in acceleration sensor is fixed to the device itself, the collected data drift constantly as the smartphone's posture changes. As a result, even for the same movement process, the acceleration can hardly be kept consistent with a previous one. To solve this problem, the acceleration was mapped from the smartphone coordinate system to the inertial coordinate system by a space coordinate transformation algorithm, ensuring that the sensor data accurately reflect the actual motion state no matter what posture the smartphone is in. To verify the effectiveness of this method, a new method for online acquisition and real-time processing of smartphone sensor data was designed. With this method, the feasibility of the direction cosine algorithm and the quaternion algorithm was tested in rotation experiments, and the performance of the quaternion algorithm was further tested in pedometer experiments. The experimental results show that the direction cosine algorithm fails to achieve complete coordinate transformation due to measurement range limits, while the quaternion algorithm based on rotation vector sensor data achieves full conversion, and the gait recognition rate using the transformed acceleration is over 95%, accurately reflecting the actual motion state.
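A minimal sketch of the quaternion-based mapping: the rotation-vector quaternion (w, x, y, z) is converted to a rotation matrix that takes the phone-frame acceleration into the inertial frame; the quaternion and accelerometer reading below are a worked toy example (a 45° pitch about the x axis).

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Rotation matrix of a unit quaternion (phone frame -> inertial frame)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = (0.924, 0.383, 0.0, 0.0)             # ~45 deg about the x axis (example)
acc_phone = np.array([0.0, 6.94, 6.94])  # accelerometer reading in the phone frame
acc_inertial = quat_to_matrix(*q) @ acc_phone
print(acc_inertial)                      # ~[0, 0, 9.81]: gravity back along z
```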
Novel secure network coding scheme against global wiretapping
HE Keyan, ZHAO Hongyu
Journal of Computer Applications    2016, 36 (12): 3317-3321.   DOI: 10.11772/j.issn.1001-9081.2016.12.3317
Existing network coding schemes against global wiretapping attacks suffer from bandwidth overhead and incur high computational complexity. In order to reduce the bandwidth overhead and enhance the actual coding efficiency, a novel secure network coding scheme against global wiretapping was proposed. For network coding over a field of size q, two permutation sequences of length q were generated from the key, and the source message was mixed and substituted using these permutation sequences so as to resist global wiretapping attacks. The source message was encrypted only at the source node and left unchanged at the intermediate nodes. The proposed scheme has a simple encryption algorithm and low coding complexity, and needs no pre-coding, so it introduces no bandwidth overhead and achieves high actual coding efficiency. The analysis results show that the proposed scheme can efficiently resist not only ciphertext-only attacks but also known-plaintext attacks.
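A toy sketch of the idea (not the paper's exact construction): two length-q permutations are derived from the shared key, one substituting symbol values and the other reordering symbol positions before coding; the key-to-permutation derivation and message length are illustrative assumptions.

```python
import random

q = 16                                    # small field size for the toy example
rng = random.Random("shared-secret-key")  # key-seeded generator (assumption)
subst = list(range(q)); rng.shuffle(subst)   # permutation 1: value substitution
order = list(range(q)); rng.shuffle(order)   # permutation 2: position mixing

def encrypt(msg):                         # msg: q source symbols in [0, q)
    return [subst[msg[i]] for i in order]

def decrypt(ct):
    inv = [0] * q
    for i, s in enumerate(subst):
        inv[s] = i                        # invert the substitution
    out = [0] * q
    for pos, i in enumerate(order):
        out[i] = inv[ct[pos]]             # undo the position mixing
    return out

msg = list(range(q))
assert decrypt(encrypt(msg)) == msg       # decryptable only with the key
print(encrypt(msg))
```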
Resource matching maximum set job scheduling algorithm under Hadoop
ZHU Jie, LI Wenrui, ZHAO Hong, LI Ying
Journal of Computer Applications    2015, 35 (12): 3383-3386.   DOI: 10.11772/j.issn.1001-9081.2015.12.3383
Concerning the inefficient execution of jobs with a high proportion of resources in job scheduling algorithms for the present hierarchical queue structure, a resource matching maximum set algorithm was proposed. The algorithm analyzed job characteristics and introduced completion percentage, waiting time, priority and rescheduling times as urgency factors. Jobs with a high proportion of resources or long waiting times were considered preferentially to improve fairness among jobs. Under limited available resources, double queues were applied to preferentially select jobs with high urgency values, and the maximum job set was selected from job sets with different proportions of resources so as to achieve scheduling balance. Compared with the Max-min fairness algorithm, the proposed algorithm decreases average waiting time and improves resource utilization. The experimental results show that, with the proposed algorithm, the running time of a same-type job set consisting of jobs with different proportions of resources is reduced by 18.73%, and that of jobs with a high proportion of resources by 27.26%; for a mixed-type job set the corresponding reductions are 22.36% and 30.28%. The results indicate that the proposed algorithm can effectively reduce the waiting time of jobs with a high proportion of resources and improve overall job execution efficiency.
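A minimal sketch of the urgency-driven selection: urgency combines completion percentage, waiting time, priority and rescheduling count (the weights are illustrative assumptions), and the most urgent jobs that still fit the available resources are chosen first via a max-heap.

```python
import heapq

def urgency(job):
    # Illustrative weighting of the four urgency factors named above.
    return (0.3 * job["progress"] + 0.3 * job["wait"] / 100
            + 0.2 * job["priority"] + 0.2 * job["reschedules"])

jobs = [
    {"id": 1, "progress": 0.1, "wait": 90, "priority": 2, "reschedules": 1, "demand": 6},
    {"id": 2, "progress": 0.8, "wait": 10, "priority": 1, "reschedules": 0, "demand": 2},
    {"id": 3, "progress": 0.2, "wait": 60, "priority": 3, "reschedules": 2, "demand": 5},
]
available = 8
heap = [(-urgency(j), j["id"], j) for j in jobs]   # max-heap on urgency
heapq.heapify(heap)
chosen = []
while heap and available > 0:
    _, _, j = heapq.heappop(heap)
    if j["demand"] <= available:        # keep the selected set within resources
        chosen.append(j["id"])
        available -= j["demand"]
print("scheduled:", chosen)
```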
Multi-stream based Tandem feature method for mispronunciation detection
YUAN Hua, CAI Meng, ZHAO Hongjun, ZHANG Weiqiang, LIU Jia
Journal of Computer Applications    2014, 34 (6): 1694-1698.   DOI: 10.11772/j.issn.1001-9081.2014.06.1694
To deal with the shortage of labeled pronunciation data in mispronunciation detection, other data were used to improve the discriminability of features within the framework of a Tandem system. Taking Chinese learners of English as the object, unlabeled data, native Mandarin data and native English data, which are relatively easy to obtain, were selected as the assisting data. The experiments show that these types of data can effectively improve the system performance, with the unlabeled data performing best. The effects on system performance of different frame-context lengths, of shallow versus deep networks (typified by the Multi-Layer Perceptron (MLP) and the Deep Neural Network (DNN)), and of different Tandem feature structures were also discussed. Finally, the strategy of merging multiple data streams was used to further improve system performance, and the best performance was achieved by combining the DNN-based unlabeled data stream with the native English stream. Compared with the baseline system, the recognition accuracy is increased by 7.96%, and the diagnostic accuracy of mispronunciation types by 14.71%.
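A minimal sketch of the Tandem feature construction: frame-level MFCCs are concatenated with log posteriors from a network trained on the auxiliary data; the random layer below stands in for that trained MLP/DNN, and all shapes are toy assumptions.

```python
import numpy as np

frames = np.random.randn(200, 39)              # 200 frames of 39-dim MFCC (stand-in)
W = np.random.randn(39, 40); b = np.zeros(40)  # stand-in for a trained MLP/DNN layer
logits = frames @ W + b
post = np.exp(logits - logits.max(axis=1, keepdims=True))
post /= post.sum(axis=1, keepdims=True)        # per-frame phone posteriors
tandem = np.hstack([frames, np.log(post + 1e-8)])
print(tandem.shape)                            # (200, 79) tandem features
```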
Three-queue job scheduling algorithm based on Hadoop
ZHU Jie, ZHAO Hong, LI Wenrui
Journal of Computer Applications    2014, 34 (11): 3227-3230.   DOI: 10.11772/j.issn.1001-9081.2014.11.3227
Single-queue job scheduling algorithms in homogeneous Hadoop clusters cause short jobs to wait and lower resource utilization; multi-queue scheduling algorithms solve the problems of unfairness and low execution efficiency, but most of them require manually set parameters, let queues contend with each other for resources, and are more complex. To resolve these problems, a three-queue scheduling algorithm was proposed. The algorithm used job classification, dynamic priority adjustment, a shared resource pool and job preemption to realize fairness, simplify the scheduling flow of normal jobs and improve concurrency. Comparison experiments with the First In First Out (FIFO) algorithm were conducted under three situations: a high percentage of short jobs; similar percentages of all job types; and mostly normal jobs with occasional long and short jobs. The proposed algorithm reduced the running time of jobs. The experimental results show that the gain in execution efficiency is not obvious when most jobs are short; however, when the job types are balanced, the performance improvement is remarkable. This is consistent with the design rules of the algorithm: prioritize short jobs, simplify the scheduling flow of normal jobs and take long jobs into account, which improves scheduling performance.
Fundamental matrix estimation based on three-view constraint
LI Cong, ZHAO Hongrui, FU Gang
Journal of Computer Applications    2014, 34 (10): 2930-2933.   DOI: 10.11772/j.issn.1001-9081.2014.10.2930
The matching points cant be decided absolutely by its residuals just relying on epipolar geometry residuals, which influences the selection of optimum inlier set. So a novel fundamental matrix calculation algorithm was proposed based on three-view constraint. Firstly, the initial fundamental matrices were estimated by traditional RANdom SAmple Consensus (RANSAC) method. Then matching points existed in every view were selected, and the epipolar lines of points not in the common view were calculated in fundamental matrix estimation. Distances between the points in common view and the intersection of its matching points epipolar lines were calculated. Under judgment based on the distances, a new optimum inlier set was obtained. Finally, the M-Estimators (ME) algorithm was used to calculate the fundamental matrices based on the new optimum inlier set. Through a mass of experiments in case of mismatching and noise, the results indicate that the algorithm can effectively reduce the influence of mismatch and noise on accurate calculation of fundamental matrices. It gets better accuracy than traditional robust algorithms by limiting distance between point and epipolar line to about 0.3 pixels, in addition, an improvement in stability. So, it can be widely applied to fields such as 3D reconstruction based on image sequence and photogrammetry.
Automation system for computing geographic sunshine hours based on GIS
ZHAO Hongwei, LIAO Shunbao
Journal of Computer Applications    2013, 33 (04): 1165-1168.   DOI: 10.3724/SP.J.1087.2013.01165
Because the model for computing geographic sunshine hours based on a Digital Elevation Model (DEM) is complex and time-consuming, it is difficult to achieve both high resolution and wide coverage when geographic sunshine hours are studied at the national scale in China. Some scholars have proposed methods for calculating high-resolution geographic sunshine hours nationwide, but they did not specify the computing platform or calculation methods. In this study, the authors developed an automated system for calculating geographic sunshine hours based on existing models and DEM data, in which the curvature of the earth was corrected. The system was developed on the VS2008 platform with ArcGIS Engine component technology, and it can calculate geographic sunshine hours at multiple spatial scales and resolutions. Raster data of geographic sunshine hours are generated as soon as the user inputs DEM data with the geographic coordinates of the region and the specific date.
Mixed collaborative recommendation algorithm based on factor analysis of user and item
ZHAO Hong-xia, WANG Xin-hai, YANG Jiao-ping
Journal of Computer Applications    2011, 31 (05): 1382-1386.   DOI: 10.3724/SP.J.1087.2011.01382
In order to solve the problems of data overload and data sparsity in the Collaborative Filtering Recommendation (CFR) algorithm, factor analysis was adopted to reduce the dimensionality of the data, and regression analysis was used to forecast the ratings to be evaluated. Together, these two methods not only reduce the amount of data but also retain as much information as possible. The algorithm works as follows: first, the dimensions of the user and item vectors are reduced by factor analysis, yielding representative user factors and item factors. Then two regression models are established, with the target user and the evaluated item as the dependent variables respectively, and the item factors and user factors as the independent variables respectively, producing two predicted values for the evaluated item. Finally, the final prediction is obtained as a weighted combination of the two. Simulation experiments demonstrate that the algorithm is effective and feasible, and the results show that its accuracy is somewhat higher than that of the item-based collaborative filtering recommendation algorithm.
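A minimal scikit-learn sketch of the hybrid idea: the rating matrix is factor-analyzed along both axes, the target user's ratings are regressed on the item factors and the target item's ratings on the user factors, and the two predictions are averaged; the data, sizes and equal weights are toy assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(50, 40)).astype(float)   # users x items rating matrix (toy)
u, i = 0, 5                                           # predict R[u, i]

item_factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(R.T)  # per item
user_factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(R)    # per user

mask_i = np.arange(R.shape[1]) != i   # the target user's other items
pred_u = LinearRegression().fit(item_factors[mask_i], R[u, mask_i]).predict(item_factors[[i]])[0]
mask_u = np.arange(R.shape[0]) != u   # the target item's other users
pred_i = LinearRegression().fit(user_factors[mask_u], R[mask_u, i]).predict(user_factors[[u]])[0]
print(0.5 * pred_u + 0.5 * pred_i)    # weighted final prediction (equal weights here)
```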
Study on semantic similarity algorithm based on ontology
Yong-jin ZHAO, Hong-yuan ZHENG, Qiu-lin ZHENG
Journal of Computer Applications    2009, 29 (11): 3074-3076.  
Research on concept similarity is very important in knowledge representation and information retrieval. After studying the current classic distance-based semantic similarity algorithms, a more standardized similarity algorithm was proposed by analyzing other key factors of semantic concepts and adding the impact of node density and concept attributes on semantic similarity. Experimental analysis shows that the similarity values of the improved algorithm are more reasonable; compared with human subjective judgements, under a suitable setting of the mediation parameter, the agreement of the improved algorithm is about 15% higher than that of the original algorithm.
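A toy sketch in the spirit of the improvement described: a classic distance-based similarity is modulated by node depth and density, so deeper and denser concepts score higher; the adjustment form and parameter values are illustrative assumptions, not the paper's formula.

```python
def similarity(dist, depth, density, alpha=1.0, beta=0.3, gamma=0.3):
    """Distance-based similarity adjusted by node depth and density (toy)."""
    base = alpha / (alpha + dist)                      # classic distance-based term
    adjust = (1 + beta * depth / (depth + 1)) * (1 + gamma * density)
    return base * adjust / ((1 + beta) * (1 + gamma))  # normalized into (0, base]

print(similarity(dist=2, depth=3, density=0.8))
print(similarity(dist=2, depth=1, density=0.2))  # shallower/sparser -> lower score
```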