Hybrid ant colony optimization algorithm with brain storm optimization
LI Mengmeng, QIN Wei, LIU Yi, DIAO Xingchun
Journal of Computer Applications    2021, 41 (8): 2412-2417.   DOI: 10.11772/j.issn.1001-9081.2020101562
Feature selection can effectively improve the performance of data classification. To further improve the ability of Ant Colony Optimization (ACO) to solve feature selection problems, a hybrid Ant colony optimization with Brain storm Optimization (ABO) algorithm was proposed. In the algorithm, an information communication archive was used to maintain historical high-quality solutions, and a longest-time-first method based on a relaxation factor was adopted to update the archive dynamically. When the global optimal solution of ACO had not been updated for several iterations, a route-idea transformation operator based on the Fuch chaotic map was used to transform the route solutions in the archive into idea solutions. With the obtained solutions as the initial population, Brain Storm Optimization (BSO) was adopted to search for better solutions in a wider space. Experiments on six typical binary datasets analyzed the parameter sensitivity of the proposed algorithm and compared it with three typical evolutionary algorithms: the Hybrid Firefly and Particle Swarm Optimization (HFPSO) algorithm, Particle Swarm Optimization and Gravitational Search Algorithm (PSOGSA), and Genetic Algorithm (GA). Experimental results show that the proposed algorithm improves classification accuracy by at least 2.88% to 5.35% and F1-measure by at least 0.02 to 0.05 over the comparison algorithms, verifying its effectiveness and superiority.
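The abstract reports accuracy and F1 but not the evaluator behind the search; a minimal sketch of a wrapper-style fitness function such a hybrid could maximize, assuming a k-NN classifier under cross-validation (an illustrative choice, not the paper's stated setup):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def feature_subset_fitness(mask, X, y):
    """Wrapper fitness for one binary feature-selection solution: the mean
    cross-validated accuracy of a k-NN classifier on the selected columns.
    Both the ACO routes and the BSO ideas would be scored this way."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # an empty subset is infeasible
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()
```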

Indoor intrusion detection based on direction-of-arrival estimation algorithm for single snapshot
REN Xiaokui, LIU Pengfei, TAO Zhiyong, LIU Ying, BAI Lichun
Journal of Computer Applications    2021, 41 (4): 1153-1159.   DOI: 10.11772/j.issn.1001-9081.2020071030
Intrusion detection methods based on Channel State Information (CSI) are vulnerable to environment layout and noise interference, resulting in low detection rates. To solve this problem, an indoor intrusion detection method based on single-snapshot Direction-Of-Arrival (DOA) estimation was proposed. Firstly, the CSI data received by the antenna array were decomposed mathematically by exploiting the spatially selective fading of wireless signals, transforming the unknown DOA estimation problem into an over-complete representation problem. Secondly, the sparsity of the signal was constrained by the l1-norm, and accurate DOA information was obtained by solving the sparse regularized optimization problem, providing reliable feature parameters for the final detection at the data level. Finally, the Indoor Safety Index Number (ISIN) was evaluated according to the DOA changes before and after each moment, and indoor intrusion detection was thereby realized. The method was verified in real indoor scenes and compared with traditional data preprocessing methods based on principal component analysis and discrete wavelet transform. Experimental results show that the proposed method can accurately detect intrusion in different complex indoor environments, with an average detection rate of more than 98%, and is more robust than the comparison algorithms.
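The abstract specifies l1-regularized sparse recovery over an overcomplete steering dictionary but not the solver; a minimal single-snapshot sketch, assuming a uniform linear array and a plain ISTA solver (solver choice and parameters are illustrative):

```python
import numpy as np

def doa_l1_single_snapshot(y, antenna_pos, wavelength, grid_deg,
                           lam=0.1, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA, where A is an
    overcomplete steering dictionary over a grid of candidate angles and y
    is the single-snapshot array output."""
    theta = np.deg2rad(np.asarray(grid_deg))
    # One steering column per candidate direction.
    A = np.exp(-2j * np.pi / wavelength *
               np.outer(antenna_pos, np.sin(theta)))
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - A.conj().T @ (A @ x - y) / L      # gradient step
        mag = np.abs(g)
        # Complex soft-thresholding keeps only dominant directions.
        x = np.where(mag > lam / L, (1 - lam / (L * mag + 1e-12)) * g, 0)
    return np.abs(x)                              # angular spectrum on grid_deg
```

Peaks of the returned spectrum give the DOA estimates; comparing the spectra of consecutive moments yields the intrusion indicator.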
Stylistic multiple features mining based on attention network
WU Haiyan, LIU Ying
Journal of Computer Applications    2020, 40 (8): 2171-2181.   DOI: 10.11772/j.issn.1001-9081.2019122204
To address the difficulty of mining the features of different registers in large-scale corpora, which requires extensive professional knowledge and manpower, a method to automatically mine the features distinguishing different registers was proposed. First, a register was represented by words, parts-of-speech, punctuation, their bigrams, syntactic structures, and multiple combined features. Then, a combination of an attention mechanism and a Multi-Layer Perceptron (MLP) (i.e., an attention network) was used to classify the registers into novel, news, and textbook, and the important features helping to distinguish the registers were extracted automatically in this process. Finally, further analysis of these features yielded the characteristics of different registers and some linguistic conclusions. Experimental results show that novel, news, and textbook differ significantly in words, topic words, word dependencies, parts-of-speech, punctuation, and syntactic structures, which implies that people naturally vary in their use of words, parts-of-speech, punctuation, and syntactic structures according to the objects, purposes, contents, and environments of communication.
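A minimal sketch of the attention-network classifier described above, written in PyTorch (embedding size, depth, and the pooling form are assumptions; the paper only specifies attention combined with an MLP):

```python
import torch
import torch.nn as nn

class AttentionMLP(nn.Module):
    """Attention pooling over feature embeddings followed by an MLP that
    classifies a document as novel, news, or textbook. The learned
    attention weights expose which features distinguish the registers."""
    def __init__(self, vocab_size, emb_dim=64, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.att = nn.Linear(emb_dim, 1)            # one score per feature
        self.mlp = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, ids):                         # ids: (batch, seq)
        e = self.emb(ids)                           # (batch, seq, emb)
        w = torch.softmax(self.att(e).squeeze(-1), dim=-1)
        doc = (w.unsqueeze(-1) * e).sum(dim=1)      # attention-weighted pooling
        return self.mlp(doc), w                     # logits + feature weights
```

Ranking features by their average attention weight per class is one way to read off the register-distinguishing features the abstract mentions.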
Low-resolution image recognition algorithm with edge learning
LIU Ying, LIU Yuxia, BI Ping
Journal of Computer Applications    2020, 40 (7): 2046-2052.   DOI: 10.11772/j.issn.1001-9081.2019112041
Due to the influence of lighting conditions, shooting angles, transmission equipment and the surrounding environment, target objects in criminal investigation video images often have low resolution and are difficult to recognize. To improve the recognition rate of low-resolution images, a low-resolution image recognition algorithm based on adversarial edge learning was proposed on the basis of the classic LeNet-5 recognition network. Firstly, an adversarial edge learning network was used to generate a fantasy edge of the low-resolution image that was similar to the edge of the corresponding high-resolution image. Secondly, this edge information was fused into the recognition network as prior information for recognizing the low-resolution image. Experiments were performed on three datasets: MNIST, EMNIST and Fashion-MNIST. The results show that fusing the fantasy edge of a low-resolution image into the recognition network can effectively increase the recognition rate of low-resolution images.
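A minimal sketch of one plausible fusion point, assuming the fantasy edge map is concatenated with the image as a second input channel of a LeNet-style recognizer (the paper states only that the edge is fused into the recognition network as a prior):

```python
import torch
import torch.nn as nn

class EdgeFusionLeNet(nn.Module):
    """LeNet-5-style recognizer taking a low-resolution image plus its
    hallucinated edge map as a two-channel input."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 6, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, n_classes))

    def forward(self, img, edge):        # both shaped (batch, 1, 28, 28)
        x = torch.cat([img, edge], dim=1)
        return self.classifier(self.features(x))
```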
High dynamic range imaging algorithm based on luminance partition fuzzy fusion
LIU Ying, WANG Fengwei, LIU Weihua, AI Da, LI Yun, YANG Fanchao
Journal of Computer Applications    2020, 40 (1): 233-238.   DOI: 10.11772/j.issn.1001-9081.2019061032
To solve the color distortion and loss of local detail caused by histogram expansion when generating a High Dynamic Range (HDR) image from a single image, an HDR imaging algorithm based on luminance partition fuzzy fusion was proposed. Firstly, the luminance component of a normally exposed color image was extracted, and the luminance was divided into two intervals according to a luminance threshold. Then, the luminance ranges of the two intervals were extended by an improved exponential function, so that the luminance of the low-luminance area was increased, the luminance of the high-luminance area was decreased, and both ranges were expanded, increasing the overall contrast of the image while preserving color and detail information. Finally, the extended image and the original normally exposed image were fused into a high dynamic range image based on fuzzy logic. The proposed algorithm was analyzed both subjectively and objectively. Experimental results show that it can effectively expand the luminance range of the image and keep the color and detail information of the scene, and the generated image has a better visual effect.
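The paper's "improved exponential function" is not given in the abstract; a sketch of one assumed form with the stated behavior (dark interval lifted, bright interval lowered, continuity at the threshold):

```python
import numpy as np

def partition_stretch(lum, thr=0.5, gamma=4.0):
    """Two-interval luminance mapping for lum in [0, 1]: a concave
    exponential curve raises values below thr, a mirrored curve lowers
    values above it; the two branches meet at thr."""
    out = np.empty_like(lum, dtype=float)
    low, high = lum < thr, lum >= thr
    scale = 1 - np.exp(-gamma)
    out[low] = thr * (1 - np.exp(-gamma * lum[low] / thr)) / scale
    out[high] = 1 - (1 - thr) * (
        1 - np.exp(-gamma * (1 - lum[high]) / (1 - thr))) / scale
    return out
```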
Welding ball edge bubble segmentation for ball grid array based on fully convolutional network and K-means clustering
ZHAO Ruixiang, HOU Honghua, ZHANG Pengcheng, LIU Yi, TIAN Zhu, GUI Zhiguo
Journal of Computer Applications    2019, 39 (9): 2580-2585.   DOI: 10.11772/j.issn.1001-9081.2019030523
To address inaccurate segmentation caused by edge bubbles in welding balls and by background grayscale similarity arising from diverse image interference factors in Ball Grid Array (BGA) bubble detection, a welding ball bubble segmentation method based on a Fully Convolutional Network (FCN) and K-means clustering was proposed. Firstly, an FCN was constructed and trained on a BGA label dataset to obtain an appropriate network model, and the rough segmentation result was obtained by predicting and processing the BGA image to be detected. Secondly, the welding ball region mapping was extracted, bubble region identification was improved by homomorphic filtering, and the image was then subdivided by K-means clustering to obtain the final segmentation result. Finally, the welding balls and bubble regions in the original image were labeled and identified. Comparing the proposed algorithm with traditional BGA bubble segmentation algorithms, experimental results show that it can accurately segment the edge bubbles of complex BGA welding balls, and the segmentation results match the true contours closely with higher accuracy.
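A minimal sketch of the refinement stage, assuming a two-class K-means on pixel intensity inside the FCN's rough welding-ball mask (the cluster-to-bubble assignment by brightness is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_with_kmeans(gray_roi, rough_mask):
    """Subdivide the FCN rough segmentation by clustering the masked
    pixels' intensities into two groups (ball body vs. bubble)."""
    pix = gray_roi[rough_mask].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(pix)
    # Assumption: voids absorb fewer X-rays, so the brighter cluster
    # is taken as the bubble region.
    bubble = np.argmax([pix[labels == i].mean() for i in range(2)])
    refined = np.zeros_like(rough_mask)
    refined[rough_mask] = labels == bubble
    return refined
```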

Hyperspectral image unmixing algorithm based on spectral distance clustering
LIU Ying, LIANG Nannan, LI Daxiang, YANG Fanchao
Journal of Computer Applications    2019, 39 (9): 2541-2546.   DOI: 10.11772/j.issn.1001-9081.2019020351
To address the effect of noise on unmixing precision and the insufficient utilization of spectral and spatial information in actual Hyperspectral Unmixing (HU), an improved unmixing algorithm based on spectral distance clustering for group-sparse nonnegative matrix factorization was proposed. Firstly, the HYperspectral Signal identification by Minimum Error (Hysime) algorithm was introduced to handle the large amount of noise in actual hyperspectral images, estimating the signal and noise matrices from the eigenvalues. Then, a simple clustering algorithm based on spectral distance was proposed to merge adjacent pixels whose spectral reflectance distances across bands fall below a given value, generating the spatial group structure. Finally, sparse nonnegative matrix factorization was performed on the basis of the generated group structure. Experimental analysis shows that, on both simulated and real data, the algorithm produces smaller Root-Mean-Square Error (RMSE) and Spectral Angle Distance (SAD) than traditional algorithms and achieves a better unmixing effect than other advanced algorithms.
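A minimal sketch of the spectral-distance grouping step, assuming 4-neighbor merging with Euclidean spectral distance and a union-find structure (the abstract fixes the criterion, not the implementation):

```python
import numpy as np

def spectral_distance_groups(cube, tau):
    """Merge 4-neighboring pixels of an (H, W, bands) cube whose spectral
    reflectance distance is below tau; returns an (H, W) group-label map
    used as the group structure for the sparse NMF."""
    h, w, _ = cube.shape
    parent = np.arange(h * w)

    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and \
                   np.linalg.norm(cube[y, x] - cube[ny, nx]) < tau:
                    parent[find(y * w + x)] = find(ny * w + nx)

    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```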

Functional module mining in uncertain protein-protein interaction network based on fuzzy spectral clustering
MAO Yimin, LIU Yinping, LIANG Tian, MAO Dinghui
Journal of Computer Applications    2019, 39 (4): 1032-1040.   DOI: 10.11772/j.issn.1001-9081.2018091880
Aiming at the low accuracy, low running efficiency, and susceptibility to false positives of Protein-Protein Interaction (PPI) network functional module mining based on spectral clustering and Fuzzy C-Means (FCM) clustering, a method for Functional module mining in uncertain PPI networks based on Fuzzy Spectral Clustering (FSC-FM) was proposed. Firstly, to overcome the effect of false positives, an uncertain PPI network was constructed in which every interaction was endowed with an existence probability measure based on the edge aggregation coefficient. Secondly, based on the edge aggregation coefficient and flow distance, the similarity calculation of spectral clustering was modified using the Flow distance of Edge Clustering coefficient (FEC) strategy, overcoming the sensitivity of spectral clustering to the scaling parameter; the spectral clustering algorithm was then used to preprocess the uncertain PPI network data, reducing the dimensionality of the data and improving clustering accuracy. Thirdly, a Density-based Probability Center Selection (DPCS) strategy was designed to address the sensitivity of FCM to the initial cluster centers and cluster numbers, and the processed PPI data were clustered with FCM to improve running efficiency and sensitivity. Finally, the mined functional modules were filtered by an Edge-Expected Density (EED) strategy. Experiments on the yeast DIP dataset show that, compared with the Detecting protein Complexes based on Uncertain graph model (DCU) algorithm, FSC-FM increases F-measure by 27.92% and running efficiency by 27.92%; compared with CDUN (an uncertain-model-based approach for identifying dynamic protein complexes in uncertain PPI networks), the Evolutionary Algorithm (EA) and the Medical Gene or Protein Prediction Algorithm (MGPPA), FSC-FM also achieves higher F-measure and running efficiency. The experimental results show that FSC-FM is suitable for functional module mining in uncertain PPI networks.
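A minimal sketch of the first step, assuming the common edge-clustering-coefficient definition as the existence probability (the paper's exact formula may differ):

```python
import networkx as nx

def edge_clustering_probabilities(g):
    """Attach an existence probability to every PPI edge using the edge
    clustering coefficient ECC(u, v) = (|common neighbors| + 1) /
    min(deg(u) - 1, deg(v) - 1), clipped to [0, 1]."""
    for u, v in g.edges():
        common = sum(1 for _ in nx.common_neighbors(g, u, v))
        denom = min(g.degree(u) - 1, g.degree(v) - 1)
        g[u][v]["p"] = min(1.0, (common + 1) / denom) if denom > 0 else 1.0
    return g
```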
New simplified model of discounted {0-1} knapsack problem and solution by genetic algorithm
YANG Yang, PAN Dazhi, LIU Yi, TAN Dailun
Journal of Computer Applications    2019, 39 (3): 656-662.   DOI: 10.11772/j.issn.1001-9081.2018071580
The current Discounted {0-1} Knapsack Problem (D{0-1}KP) model treats the discount relationship as a new individual, so a repair method must be adopted during solving to repair the individual coding, leaving the model with few solution methods. To address this, the binary coding expression in the model was changed and an expression method that keeps the discount relationship out of the individual code was proposed. Firstly, a discount relationship was established if and only if every involved individual encoding value was one (i.e., their product was one); under this setting, a Simplified Discounted {0-1} Knapsack Problem (SD{0-1}KP) model was established. Then, an improved genetic algorithm, FG (First Genetic algorithm), was proposed for the SD{0-1}KP model based on the elitist reservation strategy (EGA) and the greedy strategy (GRE). Finally, combining the penalty function method, a high-precision penalty-function-based algorithm, SG (Second Genetic algorithm), was proposed for SD{0-1}KP. The results show that the SD{0-1}KP model can fully cover the problem domain of D{0-1}KP. Compared with FirEGA (First Elitist reservation strategy Genetic Algorithm), the two proposed algorithms have obvious advantages in solving speed, and SG introduces the penalty function method for the first time, enriching the solution methods for this problem.
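A minimal sketch of an SG-style penalty fitness under the SD{0-1}KP setting described above; the discount is assumed here to reduce the combined weight when both paired items are selected, and the penalty coefficient is an illustrative value:

```python
def penalty_fitness(x, values, weights, capacity, discounts, penalty=1e6):
    """Fitness of a plain binary solution x: total value minus a penalty
    proportional to the capacity violation. Each (i, j, d) in discounts
    takes effect only when x[i] = x[j] = 1, i.e. when the product of the
    paired encoding values is one."""
    value = sum(v for v, xi in zip(values, x) if xi)
    weight = sum(w for w, xi in zip(weights, x) if xi)
    for i, j, d in discounts:
        if x[i] and x[j]:
            weight -= d                    # discount applies to the pair
    return value - penalty * max(0, weight - capacity)
```

Because infeasible solutions are merely penalized rather than repaired, any standard GA operator can act directly on x, which is the point of the simplified model.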
Industrial X-ray image enhancement algorithm based on gradient field
ZHOU Chong, LIU Huan, ZHAO Ailing, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2019, 39 (10): 3088-3092.   DOI: 10.11772/j.issn.1001-9081.2019040694
In X-ray inspection of components with uneven thickness, low or uneven contrast and low illumination often occur, making some details of the components in the obtained images difficult to observe and analyze. To solve this problem, an X-ray image enhancement algorithm based on the gradient field was proposed. The algorithm takes gradient-field enhancement as its core and consists of two steps. Firstly, an algorithm based on logarithmic transformation was proposed to compress the gray range of the image, remove redundant gray information, and improve contrast. Then, an algorithm based on the gradient field was proposed to enhance image details, improve local contrast, and raise image quality, so that component details could be clearly displayed on the detection screen. Experiments on a group of X-ray images of components with uneven thickness compared the algorithm with Contrast Limited Adaptive Histogram Equalization (CLAHE), homomorphic filtering, and other methods. Experimental results show that the proposed algorithm produces a more obvious enhancement effect and better displays the detailed information of the components. Quantitative evaluation by average gradient and No-Reference Structural Sharpness (NRSS) texture analysis further demonstrates its effectiveness.
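A minimal sketch of the first step, assuming a normalized logarithmic mapping (the abstract does not give the transform's constants):

```python
import numpy as np

def log_compress(img):
    """Compress the gray range of a raw X-ray image with a logarithmic
    transformation, lifting dark regions and discarding redundant dynamic
    range before the gradient-field enhancement stage."""
    img = img.astype(float)
    span = img.max() - img.min()
    return np.log1p(img - img.min()) / np.log1p(span if span > 0 else 1.0)
```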
Research on factors affecting quality of mobile application crowdsourced testing
CHENG Jing, GE Luqi, ZHANG Tao, LIU Ying, ZHANG Yifei
Journal of Computer Applications    2018, 38 (9): 2626-2630.   DOI: 10.11772/j.issn.1001-9081.2018030575
To address the complex and diverse factors influencing crowdsourced testing and the difficulty of assessing test quality, a method for analyzing quality-influencing factors based on the Spearman correlation coefficient was proposed. Firstly, potential quality-influencing factors were obtained through analysis of test platforms, tasks, and testers. Secondly, the Spearman correlation coefficient was used to calculate the correlation between each potential factor and test quality and to screen out the key factors. Finally, multiple stepwise regression was used to establish a linear evaluation relationship between the key factors and test quality. Experimental results show that, compared with traditional expert evaluation, the proposed method keeps the evaluation error fluctuation smaller when facing a large number of test tasks, and can therefore accurately screen out the key factors influencing the quality of mobile application crowdsourced testing.
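A minimal sketch of the screening step, assuming a significance level and a strength cutoff (the paper's exact criteria are not stated in the abstract):

```python
from scipy.stats import spearmanr

def screen_key_factors(factors, quality, rho_min=0.5, alpha=0.05):
    """Keep the factors whose Spearman correlation with test quality is
    significant and sufficiently strong. factors maps a factor name to
    its per-task measurements; quality is the per-task quality score."""
    key = {}
    for name, values in factors.items():
        rho, p = spearmanr(values, quality)
        if p < alpha and abs(rho) >= rho_min:
            key[name] = rho                  # sign shows direction of effect
    return key
```

The surviving factors would then enter the multiple stepwise regression as candidate predictors.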
Stationary wavelet domain deep residual convolutional neural network for low-dose computed tomography image estimation
GAO Jingzhi, LIU Yi, BAI Xu, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2018, 38 (12): 3584-3590.   DOI: 10.11772/j.issn.1001-9081.2018040833
Concerning the large amount of noise in Low-Dose Computed Tomography (LDCT) reconstructed images, a deep residual convolutional neural network in the stationary wavelet domain (SWT-CNN) was proposed to estimate a Normal-Dose Computed Tomography (NDCT) image from an LDCT image. In the training phase, the high-frequency coefficients of LDCT images after three-level Stationary Wavelet Transform (SWT) decomposition were taken as inputs; the residual coefficients, obtained by subtracting the high-frequency coefficients of the NDCT images from those of the LDCT images, were taken as labels; and the mapping between inputs and labels was learned by a deep CNN. In the testing phase, the high-frequency coefficients of the NDCT image were predicted from those of the LDCT image using this mapping, and the predicted NDCT image was reconstructed by the inverse Stationary Wavelet Transform (ISWT). The dataset consisted of 50 pairs of 512×512 normal-dose chest and abdominal scan slices of the same phantom and images reconstructed after adding noise in the projection domain, of which 45 pairs formed the training set and the remaining 5 pairs the test set. The SWT-CNN model was compared with state-of-the-art methods including Non-Local Means (NLM), the K-Singular Value Decomposition (K-SVD) algorithm, Block-Matching and 3D filtering (BM3D), and image-domain CNN (Image-CNN). Experimental results show that the NDCT images predicted by SWT-CNN have higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) and smaller Root Mean Square Error (RMSE) than those of the other algorithms. The proposed model is feasible and effective in improving the quality of low-dose CT images.
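A minimal sketch of how the training pairs could be assembled, using PyWavelets for the three-level SWT (the wavelet basis is an assumption; the abstract does not name it):

```python
import numpy as np
import pywt

def swt_highfreq_pairs(ldct, ndct, wavelet="db1", level=3):
    """Return (input, label) for one slice pair: inputs are the LDCT
    high-frequency SWT coefficients, labels are the residuals obtained by
    subtracting the NDCT high-frequency coefficients from them."""
    def highs(img):
        coeffs = pywt.swt2(img, wavelet, level=level)
        # Each level yields (cA, (cH, cV, cD)); keep the detail bands.
        return np.stack([band for _, details in coeffs for band in details])
    x = highs(ldct)
    return x, x - highs(ndct)
```

At test time, the CNN's predicted residual is subtracted from the LDCT coefficients before the inverse SWT reassembles the image.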
Segmentation of cervical nuclei based on fully convolutional network and conditional random field
LIU Yiming, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2018, 38 (11): 3348-3354.   DOI: 10.11772/j.issn.1001-9081.2018050988
Aiming at inaccurate segmentation of cervical nuclei caused by their complex and diverse shapes in cervical cancer screening, a method combining a Fully Convolutional Network (FCN) and a dense Conditional Random Field (CRF) was proposed for nucleus segmentation. Firstly, a Tiny-FCN (T-FCN) was built according to the characteristics of the Herlev dataset; using pixel-level prior information of the nucleus region, multi-level features were learned autonomously to obtain a rough segmentation of the cell nucleus. Then, small mis-segmented regions were eliminated and the segmentation was refined by minimizing the energy function of the dense CRF, which incorporates the label, intensity, and position information of all pixels in a cell image. Experimental results on the Herlev Pap smear dataset show that the precision, recall, and Zijdenbos Similarity Index (ZSI) are all above 0.9, indicating that the nucleus boundaries obtained by the proposed method match the ground truth excellently and the segmentation is accurate. In contrast to traditional methods, whose segmentation indexes for abnormal nuclei are lower than those for normal nuclei, the proposed method segments abnormal nuclei even better than normal nuclei.
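A minimal sketch of the refinement stage, assuming the widely used pydensecrf package as the dense-CRF implementation (the paper does not name a library; the pairwise parameters are illustrative):

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(rgb_image, fcn_prob, iters=5):
    """Refine the T-FCN rough mask: fcn_prob is the (2, H, W) softmax map,
    rgb_image the uint8 cell image. Gaussian and bilateral pairwise terms
    supply the position and intensity information."""
    h, w = rgb_image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(fcn_prob))
    d.addPairwiseGaussian(sxy=3, compat=3)               # positional smoothness
    d.addPairwiseBilateral(sxy=60, srgb=10, compat=5,    # intensity-aware term
                           rgbim=np.ascontiguousarray(rgb_image))
    q = d.inference(iters)
    return np.argmax(q, axis=0).reshape(h, w)            # refined nucleus mask
```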
Downlink beamforming design based user scheduling for MIMO-NOMA systems
LIU Yi, HU Zhe, JING Xiaorong
Journal of Computer Applications    2018, 38 (11): 3282-3286.   DOI: 10.11772/j.issn.1001-9081.2018040876
Focusing on the large inter-user interference in Multiple-Input Multiple-Output Non-Orthogonal Multiple Access (MIMO-NOMA) systems, an algorithm merging user scheduling and BeamForming (BF) was proposed. Firstly, during user scheduling, to take both intra-cluster and inter-cluster user interference into account, all user groupings were initially sparsified by an L1-norm regularization method according to the channel differences among users; with respect to channel correlation, two users with large channel correlation were placed in the same cluster. Secondly, Fractional Transmit Power Control (FTPC) was used to allocate power among intra-cluster users. Finally, an objective function based on the sum-rate maximization criterion was constructed and solved by the Successive Convex Approximation (SCA) method to obtain the BF matrix. Compared with Orthogonal Multiple Access (OMA), the proposed scheme achieves an 84.3% improvement in system capacity, and compared with the traditional correlation-based user clustering method, it achieves a 20.2% improvement in fairness. Theoretical analysis and simulation results show that the scheme not only suppresses intra-cluster and inter-cluster interference effectively but also ensures fairness among users.
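A minimal sketch of the FTPC step for one cluster (the fractional exponent is an assumed value; FTPC itself is standard):

```python
import numpy as np

def ftpc_power(channel_gains, total_power, alpha=0.4):
    """Fractional Transmit Power Control: intra-cluster user k gets power
    proportional to g_k^(-alpha), so weaker users receive more power,
    which is what enables successive interference cancellation in NOMA."""
    g = np.asarray(channel_gains, dtype=float)
    w = g ** (-alpha)
    return total_power * w / w.sum()
```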
Automatic cloud detection algorithm based on deep belief network-Otsu hybrid model
QIU Meng, YIN Haoyu, CHEN Qiang, LIU Yingjian
Journal of Computer Applications    2018, 38 (11): 3175-3179.   DOI: 10.11772/j.issn.1001-9081.2018041350
More than half of the earth's surface is covered by cloud, yet current cloud detection methods for satellite remote sensing imagery are mainly manual or semi-automatic, depend on manual intervention, and are inefficient, so they can hardly be used in real-time or quasi-real-time applications. To improve the availability of satellite remote sensing data, an automatic cloud detection method based on a Deep Belief Network (DBN) and Otsu's method, named DOHM (DBN-Otsu Hybrid Model), was proposed. The main contribution of DOHM is to replace empirical fixed thresholds with adaptive ones, achieving fully automatic cloud detection and raising accuracy above 95%. In addition, a 9-dimensional feature vector was adopted in network training; the diversity of the input features helps capture the characteristics of cloud more effectively.
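A minimal sketch of the hybrid step, assuming the DBN outputs a per-pixel cloud probability map (scikit-image supplies the Otsu threshold):

```python
import numpy as np
from skimage.filters import threshold_otsu

def adaptive_cloud_mask(dbn_prob):
    """Replace a fixed empirical cutoff with an adaptive one: Otsu's
    method picks the threshold from the probability histogram of the
    scene itself, so each image gets its own decision boundary."""
    t = threshold_otsu(np.asarray(dbn_prob))
    return dbn_prob > t
```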
Privacy-preserving equal-interval approximate query algorithm in two-tiered sensor networks
WANG Taochun, CUI Zhuangzhuang, LIU Ying
Journal of Computer Applications    2017, 37 (9): 2563-2566.   DOI: 10.11772/j.issn.1001-9081.2017.09.2563
Privacy preservation is a key factor in expanding the application of Wireless Sensor Networks (WSN) and is a current research hotspot. Considering the privacy of sensory data in WSN, a Privacy-preserving Equal-Interval Approximate Query (PEIAQ) algorithm for two-tiered sensor networks based on data aggregation was proposed. Firstly, sensor node IDs and sensory data were concealed in a random vector, and linear equations were then worked out by the base station based on the random vector. As a result, a histogram containing global statistics was formed, from which the approximate query results were obtained. In addition, sensory data were encrypted through a perturbation technique and a key shared between each sensor node and the base station, ensuring the privacy of the sensory data. Simulation experiments show that PEIAQ reduces traffic in the query phase by approximately 60% compared with PGAQ (Privacy-preserving Generic Approximate Query), and is therefore efficient and low-energy.
Space target sphere grid index based on orbit restraint and region query application
LYU Liang, SHI Qunshan, LAN Chaozhen, CHEN Yu, LIU Yiping, LIANG Jing
Journal of Computer Applications    2017, 37 (7): 2095-2099.   DOI: 10.11772/j.issn.1001-9081.2017.07.2095
Since the retrieval and query of massive, high-speed space targets remain inefficient, a method for constructing a spherical grid index of space targets based on orbit constraints was proposed. The method exploits the fact that the orbit of a space target is relatively stable in the earth-centered inertial coordinate system, achieving a stable index of high-speed moving objects by maintaining, for each spherical subdivision grid cell, the list of space targets that pass through it. On this basis, a region query application scheme was proposed. Firstly, the query time period was discretized with a particular step value. Secondly, the boundary coordinates of the query region in inertial space were calculated and the intersecting grid cells were determined. Then the space targets in these cells were extracted, and the spatial relationship between the targets and the region was computed and judged. Finally, the whole time period was queried recursively, completing the space-target transit query analysis. In the simulation experiment, the time consumed by the traditional one-by-one calculation method correlates linearly and positively with the number of targets but is independent of region size, costing 0.09 ms per target on average; by contrast, the time of the proposed method decreases linearly as the region size shrinks, and when the number of region grid cells is less than 2750 its time efficiency exceeds that of the comparison method while maintaining good accuracy. The experimental results show that the proposed method can effectively improve the efficiency of practical region queries.
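A minimal sketch of the index structure described above, with cell crossings assumed to be precomputed from the (stable) inertial-frame orbits:

```python
from collections import defaultdict

grid_index = defaultdict(set)        # sphere-grid cell id -> target ids

def register_target(target_id, crossed_cells):
    """Maintain, per subdivision cell, the list of targets whose orbits
    pass through it; orbit stability means this rarely changes."""
    for cell in crossed_cells:
        grid_index[cell].add(target_id)

def candidates_in_region(region_cells):
    """Region query: union the target lists of the cells intersecting the
    query region, then test only these candidates against the region."""
    hits = set()
    for cell in region_cells:
        hits |= grid_index[cell]
    return hits
```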
Mining algorithm of maximal fuzzy frequent patterns
ZHANG Haiqing, LI Daiwei, LIU Yintian, GONG Cheng, YU Xi
Journal of Computer Applications    2017, 37 (5): 1424-1429.   DOI: 10.11772/j.issn.1001-9081.2017.05.1424
Combinatorial explosion and the effectiveness of mining results are the essential challenges of meaningful pattern extraction. A Maximal Fuzzy Frequent Pattern Tree algorithm (MFFP-Tree) based on a base-(second-order-effect) pattern structure and on the uncertainty of items was proposed. Firstly, the fuzziness of items was analyzed comprehensively, the fuzzy support was defined, the fuzzy weight of items in the transaction dataset was analyzed, and the candidate item set was trimmed according to a fuzzy pruning strategy. Secondly, the database was scanned once to build the FFP-Tree, and the overhead of pattern extraction was reduced by the fuzzy pruning strategy; an FFP-array structure was used to streamline the search and further reduce space and time complexity. Experimental results on benchmark datasets reveal that MFFP-Tree outperforms the PADS and FPMax* algorithms: its time complexity is better by a factor of two up to one order of magnitude, and its space complexity is better by one to two orders of magnitude on different datasets.
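A minimal sketch of the fuzzy support computation, assuming the min t-norm over item membership degrees (the abstract mentions fuzzy support without giving the formula):

```python
def fuzzy_support(itemset, transactions, membership):
    """Fuzzy support of an itemset: for each transaction containing all of
    its items, add the minimum membership degree among them, then
    normalize. transactions: list of dicts item -> raw value;
    membership: raw value -> degree in [0, 1]."""
    total = 0.0
    for t in transactions:
        if all(i in t for i in itemset):
            total += min(membership(t[i]) for i in itemset)
    return total / len(transactions)
```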
Research and application for terminal location management system based on firmware
SUN Liang, CHEN Xiaochun, ZHENG Shujian, LIU Ying
Journal of Computer Applications    2017, 37 (2): 417-421.   DOI: 10.11772/j.issn.1001-9081.2017.02.0417
Pasting a Radio Frequency Identification (RFID) tag on a computer case to trace the computer's location in real time has been the most frequently used method of terminal location management. However, RFID loses direct control of the computer once it leaves the authorized area. Therefore, a terminal location management system based on firmware and RFID was proposed. First of all, the authorized area was delimited by the RFID radio signal; through interaction between the firmware and the RFID tag at the boot stage, the computer was allowed to boot only if the firmware received the authorized RFID signal. Secondly, the computer could function normally only while it received the RFID signal with the operating system running. At last, the location management software Agent was protected by the firmware against alteration and deletion. When a computer moves out of RFID signal coverage, the event is caught by the terminal's software Agent; the terminal is then locked and its data destroyed. A prototype of the system was deployed in an office area to control about thirty computers, so that they could be used normally in authorized areas and were locked immediately once outside them.
Evaluation model of mobile application crowdsourcing testers
LIU Ying, ZHANG Tao, LI Kun, LI Nan
Journal of Computer Applications    2017, 37 (12): 3569-3573.   DOI: 10.11772/j.issn.1001-9081.2017.12.3569
Mobile application crowdsourcing testers are anonymous and non-contractual, which makes it difficult for task publishers to accurately evaluate testers' ability and the quality of test results. To solve these problems, an Analytic Hierarchy Process (AHP) evaluation model for mobile application crowdsourcing testers was proposed. The ability of crowdsourcing testers was evaluated comprehensively and hierarchically using multiple indexes, such as activity degree, test ability, and integrity degree. The combination weight vector of each level of indexes was calculated by constructing judgment matrices and performing consistency tests. The model was then improved by introducing a requirement list and a description list, which match testers and crowdsourcing tasks better. Experimental results show that the proposed model can evaluate testers' ability accurately, supports the selection and recommendation of crowdsourcing testers based on the evaluation results, and improves the efficiency and quality of mobile application crowdsourced testing.
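A minimal sketch of the AHP weighting and consistency test for one level of indexes (the judgment-matrix entries are whatever the publisher elicits; the random-index table is Saaty's standard one):

```python
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(judgment):
    """Derive index weights from a pairwise judgment matrix via its
    principal eigenvector, and return the consistency ratio; CR < 0.1
    is the usual acceptance rule for the consistency test."""
    m = np.asarray(judgment, dtype=float)
    n = m.shape[0]
    vals, vecs = np.linalg.eig(m)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    return w, ci / RANDOM_INDEX[n]           # (weights, consistency ratio)

# Example for the three indexes named above (activity, test ability,
# integrity); the matrix entries here are illustrative only.
w, cr = ahp_weights([[1, 1/2, 3], [2, 1, 4], [1/3, 1/4, 1]])
```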
Statistical iterative algorithm based on adaptive weighted total variation for low-dose CT
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Wen, ZHANG Pengcheng, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2016, 36 (10): 2916-2921.   DOI: 10.11772/j.issn.1001-9081.2016.10.2916
Concerning the streak artifacts and impulse noise of Low-Dose Computed Tomography (LDCT) reconstructed images, a statistical iterative reconstruction method based on adaptive weighted Total Variation (TV) was presented. Considering that traditional TV may introduce the staircase effect while suppressing streak artifacts, an adaptive weighted TV model combining a weighting factor based on weighted variation with the TV model was proposed and applied to Penalized Weighted Least Squares (PWLS). Different areas of the image were processed with different denoising intensities, achieving good noise suppression and edge preservation. The Shepp-Logan model and a digital pelvis phantom were used to test the effectiveness of the algorithm. Experimental results show that the proposed method yields smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on both test images than the Filtered Back Projection (FBP), PWLS, PWLS-Median Prior (PWLS-MP), and PWLS-TV algorithms, while reaching Peak Signal-to-Noise Ratios (PSNR) of 40.91 dB and 42.25 dB, respectively. The proposed algorithm thus preserves image details and edges well while eliminating streak artifacts effectively.
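In compact form, the model the abstract combines can be written as the following PWLS objective (notation assumed; $\Sigma$ is the diagonal statistical weighting of PWLS and $w_j$ the adaptive weight that relaxes smoothing near edges):

```latex
\hat{\mu} = \arg\min_{\mu}\; (y - A\mu)^{\mathrm{T}} \Sigma^{-1} (y - A\mu)
            \;+\; \beta \sum_{j} w_j \,\bigl\| (\nabla \mu)_j \bigr\|
```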
Fast intra-depth decision algorithm for high efficiency video coding
LIU Ying, GAO Xueming, LIM Kengpang
Journal of Computer Applications    2016, 36 (10): 2854-2858.   DOI: 10.11772/j.issn.1001-9081.2016.10.2854
To reduce the high computational complexity of intra coding in High Efficiency Video Coding (HEVC), a fast intra depth decision algorithm for Coding Units (CU) based on the spatial correlation of images was proposed. First, the depth of the current Coding Tree Unit (CTU) was estimated by linearly weighting the depths of adjacent CTUs. Then appropriate double thresholds were set to terminate the CTU splitting process early or to skip certain CTU depths, reducing unnecessary depth calculations. Experimental results show that, compared with HM12.0, the proposed optimization significantly decreases the coding time of simple video sequences with only a negligible drop in quality: Y-PSNR drops by an average of 0.02 dB while encoding time is reduced by an average of 34.6%. Moreover, the algorithm is easy to combine with other methods to further reduce the computational complexity of HEVC intra coding, ultimately serving real-time transmission of high-definition video.
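A minimal sketch of the depth decision (the neighbour weights and the two thresholds are assumed values; HEVC CTU depths run 0-3):

```python
def ctu_depth_range(neigh_depths, weights=(0.3, 0.3, 0.2, 0.2),
                    t_low=0.5, t_high=2.5):
    """Predict the current CTU's depth as a linear weighting of the
    left/above/above-left/above-right neighbour depths, then use double
    thresholds to skip unlikely depths instead of testing all four."""
    pred = sum(w * d for w, d in zip(weights, neigh_depths))
    if pred < t_low:
        return range(0, 2)       # homogeneous area: stop splitting early
    if pred > t_high:
        return range(2, 4)       # complex area: skip the shallow depths
    return range(0, 4)           # ambiguous: test the full depth range
```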
Adaptive total generalized variation denoising algorithm for low-dose CT images
HE Lin, ZHANG Quan, SHANGGUAN Hong, ZHANG Fang, ZHANG Pengcheng, LIU Yi, SUN Weiya, GUI Zhiguo
Journal of Computer Applications    2016, 36 (1): 243-247.   DOI: 10.11772/j.issn.1001-9081.2016.01.0243
A new denoising algorithm, Adaptive Total Generalized Variation (ATGV), was proposed for removing streak artifacts in reconstructed low-dose Computed Tomography (CT) images. Considering that traditional Total Generalized Variation (TGV) blurs edge details, intuitionistic fuzzy entropy, which can distinguish smooth regions from detail regions, was introduced into the TGV algorithm so that different areas of the image were processed with different denoising intensities, preserving image details. Firstly, the Filtered Back Projection (FBP) algorithm was used to obtain a reconstructed image. Secondly, an edge indicator function based on intuitionistic fuzzy entropy was applied to improve the TGV algorithm. Finally, the new algorithm was employed to reduce the noise in the reconstructed image. Simulations of low-dose CT reconstruction on the Shepp-Logan model and a thorax phantom were used to test the algorithm. Experimental results show that the proposed algorithm yields smaller Normalized Mean Square Distance (NMSD) and Normalized Average Absolute Distance (NAAD) on both test images than the Total Variation (TV) and TGV algorithms, while achieving high Peak Signal-to-Noise Ratios (PSNR) of 26.90 dB and 44.58 dB, respectively. The proposed algorithm thus effectively preserves image details and edges while reducing streak artifacts.
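For reference, the standard second-order TGV regularizer that the adaptive weighting modulates can be written as below; the spatially varying factor $g(x)$ derived from the intuitionistic fuzzy entropy is the paper's addition, and its exact form is not given in the abstract:

```latex
\mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{v}\;
\alpha_{1}\!\int_{\Omega} g(x)\,\lvert \nabla u - v \rvert \,dx
\;+\; \alpha_{0}\!\int_{\Omega} \lvert \mathcal{E}(v) \rvert \,dx
```

where $\mathcal{E}(v)$ is the symmetrized gradient of $v$.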
Key frame preprocessing of H.264 real-time streaming based on timing control algorithm for analyzing unit of group of pictures
DU Dan, FENG Lijun, LIU Yintian
Journal of Computer Applications    2016, 36 (1): 57-60.   DOI: 10.11772/j.issn.1001-9081.2016.01.0057
Network-based audio/video calls and video conferencing may lose packets because of limited network bandwidth, which degrades video streaming quality and reduces the effectiveness of calls and conferences. A real-time streaming quality control algorithm was proposed that adopts a timing control approach to detect and remove bad key frames, reducing the occurrence of a blurred screen. The proposed algorithm efficiently reduces time and space costs and thereby enhances streaming fluency. Experiments were conducted on original frame playback, post-processing playback, and key-frame-pretreatment playback. The comparisons show that the proposed algorithm improves the quality and fluency of playback while decreasing the computational complexity of playback processing by more than 40%, and that it is markedly effective at improving live streaming quality and reducing the occurrence of a blurred screen.
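A minimal sketch of the timing-control idea, assuming key frames are expected at a fixed GOP interval and frames far off schedule are treated as damaged (the interval, tolerance, and drop policy are assumptions):

```python
def keyframe_gate(expected_interval, tolerance=0.5):
    """Return a filter that accepts a key frame only when its arrival time
    is consistent with the GOP timing; off-schedule key frames are dropped
    so a damaged one cannot blur the whole group of pictures."""
    last = None
    def accept(arrival_ts):
        nonlocal last
        ok = (last is None or
              abs(arrival_ts - last - expected_interval)
              <= tolerance * expected_interval)
        last = arrival_ts
        return ok
    return accept
```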
Design and implementation of local processor in a distributed system
WEI Min, LIU Yi'an, WU Hongyan
Journal of Computer Applications    2015, 35 (5): 1290-1295.   DOI: 10.11772/j.issn.1001-9081.2015.05.1290
To handle the large amount of data that must be processed in real time during production, a local processor based on a multi-threaded co-processing architecture with a double data-buffer mechanism was implemented. Hadoop's parallel architecture, in particular the MapReduce principle and its multi-functional threads, served as a reference for the design. Based on this user-defined architecture, the local processor ensures data concurrency and correctness during receiving, computing, and uploading. The system has been in production for over one year; it meets the enterprise requirements and shows good stability, real-time performance, effectiveness, and scalability. The application results show that the local processor achieves synchronized analysis and processing of mass data.
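A minimal sketch of the double-buffer hand-off between the receiving and computing threads (the batch size and sentinel protocol are assumptions):

```python
import queue
import threading

def run_local_processor(source, process, batch=1024):
    """Two buffers rotate between a 'free' and a 'full' queue: the receiver
    fills one while the worker drains the other, so real-time ingestion
    never blocks on computation or uploading."""
    free, full = queue.Queue(), queue.Queue()
    for _ in range(2):
        free.put([])

    def receive():
        buf = free.get()
        for record in source:
            buf.append(record)
            if len(buf) >= batch:
                full.put(buf)
                buf = free.get()          # swap to the other buffer
        full.put(buf)
        full.put(None)                    # sentinel: end of stream

    t = threading.Thread(target=receive)
    t.start()
    while (buf := full.get()) is not None:
        process(buf)                      # user-defined analysis/upload
        buf.clear()
        free.put(buf)
    t.join()
```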

Function pointer attack detection with address integrity checking
DAI Wei, LIU Zhi, LIU Yihe
Journal of Computer Applications    2015, 35 (2): 424-429.   DOI: 10.11772/j.issn.1001-9081.2015.02.0424
Traditional detection techniques for function pointer attacks cannot detect Return-Oriented Programming (ROP) attacks. A new approach that checks the integrity of jump addresses was proposed to detect a variety of function pointer attacks on binary code. First, function addresses were obtained by static analysis; then the target addresses of jump instructions were checked dynamically to verify that they fell within the allowed function address space. Non-entry function calls were also analyzed, and on this basis a new method combining static and dynamic analysis was proposed to detect ROP attacks. A prototype system named fpcheck was developed using a binary instrumentation tool and evaluated with real-world attacks and normal programs. The experimental results show that fpcheck detects various function pointer attacks including ROP, the false positive rate falls substantially with accurate policies, and the performance overhead increases by only 10% to 20% compared with vanilla instrumentation.
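The core run-time rule reduces to a set-membership test; a minimal sketch, with the entry-point set assumed to come from the static-analysis pass:

```python
def check_indirect_transfer(target, function_entries):
    """Address-integrity check: an indirect call/jump may only land on a
    statically collected function entry point. ROP gadgets enter functions
    mid-body, so their targets fail this membership test."""
    if target not in function_entries:
        raise RuntimeError(f"illegal control transfer to {target:#x}")
```

The paper's handling of legitimate non-entry calls would relax this test for the call sites its combined static/dynamic analysis accepts.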

Automatic scheme for implementing access control rules in OpenFlow networks
LIU Yi, ZHANG Hongqi, DAI Xiangdong, LEI Cheng
Journal of Computer Applications    2015, 35 (11): 3270-3274.   DOI: 10.11772/j.issn.1001-9081.2015.11.3270
Focusing on the issue that an OpenFlow network cannot continuously satisfy access control policies because its data plane changes frequently, an automatic scheme for implementing access control rules in OpenFlow networks was proposed. Firstly, the reachable space was obtained by building real-time forwarding paths, and conflicts among access control rules were resolved with a dynamic synthesis algorithm. Then, the denied space was extracted from the synthesized rule set by a rule-space division algorithm and compared with the reachable space to detect direct and indirect violations. According to network updates and violation detection results, automatic violation resolutions were applied flexibly, such as rejecting a rule update, removing a rule sequence, deploying rules near the source based on Linear Programming (LP), or deploying rules terminally. Lastly, the access control rules were converted into the format used by the switches. Theoretical analysis and simulation results demonstrate that the scheme is applicable when multiple security applications run on the controller and switch memory is limited, and show that deploying rules near the source based on LP minimizes unwanted network traffic.
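A minimal sketch of the violation check, modelling both spaces as sets of (source, destination) pairs for simplicity (real implementations operate on header-space or flow-rule representations):

```python
def detect_violations(reachable, denied):
    """A direct violation exists wherever the data plane's reachable space
    overlaps the denied space extracted from the access control rules."""
    return reachable & denied

# Example: traffic h1 -> h3 is forwardable but denied, so it is flagged.
violations = detect_violations({("h1", "h2"), ("h1", "h3")}, {("h1", "h3")})
```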
Network interconnection model based on trusted computing
LIU Yibo, YIN Xiaochuan, GAO Peiyong, ZHANG Yibo
Journal of Computer Applications    2014, 34 (7): 1936-1940.   DOI: 10.11772/j.issn.1001-9081.2014.07.1936
The problem of intranet security has existed almost since the birth of network interconnection, and it has grown as global demand for interconnection booms. Traditional technology cannot achieve both security and connectivity well. In view of this, a method based on trusted computing technology was put forward. The basic idea is to build a trusted model of the network interconnection system whose core is verifying the identity and behavior of accessors. First, the IBA algorithm was reformed to design a cryptographic protocol between the authentication system and accessors, and its effectiveness was analyzed in terms of both function and accuracy. Second, an evaluation tree model was established through analysis of sustained entity behavior, so that the security situation of access terminals could be evaluated. Finally, the evaluation method was verified by experiment.

Symmetry optimization of polar coordinate back-projection reconstruction algorithm for fan beam CT
ZHANG Jing, ZHANG Quan, LIU Yi, GUI Zhiguo
Journal of Computer Applications    2014, 34 (6): 1711-1714.   DOI: 10.11772/j.issn.1001-9081.2014.06.1711
To improve the speed of image reconstruction based on fan-beam Filtered Back Projection (FBP), an optimized fast reconstruction method for the polar-coordinate back-projection algorithm was proposed. Exploiting the symmetry of trigonometric functions, the preprocessed projection data were back-projected in polar coordinates simultaneously. During the coordinate transformation of the back-projected data, the computation of bilinear interpolation was reduced by using the symmetry of the pixel position parameters. Experimental results show that, compared with the traditional convolution back-projection algorithm, the proposed method improves reconstruction speed by more than a factor of eight without sacrificing image quality. The method is also applicable to 3D cone-beam reconstruction and can be extended to multi-slice spiral three-dimensional reconstruction.

High quality positron emission tomography reconstruction algorithm based on correlation coefficient and forward-and-backward diffusion
SHANG Guanhong, LIU Yi, ZHANG Quan, GUI Zhiguo
Journal of Computer Applications    2014, 34 (5): 1482-1485.   DOI: 10.11772/j.issn.1001-9081.2014.05.1482
In Positron Emission Tomography (PET) imaging, traditional iterative algorithms suffer from detail loss and blurred object edges. A high-quality Median Prior (MP) reconstruction algorithm based on the correlation coefficient and Forward-And-Backward (FAB) diffusion was proposed to solve this problem. Firstly, a characteristic factor called the correlation coefficient was introduced to represent local gray-level information of the image, and a new model was constructed by combining it with the forward-and-backward diffusion model. Secondly, since forward-and-backward diffusion has the advantage of handling background and edges separately, the new model was applied to the Maximum A Posteriori (MAP) reconstruction algorithm with a median prior distribution, yielding a median prior reconstruction algorithm based on forward-and-backward diffusion. Simulation results show that the new algorithm removes image noise while preserving object edges well; the Signal-to-Noise Ratio (SNR) and Root Mean Squared Error (RMSE) also demonstrate the improvement in reconstructed image quality.
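For reference, a sketch of a forward-and-backward diffusivity of the Gilboa type, which is positive (smoothing) at small gradients and negative (sharpening) in the backward range (all parameter values are assumptions):

```python
import numpy as np

def fab_diffusivity(grad_mag, kf=10.0, kb=40.0, width=10.0, alpha=0.5):
    """Forward-and-backward diffusion coefficient: the forward term smooths
    low-gradient background, while the backward term turns negative around
    kb and enhances edges; the correlation coefficient would modulate this
    balance locally in the proposed model."""
    g = np.asarray(grad_mag, dtype=float)
    forward = 1.0 / (1.0 + (g / kf) ** 2)
    backward = alpha / (1.0 + ((g - kb) / width) ** 2)
    return forward - backward
```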
