Improved KLEIN algorithm and its quantum analysis
Yanjun LI, Yaodong GE, Qi WANG, Weiguo ZHANG, Chen LIU
Journal of Computer Applications    2024, 44 (9): 2810-2817.   DOI: 10.11772/j.issn.1001-9081.2023091333

Since it was proposed, KLEIN has been subjected to attacks such as truncated differential cryptanalysis and integral cryptanalysis. Its encryption structure is practically secure, but the weakness of its key expansion algorithm allows full-round key recovery attacks. Firstly, the key expansion algorithm was modified and an improved algorithm, N-KLEIN, was proposed. Secondly, an efficient quantum circuit for the S-box was implemented with the in-place method, which reduced the width and depth of the circuit and improved the implementation efficiency of the quantum circuit. Thirdly, the mixing operation was quantized using LUP decomposition. Then, an efficient quantum circuit was designed for full-round N-KLEIN. Finally, the resources required for the quantum implementation of full-round N-KLEIN were evaluated and compared with those of existing quantum implementations of lightweight block ciphers such as PRESENT and HIGHT. At the same time, the cost of key search attacks based on Grover algorithm was studied in depth, the cost of searching the keys of N-KLEIN-{64,80,96} with Grover algorithm under the Clifford+T model was given, and the quantum security of N-KLEIN was evaluated. Comparative results indicate that the quantum implementation cost of the N-KLEIN algorithm is significantly lower.
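
A minimal sketch of the Grover key-search cost model behind this kind of estimate: roughly (pi/4)*sqrt(2^k) oracle iterations for a k-bit key, multiplied by the per-iteration circuit cost. The per-iteration depth and gate counts below are placeholders, not the paper's N-KLEIN figures.

```python
from math import floor, pi, sqrt

def grover_key_search_cost(key_bits: int, oracle_depth: int, oracle_gates: int):
    """Rough Grover key-search cost model (a sketch, not the paper's exact figures).

    Grover needs about (pi/4) * sqrt(2^k) iterations to find a k-bit key;
    each iteration runs the cipher oracle (the diffusion step is ignored here).
    """
    iterations = floor(pi / 4 * sqrt(2 ** key_bits))
    return {
        "iterations": iterations,
        "total_depth": iterations * oracle_depth,   # sequential circuit depth
        "total_gates": iterations * oracle_gates,   # overall gate count
    }

# Hypothetical per-iteration costs, only to illustrate the scaling for N-KLEIN-{64,80,96}.
for k in (64, 80, 96):
    print(k, grover_key_search_cost(k, oracle_depth=1_000, oracle_gates=10_000))
```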

Ultrasound carotid plaque segmentation method based on semi-supervision and multi-scale cascaded attention
Chenqian LI, Jun LIU
Journal of Computer Applications    2024, 44 (8): 2604-2610.   DOI: 10.11772/j.issn.1001-9081.2023081197

Because of the strong noise, low quality and blurred boundaries of ultrasound images, obtaining reliable labels is time-consuming and laborious. Therefore, an ultrasound carotid plaque segmentation method based on semi-supervision and multi-scale cascaded attention was proposed. Firstly, the semi-supervised segmentation method of Uncertainty Rectified Pyramid Consistency (URPC) was used to make full use of unlabeled data for training the model, so as to reduce the time-consuming and laborious labeling burden. Then, a dual-encoder structure based on edge detection was proposed, in which an edge detection encoder assisted the ultrasound plaque image feature encoder to fully acquire edge information. In addition, a Multi-Scale Fusion Module (MSFM) was designed to improve the extraction of irregularly shaped plaques by adaptively fusing multi-scale features, and a Cascaded Channel Spatial Attention (CCSA) module was combined to better focus on the plaque region. Finally, the proposed method was evaluated on an ultrasound carotid plaque image dataset. Experimental results show that the Dice index and IoU (Intersection over Union) index of the proposed method on the dataset are 2.8 and 6.3 percentage points higher than those of the supervised method CA-Net (Comprehensive Attention convolutional neural Network) respectively, and 1.8 and 1.3 percentage points higher than those of the semi-supervised method Cyclic Prototype Consistency Learning (CPCL) respectively. It can be seen that the proposed method can effectively improve the segmentation accuracy of ultrasound carotid plaque images.
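
The Dice and IoU indices reported above are standard overlap metrics; a minimal sketch of how they are computed on binary masks (illustrative toy masks, not the paper's data):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Dice and IoU for binary segmentation masks (the metrics reported in the abstract)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy example: two overlapping plaque masks.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(dice_and_iou(pred, gt))
```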

Low illumination face detection based on image enhancement
Zhonghua LI, Yunqi BAI, Xuejin WANG, Leilei HUANG, Chujun LIN, Shiyu LIAO
Journal of Computer Applications    2024, 44 (8): 2588-2594.   DOI: 10.11772/j.issn.1001-9081.2023081198

In response to the significantly reduced performance of face detection models in low-light conditions, a low-light face detection method based on image enhancement was developed. Firstly, image enhancement techniques were applied to preprocess low-light images and enhance the effective facial features. Secondly, an attention mechanism was introduced after the model's backbone network to increase the network's focus on facial regions while reducing the negative impact of non-uniform lighting and noise. Furthermore, an attention-based bounding box loss function, Wise Intersection over Union (WIoU), was incorporated to improve the network's accuracy in detecting low-quality faces. Finally, a more efficient feature fusion module was used to replace the original model structure. Experimental results on the low-light face dataset DARK FACE show that, compared with the original YOLOv7 model, the improved method achieves an increase of 2.4 percentage points in average detection precision AP@0.5 and an increase of 1.4 percentage points in mean average precision AP@0.5:0.95, without introducing additional parameters or computational complexity. The results on two other low-light face datasets further confirm the effectiveness and robustness of the proposed method, demonstrating its applicability to low-light face detection in diverse scenarios.

Multi-relation approximate reasoning model based on uncertain knowledge graph embedding
Jianjing LI, Guanfeng LI, Feizhou QIN, Weijun LI
Journal of Computer Applications    2024, 44 (6): 1751-1759.   DOI: 10.11772/j.issn.1001-9081.2023060762

Because uncertain embedding models for large-scale Knowledge Graphs (KGs) cannot perform approximate reasoning over multiple logical relationships, a multi-relation approximate reasoning model based on Uncertain KG Embedding (UKGE), named UDConEx (Uncertainty DistMult (Distance Multiplicative) and complex Convolution Embedding), was proposed. Firstly, UDConEx combined the characteristics of DistMult and ComplEx (Complex Embedding), enabling it to infer symmetric and asymmetric relationships. Secondly, a Convolutional Neural Network (CNN) was employed by UDConEx to capture the interactive information in the uncertain KG, enabling it to reason about inverse and transitive relationships. Lastly, a neural network was employed to learn the confidence of uncertain KG information, enabling UDConEx to perform approximate reasoning within the UKGE space. Experimental results on three public datasets, CN15k, NL27k and PPI5k, show that, compared with the MUKGE (Multiplex UKGE) model, the Mean Absolute Error (MAE) of confidence prediction is reduced by 6.3%, 30.1% and 44.9% on CN15k, NL27k and PPI5k respectively; in the relation fact ranking task, the linear-based Normalized Discounted Cumulative Gain (NDCG) is improved by 5.8% and 2.6% on CN15k and NL27k respectively; in the multi-relation approximate reasoning task, it is verified that UDConEx has the ability to reason approximately over multiple logical relationships. UDConEx compensates for the inability of traditional embedding models to predict confidence, achieves approximate reasoning over multiple logical relationships, and offers higher accuracy and interpretability in uncertain KG reasoning.
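
A minimal sketch of the DistMult and ComplEx scoring functions whose characteristics UDConEx combines; the embeddings here are random placeholders rather than trained vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (placeholder)

def distmult_score(h, r, t):
    """DistMult: symmetric bilinear score sum_i h_i * r_i * t_i."""
    return np.sum(h * r * t)

def complex_score(h, r, t):
    """ComplEx: Re(sum_i h_i * r_i * conj(t_i)); can model asymmetric relations."""
    return np.real(np.sum(h * r * np.conj(t)))

h, r, t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
hc, rc, tc = (rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(3))
print(distmult_score(h, r, t), complex_score(hc, rc, tc))
```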

Security analysis of PFP algorithm under quantum computing model
Yanjun LI, Xiaoyu JING, Huiqin XIE, Yong XIANG
Journal of Computer Applications    2024, 44 (4): 1166-1171.   DOI: 10.11772/j.issn.1001-9081.2023050576

The rapid development of quantum technology and the continuous improvement of quantum computing efficiency, especially the emergence of the Shor and Grover algorithms, greatly threaten the security of traditional public key ciphers and symmetric ciphers. The block cipher PFP, which is designed on the Feistel structure, was analyzed. First, the linear transformation P of the round function was fused into the periodic functions of the Feistel structure, and four 5-round periodic functions of PFP were obtained, two rounds more than the periodic functions of the general Feistel structure, which was verified through experiments. Furthermore, using the quantum Grover and Simon algorithms, with a 5-round periodic function as the distinguisher, the security of 9- and 10-round PFP was evaluated by analyzing the characteristics of the PFP key schedule. The time complexity required for key recovery is 2^26 and 2^38.5 respectively, the quantum resources required are 193 and 212 qubits, and 58 and 77 bits of the key can be recovered, which is superior to the existing impossible differential analysis results.
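
The Simon-based distinguisher relies on a function that satisfies f(x) = f(x XOR s) for a hidden period s. A classical toy check of this periodicity property is sketched below on a stand-in function, not on the actual 5-round PFP construction:

```python
import random

def has_period(f, s, n_bits, samples=256):
    """Empirically check the Simon-style property f(x) == f(x XOR s) for random x."""
    return all(f(x) == f(x ^ s) for x in (random.getrandbits(n_bits) for _ in range(samples)))

# Stand-in periodic function (NOT the 5-round PFP construction): f(x) = min(x, x ^ S)
# takes the same value on x and x ^ S, so S is a period that the check confirms.
S = 0b1010
f = lambda x: min(x, x ^ S)
print(has_period(f, S, n_bits=8))  # True
```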

Remote sensing image classification based on sample incremental learning
Xue LI, Guangle YAO, Honghui WANG, Jun LI, Haoran ZHOU, Shaoze YE
Journal of Computer Applications    2024, 44 (3): 732-736.   DOI: 10.11772/j.issn.1001-9081.2023030366

Deep learning models have achieved remarkable results in remote sensing image classification. However, as new remote sensing images are continuously collected, when deep-learning-based remote sensing image classification models are trained on new data to learn new knowledge, their recognition performance on old data declines, that is, old knowledge is forgotten. To help remote sensing image classification models consolidate old knowledge and learn new knowledge, a remote sensing image classification model based on sample incremental learning, namely ICLKM (Incremental Collaborative Learning Knowledge Model), was proposed. The model consisted of two knowledge networks. The first network mitigated knowledge forgetting by retaining the output of the old model through knowledge distillation. The second network took the output of the new data as the learning objective of the first network and learned new knowledge effectively by maintaining the consistency of the dual-network models. Finally, the two networks learned jointly to generate a more accurate model through a knowledge collaboration strategy. Experimental results on two remote sensing datasets, NWPU-RESISC45 and AID, show that ICLKM improves the accuracy by 3.53 and 6.70 percentage points respectively compared with the FT (Fine-Tuning) method. It can be seen that ICLKM can effectively solve the knowledge forgetting problem of remote sensing image classification and continuously improve the recognition accuracy of known remote sensing images.
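
A minimal sketch of the knowledge-distillation loss that the first network uses to retain the old model's outputs; the temperature and logits here are illustrative, not the paper's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(new_logits, old_logits, T=2.0):
    """Temperature-scaled KL(old || new): penalizes the new model for drifting
    away from the old model's outputs, mitigating forgetting of old knowledge."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    kl = np.sum(p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

rng = np.random.default_rng(1)
# 45 classes, as in NWPU-RESISC45; random logits as placeholders.
print(distillation_loss(rng.normal(size=(4, 45)), rng.normal(size=(4, 45))))
```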

Survey on tile-based viewport adaptive streaming scheme of panoramic video
Junjie LI, Yumei WANG, Zhijun LI, Yu LIU
Journal of Computer Applications    2024, 44 (2): 536-547.   DOI: 10.11772/j.issn.1001-9081.2023020209

Panoramic videos have attracted wide attention due to their unique immersive and interactive experience. The high bandwidth and low delay required for wireless streaming of panoramic videos bring challenges to existing network streaming systems. Tile-based viewport adaptive streaming can effectively alleviate the streaming pressure brought by panoramic video, and has become the current mainstream scheme and a hot research topic. By analyzing the research status and development trend of tile-based viewport adaptive streaming, the two important modules of this streaming scheme, namely viewport prediction and bit rate allocation, were discussed, and the methods in the relevant fields were summarized from different perspectives. Firstly, the relevant technologies were clarified on the basis of the panoramic video streaming framework. Secondly, the user experience quality indicators used to evaluate the performance of streaming systems were introduced from the subjective and objective dimensions. Then, the classic research methods were summarized from the aspects of viewport prediction and bit rate allocation. Finally, the future development trend of panoramic video streaming was discussed based on the current research status.

Differential and linear characteristic analysis of full-round Shadow algorithm
Yong XIANG, Yanjun LI, Dingyun HUANG, Yu CHEN, Huiqin XIE
Journal of Computer Applications    2024, 44 (12): 3839-3843.   DOI: 10.11772/j.issn.1001-9081.2023121762

As Radio Frequency IDentification (RFID) technology and wireless sensors become increasingly common, the need to secure the data transmitted and processed by such resource-limited devices has led to the emergence and growth of lightweight ciphers. Because lightweight ciphers have small key sizes and few encryption rounds, a precise security evaluation is needed before they are put into service. To meet the security requirements of lightweight ciphers, the differential and linear characteristics of the full-round Shadow algorithm were analyzed. Firstly, the concept of second difference was proposed to describe the differential characteristic more clearly, the existence of a full-round differential characteristic with probability 1 in the algorithm was proved, and the correctness of the differential characteristic was verified through experiments. Secondly, a full-round linear characteristic was given: it was proved that, given a set of Shadow-32 (or Shadow-64) plaintexts and ciphertexts, 8 (or 16) bits of key information can be obtained, and its correctness was experimentally verified. Thirdly, based on the linear relationship between plaintexts, ciphertexts and round keys, the number of equations and independent variables of the quadratic Boolean function was estimated, and the computational complexity of solving the initial key was calculated to be 2^63.4. Finally, the structural features of the Shadow algorithm were summarized, and the focus of future research was given. The differential and linear characteristic analysis of the full-round Shadow algorithm also provides a reference for the differential and linear analysis of other lightweight ciphers.
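
A probability-1 differential characteristic can be confirmed empirically by counting how often a chosen input difference maps to the expected output difference. The sketch below runs this check on a toy linear stand-in cipher, not on Shadow itself:

```python
import random

def differential_probability(cipher, d_in, d_out, n_bits, trials=10_000):
    """Empirically estimate Pr[cipher(x) ^ cipher(x ^ d_in) == d_out] over random x."""
    hits = 0
    for _ in range(trials):
        x = random.getrandbits(n_bits)
        if cipher(x) ^ cipher(x ^ d_in) == d_out:
            hits += 1
    return hits / trials

# Toy stand-in "cipher": XOR with a fixed key is linear, so any input difference
# passes through unchanged with probability 1; the same kind of check is used to
# confirm a probability-1 characteristic experimentally.
KEY = 0xA5A5
toy = lambda x: x ^ KEY
print(differential_probability(toy, d_in=0x0F0F, d_out=0x0F0F, n_bits=16))  # 1.0
```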

Robust splicing forensic algorithm against high-intensity salt-and-pepper noise
Pengbo WANG, Wuyang SHAN, Jun LI, Mao TIAN, Deng ZOU, Zhanfeng FAN
Journal of Computer Applications    2024, 44 (10): 3177-3184.   DOI: 10.11772/j.issn.1001-9081.2023101462

In the field of image forensics, image splicing detection technology can identify splicing and locate the spliced area through analysis of image content. However, in common scenarios such as transmission and scanning, salt-and-pepper (s&p) noise appears randomly and inevitably; as the noise intensity increases, current splicing forensic methods progressively lose effectiveness and may ultimately fail, which significantly impacts the performance of existing splicing forensic methods. Therefore, a splicing forensic algorithm robust to high-intensity s&p noise was proposed. The proposed algorithm is divided into two main parts: preprocessing and splicing forensics. Firstly, in the preprocessing part, a fusion of ResNet32 and a median filter was employed to remove s&p noise from the image, and the damaged image content was restored through convolutional layers, so as to minimize the influence of s&p noise on the splicing forensics part and recover image details. Then, in the splicing forensics part, based on a Siamese network structure, the noise artifacts associated with the image's uniqueness were extracted, and the spliced area was identified through inconsistency assessment. Experimental results on widely used tampering datasets show that the proposed algorithm achieves good results on both RGB and grayscale images. In a 10% noise scenario, the proposed algorithm increases the Matthews Correlation Coefficient (MCC) value by over 50% compared to the FS (Forensic Similarity) and PSCC-Net (Progressive Spatio-Channel Correlation Network) forensic algorithms, validating the effectiveness and advancement of the proposed algorithm in forensic analysis of tampered images with noise.
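
The Matthews Correlation Coefficient used above is computed from the confusion counts of the predicted and ground-truth splicing masks; a minimal sketch on toy masks:

```python
import numpy as np

def mcc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Matthews Correlation Coefficient for binary localization masks (the reported metric)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt); tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return float((tp * tn - fp * fn) / denom) if denom else 0.0

pred = np.random.default_rng(0).random((128, 128)) > 0.5
gt = np.roll(pred, 3, axis=0)  # toy pair standing in for predicted vs. true spliced regions
print(round(mcc(pred, gt), 3))
```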

Contradiction separation super-deduction method and application
Feng CAO, Xiaoling YANG, Jianbing YI, Jun LI
Journal of Computer Applications    2024, 44 (10): 3074-3080.   DOI: 10.11772/j.issn.1001-9081.2023101404

As a common inference mechanism in current automated theorem provers, the traditional hyper-resolution method based on binary deduction is limited to two clauses participating in each deduction step. The separated deduction steps lead to a lack of guidance and prediction in binary chain deduction, so its deduction efficiency needs to be improved. To improve deduction efficiency, in theory, the idea of multi-clause deduction was introduced into the traditional hyper-resolution method, and the definition and method of contradiction separation super-deduction were proposed, which have the deduction characteristics of being multi-clause, dynamic and guided. In the implementation of the algorithm, considering that the clauses participating in deduction are multiple and synergetic, and that the deduction conditions can be set flexibly, a contradiction separation super-deduction algorithm with a backtracking mechanism was proposed. The proposed algorithm was applied to the Eprover3.1 prover, taking the problems of the International Automated Theorem Prover Competition 2023 and the most difficult problems with a difficulty rating of 1 in the TPTP (Thousands of Problems for Theorem Provers) benchmark database as the test objects. Within 300 s, the Eprover3.1 prover with the proposed algorithm solved 15 more theorems than the original Eprover3.1 prover, reduced the average proof time by 1.326 s for the same total number of solved theorems, and solved 7 theorems with a rating of 1. The test results show that the proposed algorithm can be effectively applied to automated theorem proving in first-order logic, improving the proving capability and efficiency of automated theorem provers.

Image segmentation model based on improved particle swarm optimization algorithm and genetic mutation
Jun LIANG, Zehong HONG, Songsen YU
Journal of Computer Applications    2023, 43 (6): 1743-1749.   DOI: 10.11772/j.issn.1001-9081.2022060945

Image segmentation is a key step from image processing to image analysis. To address the strong dependence of cluster partitioning on the initial cluster centers, an image segmentation model based on an improved Particle Swarm Optimization (PSO) algorithm and genetic mutation, PSOM-K (Particle Swarm Optimization Mutations-K-means), was proposed. Firstly, the PSO formula was improved by increasing the influence of random neighbor particle positions on a particle's own position and expanding the search space of the algorithm, so that the algorithm was able to find the global optimal solution quickly. Secondly, the mutation operation of the genetic algorithm was combined to improve the generalization ability of the model. Thirdly, the positions of the k-means cluster centers were initialized with the improved PSO algorithm for each of the three channels: Red (R), Green (G) and Blue (B). Finally, k-means performed the image segmentation on the three channels, and the images of the three channels were merged. Experimental results on the Berkeley Segmentation Dataset (BSDS500) show that the improvement in Feature Similarity Index Measure (FSIM) at k=4 is 7.7% to 12.69% compared with the CEFO (Chaotic Electromagnetic Field Optimization) method and 5.05% to 19.02% compared with the WOA-DE (Whale Optimization Algorithm-Differential Evolution) method. Compared with the fine-grained segmentation algorithm HWOA (Hybrid Whale Optimization Algorithm), PSOM-K decreases by at most 0.45% in FSIM but improves by 7.59% to 13.58% in Peak Signal-to-Noise Ratio (PSNR) at k=40. Therefore, using three independent channels, increasing the position influence of random neighbor particles in the particle swarm, and genetic mutation are three effective strategies for finding better positions of k-means cluster centers, and they can greatly improve image segmentation performance.
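
A minimal sketch of the core idea: a PSO-style search for k-means cluster centers whose velocity update adds a random-neighbor term, run per color channel. The coefficients and fitness (within-cluster SSE) are illustrative assumptions, not the paper's exact formulation, and the genetic mutation step is omitted:

```python
import numpy as np

def psom_like_init(pixels, k, n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    """PSO-style search for k-means centers with an extra random-neighbor term (c3)."""
    rng = np.random.default_rng(seed)
    dim = pixels.shape[1]
    pos = rng.uniform(pixels.min(), pixels.max(), size=(n_particles, k, dim))
    vel = np.zeros_like(pos)

    def sse(centers):  # within-cluster sum of squared errors, used as fitness
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        return float(np.sum(d.min(axis=1) ** 2))

    pbest = pos.copy(); pbest_f = np.array([sse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        for i in range(n_particles):
            j = rng.integers(n_particles)            # random neighbor particle
            r1, r2, r3 = rng.random(3)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]) + c3 * r3 * (pos[j] - pos[i]))
            pos[i] += vel[i]
            f = sse(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i].copy(), f
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest  # use as initial centers for k-means on one color channel

channel = np.random.default_rng(1).random((500, 1))  # stand-in for one channel's pixel values
print(psom_like_init(channel, k=4).ravel())
```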

Reconfigurable test scheme for 3D stacked integrated circuits based on 3D linear feedback shift register
Tian CHEN, Jianyong LU, Jun LIU, Huaguo LIANG, Yingchun LU
Journal of Computer Applications    2023, 43 (3): 949-955.   DOI: 10.11772/j.issn.1001-9081.2022020186

Due to the complex structure of Three-Dimensional Stacked Integrated Circuits (3D SICs), it is more difficult to design an efficient test structure to reduce test cost for them than for Two-Dimensional Integrated Circuits (2D ICs). To decrease the cost of 3D SIC testing, a Three-Dimensional Linear Feedback Shift Register (3D-LFSR) test structure based on the Linear Feedback Shift Register (LFSR) was proposed, which can effectively adapt to the different test phases of 3D SICs. The structure was able to perform tests independently before stacking. After stacking, the pre-stacking test structure was reused and reconfigured into a test structure suitable for the current circuit under test, and the reconfigured structure further reduced the test cost. Based on this structure, the corresponding test data processing method and test flow were designed, and a hybrid test mode was adopted to reduce the test time. Experimental results show that, compared with the dual-LFSR structure, the 3D-LFSR structure reduces the average power consumption by 40.19% and the average area overhead by 21.31%, and increases the test data compression rate by 5.22 percentage points. Moreover, using the hybrid test mode reduces the average test time by 20.49% compared with the serial test mode.
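
A minimal sketch of the basic building block, a Fibonacci LFSR generating a pseudo-random test-pattern stream; the register width and tap positions are illustrative, not the paper's 3D-LFSR configuration:

```python
def lfsr_stream(seed: int, taps: tuple, n_bits: int, length: int):
    """Fibonacci LFSR: shift right each cycle, feeding back the XOR of the tapped bits."""
    state = seed & ((1 << n_bits) - 1)
    out = []
    for _ in range(length):
        out.append(state & 1)                      # output bit
        fb = 0
        for t in taps:                             # XOR of tapped bits forms the feedback
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (n_bits - 1))
    return out

# 8-bit LFSR with taps chosen for illustration only.
print(lfsr_stream(seed=0b1010_1100, taps=(0, 2, 3, 4), n_bits=8, length=16))
```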

Design and implementation of cipher component security criteria testing tool
Shanshan HUO, Yanjun LI, Jian LIU, Yinshuang LI
Journal of Computer Applications    2023, 43 (10): 3156-3161.   DOI: 10.11772/j.issn.1001-9081.2022091443

Symmetric cryptography is the core technology for data confidentiality in information systems, and the nonlinear S-box is usually its key cryptographic component, widely used in the design of block ciphers, stream ciphers, MAC (Message Authentication Code) algorithms, etc. To ensure the security of cryptographic algorithm design, firstly, the criteria testing methods for differential uniformity, nonlinearity, number of fixed points, algebraic degree and number of terms, algebraic immunity, avalanche characteristic and diffusion characteristic were studied. Secondly, the results of each security criterion of the S-box were designed to be output in a visual window, and detailed descriptions of the corresponding security criteria were given in pop-up windows. Thirdly, the design of the sub-components for nonlinearity and algebraic immunity was focused on: the linear distribution table was simplified according to the nonlinearity, and, based on a theorem, the calculation process of algebraic immunity was optimized and illustrated with an example. Finally, the S-box testing tool covering the seven security criteria was implemented, and test cases were demonstrated. The proposed tool is mainly used to test the security criteria of the nonlinear component S-box in symmetric cryptographic algorithms, thereby providing a guarantee for the security of the overall algorithm.
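
Differential uniformity, the first criterion listed, is the maximum entry of the S-box's difference distribution table. A minimal sketch of such a test, run here on the public PRESENT S-box as an example input (not necessarily one of the tool's test cases):

```python
def differential_uniformity(sbox):
    """Max value in the difference distribution table (DDT) of an n-bit S-box; lower is better."""
    n = len(sbox)
    worst = 0
    for a in range(1, n):                 # nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        worst = max(worst, max(counts))
    return worst

# 4-bit S-box of PRESENT (a standard public example).
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(PRESENT_SBOX))  # expected: 4
```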

TenrepNN: practice of new ensemble learning paradigm in enterprise self-discipline evaluation
Jingtao ZHAO, Zefang ZHAO, Zhaojuan YUE, Jun LI
Journal of Computer Applications    2023, 43 (10): 3107-3113.   DOI: 10.11772/j.issn.1001-9081.2022091454

To cope with the current situation of low self-discipline, frequent violations and difficult government supervision of enterprises in the internet environment, a Two-layer ensemble residual prediction Neural Network (TenrepNN) model was proposed to evaluate enterprise self-discipline, and a new ensemble learning paradigm, namely Adjusting, was designed by integrating the ideas of Stacking and Bagging. TenrepNN has a two-layer structure. In the first layer, three base learners were used to give preliminary predictions of the enterprise score. In the second layer, following the idea of residual correction, a residual prediction neural network was proposed to predict the output deviation of each base learner. Finally, the final output was obtained by adding the deviations to the base learner scores. On an enterprise self-discipline evaluation dataset, compared with a traditional neural network, the proposed model reduces the Root Mean Square Error (RMSE) by 2.7%, and its classification accuracy of the self-discipline level reaches 94.51%. Experimental results show that, by integrating different base learners to reduce the variance and using the residual prediction neural network to explicitly decrease the deviation, the TenrepNN model can accurately evaluate enterprise self-discipline and achieve differentiated dynamic supervision.
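
A minimal sketch of the two-layer idea: three stand-in base learners give preliminary scores, and a small network predicts each learner's residual so the corrected outputs can be combined. The learners, data and hyperparameters are placeholders, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=300)   # synthetic enterprise scores

# Layer 1: three base learners produce preliminary score predictions
# (stand-ins; the abstract does not name the paper's base learners).
bases = [Ridge(), RandomForestRegressor(n_estimators=50, random_state=0),
         GradientBoostingRegressor(random_state=0)]
preds = np.column_stack([b.fit(X, y).predict(X) for b in bases])

# Layer 2: a small network predicts each base learner's residual (deviation),
# and the corrected outputs are averaged into the final score.
residuals = y[:, None] - preds
corrector = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, residuals)
final = np.mean(preds + corrector.predict(X), axis=1)
print("RMSE:", float(np.sqrt(np.mean((final - y) ** 2))))
```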

Music genre classification algorithm based on attention spectral-spatial feature
Wanjun LIU, Jiaming WANG, Haicheng QU, Libing DONG, Xinyu CAO
Journal of Computer Applications    2022, 42 (7): 2072-2077.   DOI: 10.11772/j.issn.1001-9081.2021050740

To improve the extraction of music genre features from spectrograms by deep convolutional neural networks, a music genre classification algorithm model based on attention spectral-spatial features, namely DCNN-SSA (Deep Convolutional Neural Network Spectral Spatial Attention), was proposed. In DCNN-SSA, the genre features of different music Mel spectrograms were effectively annotated in the spatial domain, and the network structure was changed to improve feature extraction while ensuring the effectiveness of the model, thereby improving the accuracy of music genre classification. Firstly, the original audio signals were Mel-filtered to effectively filter the sound intensity and rhythm changes of the music by simulating the filtering operation of the human ear, and the generated Mel spectrograms were cut and input into the network. Then, the model's genre feature extraction was enhanced by deepening the number of network layers, changing the convolution structure and adding a spatial attention mechanism. Finally, through multiple batches of training and validation on the dataset, the features of music genres were extracted and learned effectively, and a model that can effectively classify music genres was obtained. Experimental results on the GTZAN dataset show that, compared with other deep learning models, the proposed algorithm increases the music genre classification accuracy by 5.36 to 10.44 percentage points and improves model convergence.

Multiscale residual UNet based on attention mechanism to realize breast cancer lesion segmentation
Shengqin LUO, Jinyi CHEN, Hongjun LI
Journal of Computer Applications    2022, 42 (3): 818-824.   DOI: 10.11772/j.issn.1001-9081.2021040948

Concerning the characteristics of breast cancer in Magnetic Resonance Imaging (MRI), such as varied shapes and sizes and fuzzy boundaries, an algorithm based on a multiscale residual U Network (UNet) with attention mechanism was proposed to avoid mis-segmentation and improve segmentation accuracy. Firstly, multiscale residual units were used to replace two adjacent convolution blocks in the down-sampling process of UNet, so that the network could pay more attention to differences in shape and size. Then, in the up-sampling stage, cross-layer attention was used to guide the network to focus on the key regions and avoid mis-segmentation of healthy tissue. Finally, to enhance the ability to represent the lesions, atrous spatial pyramid pooling was introduced into the network as a bridging module. Compared with UNet, the proposed algorithm improved the Dice coefficient, Intersection over Union (IoU), SPecificity (SP) and ACCuracy (ACC) by 2.26, 2.11, 4.16 and 0.05 percentage points, respectively. The experimental results show that the algorithm can improve the segmentation accuracy of lesions and effectively reduce the false positive rate of imaging diagnosis.

Materialized view asynchronous incremental maintenance task generation under hybrid transaction/analytical processing for single record
Yangyang SUN, Junping YAO, Xiaojun LI, Shouxiang FAN, Ziwei WANG
Journal of Computer Applications    2022, 42 (12): 3763-3768.   DOI: 10.11772/j.issn.1001-9081.2021101725

Existing algorithms for generating materialized view asynchronous incremental maintenance tasks under Hybrid Transaction/Analytical Processing (HTAP) are mainly designed for multiple records and cannot generate such tasks for a single record, which increases disk I/O overhead and degrades the performance of materialized view asynchronous incremental maintenance under HTAP. Therefore, a method for generating materialized view asynchronous incremental maintenance tasks under HTAP for a single record was proposed. Firstly, the benefit model of single-record materialized view asynchronous incremental maintenance task generation under HTAP was established. Then, the task generation algorithm was designed on the basis of Q-learning. Experimental results show that the proposed algorithm realizes materialized view asynchronous incremental maintenance task generation under HTAP for a single record, and reduces the average IOPS (Input/output Operations Per Second) by a factor of at least 8.49, and the average CPU utilization (2-core) and average CPU utilization (4-core) by at least 1.85 and 0.97 percentage points respectively.
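
A minimal sketch of the tabular Q-learning loop such a task-generation policy could be trained with; the states, actions and reward shaping are placeholder assumptions, since the abstract does not spell out the paper's benefit model:

```python
import random

# Tabular Q-learning sketch for deciding whether to generate a maintenance task
# for a single-record change.
STATES = range(5)          # e.g. discretized staleness / system-load levels (assumed)
ACTIONS = (0, 1)           # 0: defer maintenance, 1: generate maintenance task
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: generating a task costs IO but pays off when staleness is high."""
    reward = (-1.0 if action == 1 else 0.0) + (2.0 if action == 1 and state >= 3 else 0.0)
    next_state = random.choice(list(STATES))
    return next_state, reward

state = 0
for _ in range(5000):
    action = random.choice(ACTIONS) if random.random() < EPS else max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    Q[(state, action)] += ALPHA * (r + GAMMA * max(Q[(nxt, a)] for a in ACTIONS) - Q[(state, action)])
    state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})  # learned action per state
```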

Data field classification algorithm for edge intelligent computing
Zhiyu SUN, Qi WANG, Bin GAO, Zhongjun LIANG, Xiaobin XU, Shangguang WANG
Journal of Computer Applications    2022, 42 (11): 3473-3478.   DOI: 10.11772/j.issn.1001-9081.2021091692

To address the common problems in clustering algorithm research of not fully utilizing historical information and of slow parameter optimization, an adaptive classification algorithm based on the data field was proposed in combination with edge intelligent computing; it can be deployed on Edge Computing (EC) nodes to provide local intelligent classification services. By introducing supervision information to modify the structure of the traditional data field clustering model, the proposed algorithm enables the traditional data field to be applied to classification problems, extending the applicable scope of data field theory. Based on the idea of the data field, the algorithm transforms the domain value space of the data into a data potential field space and divides the data into several unlabeled cluster results according to the spatial potential values. The cluster results are then compared with historical supervision information by cloud similarity, and each cluster is assigned to the most similar category. In addition, a parameter search strategy based on a sliding step length was proposed to speed up the parameter optimization of the algorithm. Based on this algorithm, a distributed data processing scheme was proposed: through the cooperation of the cloud center and edge devices, classification tasks are partitioned and distributed to nodes at different levels to achieve modularity and low coupling. Simulation results show that the precision and recall of the proposed algorithm remained above 96% and the Hamming loss was less than 0.022. Experimental results show that the proposed algorithm can classify accurately, accelerate parameter optimization, and outperforms the Logistic Regression (LR) and Random Forest (RF) algorithms in overall performance.
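
A minimal sketch of the data-field potential that the algorithm partitions: each data point contributes a Gaussian-decaying term to the potential at a query point. The influence factor and unit masses are illustrative assumptions:

```python
import numpy as np

def potential(query, data, sigma=1.0, mass=None):
    """Data-field potential at a query point: sum of Gaussian-decaying contributions
    from all data points (the potential-value space that gets partitioned)."""
    mass = np.ones(len(data)) if mass is None else mass
    dist = np.linalg.norm(data - query, axis=1)
    return float(np.sum(mass * np.exp(-(dist / sigma) ** 2)))

rng = np.random.default_rng(0)
cluster_a = rng.normal(0, 0.3, size=(50, 2))
cluster_b = rng.normal(3, 0.3, size=(50, 2))
# A point near cluster A receives a much higher potential from A's members than from B's.
q = np.array([0.1, 0.0])
print(potential(q, cluster_a), potential(q, cluster_b))
```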

Vehicle navigation method based on trinocular vision
WANG Jun, LIU Hongyan
Journal of Computer Applications    2014, 34 (6): 1762-1764.   DOI: 10.11772/j.issn.1001-9081.2014.06.1762

A classification method based on trinocular stereovision, consisting of a geometrical classifier and a color classifier, was proposed to autonomously guide vehicles on unstructured terrain. In this method, the stereovision system captured rich 3D data that included range and color information of the surrounding environment. The geometrical classifier was then used to detect the broad class of ground from the collected data, and the color classifier was adopted to label ground subclasses with different colors. During the classification stage, the classification data needed to be updated continuously so that the vehicle could adapt to changing surroundings. The two broad categories of terrain, drivable and non-drivable, were marked with different colors by the classification method. The experimental results show that the classification method can accurately classify the terrain captured by the trinocular stereovision system.

Relative orientation approach based on direct resolving and iterative refinement
YANG Ahua, LI Xuejun, LIU Tao, LI Dongyue
Journal of Computer Applications    2014, 34 (6): 1706-1710.   DOI: 10.11772/j.issn.1001-9081.2014.06.1706

To improve the robustness and accuracy of relative orientation, an approach combining direct resolving and iterative refinement was proposed. Firstly, the essential matrix was estimated from corresponding points. The initial relative position and attitude of the two cameras were then obtained by decomposing the essential matrix, and the process for determining the unique position and attitude parameters was introduced in detail. Finally, by constructing the horizontal epipolar coordinate system, a constraint equation group was built from the corresponding points based on the coplanarity constraint, and the initial position and attitude parameters were refined iteratively. The algorithm resists outliers by applying the RANdom SAmple Consensus (RANSAC) strategy and dynamically removing outliers during iterative refinement. Simulation experiments illustrate that the resolving efficiency and accuracy of the proposed algorithm outperform those of the traditional algorithm when various kinds of random errors are introduced, and an experiment with real data demonstrates that the algorithm can be effectively applied to relative position and attitude estimation in 3D reconstruction.
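
A minimal sketch of the direct-resolving step, recovering the four candidate (R, t) pairs from an essential matrix by SVD; selecting the unique solution (via the points-in-front test) and the iterative refinement are omitted:

```python
import numpy as np

def decompose_essential(E):
    """Four (R, t) candidates from an essential matrix via SVD; the unique solution
    is normally picked afterwards by checking which candidate puts points in front
    of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U          # enforce proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Toy essential matrix built from a known rotation and translation: E = [t]x R.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
tx = np.array([[0, -t_true[2], t_true[1]], [t_true[2], 0, -t_true[0]], [-t_true[1], t_true[0], 0]])
E = tx @ R_true
for R, t in decompose_essential(E):
    print(np.allclose(R @ R.T, np.eye(3)), np.round(t, 2))   # each R is a valid rotation
```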

Improved joint probabilistic data association algorithm based on Meanshift clustering and Bhattacharya likelihood modification
TIAN Jun, LI Dan, XIAO Liqing
Journal of Computer Applications    2014, 34 (5): 1279-1282.   DOI: 10.11772/j.issn.1001-9081.2014.05.1279

To reduce the computational complexity of the Joint Probabilistic Data Association (JPDA) joint-association events caused by the aggregation of multiple targets' tracks, an improved JPDA algorithm that clusters with the Meanshift algorithm and optimizes the confirmation matrix with Bhattacharya coefficients was proposed. The clustering centers were created by the Meanshift algorithm, and the tracking gate was then obtained by calculating the Mahalanobis distance between the clustering centers and the predicted observations of the targets. The Bhattacharya likelihood matrix, which served as a basis for identifying low-probability events, was created; consequently, the computational complexity of the JPDA joint-association events related to low-probability events was reduced. The experimental results show that the new method is superior to the conventional JPDA in both computational complexity and estimation precision when multiple targets' tracks aggregate.
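
A minimal sketch of the Bhattacharya (Bhattacharyya) coefficient used to flag low-probability pairings in the confirmation matrix; the discrete distributions below are toy examples:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for two discrete distributions; values near 0
    indicate dissimilar distributions, i.e. pairings that can be treated as
    low-probability before enumerating joint-association events."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

likely   = bhattacharyya_coefficient([0.10, 0.60, 0.30], [0.15, 0.55, 0.30])  # similar: near 1
unlikely = bhattacharyya_coefficient([0.90, 0.05, 0.05], [0.05, 0.05, 0.90])  # dissimilar: small
print(round(likely, 3), round(unlikely, 3))
```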

Security Analysis of Range Query with Single Assertion on Encrypted Data
GU Chunsheng, JING Zhengjun, LI Hongwei, YU Zhimin
Journal of Computer Applications    2014, 34 (4): 1019-1024.   DOI: 10.11772/j.issn.1001-9081.2014.04.1019

To protect their privacy, users often transfer encrypted sensitive data to a semi-trustworthy service provider. Cai et al. (CAI K, ZHANG M, FENG D. Secure range query with single assertion on encrypted data [J]. Chinese Journal of Computers, 2011, 34(11): 2093-2103) first presented a ciphertext-only secure range query scheme with a single assertion on encrypted data to prevent leakage of users' private information, whereas previous schemes for range query on encrypted data were implemented through many assertions. However, by applying the principles of trigonometric functions and matrix theory, the rank of the sensitive data can be derived directly from the protected interval index; hence, this scheme is not ciphertext-only secure. To avoid this security drawback, a secure improved scheme was constructed by introducing random elements, and its complexity was analyzed.

Parking guidance system based on ZigBee and geomagnetic sensor technology
YUE Xuejun, LIU Yongxin, WANG Yefu, CHEN Shurong, LIN Da, QUAN Dongping, YAN Yingwei
Journal of Computer Applications    2014, 34 (3): 884-887.   DOI: 10.11772/j.issn.1001-9081.2014.03.0884

Concerning the fact that common parking services cannot satisfy the increasing demand of private vehicle owners, an intelligent parking guidance system based on a ZigBee network and geomagnetic sensors was designed. Real-time vehicle position and related traffic information were collected by geomagnetic sensors around parking lots and updated to the center server via the ZigBee network, while outdoor Liquid Crystal Display (LCD) screens controlled by the center server displayed information on available parking places. In this paper, the guidance strategy was divided into 4 levels, which could provide clear and effective information to drivers. The experimental results show that the distance detection accuracy of the geomagnetic sensors was within 0.4 m, and the lowest packet loss rate of the wireless network within a range of 150 m was 0%. This system can provide a solution for better parking services in intelligent cities.

New wireless positioning method with high accuracy and low complexity
YANG Xiaofeng, CHEN Tiejun, LIU Feng
Journal of Computer Applications    2014, 34 (2): 322-324.  
In order to lower the computational burden of high-accuracy wireless positioning algorithms, this paper proposed a new 2D beamspace matrix pencil algorithm to jointly estimate Time-Of-Arrival (TOA) and Direction-Of-Arrival (DOA), which can position targets accurately with low complexity. The algorithm first transformed the complex data matrix into a real-valued, reduced-dimensional matrix via a Discrete Fourier Transform (DFT) matrix, which significantly reduced the computational burden; it then estimated the TOA and DOA of the line-of-sight signal for positioning via singular value decomposition and by solving the generalized eigenvalues of matrix pencils. Matlab simulation results show that this positioning method achieves a Root Mean Square Error (RMSE) as small as 0.4 m with a computation cost no more than 1/4 of that of the corresponding algorithm in element space, which makes it a promising positioning method for resource-limited environments such as battlefields, earthquake-stricken areas and rural places.
Fully secure identity-based online/offline encryption
WANG Zhanjun, LI Jie, MA Haiying, WANG Jinhua
Journal of Computer Applications    2014, 34 (12): 3458-3461.  

The existing Identity-Based Online/Offline Encryption (IBOOE) schemes do not allow the attacker to choose the target identity adaptively, since they are only proven secure in the selective model. This paper introduced the online/offline technique into fully secure Identity-Based Encryption (IBE) schemes and proposed a fully secure IBOOE scheme. Based on three static assumptions in composite order groups, the scheme was proven fully secure with the dual system encryption methodology. Compared with well-known IBOOE schemes, the proposed scheme not only greatly improves the efficiency of online encryption, but also meets the demand for full security in practical systems.

Analysis of global convergence of crossover evolutionary algorithm based on state-space model
WANG Dingxiang, LI Maojun, LI Xue, CHENG Li
Journal of Computer Applications    2014, 34 (12): 3424-3427.  

Evolutionary Algorithm based on State-space model (SEA) is a novel real-coded evolutionary algorithm that performs well in engineering optimization problems. The global convergence of the crossover SEA (SCEA) was studied to promote the theoretical and applied research of SEA, and the conclusion that SCEA is not globally convergent was drawn. A Modified Crossover Evolutionary Algorithm based on State-space Model (SMCEA) was then presented by changing the construction of the state evolution matrix and introducing an elastic search operation. SMCEA was proved to be globally convergent by means of a homogeneous finite Markov chain. Experimental analysis on two test functions shows that SMCEA improves substantially in convergence rate, ability to reach the optimal value and running time. The effectiveness of SMCEA is thereby demonstrated, and it is concluded that SMCEA is better than the Genetic Algorithm (GA) and SCEA.

Target recognition method based on deep belief network
SHI Hehuan, XU Yuelei, YANG Zhijun, LI Shuai, LI Yueyun
Journal of Computer Applications    2014, 34 (11): 3314-3317.   DOI: 10.11772/j.issn.1001-9081.2014.11.3314

To improve the robustness of preprocessing and extract features sufficiently for Synthetic Aperture Radar (SAR) images, an automatic target recognition algorithm for SAR images based on the Deep Belief Network (DBN) was proposed. Firstly, a non-local means image despeckling algorithm based on the Dual-Tree Complex Wavelet Transform (DT-CWT) was proposed; then, combined with estimation of the target azimuth, robust processing of the original data was achieved; finally, a multi-layer DBN was applied to extract deeply abstract visual information as features to complete target recognition. The experiments were conducted on three Moving and Stationary Target Acquisition and Recognition (MSTAR) databases. The results show that the algorithm performs efficiently with high accuracy and robustness.

Delaunay-based Non-uniform sampling for noisy point cloud
LI Guojun, LI Zongchun, HOU Dongxing
Journal of Computer Applications    2014, 34 (10): 2922-2924.   DOI: 10.11772/j.issn.1001-9081.2014.10.2922

To satisfy the ε-sample condition of Delaunay-based triangulation surface reconstruction algorithms, a Delaunay-based non-uniform sampling algorithm for noisy point clouds was proposed. Firstly, the surface Medial Axis (MA) was approximated by the negative poles computed from the Voronoi vertices of the k-nearest neighbors. Secondly, the Local Feature Size (LFS) of the surface was estimated with the approximated medial axis. Finally, combined with the Bound Cocone algorithm, the unwanted interior points were removed. Experiments show that the new algorithm can simplify noisy point clouds accurately and robustly while preserving boundary features well, and the simplified point clouds are suitable for Delaunay-based triangulation surface reconstruction algorithms.

Global convergence analysis of evolutionary algorithm based on state-space model
WANG Dingxiang, LI Maojun, LI Xue, CHENG Li
Journal of Computer Applications    2014, 34 (10): 2816-2819.   DOI: 10.11772/j.issn.1001-9081.2014.10.2816

Evolutionary Algorithm based on State-space model (SEA) is a new real-coded evolutionary algorithm with broad application prospects in engineering optimization problems. The global convergence of SEA was analyzed by means of a homogeneous finite Markov chain to improve the theoretical system of SEA and promote its application in engineering optimization problems, and it was proved that SEA is not globally convergent. A Modified Elastic Evolutionary Algorithm based on State-space model (MESEA) was presented by limiting the value ranges of the elements in the state evolution matrix of SEA and introducing an elastic search. The analytical results show that the search efficiency of SEA can be enhanced by introducing the elastic search. The conclusion that MESEA is globally convergent is drawn, which provides a theoretical basis for applying the algorithm to engineering optimization problems.

Interference characteristic simulation in CSMA/CA-based wireless multi-hop networks
TAN Guoping, TANG Luyao, HUA Zaijun, LIU Xiuquan
Journal of Computer Applications    2013, 33 (12): 3398-3401.  
Conflict interference is one of the key factors affecting the performance of wireless multi-hop networks. Considering the different distributions of interfering nodes in the network, the cumulative interference characteristics of nodes under the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol were studied by simulating different stochastic point processes. An interference simulation platform based on NS2 was then established to simulate the realistic interference distribution. Finally, the comparison between the two shows that there is a certain difference between the simulation and reality, and the reasons for this difference were pointed out.