Robust resource allocation optimization in cognitive wireless network integrating information communication and over-the-air computation
Hualiang LUO, Quanzhong LI, Qi ZHANG
Journal of Computer Applications    2024, 44 (4): 1195-1202.   DOI: 10.11772/j.issn.1001-9081.2023050573

To address the power resource limitations of wireless sensors in over-the-air computation networks and their spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network handled wireless information communication, while the secondary network supported over-the-air computation, with the sensors harvesting energy from signals sent by the primary network's base station. Considering the constraints on the Mean Square Error (MSE) of over-the-air computation and on the transmit power of each node in the network, and based on random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of the wireless information communication users. To solve this robust optimization problem effectively, an Alternating Optimization (AO)-based Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, AO-ICSSCA, was proposed: the original robust optimization problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the primary base station, the power factors of the sensors, and the fusion beamforming vector of the fusion center in the secondary network were optimized alternately. Simulation results demonstrate that AO-ICSSCA achieves superior performance in less computing time than the original Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm.
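The alternating-optimization pattern at the heart of AO-ICSSCA — fix all variable blocks but one, solve the resulting easier sub-problem in closed form, then rotate — can be sketched on a toy quadratic objective. This is a generic two-block AO in Python, not the paper's beamforming problem; the objective and the names `A`, `b`, `c` are illustrative assumptions:

```python
import numpy as np

def alternating_optimization(A, b, c, iters=50):
    """Toy two-block AO: minimize f(x, y) = ||A x - y||^2 + ||y - b||^2 + c ||x||^2
    by alternately solving each convex sub-problem exactly."""
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        # x-step: minimize ||A x - y||^2 + c ||x||^2 (a ridge-regression solve)
        x = np.linalg.solve(A.T @ A + c * np.eye(A.shape[1]), A.T @ y)
        # y-step: minimize ||A x - y||^2 + ||y - b||^2 (pointwise average)
        y = (A @ x + b) / 2.0
    return x, y
```

Because each step solves its sub-problem exactly, the objective is non-increasing across iterations, which is the property AO-style schemes rely on.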

Customs risk control method based on improved butterfly feedback neural network
Zhenggang WANG, Zhong LIU, Jin JIN, Wei LIU
Journal of Computer Applications    2023, 43 (12): 3955-3964.   DOI: 10.11772/j.issn.1001-9081.2022121873

To address the low efficiency, low accuracy, and excessive staffing demands of current China Customs risk control methods, as well as the need to deploy compact intelligent classification models, a customs risk control method based on an improved Butterfly Feedback neural Network Version 2 (BFNet-V2) was proposed. Firstly, the Filling in Code (FC) algorithm was used to map customs tabular data semantically onto analog images. Then, the analog image data were used to train the BFNet-V2, whose regular structure consists of left and right links, different convolution kernels and blocks, and a small-block design, with residual short paths added to mitigate overfitting and gradient vanishing. Finally, a Historical momentum Adaptive moment estimation algorithm (H-Adam) was proposed to optimize the gradient descent process, achieve better adaptive learning-rate adjustment, and classify the customs data. Xception (eXtreme inception), Mobile Network (MobileNet), Residual Network (ResNet), and Butterfly Feedback neural Network (BF-Net) were selected as baseline network structures for comparison. The Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves of BFNet-V2 enclose those of the baselines. Taking Transfer Learning (TL) as an example, the classification accuracy of BFNet-V2 is 4.30%, 4.34%, 4.10%, and 0.37% higher than those of the four baseline structures, respectively, and when classifying real-label data its misjudgment rate is 70.09%, 57.98%, 58.36%, and 10.70% lower, respectively. Compared with eight shallow and deep learning classification methods, the accuracies on three datasets increase by more than 1.33%. The proposed method realizes automatic classification of tabular data and improves the efficiency and accuracy of customs risk control.
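An Adam update augmented with a historical-momentum term might look as follows. This is only a guess at the general flavor of H-Adam from the description above; the blending coefficient `gamma` and the exact form of the extra term are illustrative assumptions, not the authors' formula:

```python
import numpy as np

def h_adam_step(params, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
                gamma=0.5, eps=1e-8):
    """One step of an Adam variant with an extra historical-momentum term:
    the previous update direction is blended into the current one."""
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad        # first moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2   # second moment
    m_hat = state["m"] / (1 - beta1 ** t)                       # bias correction
    v_hat = state["v"] / (1 - beta2 ** t)
    # historical momentum: reuse part of the previous update (illustrative)
    update = lr * m_hat / (np.sqrt(v_hat) + eps) + gamma * state["prev_update"]
    state["prev_update"] = update
    return params - update
```

On a one-dimensional quadratic the blended term behaves like a heavy-ball acceleration on top of the usual Adam step.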

Multi-scale feature enhanced retinal vessel segmentation algorithm based on U-Net
Zhiang ZHANG, Guangzhong LIAO
Journal of Computer Applications    2023, 43 (10): 3275-3281.   DOI: 10.11772/j.issn.1001-9081.2022091437

To address the shortcomings of traditional retinal vessel segmentation algorithms, such as low vessel segmentation accuracy and mis-segmentation of focal areas, a Multi-scale Feature Enhanced retinal vessel segmentation algorithm based on U-Net (MFEU-Net) was proposed. Firstly, to mitigate the vanishing gradient problem, an improved Feature Information Enhancement Residual Module (FIE-RM) was designed to replace the convolution blocks of U-Net. Secondly, to enlarge the receptive field and improve the extraction of vascular features, a multi-scale dense atrous convolution module was introduced at the bottom of U-Net. Finally, to reduce information loss during encoding and decoding, a multi-scale channel enhancement module was constructed at the skip connections of U-Net. Experimental results on the Digital Retinal Images for Vessel Extraction (DRIVE) and CHASE_DB1 datasets show that, compared with CS-Net (Channel and Spatial attention Network), the second-best algorithm in retinal vessel segmentation, MFEU-Net improves the F1 score by 0.35 and 1.55 percentage points and the Area Under Curve (AUC) by 0.34 and 1.50 percentage points, respectively, verifying that MFEU-Net can effectively improve the accuracy and robustness of retinal vessel segmentation.

Car-following model of intelligent connected vehicles based on time-delayed velocity difference and velocity limit
Kaiwang ZHANG, Fei HUI, Guoxiang ZHANG, Qi SHI, Zhizhong LIU
Journal of Computer Applications    2022, 42 (9): 2936-2942.   DOI: 10.11772/j.issn.1001-9081.2021081425

Focusing on the disturbed car-following behavior and traffic flow instability caused by the uncertainty with which drivers acquire road velocity-limit and time-delay information, a car-following model, TD-VDVL (Time-Delayed Velocity Difference and Velocity Limit), was proposed that incorporates the time-delayed velocity difference and the velocity-limit information available in the Internet of Vehicles (IoV) environment. Firstly, the speed change caused by time delay and the road velocity-limit information were introduced to improve the Full Velocity Difference (FVD) model. Then, the linear spectral wave perturbation method was used to derive the traffic flow stability criterion of the TD-VDVL model, and the influence of each model parameter on system stability was analyzed separately. Finally, numerical simulations and comparative analyses were carried out in Matlab on straight and circular roads, with a slight disturbance imposed on the fleet while driving. Under identical conditions, the TD-VDVL model showed the smallest fluctuations in velocity and headway compared with the Optimal Velocity (OV) and FVD models. In particular, when the sensitivity coefficients of the velocity-limit information and of the time-delayed speed difference were both 0.3, the average fluctuation rate of fleet velocity was 2.35% at 500 s, and the peak-to-valley difference of fleet headway was only 0.019 4 m. Experimental results show that, with the time-delayed velocity difference and velocity-limit information introduced, the TD-VDVL model has a larger stable region and can significantly enhance the ability of a car-following fleet to absorb disturbances.
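An FVD-style acceleration rule extended with a delayed-velocity-difference term and a velocity-limit term might be evaluated as follows. This is a sketch of the TD-VDVL idea only; the OV function parameters and the gains `k_d`, `k_l` are illustrative assumptions, not the paper's calibrated model:

```python
import math

def optimal_velocity(headway, v_max=2.0, hc=4.0):
    """Classic OV function (Bando form); parameters are illustrative."""
    return (v_max / 2.0) * (math.tanh(headway - hc) + math.tanh(hc))

def tdvdvl_accel(v, headway, dv_now, dv_delayed, v_limit,
                 alpha=0.6, lam=0.5, k_d=0.3, k_l=0.3):
    """One acceleration evaluation of an FVD-style model extended with
    time-delayed velocity-difference and road-velocity-limit terms."""
    fvd = alpha * (optimal_velocity(headway) - v) + lam * dv_now
    delayed_term = k_d * (dv_delayed - dv_now)  # reaction to stale speed info
    limit_term = k_l * (v_limit - v)            # pull toward the posted limit
    return fvd + delayed_term + limit_term
```

With a large headway and a speed below the limit the model accelerates; with a small headway and a speed above the limit it brakes, matching the qualitative behavior described above.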

Automatic detection algorithm for attention deficit/hyperactivity disorder based on speech pause and flatness
Guozhong LI, Ya CUI, Yixin EMU, Ling HE, Yuanyuan LI, Xi XIONG
Journal of Computer Applications    2022, 42 (9): 2917-2925.   DOI: 10.11772/j.issn.1001-9081.2021071213

Clinicians diagnose Attention Deficit/Hyperactivity Disorder (ADHD) mainly through subjective assessment, lacking objective criteria for support. To solve this problem, an automatic detection algorithm for ADHD based on speech pause and flatness was proposed. Firstly, the Frequency band Difference Energy Entropy Product (FDEEP) parameter was used to locate voiced segments in the speech automatically and extract speech pause features. Then, the Transform Average Amplitude Squared Difference (TAASD) parameter was presented to calculate the voice multi-frequency and extract flatness features. Finally, the fused features were combined with a Support Vector Machine (SVM) classifier to realize automatic recognition of ADHD. The speech samples for the experiment were collected from 17 normal control children and 37 children with ADHD. Experimental results show that the proposed algorithm can effectively discriminate between normal children and children with ADHD, with an accuracy of 91.38%.
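Locating voiced segments with an energy-entropy-style feature can be illustrated with short-time spectral band energies: voiced frames have high energy concentrated in few bands (low entropy), silence does not. This is a simplified stand-in for the paper's FDEEP parameter, not its actual definition; the band count and threshold are illustrative:

```python
import numpy as np

def energy_entropy_product(frame, n_bands=8):
    """Per-frame energy-entropy product: large for frames whose energy is
    high and spectrally concentrated (a common voiced-segment cue)."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spec, n_bands)
    e = np.array([b.sum() for b in bands]) + 1e-12
    p = e / e.sum()
    entropy = -(p * np.log(p)).sum()
    return e.sum() * (np.log(n_bands) - entropy)  # high energy, low entropy -> voiced

def voiced_mask(signal, frame_len=256, threshold=1.0):
    """Mark frames whose energy-entropy product exceeds a threshold."""
    n = len(signal) // frame_len
    return [bool(energy_entropy_product(
        signal[i * frame_len:(i + 1) * frame_len]) > threshold) for i in range(n)]
```

Pauses then appear as runs of unvoiced frames, from which pause-duration features can be computed.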

Real root isolation algorithm for exponential function polynomials
Xinyu GE, Shiping CHEN, Zhong LIU
Journal of Computer Applications    2022, 42 (5): 1531-1537.   DOI: 10.11772/j.issn.1001-9081.2021030440

To address the real root isolation problem of transcendental function polynomials, an interval isolation algorithm for exponential function polynomials, named exRoot, was proposed, in which the real root isolation problem of a non-polynomial real function is transformed into a sign determination problem for polynomials. Firstly, the Taylor substitution method was used to construct a nested interval of polynomials bounding the objective function. Then, the problem of finding the roots of the exponential function polynomial was transformed into determining the signs of polynomials on intervals. Finally, a comprehensive algorithm was given and tentatively applied to determining the reachability of linear systems with rational eigenvalues. The proposed algorithm was implemented in Maple; it is efficient, easy to use, and produces readable output. Unlike HSOLVER and the numerical method fsolve, exRoot avoids directly discussing the existence of roots, and it is terminating and complete in theory. It can reach arbitrary precision and, when applied to optimization problems, avoids the systematic error introduced by numerical solvers.
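The core reduction — replacing root-finding with sign determination on small intervals — can be illustrated with a plain grid scan. This is much cruder than exRoot's Taylor-based polynomial bounds and only brackets roots of odd multiplicity, but it shows the interval/sign viewpoint:

```python
import math

def isolate_sign_changes(f, lo, hi, n=10000):
    """Return sub-intervals of [lo, hi] on which f changes sign; each one
    then brackets at least one real root of odd multiplicity."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return [(xs[i], xs[i + 1]) for i in range(n)
            if f(xs[i]) * f(xs[i + 1]) < 0]
```

For the exponential function polynomial f(x) = e^x - 3x on [0, 2], the scan returns two disjoint isolating intervals, one per real root.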

Online kernel regression based on random sketching method
Qinghua LIU, Shizhong LIAO
Journal of Computer Applications    2022, 42 (3): 676-682.   DOI: 10.11772/j.issn.1001-9081.2021040869

In online kernel regression learning, the inverse of the kernel matrix must be computed whenever a new sample arrives, so the per-round computational complexity is at least quadratic in the number of rounds. The idea of applying the sketching method to hypothesis updating was introduced, and a more efficient online kernel regression algorithm via sketching was proposed. Firstly, with the loss function set to the squared loss, a new gradient descent algorithm, FTL-Online Kernel Regression (F-OKR), was proposed, using the Nyström method to approximate the kernel and applying the idea of Follow-The-Leader (FTL). Then, the sketching method was used to accelerate F-OKR, reducing its computational complexity to linear in the number of rounds and the sketch scale, and quadratic in the data dimension. Finally, an efficient online kernel regression algorithm, Sketched Online Kernel Regression (SOKR), was designed; compared with F-OKR, SOKR loses no accuracy and reduces the runtime by about 16.7% on some datasets. Sub-linear regret bounds were proved for both algorithms, and experimental results on standard regression datasets verify that they outperform the Nyström Online Gradient Descent (NOGD) algorithm, with the average loss over all datasets reduced by about 64%.
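The Nyström approximation that F-OKR builds on replaces the full kernel with a low-rank factorization through a set of landmark points, so that inner products of explicit features approximate kernel evaluations. A minimal sketch for the RBF kernel (landmark choice and `gamma` are illustrative; this is not the paper's code):

```python
import numpy as np

def nystrom_features(X, landmarks, gamma=1.0):
    """Nystrom feature map for the RBF kernel: k(x, z) ~ phi(x) . phi(z),
    built from the m landmark points, so only an m x m matrix is factorized."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    W = rbf(landmarks, landmarks)      # m x m landmark kernel
    U, s, _ = np.linalg.svd(W)
    M = U / np.sqrt(np.maximum(s, 1e-12))  # W^{-1/2} factor
    return rbf(X, landmarks) @ M           # n x m feature matrix
```

When the landmarks are the data points themselves, the factorization reproduces the exact kernel matrix, which is a handy sanity check.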

Artificial bee colony algorithm based on multi-population combination strategy
Wenxia LI, Linzhong LIU, Cunjie DAI, Yu LI
Journal of Computer Applications    2021, 41 (11): 3113-3119.   DOI: 10.11772/j.issn.1001-9081.2021010064

In view of the weak exploitation ability and slow convergence of the standard Artificial Bee Colony (ABC) algorithm, a new ABC algorithm based on a multi-population combination strategy was proposed. Firstly, different-dimensional coordination and multi-dimensional matching update mechanisms were introduced into the search equation. Then, two combination strategies were designed for the employed bees and the onlooker bees respectively, each composed of two sub-strategies focusing on breadth-oriented exploration and depth-oriented exploitation. In the onlooker bee stage, the population was divided into a free subset and a non-free subset, and individuals in different subsets adopted different sub-strategies to balance the exploration and exploitation abilities of the algorithm. Fifteen benchmark functions were used to compare the proposed algorithm with the standard ABC algorithm and three other improved ABC algorithms. The results show that the proposed algorithm has better optimization performance on both low-dimensional and high-dimensional problems.
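The search equation being modified here is the standard ABC neighbor move v_ij = x_ij + phi * (x_ij - x_kj), with phi drawn uniformly from (-1, 1) and x_k a random partner. A minimal sketch, with a `dims_to_update` knob mimicking the multi-dimensional update idea (the multi-population and matching mechanisms of the paper are not reproduced):

```python
import random

def abc_candidate(x, partner, dims_to_update=1):
    """Generate an ABC candidate: perturb a few randomly chosen dimensions
    of x toward/away from a randomly selected partner solution."""
    v = list(x)
    for j in random.sample(range(len(x)), dims_to_update):
        phi = random.uniform(-1, 1)
        v[j] = x[j] + phi * (x[j] - partner[j])
    return v
```

In a full ABC loop the candidate replaces x only if it improves the fitness (greedy selection), which is what gives the algorithm its exploitation pressure.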

Multi-objective automatic identification and localization system in mobile cellular networks
MIAO Sheng, DONG Liang, DONG Jian'e, ZHONG Lihui
Journal of Computer Applications    2019, 39 (11): 3343-3348.   DOI: 10.11772/j.issn.1001-9081.2019040672
Aiming at the difficulty of multi-target identification and the low localization accuracy in mobile cellular networks, a multi-objective automatic identification and localization method based on the cellular network structure was presented to improve the detection of the number of targets and the localization accuracy of each target. Firstly, the existence of multiple targets was detected by analyzing the variance of repeated positioning results in the monitored area. Secondly, cluster analysis of the located points was conducted using k-means unsupervised learning. Since it is difficult to find an optimal number of clusters for the k-means algorithm, a k-value fission algorithm based on beam resolution was proposed to determine k, after which the cluster centers were determined. Finally, to enhance the signal-to-noise ratio of the received signals, the beam directions were set according to the cluster centers, and each target was positioned by the Time Difference Of Arrival (TDOA) algorithm using the signals received from the different beam directions by a linearly constrained narrow-band beamformer. Simulation results show that, compared with recent TDOA and Probability Hypothesis Density (PHD) filter algorithms, the presented method improves the signal-to-noise ratio of the received signals by about 10 dB, reduces the Cramér-Rao lower bound of the delay estimation error by 67%, and increases the relative positioning accuracy by more than 10 percentage points. Meanwhile, the proposed algorithm is simple and effective, performs each positioning independently, has linear time complexity, and is relatively stable.
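The TDOA step can be illustrated with a brute-force residual search: a candidate source position predicts a set of arrival-time differences, and the estimate is the position whose predictions best match the measurements. This grid search is illustrative only (the paper uses beamforming-assisted estimation, not a grid), and the sensor layout is an assumption:

```python
import math

def tdoa_residual(p, sensors, tdoas, c=343.0):
    """Sum of squared mismatches between measured TDOAs (relative to
    sensor 0) and those predicted for candidate source position p."""
    d = [math.dist(p, s) for s in sensors]
    return sum((tdoas[i] - (d[i] - d[0]) / c) ** 2 for i in range(1, len(sensors)))

def locate_grid(sensors, tdoas, lo=-50.0, hi=50.0, step=1.0):
    """Pick the grid point minimizing the TDOA residual."""
    best, best_r = None, float("inf")
    x = lo
    while x <= hi:
        y = lo
        while y <= hi:
            r = tdoa_residual((x, y), sensors, tdoas)
            if r < best_r:
                best, best_r = (x, y), r
            y += step
        x += step
    return best
```

With four sensors the residual has a unique zero at the true source, so exact (noise-free) measurements recover the position.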
Hardware/Software co-design of SM2 encryption algorithm based on the embedded SoC
ZHONG Li, LIU Yan, YU Siyang, XIE Zhong
Journal of Computer Applications    2015, 35 (5): 1412-1416.   DOI: 10.11772/j.issn.1001-9081.2015.05.1412

Concerning the long development cycle of existing system-level designs of elliptic curve algorithms and their unclear performance and overhead indicators, a Hardware/Software (HW/SW) co-design method based on Electronic System Level (ESL) modeling was proposed. Several HW/SW partitions were derived by analyzing the theory and implementation of the SM2 algorithm, and cycle-accurate models of the hardware modules were generated with SystemC. Module- and system-level verification compared the cycle counts of the HW/SW modules to obtain the best partition. Finally, the ESL models were converted into Register Transfer Level (RTL) models according to their Control Flow Graphs (CFG) and Data Flow Graphs (DFG) for logic synthesis and comparison. Under a 50 MHz clock and 180 nm CMOS technology, the best-performing partition executed a point multiplication in 20 ms with 83 000 gates and a power consumption of 2.23 mW. The experimental results show that the system-level analysis is conducive to evaluating performance and resources and is highly applicable to encryption chips based on elliptic curve algorithms; an embedded SoC (System on Chip) using this algorithm can choose an appropriate architecture according to its performance and resource constraints.

Culling of foreign matter fake information in detection of subminiature accessory based on prior knowledge
ZHEN Rongjie WANG Zhong LIU Wenjing GOU Jiansong
Journal of Computer Applications    2014, 34 (5): 1458-1462.   DOI: 10.11772/j.issn.1001-9081.2014.05.1458

In the visual detection of subminiature accessories, the extracted target contour is affected by foreign matter in the field of view, such as dust and hair debris. To avoid the measurement errors caused by foreign matter, a method of culling foreign-matter fake information based on prior knowledge was put forward. Firstly, the corners of the component image containing foreign matter were detected. Secondly, the corner-distribution features of a standard component were obtained by statistics. Finally, a judgment condition for foreign-matter fake information was derived from these corner-distribution features and used to cull it. Applied successfully in an actual engineering project, processing experiments on three typical images with foreign matter show that the proposed algorithm ensures measurement accuracy while effectively culling the foreign-matter fake information in the images.

Improved wavelet denoising with dual-threshold and dual-factor function
REN Zhong LIU Ying LIU Guodong HUANG Zhen
Journal of Computer Applications    2013, 33 (09): 2595-2598.   DOI: 10.11772/j.issn.1001-9081.2013.09.2595
Traditional wavelet threshold functions have several drawbacks: they are discontinuous at the threshold points, the estimated wavelet coefficients have large deviations, the Gibbs phenomenon and distortion are produced, and the Signal-to-Noise Ratio (SNR) of the denoised signal can hardly be improved. To overcome these drawbacks, an improved wavelet threshold function was proposed. Compared with the soft, hard, semi-soft, and other threshold functions, the proposed function is not only continuous at the threshold points and easier to process, but also compatible with the behavior of the traditional functions, and its practical flexibility is greatly improved by adjusting the dual threshold parameters and dual variable factors. To verify the improved function, a series of simulation experiments were performed, and the SNR and Root-Mean-Square Error (RMSE) values of different denoising methods were compared. The experimental results demonstrate that smoothness is greatly enhanced and distortion reduced: compared with the soft function, the SNR increases by 22.2% and the RMSE decreases by 42.6%.
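A continuous two-threshold shrinkage rule in this spirit can be written down directly: kill coefficients below a lower threshold, keep those above an upper threshold, and shrink smoothly in between. The exact functional form and the shape exponent `a` here are illustrative, not the authors' function:

```python
import numpy as np

def soft_threshold(w, t):
    """Classic soft thresholding, for comparison."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def dual_threshold(w, t1, t2, a=2.0):
    """Two-threshold shrinkage: 0 below t1, identity above t2, and a smooth
    transition in between that is continuous at both thresholds
    (a should be an even-valued exponent for the transition term)."""
    aw = np.abs(w)
    mid = np.sign(w) * (aw - t1) ** a * t2 / (t2 - t1) ** a
    return np.where(aw <= t1, 0.0, np.where(aw >= t2, w, mid))
```

At |w| = t1 the transition term is 0 and at |w| = t2 it equals w, so the rule has no jumps, which is exactly the continuity property the soft/hard functions lack.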
Fast local binary fitting optimization approach for image segmentation
LIN Yazhong LI Xin ZHANG Huiqi LUAN Qinbo
Journal of Computer Applications    2013, 33 (02): 491-494.   DOI: 10.3724/SP.J.1087.2013.00491
It is difficult to obtain correct segmentation results for intensity-inhomogeneous images, and the segmentation results are very sensitive to the initial contours. Therefore, a fast and stable approach was proposed to overcome these disadvantages. First, an Adaptive Distance Preserving Level Set (ADPLS) method was used to obtain a better initial contour. Second, the Local Binary Fitting (LBF) model was used for further segmentation. The experimental results show that the improved model achieves good performance and better resolves the conflict among segmentation speed, accuracy, and stability.
Denoising of ECG signal based on FIR and à trous wavelet transform
ZHONG Li-hui WEI Guan-jun SHI LiShi
Journal of Computer Applications    2012, 32 (10): 2966-2968.   DOI: 10.3724/SP.J.1087.2012.02966
Weak, low-frequency Electrocardiogram (ECG) signals, which can be used for disease diagnosis after preprocessing, are susceptible to interference from the external environment. Wavelet decomposition and reconstruction cannot effectively filter out 50 Hz power-line interference and EMG interference whose frequency bands overlap with the ECG; wavelet thresholding cannot effectively filter out baseline drift and causes the pseudo-Gibbs phenomenon at singular points of the signal. To denoise effectively, a wavelet denoising method based on the à trous algorithm was proposed, combining wavelet decomposition and reconstruction, wavelet thresholding, and a 50 Hz notch filter. Simulation results on clinical ECG data show that this method can effectively remove baseline drift, 50 Hz power-line interference, and EMG interference, while reducing the Gibbs phenomenon.
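The 50 Hz notch component of such a pipeline is a standard second-order IIR filter with a zero on the unit circle at the notch frequency. Shown alone for clarity (the paper combines it with à trous wavelet processing); the pole radius `r` is an illustrative choice controlling notch width:

```python
import math

def notch_50hz(x, fs, f0=50.0, r=0.98):
    """Second-order IIR notch at f0 Hz, implemented as a direct-form
    difference equation. r close to 1 gives a narrow notch."""
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2 * math.cos(w0), 1.0]            # zeros exactly on e^{+-jw0}
    a = [1.0, -2 * r * math.cos(w0), r * r]      # poles just inside the circle
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x2, x1 = x1, s
        y2, y1 = y1, out
    return y
```

A 50 Hz sinusoid is annihilated in steady state, while components well away from 50 Hz (such as the low-frequency ECG content) pass with near-unity gain.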
Fast disparity estimation algorithm based on features of disparity vector
SONG Xiao-wei YANG Lei LIU Zhong LIAO Liang
Journal of Computer Applications    2012, 32 (07): 1856-1859.   DOI: 10.3724/SP.J.1087.2012.01856
Disparity estimation is a key technology for stereo video compression. Considering the disadvantages of the epipolar correction algorithm, a fast disparity estimation algorithm based on the features of disparity vectors was proposed. The algorithm analyzed the features of disparity vectors in parallel and convergent camera systems respectively, and showed how to find the best matching block with a three-step search according to these features. The algorithm was tested on both 640×480 and 1280×720 resolution sequences. The experimental results show that, compared with the original TZSearch algorithm in JMVC, the proposed algorithm effectively shortens the encoding time and improves coding efficiency without degrading image quality or compression efficiency. Because the proposed algorithm performs no epipolar correction, the drawbacks caused by epipolar correction do not appear.
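The three-step search mentioned above is the classic block-matching pattern: evaluate the eight neighbors of the current center at a step size that halves each round, moving the center to the best candidate. A minimal sketch, where `cost` maps a candidate displacement to a matching cost such as SAD (the per-block cost and search-window details of the paper are not reproduced):

```python
def three_step_search(cost, center, step=4):
    """Classic three-step search: probe the center and its 8 neighbors at the
    current step size, recenter on the best, then halve the step."""
    best = center
    while step >= 1:
        candidates = [best] + [(best[0] + i * step, best[1] + j * step)
                               for i in (-1, 0, 1) for j in (-1, 0, 1)
                               if (i, j) != (0, 0)]
        best = min(candidates, key=cost)
        step //= 2
    return best
```

With an initial step of 4 this needs at most 25 cost evaluations to cover a ±7 window, versus 225 for exhaustive search, which is where the encoding-time savings come from.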
Range-based approach for multi-object convergence problem
TAN Rong GU Jun-zhong LIN Xin CHEN Peng
Journal of Computer Applications    2011, 31 (09): 2389-2394.   DOI: 10.3724/SP.J.1087.2011.02389
In this paper, the concept of the multi-object convergence problem was introduced. Although some existing query techniques can be applied to this problem, they are all point-based and unable to protect location privacy. Hence, a range-based spatial skyline query algorithm named VRSSA was proposed; it utilizes the Voronoi diagram and supports the spatial anonymity techniques used in Location-Based Services (LBS). Furthermore, to handle changes in query conditions, two further algorithms, the Dynamic Point Joining Algorithm (DPJA) and the Dynamic Point Deleting Algorithm (DPDA), were proposed to update the query results dynamically so that heavy re-computation can be avoided. The experimental results show that these approaches solve the problem efficiently and effectively.
Quality of service estimation research based on the AGWL grid workflow model
Jin-Zhong Li
Journal of Computer Applications   
With regard to the lack of research on QoS composition in the ASKALON grid workflow management system, this paper presented a QoS estimation algorithm for grid workflows based on the Abstract Grid Workflow Language (AGWL) grid workflow model. The algorithm has three main features: 1) it is based on AGWL; 2) it uses a scalable QoS metrics model; 3) it supports multi-dimensional global QoS metrics. Finally, the feasibility of the algorithm was validated through simulation experiments.
Reliability analysis model of cluster storage system by object grouping
Zhong LIU Zong-Bo LI Liu YANG
Journal of Computer Applications   
The reliability of large-scale cluster storage systems is an important research topic in this domain. A high-availability data object placement algorithm was proposed that groups objects into redundancy sets using RAID at the algorithm level. The redundancy allows any corrupted data objects to be reconstructed when storage nodes fail, efficiently ensuring the high availability of the storage system. The availability of the storage system was quantitatively analyzed using a Markov reward model, and the computed results indicate that the algorithm is effective.
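The Markov analysis of availability can be illustrated on the simplest case: a two-state up/down continuous-time chain with failure rate λ and repair rate μ, whose steady-state up-probability is μ/(λ+μ). This is a generic sketch of the technique, not the paper's multi-state redundancy-set model:

```python
import numpy as np

def steady_state(Q):
    """Steady-state distribution pi of a CTMC from its generator matrix Q,
    solving pi Q = 0 together with sum(pi) = 1 (least-squares solve)."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def availability(lam, mu):
    """Two-state up/down model: state 0 = up, state 1 = down."""
    Q = np.array([[-lam, lam], [mu, -mu]])
    return steady_state(Q)[0]
```

The same `steady_state` solve extends to larger generators describing redundancy sets with partial failures, with a reward of 1 attached to every state in which the data remain reconstructible.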