Table of Contents
10 March 2021, Volume 41 Issue 3
The 37th CCF National Database Conference (NDBC 2020)
Database star-join optimization for multicore CPU and GPU platforms
LIU Zhuan, HAN Ruichen, ZHANG Yansong, CHEN Yueguo, ZHANG Yu
2021, 41(3): 611-617. DOI: 10.11772/j.issn.1001-9081.2020091430
Focusing on the high execution cost of the star-join between the fact table and multiple dimension tables in On-line Analytical Processing (OLAP), a star-join optimization technique was proposed for modern multicore CPU (Central Processing Unit) and GPU (Graphics Processing Unit) platforms. Firstly, a vector-index-based vectorized star-join algorithm was proposed for CPU and GPU platforms to address the intermediate materialization cost of the star-join. Secondly, a star-join operation based on vector granularity was presented, with vectors partitioned according to the CPU cache size and GPU shared memory size, so as to optimize the vector index materialization cost of the star-join. Finally, a compressed-vector-index star-join algorithm was proposed to compress the fixed-length vector index into a variable-length binary vector index, so as to improve the storage access efficiency of the vector index in cache under low selectivity. Experimental results show that the vectorized star-join algorithm achieves more than 40% performance improvement over traditional row-wise or column-wise star-join algorithms on the multicore CPU platform, and more than 15% improvement over conventional star-join algorithms on the GPU platform; in comparison with mainstream main-memory databases and GPU databases, the optimized star-join algorithm achieves 130% performance improvement over the best main-memory database, Hyper, and 80% improvement over the best GPU database, OmniSci. It can be seen that the vector-index-based star-join optimization technique effectively improves multi-table join performance; compared with traditional optimization techniques, vector-index-based vectorized processing improves data storage access efficiency in small caches, and the compressed vector index further improves vector index access efficiency in cache.
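To make the vector index idea concrete, the following minimal numpy sketch shows one way a star-join can materialize a compact vector index instead of full intermediate rows; the function names, the mixed-radix index encoding and the toy predicates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch: each dimension filter becomes a lookup vector over
# foreign keys; -1 marks dimension rows that fail the filter.
def dim_filter_vector(dim_keys, predicate):
    vec = np.full(dim_keys.max() + 1, -1, dtype=np.int32)
    surviving = dim_keys[predicate(dim_keys)]
    vec[surviving] = np.arange(surviving.size, dtype=np.int32)
    return vec

def star_join(fact_fks, dim_vectors):
    """Return a vector index over the fact table; -1 marks dropped rows."""
    n = next(iter(fact_fks.values())).size
    index = np.zeros(n, dtype=np.int64)      # combined dimension position
    alive = np.ones(n, dtype=bool)
    for col, vec in dim_vectors.items():
        pos = vec[fact_fks[col]]             # one gather per dimension table
        alive &= pos >= 0                    # fact row dies if any dim filters it
        index = index * (vec.max() + 1) + np.maximum(pos, 0)
    return np.where(alive, index, -1)        # compact vector index, no wide rows

rng = np.random.default_rng(0)
fact = {"d1": rng.integers(0, 4, 6), "d2": rng.integers(0, 3, 6)}
vecs = {"d1": dim_filter_vector(np.arange(4), lambda k: k % 2 == 0),
        "d2": dim_filter_vector(np.arange(3), lambda k: k < 2)}
print(star_join(fact, vecs))                 # -1 entries are filtered-out fact rows
```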
Two-stage file compaction framework by log-structured merge-tree for time series data
ZHANG Lingzhe, HUANG Xiangdong, QIAO Jialin, GOU Wangminhao, WANG Jianmin
2021, 41(3): 618-622. DOI: 10.11772/j.issn.1001-9081.2020122053
When the Log-Structured Merge-tree (LSM-tree) in a time series database is under high write load or resource constraints, untimely file compaction causes a large accumulation of LSM C0-layer data, which increases the latency of ad hoc queries over recently written data. To address this problem, a two-stage LSM compaction framework was proposed that achieves low-latency queries over newly written time series data while maintaining efficient queries over large blocks of data. Firstly, the file compaction process was divided into two stages: quick merging of a small number of out-of-order files, followed by merging of a large number of small files; then multiple file compaction strategies were provided in each stage; finally, compaction resources were allocated between the two stages according to the query load of the system. Tests of the traditional LSM compaction strategy and the two-stage LSM compaction framework on the time series database Apache IoTDB showed that, compared with the traditional LSM strategy, the two-stage file compaction module greatly reduced the number of reads for ad hoc queries while improving the flexibility of the strategy, and improved historical data analysis and query performance by about 20%. Experimental results show that the two-stage LSM compaction framework can increase the ad hoc query efficiency of recently written data, and can also improve the performance of historical data analysis and query as well as the flexibility of the compaction strategy.
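A minimal sketch of the two-stage split described above, with illustrative thresholds and file metadata; this is one reading of the idea, not Apache IoTDB's actual compaction scheduler.

```python
from dataclasses import dataclass

@dataclass
class SSTFile:
    size_mb: float
    out_of_order: bool

def pick_compaction(files, small_mb=16.0, max_unseq=4, max_small=10):
    """Stage 1: quickly merge a few out-of-order files so freshly written
    data becomes cheap to query; stage 2: merge many small in-order files
    to keep historical scans efficient. Thresholds are illustrative."""
    unseq = [f for f in files if f.out_of_order]
    if unseq:
        return "stage1-out-of-order-merge", unseq[:max_unseq]
    small = [f for f in files if f.size_mb < small_mb]
    if len(small) >= 2:
        return "stage2-small-file-merge", small[:max_small]
    return "idle", []

files = [SSTFile(4, True), SSTFile(2, True), SSTFile(8, False), SSTFile(5, False)]
print(pick_compaction(files))   # stage 1 wins while out-of-order files exist
```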
NVM-LH: non-volatile memory-friendly linear hash index
TANG Chen, HUANG Guorui, JIN Peiquan
2021, 41(3): 623-629. DOI: 10.11772/j.issn.1001-9081.2020091451
Non-Volatile Memory (NVM) has attracted attention because of its large capacity, persistence, byte addressability and low read latency. However, it also has disadvantages, such as limited write endurance and asymmetric read/write speeds. Implementing a traditional linear hash index directly on NVM leads to a great number of random write operations. To solve this problem, a new NVM-friendly linear hash index called NVM-LH (NVM-oriented Linear Hashing) was proposed. Cache friendliness was achieved in NVM-LH through cache-line alignment when storing data, and a log-free data consistency guarantee strategy was presented. In addition, the split and delete operations were optimized to minimize NVM write operations. Experimental results show that NVM-LH outperforms the state-of-the-art NVM-aware hash index CCEH (Cacheline-Conscious Extendible Hashing) in terms of space utilization (30% higher) and NVM write number (about 15% lower), showing better NVM-friendliness.
Implementation method of lightweight distributed index based on log structured merge-tree
CUI Shuangshuang, WANG Hongzhi
2021, 41(3): 630-635. DOI: 10.11772/j.issn.1001-9081.2020091543
To solve the problem that existing distributed databases based on the Log-Structured Merge-Tree (LSM-Tree) support only efficient primary key queries and cannot be quickly applied by users in their own clusters, a lightweight distributed index implementation method based on the LSM-Tree, called SIBL (Secondary Index Based LSM-Tree), was proposed. Firstly, the query efficiency of non-primary-key attributes was improved by indexing the primary key attribute columns. Then, a distributed index construction algorithm and an index interval division algorithm based on equidistant sampling were proposed to ensure the even distribution of indexes in the system. The query algorithm of the traditional index was also optimized, and index files were treated as special data files stored in the system in a distributed manner, ensuring the load balance and scalability of the system. Finally, experiments comparing the proposed method with Huawei's secondary index scheme HIndex were carried out on the HBase database in terms of the time and space overhead of index construction, index query performance and system load balance, verifying that the proposed method improves query performance by 50 to 200 times.
AAC-Hunter: efficient algorithm for discovering aggregation algebraic constraints in relational databases
ZHANG Xiaowei, JIANG Dawei, CHEN Ke, CHEN Gang
2021, 41(3): 636-642. DOI: 10.11772/j.issn.1001-9081.2020091473
In order to better maintain data integrity and help auditors find anomalous reimbursement records in relational databases, AAC-Hunter (Aggregation Algebraic Constraints Hunter), an algorithm that automatically discovers Aggregation Algebraic Constraints (AACs), was proposed. An AAC is a fuzzy constraint defined between the aggregation results of two columns in a database, holding for most but not all records. Firstly, joins, groupings and algebraic expressions were enumerated to generate candidate AACs. Secondly, the value range sets of these candidate AACs were calculated. Finally, the AAC results were output. However, this method could not cope with the performance challenge posed by massive data, so a set of heuristic rules was applied to reduce the size of the candidate constraint space, and optimization strategies based on intermediate-result reuse and the elimination of trivial candidate AACs were employed to speed up the value-range-set calculation for candidate AACs. Experimental results on the TPC-H and European Soccer datasets show that AAC-Hunter reduces the constraint discovery space by 95.68% and 99.94% respectively, and shortens running time by 96.58% and 92.51% respectively, compared with the baseline algorithm without heuristic rules or optimization strategies. The effectiveness of AAC-Hunter is thus verified, and it can improve the efficiency and capability of auditing applications.
Personalized privacy protection for spatio-temporal data
LIU Xiangyu, XIA Guoping, XIA Xiufeng, ZONG Chuanyu, ZHU Rui, LI Jiajia
2021, 41(3): 643-650. DOI: 10.11772/j.issn.1001-9081.2020091463
Due to the popularity of smart mobile terminals, sensitive information such as personal location privacy, check-in data privacy and trajectory privacy in collected spatio-temporal data is easily leaked. Existing research offers protection technologies for each of these privacy leakages separately, and there is no personalized spatio-temporal data privacy protection method preventing all of them for users. Therefore, a personalized privacy protection model for spatio-temporal data named (p, q, ε)-anonymity and a Personalized Privacy Protection for Spatio-Temporal data (PPPST) algorithm based on this model were proposed to protect users' privacy data with personalized settings (location privacy, check-in data privacy and trajectory privacy). Heuristic rules were designed to generalize the spatio-temporal data, ensuring the availability of the published data and realizing high availability of the spatio-temporal data. In comparison experiments, the data availability rate of the PPPST algorithm is on average about 4.66% and 15.45% higher than those of the Information Data Used through K-anonymity (IDU-K) and Personalized Clique Cloak (PCC) algorithms respectively. At the same time, a generalized location search technology was designed to improve the execution efficiency of the algorithm. Experiments and analysis were conducted on real spatio-temporal data. Experimental results show that the PPPST algorithm can effectively protect the privacy of personalized spatio-temporal data.
Ensemble classification algorithm for imbalanced time series
CAO Yang, YAN Qiuyan, WU Xin
2021, 41(3): 651-656. DOI: 10.11772/j.issn.1001-9081.2020091493
Aiming at the poor learning ability of existing ensemble classification methods on imbalanced time series data, the ideas of optimizing component-algorithm performance and the integration strategy were adopted, and based on the heterogeneous ensemble method Hierarchical Vote Collective of Transformation-based Ensembles (HIVE-COTE), an ensemble classification algorithm IMHIVE-COTE (Imbalanced Hierarchical Vote Collective of Transformation-based Ensembles) for imbalanced time series was proposed. The algorithm contains two main improvements: first, a new imbalanced classification component SBST-HESCA (SMOM (K-NN-based Synthetic Minority Oversampling algorithm for Multiclass imbalance problems) & Boosting into ST-HESCA (Shapelet Transformation-Heterogeneous Ensembles of Standard Classification Algorithm)) was added, the idea of boosting combined with resampling was introduced, and the sample weights were updated through cross-validation prediction results, so as to make the re-sampling of the dataset more conducive to improving the classification quality of minority-class samples; second, the HIVE-COTE computation framework was improved by incorporating the SBST-HESCA component, and the weights of the component algorithms were optimized so that components better at classifying imbalanced time series received higher voting weight in the classification result, further improving the overall classification quality of the ensemble. The performance of IMHIVE-COTE was verified and analyzed experimentally: compared with the comparison methods, IMHIVE-COTE had the highest overall classification evaluation, and ranked first, first and third respectively on three imbalanced classification indexes. The results show that IMHIVE-COTE's ability to solve the imbalanced time series classification problem is significantly better than that of the comparison methods.
Dynamic workload matching method for automatic parameter tuning
SHEN Chen, TAI Lingxiang, PENG Yuwei
2021, 41(3): 657-661. DOI: 10.11772/j.issn.1001-9081.2020091424
A dynamic workload description method and a dynamic workload matching algorithm were proposed to improve the accuracy of workload matching in automatic parameter tuning systems with static workload description, such as OtterTune. First, a dynamic workload description method was proposed to describe workload changes more accurately. Then, for problems such as irregular sequences in workload matching, to which the Euclidean distance is no longer applicable, a dynamic workload matching algorithm using data alignment was proposed based on the Dynamic Time Warping (DTW) algorithm. Finally, the proposed methods were applied to OtterTune to form a dynamic-workload-based tuning tool, D-OtterTune (Dynamic workload based OtterTune), and several experiments were conducted on it. Experimental results show that, with the stable improvement in workload matching accuracy for automatic parameter tuning brought by the dynamic workload matching method, D-OtterTune improves accuracy by 3% compared with the original OtterTune. It can be seen that D-OtterTune can have a significant impact on overall business performance in data-intensive applications.
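The DTW alignment underlying the matching step is standard dynamic programming; a minimal sketch follows (the two toy workload sequences are illustrative).

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences, used
    here to align workload statistics sampled at different paces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two workloads with the same shape but different pacing match closely under
# DTW even though their pointwise Euclidean distance would be large.
w1 = np.array([1, 1, 2, 5, 9, 5, 2])
w2 = np.array([1, 2, 5, 5, 9, 9, 5, 2, 2])
print(dtw_distance(w1, w2))
```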
Remote sensing time-series images classification algorithm with abnormal data
REN Yuanyuan, WANG Chuanjian
2021, 41(3): 662-668. DOI: 10.11772/j.issn.1001-9081.2020091425
Concerning the poor classification performance of convolutional neural networks on time-series remote sensing images with abnormal data, an end-to-end network based on the fusion of multi-mode and multi-single-mode architectures was introduced. Firstly, multi-scale features of the multi-dimensional time series were extracted by a multivariate time-series model and a univariate time-series model. Then, spatio-temporal sequence feature construction was completed by automatic encoding based on the pixel spatial coordinate information. Finally, classification was implemented by a fully connected layer and the softmax function. In the case of abnormal data (data loss and data distortion), the proposed algorithm was compared with commonly used time-series remote sensing image classification algorithms such as the 1D Convolutional Neural Network (1D-CNN), Multi-Channels Deep Neural Network (MCDNN), Time Series Convolutional Neural Networks (TSCNN) and Long Short-Term Memory (LSTM) network. Experimental results showed that the proposed end-to-end network with multi-mode and multi-single-mode architecture fusion had the highest classification accuracy in the presence of abnormal data, with an F1 value of 93.40%.
Artificial intelligence
YOLOv3 compression and acceleration based on ZYNQ platform
GUO Wenxu, SU Yuanqi, LIU Yuehu
2021, 41(3): 669-676. DOI: 10.11772/j.issn.1001-9081.2020060994
Object detection networks with high accuracy are hard to deploy directly on end devices such as vehicles and drones due to their large numbers of parameters and high computational cost. To solve this problem, considering both network compression and computation acceleration, a new compression scheme for residual networks was proposed to compress YOLOv3 (You Only Look Once v3), and the compressed network was then accelerated on the ZYNQ platform. Firstly, a network compression algorithm containing both network pruning and network quantization was proposed. For network pruning, a strategy for residual structures was introduced to divide pruning into two granularities, channel pruning and residual-connection pruning, which overcame the limitation of channel pruning on residual connections and further reduced the number of parameters of the model. For network quantization, a relative-entropy-based simulated quantization was utilized to quantize the parameters channel by channel, with online statistics of the parameter distribution and of the information loss caused by quantization, so as to assist in choosing the best quantization strategy and reduce precision loss during quantization. Secondly, an 8-bit convolution acceleration module was designed and optimized on the ZYNQ platform, which optimized the on-chip cache structure and accelerated the compressed YOLOv3 in combination with the Winograd algorithm. Experimental results show that the proposed solution achieves a smaller model size and higher accuracy (7 percentage points higher) compared to YOLOv3 tiny. Meanwhile, the hardware acceleration method on the ZYNQ platform achieves a higher energy efficiency ratio than other platforms, thus helping the actual deployment of YOLOv3 and other residual networks on ZYNQ end devices.
Imputation algorithm for hybrid information system of incomplete data analysis approach based on rough set theory
PENG Li, ZHANG Haiqing, LI Daiwei, TANG Dan, YU Xi, HE Lei
2021, 41(3): 677-685. DOI: 10.11772/j.issn.1001-9081.2020060894
Concerning the poor imputation capability of the ROUgh Set Theory based Incomplete Data Analysis Approach (ROUSTIDA) for a Hybrid Information System (HIS) containing the multiple attribute types found in real-world applications, such as discrete (e.g., integer, string and enumeration), continuous (e.g., floating-point) and missing attributes, a Rough Set Theory based Hybrid Information System Missing Data Imputation Approach (RSHISMIS) was proposed. Firstly, following the idea of decision-attribute equivalence class partitioning, the HIS was divided to solve the decision rule conflict problem that might occur after imputation. Secondly, a hybrid distance matrix was defined to reasonably quantify the similarity between objects, in order to filter samples capable of imputation and to overcome the shortcoming that ROUSTIDA cannot handle continuous attributes. Thirdly, the nearest-neighbor idea was incorporated to solve ROUSTIDA's inability to impute data with the same missing attribute when the attribute values of non-discriminant objects conflict. Finally, experiments were conducted on 10 UCI datasets, comparing the proposed method with classical algorithms including ROUSTIDA, K Nearest Neighbor Imputation (KNNI), Random Forest Imputation (RFI) and Matrix Factorization (MF). Experimental results show that the proposed method outperforms ROUSTIDA by 81% in recall on average and by 5% to 53% in precision. Meanwhile, the method reduces the Normalized Root Mean Square Error (NRMSE) by up to 0.12 compared with ROUSTIDA. Besides, the classification accuracy of the method is 7% higher on average than that of ROUSTIDA, and is also better than those of the imputation algorithms KNNI, RFI and MF.
Co-training algorithm combining improved density peak clustering and shared subspace
LYU Jia, XIAN Yan
2021, 41(3): 686-693. DOI: 10.11772/j.issn.1001-9081.2020071095
During the iterations of a co-training algorithm, the added unlabeled samples may lack useful information, and the labels assigned to the same sample by multiple classifiers may be inconsistent, leading to an accumulation of classification errors. To solve these problems, a co-training algorithm combining improved density peak clustering and a shared subspace was proposed. Firstly, two base classifiers were obtained from complementary attribute sets. Secondly, improved density peak clustering was performed based on a siphon balance rule; beginning from the cluster centers, unlabeled samples with high mutual-neighbor degree were selected progressively and labeled by the two base classifiers. Finally, the final categories of samples with inconsistent labels were determined by the shared subspace obtained by a multi-view non-negative matrix factorization algorithm. In the proposed algorithm, unlabeled samples that better represent the spatial structure are selected via improved density peak clustering and mutual-neighbor degree, and samples given different labels are revised via the shared subspace, addressing the low classification accuracy caused by sample misclassification. The algorithm was validated in multiple comparison experiments on 9 UCI datasets, and experimental results show that the proposed algorithm achieves the highest classification accuracy on 7 datasets and the second highest on the other 2.
Text correction and completion method in continuous sign language recognition
LONG Guangyu, CHEN Yiqiang, XING Yunbing
2021, 41(3): 694-698. DOI: 10.11772/j.issn.1001-9081.2020060798
Aiming at the problems of semantic ambiguity and chaotic word order in the text results of video-based continuous sign language recognition, a two-step method was proposed to convert sign language text from continuous sign language recognition into fluent, understandable Chinese text. In the first step, natural sign language rules and an N-gram language model were used to reorder the text of the continuous sign language recognition results. In the second step, a Bidirectional Long Short-Term Memory (Bi-LSTM) network model was trained on a Chinese universal quantifier dataset to solve the missing-quantifier problem of sign language grammar, so as to improve the fluency of the texts. Absolute accuracy and the proportion of the longest correct subsequence were adopted as the evaluation indexes of text ordering. Experimental results showed that the text ordering results of the proposed method had an absolute accuracy of 77.06%, a longest-correct-subsequence proportion of 86.55%, and a quantifier completion accuracy of 97.23%. The proposed method can effectively improve the fluency and intelligibility of the text results of continuous sign language recognition. It has been successfully applied to video-based continuous sign language recognition, improving the barrier-free communication experience between hearing-impaired and normal-hearing people.
Reinforced automatic summarization model based on advantage actor-critic algorithm
DU Xixi, CHENG Hua, FANG Yiquan
2021, 41(3): 699-705. DOI: 10.11772/j.issn.1001-9081.2020060837
In long-text automatic summarization tasks, extractive summaries tend to be redundant, while abstractive summaries often lose key information, summarize inaccurately, and generate repeated content. To solve these problems, a Reinforced Automatic Summarization model based on the Advantage Actor-Critic algorithm (A2C-RLAS) for long text was proposed. Firstly, the key sentences of the original text were extracted by an extractor based on a hybrid neural network of Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN). Then, the key sentences were refined by a rewriter based on the copy mechanism and the attention mechanism. Finally, the Advantage Actor-Critic (A2C) algorithm in reinforcement learning was used to train the entire network, with the semantic similarity between the rewritten summary and the reference summary (the BERTScore (Evaluating Text Generation with Bidirectional Encoder Representations from Transformers) value) used as a reward to guide the extraction process, so as to improve the quality of sentences extracted by the extractor. Experimental results on the CNN/Daily Mail dataset show that, compared with models such as the Reinforcement Learning-based Extractive Summarization (Refresh) model, the Recurrent Neural Network based sequence model for extractive summarization (SummaRuNNer) and the Distributional Semantics Reward (DSR) model, A2C-RLAS produces final summaries with more accurate content, more fluent language and effectively reduced redundancy, while improving both the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) and BERTScore indicators. Compared to the Refresh model and the SummaRuNNer model, the ROUGE-L value of the A2C-RLAS model is increased by 6.3% and 10.2% respectively; compared with the DSR model, the F1 value of the A2C-RLAS model is increased by 30.5%.
Salient object detection based on difference of Gaussian feature network
HOU Yunlong, ZHU Lei, CHEN Qin, LYU Suidong
2021, 41(3): 706-713. DOI: 10.11772/j.issn.1001-9081.2020060957
As a clue with a physiological basis, the center-surround contrast theory has been widely used in traditional saliency detection models, but it is rarely applied explicitly to models based on deep Convolutional Neural Networks (CNNs). In order to introduce the classic center-surround contrast theory into deep CNNs, a salient object detection model based on a Difference of Gaussian (DoG) feature network was proposed. Firstly, a Difference of Gaussian Pyramid (DGP) structure was constructed on deep features at multiple scales to perceive the locally prominent features of the salient object in an image. Then, the obtained differential features were used to perform weighted selection over the deep features rich in semantic information. Finally, accurate extraction of the salient object was realized. In addition, Gaussian smoothing was implemented using standard one-dimensional convolutions in the network design, reducing computational complexity while enabling end-to-end training. Comparing the proposed model with six salient object detection algorithms on four public datasets shows that its results achieve the best performance in the quantitative evaluation of Mean Absolute Error (MAE) and maximum F-measure; on the DUTS-TE dataset in particular, the maximum F-measure and MAE of the proposed model reach 0.885 and 0.039 respectively. Experimental results show that the proposed model has good detection performance for salient objects in complex natural scenes.
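The use of one-dimensional convolutions for Gaussian smoothing relies on the separability of the Gaussian kernel; below is a minimal numpy sketch of a difference-of-Gaussians response computed that way (the sigmas and the toy feature map are illustrative assumptions).

```python
import numpy as np

def gauss1d(sigma, radius=None):
    """Sampled 1D Gaussian kernel; applying it along rows and then columns
    equals a full 2D Gaussian blur (separability)."""
    r = radius or int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(feat, sigma):
    k = gauss1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, feat, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def dog(feat, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians: a band-pass, center-surround style response
    that highlights locally prominent regions of a feature map."""
    return blur(feat, sigma1) - blur(feat, sigma2)

feat = np.zeros((32, 32)); feat[12:20, 12:20] = 1.0   # a bright blob
response = dog(feat)
print(response.shape, response.max() > 0)             # peaks inside the blob
```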
Face frontalization generative adversarial network algorithm based on face feature map symmetry
LI Hongxia, QIN Pinle, YAN Hanmei, ZENG Jianchao, BAO Qianyue, CHAI Rui
2021, 41(3): 714-720. DOI: 10.11772/j.issn.1001-9081.2020060779
Current research on face frontalization mainly addresses the yaw problem and pays less attention to frontalizing side faces affected by both yaw and pitch in real scenes such as surveillance video. Aiming at this problem and the problem of incomplete identity information in frontal face images generated from multi-angle side faces, a Generative Adversarial Network (GAN) based on feature map symmetry and a periocular feature preserving loss was proposed. Firstly, according to the prior of face symmetry, a feature map symmetry module was proposed: a face key point detector was used to detect the position of the nose tip, and the feature map extracted by the encoder was mirror-symmetrized about the nose tip, so as to alleviate the lack of facial information at the feature level. Then, drawing on the idea of periocular recognition, a periocular feature preserving loss was added to the existing identity-preserving method for generated images, training the generator to produce realistic, identity-preserving frontal face images. Experimental results show that the facial details of the generated images are well preserved, and the average Rank-1 recognition rate over all angles under pitch on the CAS-PEAL-R1 dataset is 99.03%, which shows that the method can effectively solve the frontalization problem of multi-angle side faces.
Human action recognition method based on low-rank action information and multi-scale convolutional neural network
JIANG Li, HUANG Shijian, YAN Wenjuan
2021, 41(3): 721-726. DOI: 10.11772/j.issn.1001-9081.2020060958
In view of the cumbersome steps and numerous assumptions required by traditional methods of acquiring action information in human action recognition, and considering the superior performance of Convolutional Neural Networks (CNNs) in image and video processing, a human action recognition method based on Low-rank Action Information (LAI) and a Multi-scale Convolutional Neural Network (MCNN) was proposed. Firstly, the action video was divided into several segments, the LAI of each segment was extracted by low-rank learning on that segment, and the LAI of all segments was concatenated along the time axis to obtain the LAI of the whole video, which effectively captures the action information while avoiding cumbersome extraction steps and assumptions. Secondly, according to the characteristics of LAI, an MCNN model was designed, in which multi-scale convolution kernels were used to obtain the action characteristics of LAI under different receptive fields, and a reasonable design of the convolution, pooling and fully connected layers was utilized to further refine the features and finally output the action categories. The performance of the proposed method was verified on two benchmark databases, KTH and HMDB51, with three groups of comparison experiments. Experimental results show that the recognition rates of the proposed method are 97.33% and 72.05% on the two databases respectively, at least 0.67 and 1.15 percentage points higher than those of the Two-Fold Transformation (TFT) and Deep Temporal Embedding Network (DTEN) methods. The proposed method can further promote the wide application of action recognition technology in security, human-computer interaction and other fields.
Data science and technology
Fixed word-aligned partition compression algorithm of inverted list based on directed acyclic graph
JIANG Kun, LIU Zheng, ZHU Lei, LI Xiaoxing
2021, 41(3): 727-732. DOI: 10.11772/j.issn.1001-9081.2020060874
In Fixed Word-Aligned (FWA) inverted index compression algorithms for Web search engines, the "greedy" block partition strategy of the inverted list and the interleaved storage of codeword information make it difficult to achieve optimal compression performance. To solve this problem, an FWA partition compression algorithm based on a Directed Acyclic Graph (DAG) was proposed. Firstly, considering the small integers in inverted lists brought about by the clustering characteristics of Web pages, a novel FWA compression format with a 64-bit-block data area was designed. In this format, the data area is divided by a 4-bit selector area into 16 storage modes suitable for compressing runs of small integers, and the selector area and data area of each block are stored separately to ensure good batch decompression performance. Secondly, a DAG-described Word-Aligned Partition (WAP) algorithm was proposed on the basis of the new format. The algorithm regards inverted-list block partitioning as a Single-Source Shortest-Path (SSSP) problem on a DAG, and determines the structure and recursive definition of the SSSP problem by considering the constraints of the various storage modes of the FWA data area. Thirdly, dynamic programming was used to solve the SSSP problem, yielding the pseudo-code and complexity analysis of optimal partition vector computation. The original storage modes of traditional FWA algorithms such as S9, S16 and S8b were partition-optimized based on DAG, and the computational complexities of the algorithms before and after optimization were compared and analyzed. Finally, compression performance experiments were conducted on simulated integer sequences and the Text REtrieval Conference (TREC) GOV2 Web page index data. Experimental results show that, compared with traditional FWA algorithms, the DAG-based FWA partition algorithm improves the compression ratio and decompression speed through batch decompression and partition optimization, and obtains a higher compression ratio than traditional Frame Of Reference (FOR) algorithms on sequences of consecutive small integers.
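The partitioning-as-shortest-path formulation admits a compact dynamic-programming sketch; the (count, bits) mode table below is an illustrative 60-bit data area with a 4-bit selector, not the exact format of the paper or of S9/S16/S8b.

```python
# Sketch of optimal block partitioning as a shortest path on a DAG.
MODES = [(1, 60), (2, 30), (3, 20), (4, 15), (6, 10), (10, 6), (15, 4), (60, 1)]

def optimal_partition(values):
    """dp[i] = fewest 64-bit words needed to encode values[:i]; each feasible
    mode at position i is a DAG edge i -> i+count with weight one word."""
    n = len(values)
    dp = [0] + [float("inf")] * n
    choice = [None] * (n + 1)
    for i in range(n):
        if dp[i] == float("inf"):
            continue
        for count, bits in MODES:
            j = min(i + count, n)          # last block may be partially filled
            if all(v < (1 << bits) for v in values[i:j]) and dp[i] + 1 < dp[j]:
                dp[j], choice[j] = dp[i] + 1, (i, count, bits)
    return dp[n], choice

gaps = [3, 1, 2, 5, 900, 2, 2, 1, 1, 7]    # d-gaps of a toy posting list
words, _ = optimal_partition(gaps)
print("64-bit words used:", words)
```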
Method of dynamically constructing spatial topic R-tree based on k-means++
ZOU Zhiwen, QIN Cheng
2021, 41(3): 733-737. DOI: 10.11772/j.issn.1001-9081.2020060851
Existing R-tree spatial clustering technology usually designates cluster centers randomly or by computing the Euclidean distance between spatial data, without considering the topic relevance between spatial data, so the clustering result is influenced by the initial value of k and the association between spatial data is based only on geographic location. Aiming at this situation, a method of dynamically constructing a spatial Topic R-tree (TR-tree) based on k-means++ was proposed. Firstly, in the traditional k-means++ algorithm, k clusters were dynamically determined by a clustering measure function, and the Latent Dirichlet Allocation (LDA) model was introduced into the clustering measure function to calculate the topic probability of each spatial data text, thereby strengthening the topic relevance between spatial data. Secondly, the cluster center with the highest probability was selected through the topic probabilities. Finally, the TR-tree was constructed, with the spatial data dynamically allocated during construction. Experimental results show that, at the cost of a slight increase in R-tree construction time, this method significantly improves indexing efficiency and the correlation between nodes compared to algorithms that construct the R-tree based only on geographic location clustering.
Comparative density peaks clustering algorithm with automatic determination of clustering center
GUO Jia, HAN Litao, SUN Xianlong, ZHOU Lijuan
2021, 41(3): 738-744. DOI: 10.11772/j.issn.1001-9081.2020071071
In order to solve the problems that clustering centers cannot be determined automatically by the Density Peaks Clustering (DPC) algorithm and that clustering center points are not distinguished clearly enough from non-center points in the decision graph, a Comparative density Peaks Clustering algorithm with Automatic determination of clustering centers (ACPC) was designed. Firstly, the distance parameter was replaced by a distance comparison quantity, making potential clustering centers more prominent in the decision graph. Then, a 2D interval estimation method was used to select clustering centers automatically, realizing the automation of the clustering process. Experimental results show that the ACPC algorithm has a better clustering effect on four synthetic datasets; comparisons of the Accuracy indicator on real datasets show that on the Iris dataset the clustering accuracy of ACPC reaches 94%, 27.3% higher than that of the traditional DPC algorithm, and that ACPC solves the problem of having to select clustering centers interactively.
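The decision-graph quantities that ACPC modifies come from standard density peaks clustering; a minimal sketch of computing them follows (the kernel width dc and the toy data are illustrative).

```python
import numpy as np

def dpc_rho_delta(X, dc=1.0):
    """Density-peaks quantities: rho = local density (Gaussian kernel),
    delta = distance to the nearest point of higher density. Cluster
    centers stand out with simultaneously large rho and delta."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0      # exclude self-term
    order = np.argsort(-rho)
    delta = np.full(len(X), d.max())                    # densest point keeps max
    for rank, i in enumerate(order[1:], start=1):
        delta[i] = d[i, order[:rank]].min()             # nearest denser point
    return rho, delta

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(size=(40, 2)), rng.normal(size=(40, 2)) + 6])
rho, delta = dpc_rho_delta(X)
print("candidate centers:", np.argsort(-(rho * delta))[:2])
```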
Cyber security
Research and application progress of blockchain in area of data integrity protection
GAO Haoyu, LI Leixiao, LIN Hao, LI Jie, DENG Dan, LI Shaoxu
2021, 41(3): 745-755. DOI: 10.11772/j.issn.1001-9081.2020060912
As an indispensable new resource of the modern information society, data face the risk of being tampered with from the moment they are created, and the availability and authenticity of tampered data are greatly reduced. Blockchain technology is well suited to data integrity protection thanks to its tamper resistance, decentralization and avoidance of single points of failure. Firstly, the background of blockchain technology and the essential requirements of data protection were briefly described. Secondly, according to the type of blockchain, existing blockchain-based data integrity protection results were classified and introduced, and the advantages and disadvantages of each were summarized from the perspective of data integrity protection. Thirdly, current data integrity protection technologies were classified and compared with blockchain-based data integrity protection, and the shortcomings of traditional data integrity protection technologies and the advantages of the blockchain-based approach were analyzed. Finally, the defects of blockchain-based data integrity protection technology were summarized and solutions were given.
d-PBFT:detection consensus algorithm for alliance blockchain
LIU Yu, ZHU Chaoyang, LI Jinze, LAO Yuanji, QIN Tuanfa
2021, 41(3): 756-762. DOI: 10.11772/j.issn.1001-9081.2020060900
Alliance blockchains have an identity authentication mechanism, but Byzantine malicious nodes may still exist in the network, and member nodes of the alliance may be controlled and exploited by third-party adversaries. To solve these problems, a detection-Practical Byzantine Fault Tolerance (d-PBFT) consensus algorithm that can monitor node states was proposed. Firstly, the primary node was elected and its state was checked to ensure that the elected primary had never been malicious. Secondly, the consensus request submitted by the client was executed through the three-stage consensus process of "pre-prepare, prepare, commit". Finally, the state of the primary node was assessed according to the outcome of the three stages; an unstable or malicious primary was marked, and malicious nodes were quarantined to await processing. On the basis of tolerating a specific number of Byzantine nodes, the algorithm monitors every node's state continuously and isolates malicious nodes, reducing their bad impact on the alliance. Experimental results show that a network running the d-PBFT algorithm has high throughput and low consensus delay, and its consensus output is 26.1% higher than that of the Practical Byzantine Fault Tolerance (PBFT) algorithm when the alliance network includes Byzantine nodes. The d-PBFT algorithm improves both the robustness and the throughput of the alliance network.
Sunflower based construction of locally repairable codes
ZHANG Mao, LI Ruihu, ZHENG Youliang, FU Qiang
2021, 41(3): 763-767. DOI: 10.11772/j.issn.1001-9081.2020060839
The construction of binary Locally Repairable Codes (LRCs) achieving the C-M (Cadambe-Mazumdar) bound has been studied thoroughly, while there has been little research on general fields. To address this, the construction of LRCs over general fields was studied. Firstly, a method for determining the number of elements in a sunflower was proposed using projective geometry theory. Then, parameters such as the code length, dimension and locality of an LRC were described clearly by depicting the LRC through disjoint repair groups. Finally, based on parity-check matrices with disjoint local repair groups, two families of LRCs over general fields with minimum distance 6 were constructed from sunflowers, many of which are optimal or almost optimal. Compared with existing LRCs constructed by methods such as subfield subcodes, generalized concatenated codes and algebraic curves, the two constructed families improve the information rate under the same minimum distance and locality. These results can be applied to the construction of other LRCs over general fields.
Network security situation prediction based on improved particle swarm optimization and extreme learning machine
TANG Yanqiang, LI Chenghai, SONG Yafei
2021, 41(3): 768-773. DOI: 10.11772/j.issn.1001-9081.2020060924
Focusing on the low prediction accuracy and slow convergence of network security situation prediction models, a prediction method based on an Improved Particle Swarm Optimization Extreme Learning Machine (IPSO-ELM) algorithm was proposed. Firstly, the inertia weight and learning factors of the Particle Swarm Optimization (PSO) algorithm were improved to adjust the two parameters adaptively as the number of iterations increases, giving PSO a large search range and high speed in the initial stage, and strong convergence ability and stability in the later stage. Secondly, aiming at the problem that PSO easily falls into local optima, a particle stagnation disturbance strategy was proposed to re-guide particles trapped in local optima toward the global optimum. The resulting Improved Particle Swarm Optimization (IPSO) algorithm preserves global optimization ability while enhancing local search ability. Finally, IPSO was combined with the Extreme Learning Machine (ELM) to optimize the initial weights and thresholds of the ELM; compared with the plain ELM, the IPSO-optimized ELM improves prediction accuracy by 44.25%. Experimental results show that, compared with PSO-ELM, IPSO-ELM achieves a fitting degree of 0.99 between predictions and ground truth, with a 47.43% faster convergence rate. The proposed algorithm is clearly better than the comparison algorithms in prediction accuracy and convergence speed.
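A minimal sketch of iteration-adaptive inertia weight and learning factors in PSO follows; the linear schedules and bounds are common stand-ins rather than the paper's exact formulas, and the stagnation-disturbance step is omitted.

```python
import numpy as np

def adaptive_pso_minimize(f, dim, iters=200, n=30, seed=0):
    """PSO whose parameters shift from exploration (large w, large cognitive
    c1) to exploitation (small w, large social c2) as iterations advance."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters        # inertia weight: 0.9 -> 0.4
        c1 = 2.5 - 1.5 * t / iters       # cognitive factor: 2.5 -> 1.0
        c2 = 1.0 + 1.5 * t / iters       # social factor:    1.0 -> 2.5
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

print(adaptive_pso_minimize(lambda z: np.sum(z ** 2), dim=5))
```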
Speech steganalysis method based on deep residual network
REN Yiming, WANG Rangding, YAN Diqun, LIN Yuzhen
2021, 41(3): 774-779. DOI: 10.11772/j.issn.1001-9081.2020060763
Concerning the low performance of existing methods in detecting Least Significant Bit (LSB) steganography in WAV-format speech, a speech steganalysis method based on a deep residual network was proposed. First, the residual signal of the input speech was calculated through a fixed convolutional layer composed of multiple sets of high-pass filters, and a truncated linear unit was adopted to truncate the obtained residual signal. Then, a deep network was constructed by stacking convolutional layers and the designed residual blocks to extract deep steganographic features. Finally, the classification result was output by a classifier composed of a fully connected layer and a Softmax layer. Experimental results under different embedding rates for two steganography methods, Hide4PGP (Hide 4 Pretty Good Privacy) and LSB matching, show that the proposed method achieves better performance than existing Convolutional Neural Network (CNN)-based steganalysis methods; compared with LinNet, its detection accuracy on Hide4PGP at an embedding rate of 0.1 bps (bit per sample) is 7 percentage points higher.
Malware detection method based on perceptual hash algorithm and feature fusion
JIANG Qianyu, WANG Fengying, JIA Lipeng
2021, 41(3): 780-785. DOI: 10.11772/j.issn.1001-9081.2020060906
In current malware family detection, the local or global features extracted from a malware grayscale image cannot fully describe the malware. To address this problem and improve the detection effect, a malware detection method based on a perceptual hash algorithm and feature fusion was proposed. Firstly, grayscale image samples were screened with the perceptual hash algorithm, quickly assigning samples to specific malware families and separating out samples of uncertain families; experimental tests showed that about 67% of malware samples could be detected by the perceptual hash algorithm alone. Then, Local Binary Pattern (LBP) local features and Gist global features were further extracted for the samples of uncertain families, and the fusion of the two features was used to classify the malware samples with a machine learning algorithm. Finally, experimental results on 25 malware families show that detection accuracy is higher with the fused LBP and Gist features than with a single feature, and that the proposed method classifies more efficiently than detection algorithms using machine learning alone, with detection speed increased by 93.5%.
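A minimal sketch of perceptual hashing for the near-duplicate screening step, using the common average-hash variant (the paper's exact hash may differ; the toy images are illustrative):

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Average hash of a grayscale image (2D array in [0, 255]): downsample
    to hash_size x hash_size by block means, then threshold at the mean.
    Near-duplicate images differ in only a few bits."""
    h, w = gray.shape
    ys = np.arange(hash_size + 1) * h // hash_size
    xs = np.arange(hash_size + 1) * w // hash_size
    small = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(hash_size)] for i in range(hash_size)])
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(img + np.random.default_rng(2).normal(0, 4, img.shape), 0, 255)
print(hamming(average_hash(img), average_hash(noisy)))  # small distance
```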
Advanced computing
Multi-user task offloading strategy based on stable allocation
MAO Yingchi, XU Xuesong, LIU Pengfei
2021, 41(3): 786-793. DOI: 10.11772/j.issn.1001-9081.2020060861
With the emergence of many computation-intensive applications, mobile devices cannot meet user requirements on delay and energy consumption due to their limited computing capability. Mobile Edge Computing (MEC) offloads user task computation to MEC servers through a wireless channel, significantly reducing task response delay and energy consumption. Concerning the multi-user task offloading problem, a Multi-User Task Offloading strategy based on Stable Allocation (MUTOSA) was proposed to minimize energy consumption while meeting user delay requirements. Firstly, the multi-user task offloading problem in the independent-task scenario was modeled with comprehensive consideration of delay and energy consumption. Then, an adjustment strategy was proposed based on the idea of deferred acceptance in the stable allocation of game theory. Finally, the multi-user task offloading problem was solved through continuous iteration. Experimental results show that, compared with the benchmark strategy and a heuristic strategy, the proposed strategy can meet the delay requirements of more users, increase user satisfaction by about 10% on average, and reduce the total energy consumption of user devices by about 50%. This shows that the proposed strategy can effectively reduce energy consumption while meeting user delay requirements, and can effectively improve the user experience of delay-sensitive applications.
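The deferred-acceptance idea can be sketched as follows; the preferences, capacities and names are illustrative, and this is a textbook Gale-Shapley variant rather than the MUTOSA strategy itself.

```python
# Users "propose" to their preferred servers; an over-loaded server keeps
# only the proposals it ranks best and rejects the rest, who try elsewhere.
def stable_offload(user_prefs, server_prefs, capacity):
    """user_prefs[u]: servers ordered by that user's cost (delay/energy);
    server_prefs[s]: users ordered by the server's preference;
    capacity[s]: how many tasks server s accepts. Assumes enough capacity
    so every user is eventually placed."""
    rank = {s: {u: i for i, u in enumerate(p)} for s, p in server_prefs.items()}
    held = {s: [] for s in server_prefs}
    nxt = {u: 0 for u in user_prefs}
    free = list(user_prefs)
    while free:
        u = free.pop()
        s = user_prefs[u][nxt[u]]; nxt[u] += 1   # propose to next-best server
        held[s].append(u)
        held[s].sort(key=lambda x: rank[s][x])
        if len(held[s]) > capacity[s]:
            free.append(held[s].pop())           # deferred rejection of worst
    return held

users = {"u1": ["s1", "s2"], "u2": ["s1", "s2"], "u3": ["s2", "s1"]}
servers = {"s1": ["u2", "u1", "u3"], "s2": ["u1", "u3", "u2"]}
print(stable_offload(users, servers, {"s1": 1, "s2": 2}))
```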
Task allocation strategy considering service quality of spatial crowdsourcing workers and its glowworm swarm optimization algorithm solution
RAN Jiamin, NI Zhiwei, PENG Peng, ZHU Xuhui
2021, 41(3): 794-802. DOI: 10.11772/j.issn.1001-9081.2020060940
Focusing on the task allocation problem in spatial crowdsourcing and considering the influence of spatial crowdsourcing workers' service quality on allocation results, a task allocation strategy incorporating quality evaluation of workers' service was proposed. Firstly, in each spatio-temporal setting, an evaluation element for spatial crowdsourcing workers was added to establish a multi-objective model that fully considers the service quality and distance cost of the workers. Secondly, the convergence speed and global optimization ability of the discrete glowworm swarm optimization algorithm were improved through better initialization and coding, position movement, and neighborhood search strategies. Finally, the improved algorithm was used to solve the model. Experimental results on simulated and real datasets show that, compared with other swarm intelligence algorithms, the proposed algorithm improves the total score of task allocation by 2% to 25% on datasets of different scales. By considering workers' service quality, it effectively improves both the efficiency of task allocation and the final total score.
Discrete random drift particle swarm optimization algorithm for solving multi-objective community detection problem
LI Ping, WANG Fen, CHEN Qidong, SUN Jun
2021, 41(3): 803-811. DOI: 10.11772/j.issn.1001-9081.2020060800
To solve the multi-objective community detection problem in complex networks, a Discrete Random Drift Particle Swarm Optimization (DRDPSO) algorithm was proposed. Firstly, by performing random coding on communities and discretizing the random drift optimization algorithm, the local network structure was improved and the global modularity value was gradually enhanced. Secondly, two objective functions, Kernel K-Means (KKM) and Ratio Cut (RC), were used to control community sizes in the network and alleviate the resolution limit of modularity. Finally, the Pareto non-inferior solution set was updated step by step according to the multi-objective solving strategy, and the community structures satisfying the requirements were selected from it. To verify the effectiveness of the proposed algorithm, comparison experiments between DRDPSO and other community detection algorithms were carried out on three generated networks with 10 different parameter configurations and on three real networks, and the community detection results of the different algorithms were compared and analyzed using two best-community evaluation indicators. Experimental results show that, when the DRDPSO algorithm is used for multi-objective community detection in complex networks, the probability of obtaining the highest community detection evaluation indexes (normalized mutual information and modularity) exceeds 95%. Applying DRDPSO to real networks can further improve the accuracy and robustness of network community division.
Design of FPGA accelerator with high parallelism for convolution neural network
WANG Xiaofeng, JIANG Penglong, ZHOU Hui, ZHAO Xiongbo
2021, 41(3): 812-819. DOI: 10.11772/j.issn.1001-9081.2020060996
Most algorithms based on Convolutional Neural Networks (CNNs) are computation- and memory-intensive, making them difficult to apply in embedded fields with low-power requirements such as aerospace, mobile robotics and smartphones. To solve this problem, a Field Programmable Gate Array (FPGA) accelerator with high parallelism for CNNs was proposed. Firstly, four kinds of parallelism in CNN algorithms that can be exploited for FPGA acceleration were compared and studied. Then, a Multi-channel Convolutional Rotating-register Pipeline (MCRP) structure was proposed to concisely and effectively exploit the convolution kernel parallelism of CNN algorithms. Finally, using a strategy of input/output channel parallelism plus convolution kernel parallelism, a highly parallel CNN accelerator architecture was proposed based on the MCRP structure, and it was deployed on the XILINX XCZU9EG chip to verify the design. Making full use of the on-chip Digital Signal Processor (DSP) resources, the peak computing capacity of the proposed CNN accelerator reached 2304 GOPS (Giga Operations Per Second). Taking the SSD-300 algorithm as the test object, the accelerator achieved an actual computing capacity of 1830.33 GOPS with a hardware utilization rate of 79.44%. Experimental results show that the MCRP structure can effectively improve the computing capacity of a CNN accelerator, and that an MCRP-based CNN accelerator can meet the computing capacity requirements of most applications in embedded fields.
Network and communications
Time of arrival positioning based on time reversal
ZHANG Qilin, LI Fangwei, WANG Mingyue
2021, 41(3): 820-824. DOI: 10.11772/j.issn.1001-9081.2020060976
In indoor Ultra Wide Band (UWB) Time Of Arrival (TOA) positioning systems, it is difficult for traditional algorithms to accurately identify the first direct path, resulting in low positioning accuracy. To solve this problem, a TOA indoor UWB positioning algorithm based on Time Reversal (TR) was proposed. Firstly, the spatial-temporal focusing property of TR processing was used to determine the first direct path and estimate its TOA. Then, the Weighted Least Squares (WLS) positioning algorithm was used to assign corresponding weights to the different estimation components to improve positioning accuracy. Simulation results show that, compared with traditional TOA positioning, the proposed scheme reduces the Root Mean Square Error (RMSE) by 28.6% under low signal-to-noise ratio conditions, significantly improving system positioning accuracy.
Adaptive network transmission mechanism based on forward error correction
ZHU Yongjin, YIN Fei, DOU Longlong, WU Kun, ZHANG Zhiwei, QIAN Zhuzhong
2021, 41(3): 825-832. DOI: 10.11772/j.issn.1001-9081.2020060948
Aiming at the degradation of Transmission Control Protocol (TCP) performance in wireless networks caused by the loss-retransmission mechanism triggered by packet loss, an Adaptive transmission mechanism based on Forward Error Correction (AdaptiveFEC) was proposed. The mechanism improves TCP transmission performance by using forward error correction to reduce data segment loss and thereby avoid triggering TCP's retransmission mechanism. Firstly, the optimal redundant segment ratio for the current moment was selected according to the current network state and the data transmission characteristics of the connection. Then, the network state was estimated by analyzing data segment sequence numbers in TCP segments, so that the redundant segment ratio could be updated dynamically as the network changes. Extensive experiments show that, in a transmission environment with a round-trip delay of 20 ms and a packet loss rate of 5%, AdaptiveFEC increases the transmission rate of a TCP connection by 42% on average compared with static forward error correction, and file downloads with the proposed mechanism can reach twice their original speed.
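A minimal sketch of redundancy-ratio adaptation with single-loss XOR parity follows; the grouping policy and thresholds are illustrative assumptions, and AdaptiveFEC itself would use its own loss estimator and coding.

```python
# One parity segment per group of k data segments can recover a single loss
# in that group; a higher estimated loss rate calls for a smaller k.
def parity(segments):
    out = bytearray(len(segments[0]))
    for seg in segments:
        for i, b in enumerate(seg):
            out[i] ^= b                      # XOR accumulate byte by byte
    return bytes(out)

def choose_group_size(loss_rate):
    """Illustrative policy: keep expected losses per group well below one."""
    for k in (16, 8, 4, 2):
        if loss_rate * (k + 1) < 0.5:
            return k
    return 1

def recover(group, lost_idx):
    """Rebuild the single missing segment: XOR of all survivors + parity."""
    return parity([s for i, s in enumerate(group) if i != lost_idx])

k = choose_group_size(0.05)                  # 5% loss -> smaller groups
data = [bytes([i] * 4) for i in range(k)]
group = data + [parity(data)]
assert recover(group, 2) == data[2]
print("group size:", k)
```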
Multimedia computing and computer simulation
Fast calibration algorithm in surgical navigation system based on augmented reality
SUN Qichang, MAI Yongfeng, CHEN Xiaojun
2021, 41(3): 833-838. DOI: 10.11772/j.issn.1001-9081.2020060776
To fuse virtual and real scenes in Optical See-Through Head-Mounted Displays (OST-HMDs) for Augmented Reality (AR), a fast calibration method for OST-HMDs based on an optical positioning and tracking system was proposed. Firstly, virtual markers in the OST-HMD and their corresponding points in the real world were collected as two 3D point sets, and the transformation between the virtual space and the optical tracking space was estimated to obtain the matrix mapping the virtual space to the real scene. Then, the transitive relations among the matrices of the entire navigation system were built; on this basis, an AR-based surgical navigation system was designed and implemented, and accuracy validation and model experiments were conducted on it. Experimental results show that the proposed method achieves a root mean square error of 1.39 ± 0.49 mm between the virtual datum points and the corresponding real datum points, with an average calibration time of 23.8 seconds, demonstrating the algorithm's potential for clinical applications.
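The core of the calibration, estimating the transformation between the virtual-marker point set and the tracked real point set, is a rigid-transform fit; below is a standard SVD (Kabsch) sketch under the assumption of known point correspondences, not the paper's exact pipeline.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t, via SVD.
    P, Q: (n, 3) corresponding point sets, e.g. virtual markers vs.
    optically tracked real points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(3)
P = rng.random((6, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_transform(P, Q)
rmse = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
print(f"RMSE: {rmse:.2e}")                       # ~0 on noise-free data
```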
3D virtual human animation generation based on dual-camera capture of facial expression and human pose
LIU Jie, LI Yi, ZHU Jiangping
2021, 41(3): 839-844. DOI: 10.11772/j.issn.1001-9081.2020060993
In order to generate three-dimensional virtual human animation with rich expression and smooth movement, a method for generating 3D virtual human animation based on synchronous capture of facial expression and human pose with two cameras was proposed. Firstly, the Transmission Control Protocol (TCP) network timestamp method was used to achieve time synchronization of the two cameras, and ZHANG Zhengyou's calibration method was used to achieve their spatial synchronization. Then, the two cameras were used to capture facial expressions and human poses respectively. When capturing facial expressions, the 2D feature points of the image were extracted and regressed to compute the Facial Action Coding System (FACS) facial action units in preparation for expression animation. Based on the 3D coordinates of a standard head and the camera intrinsic parameters, the Efficient Perspective-n-Point (EPnP) algorithm was used for head pose estimation, after which the facial expression information was matched with the head pose estimate. When capturing human poses, the Occlusion-Robust Pose-Map (ORPM) method was used to compute the poses and output data such as the position and rotation angle of each bone point. Finally, an established 3D virtual human model was used to show the effect of data-driven animation in Unreal Engine 4 (UE4). Experimental results show that this method can capture facial expressions and human poses simultaneously and reaches a frame rate of 20 fps in the experimental test, so it can generate natural and realistic three-dimensional animation in real time.
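For the head pose step, OpenCV exposes an EPnP solver through solvePnP. The sketch below shows only the call pattern; the standard-head 3D landmarks, the 2D detections, and the intrinsic matrix K are all invented placeholders.

```python
import numpy as np
import cv2

# Hypothetical 3D landmarks of a standard head model (mm) and their
# detected 2D image projections; K is an assumed pinhole intrinsic matrix.
model_3d = np.array([[0, 0, 0], [0, -63, -12], [-43, 32, -26],
                     [43, 32, -26], [-28, -28, -24], [28, -28, -24]],
                    dtype=np.float64)
image_2d = np.array([[320, 240], [318, 320], [260, 200],
                     [380, 202], [285, 300], [355, 301]], dtype=np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(model_3d, image_2d, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the estimated head pose
```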
Image super-resolution reconstruction based on attention mechanism
WANG Yongjin, ZUO Yu, WU Lian, CUI Zhongwei, ZHAO Chenjie
2021, 41(3): 845-850. DOI: 10.11772/j.issn.1001-9081.2020060979
At present, single-image super-resolution reconstruction achieves good results, but most models do so by increasing the number of network layers rather than exploring the correlation between channels. To solve this problem, an image super-resolution reconstruction method based on a Channel Attention mechanism (CA) and Depthwise Separable Convolution (DSC) was proposed. Multi-path global and local residual learning was adopted throughout the model. Firstly, a shallow feature extraction block was used to extract features of the input image. Then, the channel attention mechanism was introduced in the deep feature extraction block, increasing the correlation between channels by adjusting the weights of the feature maps of different channels so as to extract high-frequency feature information. Finally, a high-resolution image was reconstructed. To reduce the large parameter overhead introduced by the attention mechanism, depthwise separable convolution was used in the local residual block to greatly reduce the number of training parameters, and the Adaptive moment estimation (Adam) optimizer was used to accelerate convergence and thereby improve algorithm performance. Image reconstruction with the proposed method was carried out on the Set5 and Set14 datasets. Experimental results show that the images reconstructed by the proposed method have higher Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM), and the parameters of the proposed model are reduced to 1/26 of those of the deep Residual Channel Attention Network (RCAN) model.
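A minimal PyTorch sketch of the two building blocks. The squeeze-and-excitation form of channel attention and the 3x3-depthwise-plus-1x1-pointwise factorization are common formulations; the paper's exact blocks (reduction ratio, placement in the residual paths) may differ.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention; the reduction
    ratio r is an assumed hyperparameter."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(self.pool(x))   # reweight each feature map

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution,
    replacing a full 3x3 convolution at a fraction of the parameters."""
    def __init__(self, cin, cout):
        super().__init__()
        self.dw = nn.Conv2d(cin, cin, 3, padding=1, groups=cin)
        self.pw = nn.Conv2d(cin, cout, 1)
    def forward(self, x):
        return self.pw(self.dw(x))
```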
Frontier and comprehensive applications
Algorithms for low-carbon pickup and delivery vehicle routing problem with fuzzy demand
MA Yanfang, WANG Shan, HUANG Lingyu, CHENG Cong
2021, 41(3): 851-859. DOI: 10.11772/j.issn.1001-9081.2020071079
Due to the high carbon emissions of the logistics and distribution process, a Low-Carbon Vehicle Routing Problem with Pickup and Delivery (LCVRPPD) considering fuzzy demand was formulated from a low-carbon perspective, and a 2-OPT-based differential evolution algorithm was proposed to solve it. In the algorithm, natural number encoding was adopted and three different fitness functions were given. The 2-OPT algorithm was introduced to replace the original mutation mechanism of differential evolution, and binomial crossover and greedy selection operators were combined to accelerate convergence of the improved algorithm. In the case study, the Taguchi method was used to determine reasonable parameter values for the improved algorithm, and SPSS (Statistical Product and Service Solutions) analysis revealed that the model taking total cost minimization as the objective yields the best solution compared with the two alternative objectives of transportation cost minimization and carbon emission minimization. For examples of different customer scales, compared with the basic differential evolution algorithm, the improved algorithm reduces the total cost by 1.8% to 3.0% and carbon emissions by 0.7% to 3.5%; compared with the genetic algorithm, it reduces the total cost by 1.9% to 16.47% and carbon emissions by 1.2% to 4.3%; compared with particle swarm optimization, the improvement is even more marked, with the total cost reduced by 4.0% to 22.5% and carbon emissions by 1.56% to 7.88%, verifying the effectiveness and superiority of the proposed algorithm. In summary, the proposed model and algorithm can provide a reference for the low-carbon routing problem of pickup and delivery vehicles.
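A 2-OPT move on a permutation-coded route, the operation the paper substitutes for differential-evolution mutation. This sketch keeps the depot fixed and ignores the pickup-before-delivery feasibility checks a real LCVRPPD solver would also need.

```python
import random

def two_opt_once(route):
    """Reverse one randomly chosen segment of the route (a 2-OPT move);
    index 0 is assumed to be the depot and is left in place."""
    i, j = sorted(random.sample(range(1, len(route) + 1), 2))
    return route[:i] + route[i:j][::-1] + route[j:]

route = [0, 3, 1, 4, 2, 5]   # depot followed by customer visits
print(two_opt_once(route))
```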
Label printing production scheduling technology based on improved genetic algorithm
MA Xiaomei, HE Fei
2021, 41(3): 860-866. DOI: 10.11772/j.issn.1001-9081.2020060833
The label printing production process involves many product varieties, small batches, a high degree of customization, and uncertainty in some working procedures. Aiming at these problems, a flexible job-shop scheduling model minimizing the maximum completion time was established, and an improved Genetic Algorithm (GA) was proposed. First, integer coding was adopted on the basis of the standard genetic algorithm. Second, the roulette method was used in the selection stage, and convergence was guaranteed by introducing an elite retention strategy. Finally, dynamic adaptive crossover and mutation probabilities were proposed so that the algorithm searched widely in the early stage to avoid premature convergence, and converged in time in the later stage so that previously obtained elite individuals were not destroyed. To verify the feasibility of the improved genetic algorithm, the Ft06 benchmark instance was first used to compare it with the standard genetic algorithm. The results showed that the optimal solution of the improved genetic algorithm (55 s) was better than that of the standard genetic algorithm (56 s), and its number of iterations was significantly smaller. Then, the 8×8, 10×10 and 15×10 standard instances of the Flexible Job-shop Scheduling Problem (FJSP) were used to verify the effectiveness, stability and optimization performance of the algorithm: on all three standard test instances, the improved genetic algorithm obtained the optimal solution in a short time. Finally, when the proposed algorithm was applied to the production scheduling problem of the label printing job-shop, processing efficiency was increased by 50.3% compared with the original schedule. Therefore, the proposed improved genetic algorithm can be effectively applied to production scheduling of label printing job-shops.
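One plausible form of the dynamic adaptive probabilities, assuming a linear schedule over generations; the bounds and the schedule itself are illustrative, not the paper's exact formula.

```python
def adaptive_rates(gen, max_gen, pc_hi=0.9, pc_lo=0.6, pm_hi=0.10, pm_lo=0.01):
    """Anneal crossover (pc) and mutation (pm) probabilities linearly:
    high early for wide exploration, low late to protect elite individuals.
    All four bounds are assumed values."""
    s = gen / max_gen
    pc = pc_hi - (pc_hi - pc_lo) * s
    pm = pm_hi - (pm_hi - pm_lo) * s
    return pc, pm

print(adaptive_rates(0, 100))    # (0.9, 0.1)  -- explore
print(adaptive_rates(100, 100))  # (0.6, 0.01) -- exploit
```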
Bulk storage assignment algorithm in bulk port based on game theory
ZHANG Shuyao, LI Yonghua, FAN Jiajia
2021, 41(3): 867-874. DOI: 10.11772/j.issn.1001-9081.2020060911
A bulk port has a limited storage yard, and as cargos dynamically enter and leave the port, the yard must balance operating efficiency against reasonable cargo placement. To solve this problem, a Bulk Storage Assignment Algorithm in Bulk port based on Game theory (BSAABG) was proposed. Firstly, the storage assignment behavior was modelled as a dynamic game, and satisfaction equilibrium was applied to analyze this game: assuming each batch of cargos has an expectation for its assignment benefit, the game reaches satisfaction equilibrium when all cargos meet their expectations. Then, BSAABG was used to solve the model constructed above, and the convergence of the proposed algorithm was proved theoretically. Experimental results show that, when the number of cargo batches is 20, BSAABG increases the average cargo satisfaction by 62.5% and 18.2% compared to the manual assignment method (simulated by a Greedy Algorithm (GA)) and the Storage Assignment algorithm Based on Rule (SABR) respectively, with a storage assignment benefit 6.83 times and 3.22 times those of GA and SABR. The proposed algorithm can therefore effectively improve both average cargo satisfaction and storage assignment benefit.
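A generic satisfaction-equilibrium loop, not the paper's BSAABG: unsatisfied players re-sample an action while satisfied players keep theirs, stopping when every batch meets its expectation. The batch names, actions, and toy benefit function are all invented.

```python
import random

def satisfaction_equilibrium(batches, benefit, expectations, actions,
                             iters=1000):
    """Iterate until every batch's benefit meets its expectation, or the
    iteration budget runs out (a sketch of the equilibrium dynamics)."""
    assign = {b: random.choice(actions) for b in batches}
    for _ in range(iters):
        unsat = [b for b in batches if benefit(b, assign) < expectations[b]]
        if not unsat:
            break                    # satisfaction equilibrium reached
        for b in unsat:
            assign[b] = random.choice(actions)
    return assign

batches = ["b1", "b2", "b3"]
actions = ["yardA", "yardB"]
expectations = {b: 1.0 for b in batches}
def benefit(b, assign):              # toy benefit: sharing a yard dilutes it
    sharers = sum(1 for x in assign.values() if x == assign[b])
    return 2.0 / sharers
print(satisfaction_equilibrium(batches, benefit, expectations, actions))
```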
LSTM and artificial neural network for urban bus travel time prediction based on spatiotemporal eigenvectors
ZHANG Xinhuan, LIU Hongjie, SHI Junqing, MAO Chengyuan, MENG Guolian
2021, 41(3): 875-880. DOI: 10.11772/j.issn.1001-9081.2020060467
Aiming at the problem that travel time prediction becomes more and more difficult as the prediction distance increases, a comprehensive prediction model combining Long Short Term Memory (LSTM) and Artificial Neural Network (ANN) based on spatiotemporal eigenvectors was proposed. Firstly, 24 hours were segmented into 288 time slices to generate time eigenvectors. Secondly, an LSTM time window model was established on the time slices; this model solved the window movement problem of long-horizon prediction. Thirdly, the bus line was divided into multiple space slices, and the average velocity of the current space slice was used as the instantaneous velocity; at the same time, the predicted time of each space slice was used as the spatial eigenvector and fed into the new hybrid neural network model LSTM-A (Long Short Term Memory-Artificial neural network), which combined the advantages of the two prediction models to solve the bus travel time prediction problem. Finally, experiments and tests were carried out on the experimental dataset: the prediction problem between bus stations was divided into sub-problems of line slice prediction, and real-time calculation was introduced for each related sub-problem to avoid prediction errors caused by complex road conditions. Experimental results show that the proposed algorithm is superior to single neural network models in both accuracy and applicability. In conclusion, the hybrid LSTM-A model realizes long-distance arrival time prediction from the temporal feature dimension and short-distance arrival time prediction from the spatial feature dimension, effectively solving the urban bus travel time prediction problem while avoiding long-range dependency issues and error accumulation.
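The time-eigenvector granularity is simple to reproduce: 288 slices of a day means one slice per five minutes, as in this sketch (the function name is ours).

```python
from datetime import datetime

def time_slice(ts: datetime, n_slices: int = 288) -> int:
    """Map a timestamp to one of 288 five-minute slices of the day."""
    minutes_per_slice = 24 * 60 // n_slices      # 5 minutes
    return (ts.hour * 60 + ts.minute) // minutes_per_slice

print(time_slice(datetime(2021, 3, 10, 8, 17)))  # slice 99 of 0..287
```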
Solving auto part spraying sequence by transforming to traveling salesman problem and genetic algorithm
WANG Binrong, TAN Dailun, ZHENG Bochuan
2021, 41(3): 881-886. DOI: 10.11772/j.issn.1001-9081.2020060868
Optimizing the color spraying sequence of auto parts can help enterprises further reduce production costs, yet no existing research proposes specific mathematical models and solutions for this type of problem. Considering that each auto part must be sprayed exactly once, which matches the basic characteristics of the Traveling Salesman Problem (TSP), a modeling method that transforms the problem into a TSP was proposed, and the Genetic Algorithm (GA), with its strong parallelism and robustness, was selected to solve it. Firstly, the auto parts were defined as TSP vertices, and the distances between vertices and the production constraints were defined according to the color and category requirements of the parts, constructing a 0-1 programming model that minimizes the number of color switches in the spraying sequence. Secondly, the color and category constraints were transformed into penalty factors to construct the fitness function of the genetic algorithm, and, based on a tournament selection strategy, mutation strategies of copying, swapping, flipping and sliding were designed. Finally, three datasets with 64, 93 and 293 auto parts and 5, 7 and 10 colors were constructed for simulation experiments. The proposed algorithm obtains the exact optimal solutions of 5, 7 and 10 color switches on the three datasets; over repeated runs, the average values of the approximate optimal solutions are 5.63, 7.37 and 11.49. Experimental results show that the established mathematical model accurately describes the auto part spraying sequence problem and that the designed genetic algorithm is efficient and practical; both can be applied to similar production and processing problems.
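Hedged sketches of three of the four mutation strategies on a permutation-coded individual (copying is representation-specific and omitted); the index handling is illustrative.

```python
import random

def swap(seq):
    """Exchange two randomly chosen positions."""
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def flip(seq):
    """Reverse a randomly chosen segment (inclusive of both endpoints)."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]

def slide(seq):
    """Remove one element and reinsert it at a later position."""
    i, j = sorted(random.sample(range(len(seq)), 2))
    return seq[:i] + seq[i + 1:j + 1] + [seq[i]] + seq[j + 1:]

genome = list(range(8))        # a toy spraying sequence of 8 parts
print(swap(genome), flip(genome), slide(genome))
```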
Design of abnormal electrocardiograph monitoring model based on stacking classifier
QIN Jing, ZUO Changqing, WANG Zumin, JI Changqing, WANG Baofeng
2021, 41(3): 887-890. DOI: 10.11772/j.issn.1001-9081.2020060760
Traditional manual heart disease monitoring depends heavily on the prior knowledge of senior doctors, and its speed and accuracy need improvement. To solve these problems, an ElectroCardioGraph (ECG) anomaly monitoring algorithm based on a stacking classifier was proposed. Firstly, the advantages of various machine learning algorithms were combined by integrating them in a stacking classifier, making up for the limitations of any single learner. Then, the Synthetic Minority Over-sampling TEchnique (SMOTE) was used to augment the original dataset and balance the sample numbers of the different diseases, improving data balance. The proposed algorithm was compared with other machine learning algorithms on the MIT-BIH dataset. Experimental results show that the proposed algorithm improves both the accuracy and the speed of abnormal ECG monitoring.
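A minimal sketch of the pipeline using scikit-learn's StackingClassifier and imbalanced-learn's SMOTE on synthetic imbalanced data; the abstract does not specify which base learners were stacked, so the estimators below are assumptions.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic imbalanced stand-in for heartbeat feature vectors and labels.
X, y = make_classification(n_samples=500, weights=[0.9], random_state=0)

# Balance the minority class before training, as in the paper.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

# Stack heterogeneous base learners under a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(random_state=0))],
    final_estimator=LogisticRegression())
stack.fit(X_res, y_res)
print(stack.score(X, y))
```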
Ultrasound thyroid segmentation network based on feature fusion and dynamic multi-scale dilated convolution
HU Yishan, QIN Pinle, ZENG Jianchao, CHAI Rui, WANG Lifang
2021, 41(3): 891-897. DOI: 10.11772/j.issn.1001-9081.2020060783
Concerning the size and morphological diversity of thyroid tissue and the complexity of the surrounding tissue in thyroid ultrasound images, an ultrasound thyroid segmentation network based on feature fusion and dynamic multi-scale dilated convolution was proposed. Firstly, dilated convolutions with different dilation rates and dynamic filters were used to fuse global semantic information from different receptive fields with contextual detail at different ranges, improving the network's adaptability and accuracy for multi-scale targets. Then, a hybrid upsampling method was used to strengthen the spatial information of high-dimensional semantic features and the contextual information of low-dimensional spatial features during feature dimensionality reduction. Finally, a spatial attention mechanism was introduced to optimize the low-dimensional features of the image, and high- and low-dimensional features were fused to retain their useful information while eliminating redundancy, improving the network's ability to distinguish foreground from background. Experimental results show that the proposed method achieves an accuracy of 0.963±0.026, a recall of 0.84±0.03 and a Dice coefficient of 0.79±0.03 on a public thyroid ultrasound image dataset, indicating that it can cope with large variations in tissue morphology and complex surrounding tissue.
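A static stand-in for the multi-scale dilated block, in PyTorch: parallel 3x3 convolutions with different dilation rates, fused by a 1x1 convolution. The paper's dynamic filters (input-conditioned weights) are not reproduced here, and the dilation rates are assumed.

```python
import torch
import torch.nn as nn

class MultiScaleDilated(nn.Module):
    """Fuse feature maps from several receptive fields: one 3x3 branch per
    dilation rate, concatenated and mixed by a 1x1 convolution."""
    def __init__(self, cin, cout, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(cin, cout, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(cout * len(rates), cout, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiScaleDilated(16, 32)
print(block(torch.randn(1, 16, 64, 64)).shape)   # torch.Size([1, 32, 64, 64])
```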
Vein recognition algorithm based on Siamese nonnegative matrix factorization with transferability
WANG Jinkai, JIA Xu
2021, 41(3): 898-903. DOI: 10.11772/j.issn.1001-9081.2020060965
Concerning the problem that a recognition algorithm trained on one vein image dataset generalizes poorly to other datasets, a Siamese Nonnegative Matrix Factorization (NMF) model with transferability was proposed. Firstly, supervised learning on vein images with the same labels in the source dataset was achieved using two NMF models with identical structures and shared parameters. Then, the vein feature differences between the two datasets were reduced with a maximum mean discrepancy constraint, thereby transferring knowledge from the source dataset to the target dataset. Finally, vein image matching was realized based on cosine distance. Experimental results show that the proposed algorithm not only achieves high recognition accuracy on the source dataset, but also reduces the average False Accept Rate (FAR) and average False Reject Rate (FRR) to 0.043 and 0.055 respectively on the target dataset while using only a small number of its vein images. In addition, the average recognition time of the proposed algorithm is 0.56 s, which meets the real-time requirement of recognition.
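The maximum mean discrepancy constraint, in its simplest (linear-kernel) form, is just the squared distance between the feature means of the two domains, as sketched below; the paper may use a kernelized variant.

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared Maximum Mean Discrepancy with a linear kernel: the squared
    Euclidean distance between the mean feature vectors of the source
    samples Xs and target samples Xt (both (N, d) arrays)."""
    d = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(d @ d)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 8))    # stand-in source-domain features
Xt = rng.normal(0.5, 1.0, (100, 8))    # stand-in target-domain features
print(linear_mmd(Xs, Xt))              # shrinks as the domains align
```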
Rail tread block defects detection method based on improved Faster R-CNN
LUO Hui, JIA Chen, LU Chunyu, LI Jian
2021, 41(3): 904-910. DOI: 10.11772/j.issn.1001-9081.2020060759
Concerning the large scale variation and small sample size of rail tread block defects, a detection method based on an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) was proposed. Firstly, on the basis of the ResNet-101 backbone, a multi-scale Feature Pyramid Network (FPN) was constructed to fuse deep and shallow feature information and improve the detection accuracy of small-scale defects. Secondly, the Generalized Intersection over Union (GIoU) loss was used to solve the insensitivity to predicted box position caused by the Smooth L1 regression loss in Faster R-CNN. Finally, a Region Proposal Network by Guided Anchoring (GA-RPN) was proposed to address the imbalance of positive and negative samples during detector training caused by the high redundancy of anchors generated by the Region Proposal Network (RPN). During training, the RSSDs dataset was expanded by image preprocessing such as flipping, cropping and noise injection to compensate for the shortage of training samples. Experimental results show that the mean Average Precision (mAP) of rail tread block defect detection with the proposed method reaches 82.466%, an increase of 13.201 percentage points over Faster R-CNN, so the defects can be detected accurately.
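GIoU is easy to state concretely: compared with plain IoU it subtracts a penalty based on the smallest enclosing box, so even non-overlapping boxes receive a useful regression signal. A plain-Python sketch for axis-aligned (x1, y1, x2, y2) boxes; the GIoU loss is then 1 - giou(a, b).

```python
def giou(box_a, box_b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes, in [-1, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Smallest box enclosing both; its slack area drives the GIoU penalty.
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))   # overlapping pair
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))   # disjoint pair, still informative
```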
Imperfect wheat kernel recognition combined with image enhancement and convolutional neural network
HE Jiean, WU Xiaohong, HE Xiaohai, HU Jianrong, QIN Linbo
2021, 41(3): 911-916. DOI: 10.11772/j.issn.1001-9081.2020060864
In practical application scenarios, the wheat kernel image background is uniform, and the defining characteristics of imperfect wheat grains are mostly local, while most image features do not differ from those of normal grains. To address these problems, an imperfect wheat kernel recognition method based on detail Image Enhancement (IE) was proposed. Firstly, an alternating minimization algorithm was used to constrain the L0 norms of the original image in the horizontal and vertical directions, smoothing it into a base layer; subtracting the base layer from the original image yielded the detail layer. Then, the detail layer was amplified and superimposed onto the base layer to enhance the image. Finally, the enhanced image was fed into a Convolutional Neural Network (CNN) with Batch Normalization (BN) layers for recognition. The classic classification networks LeNet-5, ResNet-34 and VGG-16, with and without BN layers, were used as classifiers, with the images before and after enhancement as input, and test-set accuracy was used to evaluate performance. Experimental results show that adding BN layers with the same input increases the test-set accuracy of all three classic networks by 5 percentage points; using detail-enhanced images as input increases it by 1 percentage point; and combining both improves it by more than 7 percentage points.
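A hedged sketch of the base/detail decomposition and boosting. The paper smooths with L0-gradient minimization via alternating minimization; a Gaussian filter stands in for that smoother here, and sigma and the boost factor k are illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_detail(img, sigma=3.0, k=2.5):
    """Split a grayscale image into base + detail, amplify the detail
    layer, and recombine. gaussian_filter is a stand-in for the paper's
    L0-gradient smoother; sigma and k are assumed parameters."""
    base = gaussian_filter(img.astype(np.float64), sigma)  # base layer
    detail = img - base                                    # local detail
    return np.clip(base + k * detail, 0, 255).astype(np.uint8)

kernel_img = (np.random.default_rng(0)
              .integers(0, 256, (64, 64), dtype=np.uint8))  # toy input
print(enhance_detail(kernel_img).shape)
```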