Journal of Computer Applications (official website) ›› 2024, Vol. 44 ›› Issue (6): 1965-1971. DOI: 10.11772/j.issn.1001-9081.2023060897
Special topic: Frontier and comprehensive applications
Received: 2023-07-11
Revised: 2023-08-25
Accepted: 2023-08-31
Online: 2023-09-14
Published: 2024-06-10
Contact: Wangfei QIAN
About author: WANG Xiaolu, born in 1977 in Guang'an, Sichuan, Ph. D., associate professor. His research interests include internet of things and artificial intelligence.
Abstract:
Gait recognition is easily affected by changes in viewing angle and appearance. To address this, a gait recognition method based on a two-branch convolutional network was proposed. First, a data augmentation method named RRDA (Restricted Random Data Augmentation), combining random cropping with random occlusion, was proposed to expand the data samples with appearance changes and improve the model's robustness to occlusion. Second, a two-path composite convolutional layer (C-Conv) combined with an attention mechanism was used to extract gait features: one branch extracts the global and most discriminative appearance information of the pedestrian through Horizontal Pyramid Mapping (HPM), while the other branch extracts short-term spatio-temporal gait information through multiple parallel Micro-motion Capture Modules (MCM). Finally, the features of the two branches are fused by element-wise addition, and gait recognition is performed through a fully connected layer. A joint loss function was constructed to balance the discriminative ability of sample features against model convergence, thereby accelerating convergence. In experiments on the CASIA-B gait dataset, the proposed method achieved average recognition rates of 97.40%, 93.67% and 81.19% under the three walking conditions, all higher than those of the GaitSet, CapsNet, two-stream gait and GaitPart methods; compared with GaitSet, recognition accuracy improved by 1.30 percentage points under normal walking (NM), 2.87 percentage points when carrying a bag (BG), and 10.89 percentage points when wearing a coat (CL). The experimental results demonstrate that the proposed method is feasible and effective.
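The abstract describes RRDA as random cropping plus random occlusion. No implementation details are given in this excerpt, so the following is only a minimal numpy sketch of the idea; the crop range `max_crop` and occlusion fraction `occ_frac` are illustrative parameters, not values from the paper.

```python
import numpy as np

def rrda(silhouette: np.ndarray, rng: np.random.Generator,
         max_crop: int = 4, occ_frac: float = 0.25) -> np.ndarray:
    """Sketch of RRDA-style augmentation: random crop + random occlusion.

    `max_crop` (crop shift in pixels) and `occ_frac` (occlusion patch size as a
    fraction of each side) are hypothetical parameters for illustration only.
    """
    h, w = silhouette.shape
    # Random crop: shift the frame by a few pixels, then zero-pad back to (h, w).
    dy, dx = rng.integers(0, max_crop + 1, size=2)
    cropped = silhouette[dy:, dx:]
    out = np.zeros((h, w), dtype=silhouette.dtype)
    out[:cropped.shape[0], :cropped.shape[1]] = cropped
    # Random occlusion: zero out a randomly placed rectangular patch.
    oh, ow = int(h * occ_frac), int(w * occ_frac)
    y0 = rng.integers(0, h - oh + 1)
    x0 = rng.integers(0, w - ow + 1)
    out[y0:y0 + oh, x0:x0 + ow] = 0
    return out

frame = np.ones((64, 64), dtype=np.uint8)   # stand-in for a binary silhouette
aug = rrda(frame, np.random.default_rng(0))
```

Applying such transforms only within a restricted range keeps the silhouette recognizable while still simulating occlusion and framing changes.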
Xiaolu WANG, Wangfei QIAN. Gait recognition method based on two-branch convolutional network[J]. Journal of Computer Applications, 2024, 44(6): 1965-1971.
| Image preprocessing | NM | BG | CL |
| --- | --- | --- | --- |
| Original image | 95.58 | 88.87 | 72.87 |
| Opening operation | 96.23 | 89.00 | 74.04 |

Tab. 1 Recognition rates with morphological processing (%)
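Table 1 attributes a small accuracy gain to applying a morphological opening to the silhouettes before training. The paper's exact preprocessing pipeline is not given in this excerpt; a dependency-free numpy sketch of binary opening (erosion followed by dilation with a square structuring element) looks like this:

```python
import numpy as np

def _windows(img: np.ndarray, k: int) -> np.ndarray:
    """Stack all k×k shifted views of a zero-padded image."""
    pad = k // 2
    p = np.pad(img, pad, mode="constant", constant_values=0)
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(k) for j in range(k)])

def erode(img: np.ndarray, k: int = 3) -> np.ndarray:
    """A pixel survives erosion only if its whole k×k neighborhood is foreground."""
    return _windows(img, k).min(axis=0)

def dilate(img: np.ndarray, k: int = 3) -> np.ndarray:
    """A pixel becomes foreground if any pixel in its k×k neighborhood is."""
    return _windows(img, k).max(axis=0)

def opening(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Opening = erosion then dilation; removes specks smaller than the kernel."""
    return dilate(erode(img, k), k)

img = np.zeros((12, 12), dtype=np.uint8)
img[1, 1] = 1            # isolated noise pixel
img[5:10, 5:10] = 1      # solid 5×5 silhouette region
cleaned = opening(img)
```

After opening, the isolated pixel is gone while the solid region survives, which matches why this step reduces segmentation noise in the silhouettes.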
| Input size | NM | BG | CL |
| --- | --- | --- | --- |
| 64×64 | 96.13 | 89.65 | 74.26 |
| 128×128 | 96.81 | 90.04 | 76.86 |

Tab. 2 Recognition rates with different input sizes (%)
| Walking state | Method | 0° | 18° | 36° | 54° | 72° | 90° | 108° | 126° | 144° | 162° | 180° | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NM | GaitSet | 91.10 | 99.00 | 99.90 | 97.80 | 95.10 | 94.50 | 96.10 | 98.30 | 99.20 | 98.10 | 88.00 | 96.10 |
| NM | CapsNet | 91.80 | 98.30 | 99.00 | 98.00 | 94.10 | 92.80 | 96.30 | 98.10 | 98.40 | 96.20 | 89.20 | 95.70 |
| NM | Two-stream gait | 91.00 | 97.70 | 99.20 | 96.50 | 93.80 | 92.50 | 93.20 | 95.60 | 97.90 | 96.70 | 84.60 | 94.43 |
| NM | GaitPart | 94.10 | 98.60 | 99.30 | 98.50 | 94.00 | 92.30 | 95.90 | 98.40 | 99.20 | 97.80 | 90.40 | 96.20 |
| NM | Proposed | 96.20 | 98.50 | 99.00 | 98.10 | 97.60 | 95.90 | 96.70 | 98.40 | 99.00 | 98.80 | 93.20 | 97.40 |
| BG | GaitSet | 86.70 | 94.20 | 95.70 | 93.40 | 88.90 | 85.50 | 89.00 | 91.70 | 94.50 | 95.90 | 83.30 | 90.80 |
| BG | CapsNet | 87.30 | 93.70 | 94.80 | 93.10 | 88.10 | 84.50 | 88.80 | 93.50 | 96.30 | 93.30 | 83.90 | 90.70 |
| BG | Two-stream gait | 87.20 | 92.80 | 93.40 | 90.90 | 85.90 | 82.80 | 88.40 | 91.60 | 96.00 | 93.80 | 78.60 | 89.22 |
| BG | GaitPart | 89.10 | 94.80 | 96.70 | 95.10 | 88.30 | 84.90 | 89.00 | 93.60 | 96.10 | 93.80 | 85.80 | 91.50 |
| BG | Proposed | 91.80 | 96.70 | 95.70 | 95.15 | 92.60 | 89.70 | 92.40 | 95.20 | 97.00 | 95.96 | 88.20 | 93.67 |
| CL | GaitSet | 59.50 | 75.00 | 78.30 | 74.60 | 71.40 | 71.30 | 70.80 | 74.10 | 74.60 | 69.40 | 54.10 | 70.30 |
| CL | CapsNet | 63.40 | 77.30 | 80.10 | 79.40 | 72.40 | 69.80 | 71.20 | 73.80 | 75.50 | 71.70 | 62.00 | 72.40 |
| CL | Two-stream gait | 72.70 | 84.90 | 86.30 | 84.20 | 77.90 | 77.00 | 79.60 | 79.00 | 82.00 | 75.80 | 63.40 | 78.40 |
| CL | GaitPart | 70.70 | 85.50 | 86.90 | 83.30 | 77.10 | 72.50 | 76.90 | 82.20 | 83.80 | 80.20 | 66.50 | 78.70 |
| CL | Proposed | 76.50 | 88.30 | 87.10 | 83.60 | 81.00 | 76.90 | 79.40 | 82.70 | 87.40 | 81.70 | 68.50 | 81.19 |

Tab. 3 Comparison of recognition rates (%) among different methods under various view angles
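The appearance branch extracts global and discriminative features via Horizontal Pyramid Mapping (HPM). The network's exact HPM configuration is not given in this excerpt; the following is a simplified numpy sketch of the underlying idea, splitting a feature map into horizontal strips at several scales and pooling each strip with max plus mean, with illustrative strip counts:

```python
import numpy as np

def horizontal_pyramid_pooling(fmap: np.ndarray,
                               scales=(1, 2, 4)) -> np.ndarray:
    """HPM-style pooling sketch over a (C, H, W) feature map.

    At each scale s the map is split into s equal-height horizontal strips, and
    every strip is summarized by max-pooling + mean-pooling over its spatial
    extent. The scale set (1, 2, 4) is illustrative, not the paper's setting.
    Returns a (num_strips, C) matrix with num_strips = sum(scales).
    """
    c, h, w = fmap.shape
    feats = []
    for s in scales:
        strip_h = h // s
        for i in range(s):
            strip = fmap[:, i * strip_h:(i + 1) * strip_h, :]
            feats.append(strip.max(axis=(1, 2)) + strip.mean(axis=(1, 2)))
    return np.stack(feats)

fmap = np.random.default_rng(0).normal(size=(128, 16, 8))
parts = horizontal_pyramid_pooling(fmap)   # shape (1 + 2 + 4, 128) = (7, 128)
```

Pooling at multiple strip granularities lets coarse strips capture global body shape while fine strips isolate the most discriminative local parts.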
| Walking state | Data augmentation | Convolution type | Attention | Loss function | Rank-1 recognition rate/% |
| --- | --- | --- | --- | --- | --- |
| NM | | Conv | √ | Triplet loss | 97.39 |
| NM | √ | Conv | √ | Triplet loss | 97.56 |
| NM | √ | C-Conv | √ | Triplet loss | 97.79 |
| NM | √ | C-Conv | √ | Cross-entropy loss | 96.81 |
| NM | √ | C-Conv | | Joint loss | 97.45 |
| NM | √ | C-Conv | √ | Joint loss | 97.40 |
| BG | | Conv | √ | Triplet loss | 92.17 |
| BG | √ | Conv | √ | Triplet loss | 93.37 |
| BG | √ | C-Conv | √ | Triplet loss | 93.40 |
| BG | √ | C-Conv | √ | Cross-entropy loss | 90.04 |
| BG | √ | C-Conv | | Joint loss | 93.09 |
| BG | √ | C-Conv | √ | Joint loss | 93.67 |
| CL | | Conv | √ | Triplet loss | 78.08 |
| CL | √ | Conv | √ | Triplet loss | 80.86 |
| CL | √ | C-Conv | √ | Triplet loss | 81.01 |
| CL | √ | C-Conv | √ | Cross-entropy loss | 76.86 |
| CL | √ | C-Conv | | Joint loss | 80.78 |
| CL | √ | C-Conv | √ | Joint loss | 81.19 |

Tab. 4 Ablation experiment results
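The ablation in Table 4 compares triplet loss, cross-entropy loss, and the joint loss. The weighting used to combine the two terms is not stated in this excerpt, so the sketch below uses a hypothetical balancing weight `alpha`:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin: float = 0.2) -> float:
    """Euclidean triplet loss: pull anchor-positive closer than anchor-negative
    by at least `margin` (margin value is illustrative)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(d_ap - d_an + margin, 0.0)

def cross_entropy(logits, label: int) -> float:
    """Softmax cross-entropy for one sample, numerically stabilized."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def joint_loss(anchor, positive, negative, logits, label,
               alpha: float = 0.5) -> float:
    """Joint loss = alpha * triplet + (1 - alpha) * cross-entropy.
    `alpha` is a hypothetical balancing weight, not a value from the paper."""
    return (alpha * triplet_loss(anchor, positive, negative)
            + (1 - alpha) * cross_entropy(logits, label))

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 1.0])
logits = np.array([2.0, 0.5, 0.1])
loss = joint_loss(a, p, n, logits, label=0)
```

Combining a metric term (feature discrimination) with a classification term (stable gradients) is the usual rationale for such joint objectives, matching the abstract's stated goal of balancing discriminability and convergence.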
References:
[1] SEPAS-MOGHADDAM A, ETEMAD A. Deep gait recognition: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 264-284.
[2] WAN C, WANG L, PHOHA V V. A survey on gait recognition[J]. ACM Computing Surveys, 2019, 51(5): 89.1-89.35.
[3] RANI V, KUMAR M. Human gait recognition: a systematic review[J]. Multimedia Tools and Applications, 2023, 82: 37003-37037.
[4] XIA L M, WANG H, GUO W T. Gait recognition based on Wasserstein generating adversarial image inpainting network[J]. Journal of Central South University, 2019, 26: 2759-2770.
[5] WEN J, SHEN Y, YANG J. Multi-view gait recognition based on generative adversarial network[J]. Neural Processing Letters, 2022, 54: 1855-1877.
[6] WANG Y. Gait recognition based on feature point matching[J]. Transducer and Microsystem Technologies, 2018, 37(1): 137-140.
[7] CHOI S, KIM J, KIM W, et al. Skeleton-based gait recognition via robust frame-level matching[J]. IEEE Transactions on Information Forensics and Security, 2019, 14(10): 2577-2592.
[8] BARI A S M H, GAVRILOVA M L. Artificial neural network based gait recognition using Kinect sensor[J]. IEEE Access, 2019, 7: 162708-162722.
[9] BATTISTONE F, PETROSINO A. TGLSTM: a time based graph deep learning approach to gait recognition[J]. Pattern Recognition Letters, 2019, 126: 132-138.
[10] WANG T, WANG H Z, XIA Y, et al. Human gait recognition based on convolutional neural network and attention model[J]. Chinese Journal of Sensors and Actuators, 2019, 32(7): 1027-1033.
[11] LIAO R, YU S, AN W, et al. A model-based gait recognition method with body pose and human prior knowledge[J]. Pattern Recognition, 2019, 98: 107069.
[12] TEEPE T, KHAN A, GILG J, et al. GaitGraph: graph convolutional network for skeleton-based gait recognition[C]// Proceedings of the 2021 IEEE International Conference on Image Processing. Piscataway: IEEE, 2021: 2314-2318.
[13] GAO S, YUN J, ZHAO Y, et al. Gait-D: skeleton-based gait feature decomposition for gait recognition[J]. IET Computer Vision, 2022, 16(2): 111-125.
[14] WANG L, CHEN J. A two-branch neural network for gait recognition[EB/OL]. [2023-06-22].
[15] TIAN Y, WEI L, LU S, et al. Free-view gait recognition[J]. PLoS ONE, 2019, 14(4): e0214389.
[16] YAO L, KUSAKUNNIRAN W, WU Q, et al. Robust gait recognition using hybrid descriptors based on skeleton gait energy image[J]. Pattern Recognition Letters, 2021, 150: 289-296.
[17] ZHANG Y, HUANG Y, YU S, et al. Cross-view gait recognition by discriminative feature learning[J]. IEEE Transactions on Image Processing, 2020, 29: 1001-1015.
[18] WANG X, YAN W Q. Human gait recognition based on frame-by-frame gait energy images and convolutional long short-term memory[J]. International Journal of Neural Systems, 2020, 30(1): 1950027.
[19] WANG X, ZHANG J, YAN W Q. Gait recognition using multichannel convolution neural networks[J]. Neural Computing and Applications, 2020, 32(18): 14275-14285.
[20] CHEN X, LUO X, WENG J, et al. Multi-view gait image generation for cross-view gait recognition[J]. IEEE Transactions on Image Processing, 2021, 30: 3041-3055.
[21] ELHARROUSS O, ALMAADEED N, ALMAADEED S, et al. Gait recognition for person re-identification[J]. The Journal of Supercomputing, 2021, 77(4): 3653-3672.
[22] LIU X Y, LIU J Q, ZHENG H L. Gait recognition method for coal mine personnel based on two-stream neural network[J]. Journal of Mining Science and Technology, 2021, 6(2): 218-227.
[23] XU Z, LU W, ZHANG Q, et al. Gait recognition based on capsule network[J]. Journal of Visual Communication and Image Representation, 2019, 59: 159-167.
[24] SOKOLOVA A, KONUSHIN A. Pose-based deep gait recognition[J]. IET Biometrics, 2019, 8(2): 134-143.
[25] LI S, LIU W, MA H. Attentive spatial-temporal summary networks for feature learning in irregular gait recognition[J]. IEEE Transactions on Multimedia, 2019, 21(9): 2361-2375.
[26] FAN C, PENG Y, CAO C, et al. GaitPart: temporal part-based model for gait recognition[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14213-14221.
[27] SEPAS-MOGHADDAM A, GHORBANI S, TROJE N F, et al. Gait recognition using multi-scale partial representation transformation with capsules[C]// Proceedings of the 2020 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2020: 8045-8052.
[28] WANG K, LEI Y M, ZHANG J P. Two-stream gait network for cross-view gait recognition[J]. Pattern Recognition and Artificial Intelligence, 2020, 33(5): 383-392.
[29] CHAO H, WANG K, HE Y, et al. GaitSet: cross-view gait recognition through utilizing gait as a deep set[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(7): 3467-3478.
[30] HUANG H, ZHANG Y, SI Y, et al. Two-branch 3D convolution neural network for gait recognition[J]. Signal, Image and Video Processing, 2023, 17: 3495-3504.
[31] WOO S, PARK J, LEE J-Y, et al. CBAM: convolutional block attention module[C]// Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 3-19.
[32] YU S, TAN D, TAN T. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition[C]// Proceedings of the 18th International Conference on Pattern Recognition. Piscataway: IEEE, 2006: 441-444.
[33] ZHU X P, YUN L J, ZHANG C J, et al. Gait recognition method based on deep learning in infrared image[J]. Computer Engineering and Design, 2022, 43(3): 851-857.