Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (6): 1965-1971. DOI: 10.11772/j.issn.1001-9081.2023060897
• Frontier and Comprehensive Applications •
Received: 2023-07-11
Revised: 2023-08-25
Accepted: 2023-08-31
Online: 2023-09-14
Published: 2024-06-10
Contact: Wangfei QIAN
About author: WANG Xiaolu, born in 1977, Ph. D., associate professor. His research interests include internet of things and artificial intelligence.
Supported by:
Abstract:
To address the problem that gait recognition is easily affected by shooting view and appearance changes, a gait recognition method based on a two-branch convolutional network was proposed. First, a restricted random cropping and random occlusion data augmentation method, RRDA (Restricted Random Data Augmentation), was proposed to expand the data samples of appearance change and improve the model's robustness to occlusion. Second, a two-path composite convolution layer (C-Conv) combined with an attention mechanism was adopted to extract gait features: one branch extracted the global and most discriminative appearance information of the pedestrian through Horizontal Pyramid Mapping (HPM); the other branch extracted short-term spatio-temporal gait information through multiple parallel Micro-motion Capture Modules (MCM). Finally, the feature information of the two branches was fused by addition, and gait recognition was performed through a fully connected layer. A joint loss function was constructed by balancing the discriminative ability of sample features against model convergence, so as to accelerate convergence. In experiments on the CASIA-B gait dataset, the average recognition rates of the proposed method under the three walking conditions reached 97.40%, 93.67% and 81.19%, all higher than those of GaitSet, CapsNet, the two-stream gait method and GaitPart; compared with GaitSet, the recognition accuracy was improved by 1.30 percentage points under normal walking, 2.87 percentage points when carrying a bag, and 10.89 percentage points when wearing a coat. The experimental results show that the proposed method is feasible and effective.
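The RRDA augmentation described in the abstract combines a restricted random crop with a random rectangular occlusion. A minimal NumPy sketch follows; the crop and occlusion ratios (`crop_frac`, `occ_frac`) are illustrative assumptions, since the abstract does not state the paper's exact parameters:

```python
import numpy as np

def rrda(silhouette, crop_frac=0.1, occ_frac=0.2, rng=None):
    """Restricted random crop + random occlusion on one gait silhouette.

    crop_frac and occ_frac are illustrative values, not the paper's settings.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = silhouette.shape

    # Restricted random crop: shift the frame by at most crop_frac of its
    # size, then zero-pad back to the original resolution.
    dy = int(rng.integers(0, max(1, int(h * crop_frac))))
    dx = int(rng.integers(0, max(1, int(w * crop_frac))))
    out = np.zeros_like(silhouette)
    out[:h - dy, :w - dx] = silhouette[dy:, dx:]

    # Random occlusion: zero one rectangular patch to imitate appearance
    # changes such as bags or coats.
    oh, ow = int(h * occ_frac), int(w * occ_frac)
    y0 = int(rng.integers(0, h - oh + 1))
    x0 = int(rng.integers(0, w - ow + 1))
    out[y0:y0 + oh, x0:x0 + ow] = 0.0
    return out
```

Applied per frame, this expands the appearance-change samples without requiring extra recordings.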
CLC Number:
Xiaolu WANG, Wangfei QIAN. Gait recognition method based on two-branch convolutional network[J]. Journal of Computer Applications, 2024, 44(6): 1965-1971.
| Image preprocessing | NM | BG | CL |
| --- | --- | --- | --- |
| Original image | 95.58 | 88.87 | 72.87 |
| Opening operation | 96.23 | 89.00 | 74.04 |

Tab. 1 Recognition rates of morphological processing (unit: %)
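The opening-based preprocessing compared in Tab. 1, together with the input resizing compared in Tab. 2, can be sketched as below. `scipy.ndimage.binary_opening` stands in for whatever morphology implementation the paper used, and the nearest-neighbour resize is an assumption (the interpolation method is not specified in this excerpt):

```python
import numpy as np
from scipy import ndimage

def preprocess_silhouette(sil, size=128):
    """Binarize, apply morphological opening, and resize to size x size."""
    binary = sil > 0
    # Opening = erosion followed by dilation: removes small foreground
    # speckles while keeping the body contour largely intact (cf. Tab. 1).
    opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    # Nearest-neighbour resize stand-in for the 64x64 / 128x128 comparison
    # in Tab. 2.
    ys = np.linspace(0, opened.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, opened.shape[1] - 1, size).astype(int)
    return opened[np.ix_(ys, xs)].astype(np.float32)
```

Per Tabs. 1 and 2, opening plus the larger 128×128 input gives the best rates under all three walking conditions.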
| Input size | NM | BG | CL |
| --- | --- | --- | --- |
| 64×64 | 96.13 | 89.65 | 74.26 |
| 128×128 | 96.81 | 90.04 | 76.86 |

Tab. 2 Recognition rates of different input sizes (unit: %)
| Walking condition | Method | 0° | 18° | 36° | 54° | 72° | 90° | 108° | 126° | 144° | 162° | 180° | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NM | GaitSet | 91.10 | 99.00 | 99.90 | 97.80 | 95.10 | 94.50 | 96.10 | 98.30 | 99.20 | 98.10 | 88.00 | 96.10 |
| | CapsNet | 91.80 | 98.30 | 99.00 | 98.00 | 94.10 | 92.80 | 96.30 | 98.10 | 98.40 | 96.20 | 89.20 | 95.70 |
| | Two-stream gait | 91.00 | 97.70 | 99.20 | 96.50 | 93.80 | 92.50 | 93.20 | 95.60 | 97.90 | 96.70 | 84.60 | 94.43 |
| | GaitPart | 94.10 | 98.60 | 99.30 | 98.50 | 94.00 | 92.30 | 95.90 | 98.40 | 99.20 | 97.80 | 90.40 | 96.20 |
| | Proposed method | 96.20 | 98.50 | 99.00 | 98.10 | 97.60 | 95.90 | 96.70 | 98.40 | 99.00 | 98.80 | 93.20 | 97.40 |
| BG | GaitSet | 86.70 | 94.20 | 95.70 | 93.40 | 88.90 | 85.50 | 89.00 | 91.70 | 94.50 | 95.90 | 83.30 | 90.80 |
| | CapsNet | 87.30 | 93.70 | 94.80 | 93.10 | 88.10 | 84.50 | 88.80 | 93.50 | 96.30 | 93.30 | 83.90 | 90.70 |
| | Two-stream gait | 87.20 | 92.80 | 93.40 | 90.90 | 85.90 | 82.80 | 88.40 | 91.60 | 96.00 | 93.80 | 78.60 | 89.22 |
| | GaitPart | 89.10 | 94.80 | 96.70 | 95.10 | 88.30 | 84.90 | 89.00 | 93.60 | 96.10 | 93.80 | 85.80 | 91.50 |
| | Proposed method | 91.80 | 96.70 | 95.70 | 95.15 | 92.60 | 89.70 | 92.40 | 95.20 | 97.00 | 95.96 | 88.20 | 93.67 |
| CL | GaitSet | 59.50 | 75.00 | 78.30 | 74.60 | 71.40 | 71.30 | 70.80 | 74.10 | 74.60 | 69.40 | 54.10 | 70.30 |
| | CapsNet | 63.40 | 77.30 | 80.10 | 79.40 | 72.40 | 69.80 | 71.20 | 73.80 | 75.50 | 71.70 | 62.00 | 72.40 |
| | Two-stream gait | 72.70 | 84.90 | 86.30 | 84.20 | 77.90 | 77.00 | 79.60 | 79.00 | 82.00 | 75.80 | 63.40 | 78.40 |
| | GaitPart | 70.70 | 85.50 | 86.90 | 83.30 | 77.10 | 72.50 | 76.90 | 82.20 | 83.80 | 80.20 | 66.50 | 78.70 |
| | Proposed method | 76.50 | 88.30 | 87.10 | 83.60 | 81.00 | 76.90 | 79.40 | 82.70 | 87.40 | 81.70 | 68.50 | 81.19 |

Tab. 3 Comparison of recognition rates among different methods with various angles of view (unit: %)
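The HPM branch described in the abstract splits the feature map into horizontal strips at several scales and pools each strip into a per-part feature vector. A minimal NumPy sketch, assuming max-plus-mean pooling and the scale set `(1, 2, 4, 8)` common in HPM-style gait heads (the paper's exact scales are not given in this excerpt):

```python
import numpy as np

def horizontal_pyramid_mapping(feat, scales=(1, 2, 4, 8)):
    """Pool a C x H x W feature map into per-strip features.

    Returns an array of shape (num_strips, C); assumes H is divisible
    by each scale (remainder rows are dropped otherwise).
    """
    c, h, w = feat.shape
    parts = []
    for s in scales:
        strip_h = h // s
        for i in range(s):
            strip = feat[:, i * strip_h:(i + 1) * strip_h, :]
            # Sum of global max- and mean-pooling over the strip, a common
            # choice in HPM implementations.
            parts.append(strip.max(axis=(1, 2)) + strip.mean(axis=(1, 2)))
    return np.stack(parts)
```

Each strip captures body-part-level appearance cues at its own granularity, which is what gives the branch its discriminative, view-robust global description.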
| Walking condition | Data augmentation | Convolution type | Attention | Loss function | Rank-1 accuracy/% |
| --- | --- | --- | --- | --- | --- |
| NM | | Ordinary convolution (Conv) | √ | Triplet loss | 97.39 |
| | √ | Ordinary convolution (Conv) | √ | Triplet loss | 97.56 |
| | √ | Composite convolution (C-Conv) | √ | Triplet loss | 97.79 |
| | √ | Composite convolution (C-Conv) | √ | Cross-entropy loss | 96.81 |
| | √ | Composite convolution (C-Conv) | | Joint loss | 97.45 |
| | √ | Composite convolution (C-Conv) | √ | Joint loss | 97.40 |
| BG | | Ordinary convolution (Conv) | √ | Triplet loss | 92.17 |
| | √ | Ordinary convolution (Conv) | √ | Triplet loss | 93.37 |
| | √ | Composite convolution (C-Conv) | √ | Triplet loss | 93.40 |
| | √ | Composite convolution (C-Conv) | √ | Cross-entropy loss | 90.04 |
| | √ | Composite convolution (C-Conv) | | Joint loss | 93.09 |
| | √ | Composite convolution (C-Conv) | √ | Joint loss | 93.67 |
| CL | | Ordinary convolution (Conv) | √ | Triplet loss | 78.08 |
| | √ | Ordinary convolution (Conv) | √ | Triplet loss | 80.86 |
| | √ | Composite convolution (C-Conv) | √ | Triplet loss | 81.01 |
| | √ | Composite convolution (C-Conv) | √ | Cross-entropy loss | 76.86 |
| | √ | Composite convolution (C-Conv) | | Joint loss | 80.78 |
| | √ | Composite convolution (C-Conv) | √ | Joint loss | 81.19 |

Tab. 4 Ablation experiment results
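The ablation compares triplet, cross-entropy, and joint losses. A hedged single-sample sketch of such a joint loss follows; the balance weight and margin are assumed values, and the paper's exact formulation may differ from this simple weighted sum:

```python
import numpy as np

def joint_loss(anchor, positive, negative, logits, label,
               margin=0.2, weight=0.5):
    """Triplet term (feature discrimination) + weighted cross-entropy term
    (classification, aiding convergence). margin and weight are assumed."""
    # Triplet term on Euclidean distances between embedding vectors.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    triplet = max(0.0, d_ap - d_an + margin)

    # Cross-entropy term on classification logits (stable log-softmax).
    m = np.max(logits)
    log_probs = logits - (m + np.log(np.sum(np.exp(logits - m))))
    ce = -log_probs[label]

    return triplet + weight * ce
```

The triplet term keeps same-subject embeddings closer than different-subject ones, while the cross-entropy term supplies a denser gradient signal early in training, which is consistent with the joint loss outperforming either term alone under BG and CL in Tab. 4.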
[1] SEPAS-MOGHADDAM A, ETEMAD A. Deep gait recognition: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(1): 264-284.
[2] WAN C, WANG L, PHOHA V V. A survey on gait recognition[J]. ACM Computing Surveys, 2019, 51(5): 89.1-89.35.
[3] RANI V, KUMAR M. Human gait recognition: a systematic review[J]. Multimedia Tools and Applications, 2023, 82: 37003-37037.
[4] XIA L M, WANG H, GUO W T. Gait recognition based on Wasserstein generating adversarial image inpainting network[J]. Journal of Central South University, 2019, 26: 2759-2770.
[5] WEN J, SHEN Y, YANG J. Multi-view gait recognition based on generative adversarial network[J]. Neural Processing Letters, 2022, 54: 1855-1877.
[6] WANG Y. Gait recognition based on feature point matching[J]. Transducer and Microsystem Technologies, 2018, 37(1): 137-140. (in Chinese)
[7] CHOI S, KIM J, KIM W, et al. Skeleton-based gait recognition via robust frame-level matching[J]. IEEE Transactions on Information Forensics and Security, 2019, 14(10): 2577-2592.
[8] BARI A S M H, GAVRILOVA M L. Artificial neural network based gait recognition using Kinect sensor[J]. IEEE Access, 2019, 7: 162708-162722.
[9] BATTISTONE F, PETROSINO A. TGLSTM: a time based graph deep learning approach to gait recognition[J]. Pattern Recognition Letters, 2019, 126: 132-138.
[10] WANG T, WANG H Z, XIA Y, et al. Human gait recognition based on convolutional neural network and attention model[J]. Chinese Journal of Sensors and Actuators, 2019, 32(7): 1027-1033. (in Chinese)
[11] LIAO R, YU S, AN W, et al. A model-based gait recognition method with body pose and human prior knowledge[J]. Pattern Recognition, 2019, 98: 107069.
[12] TEEPE T, KHAN A, GILG J, et al. GaitGraph: graph convolutional network for skeleton-based gait recognition[C]// Proceedings of the 2021 IEEE International Conference on Image Processing. Piscataway: IEEE, 2021: 2314-2318.
[13] GAO S, YUN J, ZHAO Y, et al. Gait-D: skeleton-based gait feature decomposition for gait recognition[J]. IET Computer Vision, 2022, 16(2): 111-125.
[14] WANG L, CHEN J. A two-branch neural network for gait recognition[EB/OL]. [2023-06-22].
[15] TIAN Y, WEI L, LU S, et al. Free-view gait recognition[J]. PLoS ONE, 2019, 14(4): e0214389.
[16] YAO L, KUSAKUNNIRAN W, WU Q, et al. Robust gait recognition using hybrid descriptors based on skeleton gait energy image[J]. Pattern Recognition Letters, 2021, 150: 289-296.
[17] ZHANG Y, HUANG Y, YU S, et al. Cross-view gait recognition by discriminative feature learning[J]. IEEE Transactions on Image Processing, 2020, 29: 1001-1015.
[18] WANG X, YAN W Q. Human gait recognition based on frame-by-frame gait energy images and convolutional long short-term memory[J]. International Journal of Neural Systems, 2020, 30(1): 1950027.
[19] WANG X, ZHANG J, YAN W Q. Gait recognition using multichannel convolution neural networks[J]. Neural Computing and Applications, 2020, 32(18): 14275-14285.
[20] CHEN X, LUO X, WENG J, et al. Multi-view gait image generation for cross-view gait recognition[J]. IEEE Transactions on Image Processing, 2021, 30: 3041-3055.
[21] ELHARROUSS O, ALMAADEED N, ALMAADEED S, et al. Gait recognition for person re-identification[J]. The Journal of Supercomputing, 2021, 77(4): 3653-3672.
[22] LIU X Y, LIU J Q, ZHENG H L. Gait recognition method for coal mine personnel based on two-stream neural network[J]. Journal of Mining Science and Technology, 2021, 6(2): 218-227. (in Chinese)
[23] XU Z, LU W, ZHANG Q, et al. Gait recognition based on capsule network[J]. Journal of Visual Communication and Image Representation, 2019, 59: 159-167.
[24] SOKOLOVA A, KONUSHIN A. Pose-based deep gait recognition[J]. IET Biometrics, 2019, 8(2): 134-143.
[25] LI S, LIU W, MA H. Attentive spatial-temporal summary networks for feature learning in irregular gait recognition[J]. IEEE Transactions on Multimedia, 2019, 21(9): 2361-2375.
[26] FAN C, PENG Y, CAO C, et al. GaitPart: temporal part-based model for gait recognition[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14213-14221.
[27] SEPAS-MOGHADDAM A, GHORBANI S, TROJE N F, et al. Gait recognition using multi-scale partial representation transformation with capsules[C]// Proceedings of the 2020 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2020: 8045-8052.
[28] WANG K, LEI Y M, ZHANG J P. Two-stream gait network for cross-view gait recognition[J]. Pattern Recognition and Artificial Intelligence, 2020, 33(5): 383-392. (in Chinese)
[29] CHAO H, WANG K, HE Y, et al. GaitSet: cross-view gait recognition through utilizing gait as a deep set[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(7): 3467-3478.
[30] HUANG H, ZHANG Y, SI Y, et al. Two-branch 3D convolution neural network for gait recognition[J]. Signal, Image and Video Processing, 2023, 17: 3495-3504.
[31] WOO S, PARK J, LEE J-Y, et al. CBAM: convolutional block attention module[C]// Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 3-19.
[32] YU S, TAN D, TAN T. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition[C]// Proceedings of the 18th International Conference on Pattern Recognition. Piscataway: IEEE, 2006: 441-444.
[33] ZHU X P, YUN L J, ZHANG C J, et al. Gait recognition method based on deep learning in infrared image[J]. Computer Engineering and Design, 2022, 43(3): 851-857. (in Chinese)