Journal of Computer Applications, 2024, Vol. 44, Issue (7): 2250-2257. DOI: 10.11772/j.issn.1001-9081.2023070977
Ruihua LIU, Zihe HAO, Yangyang ZOU
Received: 2023-07-19
Revised: 2023-09-30
Accepted: 2023-10-07
Online: 2023-10-26
Published: 2024-07-10
Contact: Ruihua LIU
About author: HAO Zihe, born in 1999 in Changchun, Jilin, M.S. candidate. Her research interests include computer vision and gait recognition.
Abstract: With the introduction of deep learning, gait recognition algorithms have achieved major breakthroughs, but they still tend to ignore the detail information extracted by the shallow layers of the network and struggle to fuse the spatial-temporal information of gait videos of arbitrary length. To make effective use of shallow features and fuse spatial-temporal features, a cross-view gait recognition algorithm based on multi-layer refined feature fusion was proposed. The proposed algorithm consists of two parts: an Edge Motion Capture Module (EMCM), which extracts edge motion features containing temporal information, and a Multi-layer Feature Extraction Module (MFEM), which extracts multi-layer refined features containing global and local information at different granularities. First, multi-layer refined features and edge motion features were extracted by MFEM and EMCM respectively; then, the features extracted by the two modules were fused to obtain discriminative gait features; finally, comparative experiments under multiple conditions were carried out on the public datasets CASIA-B and OU-MVLP. The average recognition accuracy on CASIA-B reaches 89.9%, 1.1 percentage points higher than that of GaitPart; on OU-MVLP, the recognition accuracy under the 90° view is 3.0 percentage points higher than that of GaitSet. The proposed algorithm can effectively improve the accuracy of gait recognition under multiple conditions.
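The abstract describes a two-branch design: EMCM and MFEM each produce a feature, and the two are fused into one discriminative gait representation. Below is a minimal PyTorch-style sketch of that fusion step only; the class name, feature dimension and linear fusion head are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of the two-branch fusion described in the abstract.
# Assumptions: the branch interfaces, feat_dim and the Linear fusion head are illustrative.
import torch
import torch.nn as nn

class GaitMFFSketch(nn.Module):
    def __init__(self, emcm: nn.Module, mfem: nn.Module, feat_dim: int = 256):
        super().__init__()
        self.emcm = emcm          # edge motion capture branch (temporal edge cues)
        self.mfem = mfem          # multi-layer feature extraction branch (global + local)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)  # hypothetical fusion head

    def forward(self, silhouettes: torch.Tensor) -> torch.Tensor:
        # silhouettes: (batch, frames, 1, H, W); the frame count need not be fixed
        edge_feat = self.emcm(silhouettes)     # -> (batch, feat_dim)
        multi_feat = self.mfem(silhouettes)    # -> (batch, feat_dim)
        fused = torch.cat([edge_feat, multi_feat], dim=-1)
        return self.fuse(fused)                # discriminative gait embedding
```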
CLC number:
Ruihua LIU, Zihe HAO, Yangyang ZOU. Gait recognition algorithm based on multi-layer refined feature fusion[J]. Journal of Computer Applications, 2024, 44(7): 2250-2257.
| Layer | Operation 1 | In_C | Out_C | Kernel | Operation 2 | Output dim. | Operation 3 | Output dim. | Operation 4 | Output dim. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Conv2d | 1 | 64 | (3,3) | TP |  | RM |  | HPP |  |
|  | Conv2d | 64 | 64 | (3,3) |  |  |  |  |  |  |
| 2 | Maxpooling |  |  | (2,2) | TP |  | RM |  | HPP |  |
|  | Conv2d | 64 | 128 | (3,3) |  |  |  |  |  |  |
|  | Conv2d | 128 | 128 | (3,3) |  |  |  |  |  |  |
| 3 | Maxpooling |  |  | (2,2) | TP |  | down1 |  | HPP |  |
|  | Conv2d | 128 | 256 | (3,3) |  |  | down2 |  |  |  |
|  | Conv2d | 256 | 256 | (3,3) |  |  | concat |  |  |  |
Tab. 1 Multi-layer network structure
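To make Tab. 1 easier to read, here is a sketch of the three-level convolutional backbone with the channel sizes and kernels from the table. All identifiers are illustrative; TP is assumed to be frame-level max pooling and HPP a horizontal strip pooling with max plus mean, as is common in silhouette-based gait networks, while RM and the down1/down2/concat steps of level 3 are left out because the table does not specify their details.

```python
# Sketch of the backbone in Tab. 1 (channels/kernels from the table; padding=1 and the
# TP/HPP definitions are assumptions; RM and down1/down2/concat are omitted).
import torch
import torch.nn as nn

def conv_block(in_c: int, out_c: int) -> nn.Sequential:
    # 3x3 convolution; padding keeps the spatial size (assumption)
    return nn.Sequential(nn.Conv2d(in_c, out_c, kernel_size=3, padding=1),
                         nn.LeakyReLU(inplace=True))

class MultiLayerBackbone(nn.Module):
    def __init__(self, num_bins: int = 16):
        super().__init__()
        self.layer1 = nn.Sequential(conv_block(1, 64), conv_block(64, 64))
        self.layer2 = nn.Sequential(nn.MaxPool2d(2), conv_block(64, 128), conv_block(128, 128))
        self.layer3 = nn.Sequential(nn.MaxPool2d(2), conv_block(128, 256), conv_block(256, 256))
        self.num_bins = num_bins  # number of horizontal strips for HPP

    @staticmethod
    def temporal_pool(x: torch.Tensor) -> torch.Tensor:
        # TP: max over frames, so sequences of any length collapse to one feature map
        return x.max(dim=1).values                      # (B, T, C, H, W) -> (B, C, H, W)

    def hpp(self, x: torch.Tensor) -> torch.Tensor:
        # HPP: split the map into horizontal strips and pool each strip (max + mean);
        # the feature-map height must be divisible by num_bins
        b, c, _, _ = x.shape
        strips = x.view(b, c, self.num_bins, -1)        # (B, C, bins, rows_per_bin * W)
        return strips.max(dim=-1).values + strips.mean(dim=-1)   # (B, C, bins)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (B, T, 1, H, W) silhouette sequence
        b, t = seq.shape[:2]
        x = seq.flatten(0, 1)                           # merge batch and time for 2D convs
        x = self.layer3(self.layer2(self.layer1(x)))
        x = x.view(b, t, *x.shape[1:])
        x = self.temporal_pool(x)                       # TP after the last level
        return self.hpp(x)                              # (B, 256, num_bins) part features
```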
| Data | Algorithm | Year | 0° | 18° | 36° | 54° | 72° | 90° | 108° | 126° | 144° | 162° | 180° | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NM#5~6 | GaitSet[ | 2019 | 90.8 | 97.9 | 99.4 | 96.9 | 93.6 | 91.7 | 95.0 | 97.8 | 98.9 | 96.8 | 85.8 | 95.0 |
|  | GaitPart[ | 2020 | 94.1 | 98.6 | 99.3 | 98.5 | 94.0 | 92.3 | 95.9 | 98.4 | 99.2 | 97.8 | 90.4 | 96.2 |
|  | MvGGAN[ | 2021 | 94.8 | 99.0 | 99.7 | 99.2 | 96.6 | 93.7 | 96.3 | 98.6 | 99.2 | 98.2 | 92.3 | 97.1 |
|  | SEFM-P[ | 2022 | 94.0 | 97.7 | 98.6 | 97.4 | 94.3 | 92.4 | 94.4 | 98.3 | 98.4 | 98.3 | 88.9 | 95.7 |
|  | Ref.[ | 2023 | 91.1 | 91.5 | 92.4 | 96.9 | 93.6 | 91.7 | 95.0 | 97.8 | 98.9 | 96.8 | 85.8 | 95.0 |
|  | GaitMFF |  | 95.1 | 99.0 | 99.9 | 98.5 | 95.6 | 94.2 | 96.8 | 98.7 | 99.5 | 99.0 | 93.0 | 97.2 |
| BG#1~2 | GaitSet[ | 2019 | 83.8 | 91.2 | 91.8 | 88.8 | 83.3 | 81.0 | 84.1 | 90.0 | 92.2 | 94.4 | 79.0 | 87.2 |
|  | GaitPart[ | 2020 | 89.1 | 94.8 | 96.7 | 95.1 | 88.3 | 84.9 | 89.0 | 93.5 | 96.1 | 93.8 | 85.8 | 91.5 |
|  | MvGGAN[ | 2021 | 92.4 | 94.7 | 97.2 | 94.6 | 88.7 | 83.6 | 87.8 | 93.8 | 96.3 | 95.2 | 86.8 | 91.9 |
|  | SEFM-P[ | 2022 | 85.8 | 91.9 | 92.9 | 89.1 | 95.5 | 82.2 | 84.1 | 90.9 | 92.9 | 91.5 | 79.0 | 87.8 |
|  | Ref.[ | 2023 | 83.8 | 91.2 | 91.8 | 88.8 | 83.3 | 81.0 | 84.1 | 90.0 | 92.2 | 94.4 | 79.0 | 87.2 |
|  | GaitMFF |  | 90.8 | 94.8 | 95.1 | 94.3 | 89.7 | 85.1 | 89.1 | 94.0 | 97.3 | 95.4 | 88.6 | 92.2 |
| CL#1~2 | GaitSet[ | 2019 | 61.4 | 75.4 | 80.7 | 77.3 | 72.1 | 70.1 | 71.5 | 73.5 | 73.5 | 68.4 | 50.0 | 70.4 |
|  | GaitPart[ | 2020 | 70.7 | 85.5 | 86.9 | 83.3 | 77.1 | 72.5 | 76.9 | 82.2 | 83.8 | 80.2 | 66.5 | 78.7 |
|  | MvGGAN[ | 2021 | 70.5 | 77.9 | 82.5 | 82.7 | 77.4 | 73.6 | 73.8 | 77.8 | 77.6 | 72.5 | 64.8 | 75.6 |
|  | SEFM-P[ | 2022 | 72.6 | 83.4 | 85.4 | 80.9 | 74.1 | 71.3 | 76.7 | 76.1 | 80.3 | 80.1 | 66.5 | 77.0 |
|  | Ref.[ | 2023 | 61.4 | 75.4 | 80.7 | 77.3 | 72.1 | 70.1 | 71.5 | 73.5 | 73.5 | 68.4 | 50.0 | 70.5 |
|  | GaitMFF |  | 77.0 | 86.6 | 86.7 | 82.0 | 77.9 | 75.1 | 77.9 | 82.7 | 84.6 | 82.6 | 69.6 | 80.3 |
Tab. 2 Average recognition accuracies of different algorithms on CASIA-B dataset under different views (unit: %)
| Algorithm | Year | CASIA-B NM | CASIA-B BG | CASIA-B CL | CASIA-B Mean | CASIA-B* NM | CASIA-B* BG | CASIA-B* CL | CASIA-B* Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GaitSet[ | 2019 | 95.0 | 87.2 | 70.4 | 84.2 | 92.3 | 86.1 | 73.4 | 83.9 |
| GaitPart[ | 2020 | 96.2 | 91.5 | 78.7 | 88.8 | 93.1 | 86.0 | 75.1 | 84.7 |
| MvGGAN[ | 2021 | 97.1 | 91.9 | 75.6 | 88.2 |  |  |  |  |
| SEFM-P[ | 2022 | 95.7 | 87.8 | 77.0 | 86.8 |  |  |  |  |
| Deng[ | 2023 | 95.0 | 87.2 | 70.5 | 84.2 |  |  |  |  |
| GaitMFF |  | 97.2 | 92.2 | 80.3 | 89.9 | 95.5 | 91.4 | 81.5 | 89.4 |
Tab. 3 Average recognition accuracies of different algorithms on CASIA-B and CASIA-B* datasets under different views (unit: %)
| Algorithm | Year | 0° | 30° | 60° | 90° |
| --- | --- | --- | --- | --- | --- |
| GEINet[ | 2016 | 8.2 | 32.3 | 33.6 | 28.5 |
| Input/Output[ | 2019 | 25.5 | 50.0 | 45.3 | 40.6 |
| DigGAN[ | 2020 | 30.8 | 43.6 | 41.3 | 42.5 |
| GaitSet[ | 2021 | 79.6 | 87.4 | 86.2 | 84.3 |
| Ref.[ | 2022 | 58.4 | 70.6 | 85.3 | 83.5 |
| GaitMFF |  | 77.9 | 90.0 | 88.2 | 87.3 |
Tab. 4 Average recognition accuracies of different algorithms on OU-MVLP dataset under four representative views (unit: %)
| Structure | NM | BG | CL |
| --- | --- | --- | --- |
| Serial | 96.3 | 91.2 | 77.6 |
| Cascade | 97.2 | 91.7 | 78.0 |
| MFEM | 97.2 | 92.2 | 80.3 |
Tab. 5 Ablation experimental results of multi-layer feature extraction structures on CASIA-B dataset (unit: %)
| RM at layer1 | RM at layer2 | RM at layer3 | NM | BG | CL |
| --- | --- | --- | --- | --- | --- |
| — | — | — | 97.0 | 91.0 | 78.9 |
| √ | — | — | 97.1 | 91.5 | 77.6 |
| — | √ | — | 97.1 | 91.8 | 78.5 |
| — | — | √ | 97.0 | 91.0 | 78.0 |
| √ | √ | — | 97.2 | 92.2 | 80.3 |
| √ | — | √ | 96.6 | 91.2 | 77.6 |
| — | √ | √ | 96.9 | 90.8 | 77.9 |
| √ | √ | √ | 96.8 | 90.8 | 76.5 |
Tab. 6 Ablation experimental results of RM (unit: %)
| EMCM | NM | BG | CL |
| --- | --- | --- | --- |
| — | 97.2 | 91.9 | 78.1 |
| √ | 97.2 | 92.2 | 80.3 |
Tab. 7 Ablation experimental results of EMCM on CASIA-B dataset (unit: %)
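Tab. 7 shows that adding EMCM mainly helps the CL condition, and the abstract only states that EMCM extracts edge motion features carrying temporal information. The snippet below is a heavily hedged illustration of that idea (silhouette boundary via a pooling-based erosion, then temporal differencing of consecutive edge maps); it is a guess at the flavour of such a module, not the published EMCM, and the function name is hypothetical.

```python
# Illustrative edge-motion maps (assumption: EMCM-like cues can be approximated by
# boundary extraction plus frame differencing; the real module may differ).
import torch
import torch.nn.functional as F

def edge_motion_maps(silhouettes: torch.Tensor) -> torch.Tensor:
    """silhouettes: (B, T, 1, H, W) binary masks -> (B, T-1, 1, H, W) edge-motion maps."""
    b, t = silhouettes.shape[:2]
    x = silhouettes.flatten(0, 1)                       # (B*T, 1, H, W)
    # crude boundary: mask minus its erosion, where erosion is emulated by min pooling
    eroded = -F.max_pool2d(-x, kernel_size=3, stride=1, padding=1)
    edges = (x - eroded).clamp(min=0.0)
    edges = edges.view(b, t, *edges.shape[1:])
    # temporal difference of consecutive edge maps captures how contours move
    return (edges[:, 1:] - edges[:, :-1]).abs()
```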
[1] SUN Z N, HE R, WANG L, et al. Overview of biometrics research [J]. Journal of Image and Graphics, 2021, 26(6): 1254-1329.
[2] BEN X Y, XU S, WANG K J. Review on pedestrian gait feature expression and recognition [J]. Pattern Recognition and Artificial Intelligence, 2012, 25(1): 71-81.
[3] WU Z, HUANG Y, WANG L, et al. A comprehensive study on cross-view gait based human identification with deep CNNs [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 209-226.
[4] DENG M, WANG C, CHENG F, et al. Fusion of spatial-temporal and kinematic features for gait recognition with deterministic learning [J]. Pattern Recognition, 2017, 67: 186-200.
[5] ZHAO G, LIU G, LI H, et al. 3D gait recognition using multiple cameras [C]// Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition. Piscataway: IEEE, 2006: 529-534.
[6] LEE L, GRIMSON W E L. Gait analysis for recognition and classification [C]// Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway: IEEE, 2002: 155-162.
[7] YOO J-H, NIXON M S, HARRIS C J. Extracting human gait signatures by body segment properties [C]// Proceedings of the 5th IEEE Southwest Symposium on Image Analysis and Interpretation. Piscataway: IEEE, 2002: 35-39.
[8] YOO J-H, NIXON M S, HARRIS C J. Model-driven statistical analysis of human gait motion [C]// Proceedings of the 2002 International Conference on Image Processing. Piscataway: IEEE, 2002, 1: 285-288.
[9] ZHANG R, VOGLER C, METAXAS D. Human gait recognition at sagittal plane [J]. Image and Vision Computing, 2007, 25(3): 321-330.
[10] ARIYANTO G, NIXON M S. Model-based 3D gait biometrics [C]// Proceedings of the 2011 International Joint Conference on Biometrics. Piscataway: IEEE, 2011: 1-7.
[11] SHIRAGA K, MAKIHARA Y, MURAMATSU D, et al. GEINet: view-invariant gait recognition using a convolutional neural network [C]// Proceedings of the 2016 International Conference on Biometrics. Piscataway: IEEE, 2016: 1-8.
[12] CHAO H, HE Y, ZHANG J, et al. GaitSet: regarding gait as a set for cross-view gait recognition [C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2019: 8126-8133.
[13] ZHANG Y, HUANG Y, YU S, et al. Cross-view gait recognition by discriminative feature learning [J]. IEEE Transactions on Image Processing, 2019, 29: 1001-1015.
[14] FAN C, PENG Y, CAO C, et al. GaitPart: temporal part-based model for gait recognition [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14213-14221.
[15] WOLF T, BABAEE M, RIGOLL G. Multi-view gait recognition using 3D convolutional neural networks [C]// Proceedings of the 2016 IEEE International Conference on Image Processing. Piscataway: IEEE, 2016: 4165-4169.
[16] SUN K, XIAO B, LIU D, et al. Deep high-resolution representation learning for human pose estimation [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 5686-5696.
[17] YU S, TAN D, TAN T. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition [C]// Proceedings of the 18th International Conference on Pattern Recognition. Piscataway: IEEE, 2006: 441-444.
[18] TAKEMURA N, MAKIHARA Y, MURAMATSU D, et al. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition [J]. IPSJ Transactions on Computer Vision and Applications, 2018, 10: No.4.
[19] ZHANG Z, TRAN L, YIN X, et al. Gait recognition via disentangled representation learning [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4705-4714.
[20] LIANG J, FAN C, HOU S, et al. GaitEdge: beyond plain end-to-end gait recognition for better practicality [C]// Proceedings of the 17th European Conference on Computer Vision. Cham: Springer, 2022: 375-390.
[21] CHEN X, LUO X, WENG J, et al. Multi-view gait image generation for cross-view gait recognition [J]. IEEE Transactions on Image Processing, 2021, 30: 3041-3055.
[22] XU S, ZHENG F, TANG J, et al. Dual branch feature fusion network based gait recognition algorithm [J]. Journal of Image and Graphics, 2022, 27(7): 2263-2273.
[23] DENG F, ZENG Y, LIU B W, et al. Gait recognition model based on temporal feature aggregation with Transformer [J]. Journal of Computer Applications, 2023, 43(S1): 15-18.
[24] FAN C, LIANG J, SHEN C, et al. OpenGait: revisiting gait recognition towards better practicality [C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 9707-9716.
[25] TAKEMURA N, MAKIHARA Y, MURAMATSU D, et al. On input/output architectures for convolutional neural network-based cross-view gait recognition [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(9): 2708-2719.
[26] HU B, GAO Y, GUAN Y, et al. Robust cross-view gait identification with evidence: a discriminant gait GAN (DiGGAN) approach on 10000 people [EB/OL]. [2023-07-01].
[27] CHAO H, WANG K, HE Y, et al. GaitSet: cross-view gait recognition through utilizing gait as a deep set [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(7): 3467-3478.
[28] ZHANG H Y, BAO W J. The cross-view gait recognition analysis based on generative adversarial networks derived of self-attention mechanism [J]. Journal of Image and Graphics, 2022, 27(4): 1097-1109.