Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (4): 1271-1284. DOI: 10.11772/j.issn.1001-9081.2024040561
• Multimedia computing and computer simulation •

Qingqing ZHAO1,2, Bin HU1,2,3
Received: 2024-05-07
Revised: 2024-09-24
Accepted: 2024-09-26
Online: 2025-04-08
Published: 2025-04-10
Contact: Bin HU
About author: ZHAO Qingqing, born in 1995, M. S. candidate. Her research interests include computational intelligence and computer vision.
Qingqing ZHAO, Bin HU. Moving pedestrian detection neural network with invariant global sparse contour point representation[J]. Journal of Computer Applications, 2025, 45(4): 1271-1284.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024040561
| Parameter | Value | Parameter | Value |
|---|---|---|---|
|  | 1 280 |  | 0.8 |
|  | 720 |  | 3 |
|  | 220 |  | 1 |
|  | 2 |  | 3 |
|  | 9 |  | 0.6 |
|  | 0.92 |  | 0.5 |
|  | 200 |  | 800 |
Tab. 1 Parameter setting of MPDNN
| Video | Total frames | Actual pedestrian frame range | Frame range detected by MPDNN | TP | TN | FN | FP | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|---|---|---|---|
| Ⅰ | 287 | 46~287 | 49~287 | 239 | 45 | 3 | 0 | 98.95 | 1.24 | 0.00 |
| Ⅱ | 300 | 45~300 | 47~295 | 249 | 44 | 7 | 0 | 97.67 | 2.73 | 0.00 |
| Ⅲ | 325 | 105~325 | 107~324 | 218 | 104 | 3 | 0 | 99.08 | 1.36 | 0.00 |
| Ⅳ | 175 | 34~175 | 49~170 | 122 | 33 | 20 | 0 | 88.57 | 14.08 | 0.00 |
Tab. 2 Numerical statistical results of validity test
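Since each test video is labelled by a single ground-truth frame interval, the TP/TN/FN/FP columns of Tab. 2 can be recovered by comparing that interval with the interval reported by MPDNN frame by frame. Below is a minimal Python sketch of this bookkeeping, checked against video Ⅰ; the metric formulas (ACC over all frames, FNR over ground-truth positive frames, FPR over all frames) are assumptions inferred from the tabulated values, not definitions quoted from the paper.

```python
# Minimal sketch (assumed, not from the paper): rebuild the frame-level
# counts of Tab. 2 from the two frame ranges and recompute the metrics.

def frame_level_counts(total_frames, actual_range, detected_range):
    """Count TP/TN/FN/FP frames from inclusive (first, last) frame ranges."""
    actual = set(range(actual_range[0], actual_range[1] + 1))        # frames truly containing the pedestrian
    detected = set(range(detected_range[0], detected_range[1] + 1))  # frames flagged by MPDNN
    all_frames = set(range(1, total_frames + 1))

    tp = len(detected & actual)               # detected and truly positive
    fp = len(detected - actual)               # detected but truly negative
    fn = len(actual - detected)               # missed positive frames
    tn = len(all_frames - actual - detected)  # correctly ignored frames
    return tp, tn, fn, fp

def metrics(tp, tn, fn, fp, total_frames):
    acc = (tp + tn) / (tp + tn + fn + fp)     # assumed: accuracy over all counted frames
    fnr = fn / (tp + fn)                      # assumed: miss rate over positive frames
    fpr = fp / total_frames                   # assumed: false-alarm rate over all frames
    return acc, fnr, fpr

# Video Ⅰ of Tab. 2: 287 frames, ground truth 46~287, detection 49~287.
tp, tn, fn, fp = frame_level_counts(287, (46, 287), (49, 287))
acc, fnr, fpr = metrics(tp, tn, fn, fp, 287)
print(tp, tn, fn, fp)                    # -> 239 45 3 0
print(f"{acc:.2%} {fnr:.2%} {fpr:.2%}")  # -> 98.95% 1.24% 0.00%
```

For video Ⅰ the detection starts three frames after the ground truth (frames 46~48 are missed), which accounts for the FN = 3 and the 1.24% miss rate.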
| Video | Total frames | Pedestrian scale/(°) | Actual pedestrian frame range | Frame range detected by MPDNN | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|---|
|  | 200 | 0.73 | 90~200 | 0 | 0.00 | 100.00 | 0.00 |
|  | 333 | 1.76 | 16~333 | 41~333 | 92.14 | 7.86 | 0.00 |
|  | 290 | 3.51 | 13~273 | 13~273 | 100.00 | 0.00 | 0.00 |
|  | 253 | 10.31 | 16~236 | 21~236 | 100.00 | 0.00 | 0.00 |
|  | 130 | 25.59 | 11~119 | 11~119 | 100.00 | 0.00 | 0.00 |
|  | 87 | 54.32 | 12~76 | 12~76 | 100.00 | 0.00 | 0.00 |
Tab. 3 Numerical statistical results of scale test
| Video | Total frames | Actual pedestrian frame range | Frame range detected by MPDNN | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|
|  | 355 | 12~342 | 12~342 | 100.00 | 0.00 | 0.00 |
|  | 195 | 67~147 | 0 | 58.46 | 100.00 | 0.00 |
|  | 224 | 16~207 | 16~207 | 100.00 | 0.00 | 0.00 |
|  | 162 | 21~156 | 21~156 | 100.00 | 0.00 | 0.00 |
Tab. 4 Numerical statistical results of motion posture test
| Video | Total frames | Actual pedestrian frame range | Frame range detected by MPDNN | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|
|  | 165 | 22~118 | 27~118 | 96.95 | 5.15 | 0.00 |
|  | 180 | 14~120 | 0 | 40.56 | 100.00 | 0.00 |
|  | 300 | 71~283 | 82~279 | 95.00 | 7.04 | 0.00 |
|  | 240 | 34~196 | 0 | 32.37 | 100.00 | 0.00 |
Tab. 5 Numerical statistical results of occlusion test
| Model | TP | TN | FN | FP | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|---|
| Faster R-CNN | 842 | 198 | 19 | 66 | 92.44 | 2.21 | 6.07 |
| Cascade R-CNN | 861 | 177 | 0 | 102 | 91.05 | 0.00 | 9.38 |
| YOLOv5 | 856 | 221 | 5 | 217 | 82.91 | 0.58 | 19.96 |
| YOLOv8 | 851 | 215 | 10 | 241 | 80.94 | 1.16 | 22.17 |
| SSD | 786 | 205 | 75 | 24 | 90.92 | 8.71 | 2.21 |
| MPDNN | 828 | 226 | 33 | 0 | 96.96 | 3.83 | 0.00 |
Tab. 6 Numerical statistical results of comparison experiments of object detection models
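Unlike Tabs. 2~5, a frame in Tabs. 6 and 7 can apparently contribute several false alarms, so the FP column may exceed the number of pedestrian-free frames (226, judging by the MPDNN row) and the percentages are not recoverable from the four counts alone. The rows are reproduced, however, if FNR is normalised by the ground-truth positive frames (TP + FN) and FPR by 1 087 frames, the apparent size of the validity test set. The short check below uses these assumed denominators, which are inferred from the tabulated numbers rather than stated in this excerpt.

```python
# Consistency check (assumed definitions, inferred from Tab. 6, not quoted
# from the paper): ACC = (TP+TN)/(TP+TN+FN+FP), FNR = FN/(TP+FN),
# FPR = FP / TOTAL_FRAMES.

TOTAL_FRAMES = 1087  # assumed size of the validity test set

rows = {  # model: (TP, TN, FN, FP) as listed in Tab. 6
    "Faster R-CNN": (842, 198, 19, 66),
    "Cascade R-CNN": (861, 177, 0, 102),
    "YOLOv5": (856, 221, 5, 217),
    "YOLOv8": (851, 215, 10, 241),
    "SSD": (786, 205, 75, 24),
    "MPDNN": (828, 226, 33, 0),
}

for model, (tp, tn, fn, fp) in rows.items():
    acc = 100 * (tp + tn) / (tp + tn + fn + fp)
    fnr = 100 * fn / (tp + fn)
    fpr = 100 * fp / TOTAL_FRAMES
    # Matches the ACC/FNR/FPR columns of Tab. 6 to two decimals.
    print(f"{model:<14} ACC={acc:6.2f}  FNR={fnr:6.2f}  FPR={fpr:6.2f}")
```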
| Model | TP | TN | FN | FP | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|---|
| F2DNet | 171 | 226 | 690 | 0 | 36.52 | 80.14 | 0.00 |
| VLPD | 265 | 201 | 596 | 45 | 42.10 | 69.22 | 4.14 |
| Pedestron | 661 | 197 | 200 | 16 | 79.89 | 23.23 | 1.47 |
| BFDA | 725 | 226 | 136 | 0 | 87.49 | 15.80 | 0.00 |
| MPDNN | 828 | 226 | 33 | 0 | 96.96 | 3.83 | 0.00 |
Tab. 7 Numerical statistical results of comparison experiments of pedestrian detection models
| Model | Validity ACC/% | Validity FNR/% | Validity FPR/% | Scale ACC/% | Scale FNR/% | Scale FPR/% | Motion posture ACC/% | Motion posture FNR/% | Motion posture FPR/% | Occlusion ACC/% | Occlusion FNR/% | Occlusion FPR/% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EMFD+SVM | 95.78 | 4.56 | 0.00 | 46.64 | 63.59 | 0.00 | 76.98 | 29.03 | 0.00 | 38.54 | 81.59 | 0.00 |
| SOF+ACF | 63.29 | 49.21 | 0.00 | 66.82 | 39.54 | 0.00 | 67.37 | 34.51 | 0.00 | 41.23 | 78.35 | 0.00 |
| TSM | 92.37 | 4.15 | 0.00 | 66.56 | 39.54 | 0.00 | 63.46 | 46.22 | 0.00 | 67.57 | 49.48 | 0.00 |
| DeepStep | 98.31 | 2.41 | 0.00 | 59.09 | 51.86 | 0.00 | 87.69 | 13.47 | 0.00 | 57.81 | 72.24 | 0.00 |
| MPDNN | 96.96 | 3.83 | 0.00 | 89.48 | 12.53 | 0.00 | 91.45 | 10.79 | 0.00 | 68.86 | 46.21 | 0.00 |
Tab. 8 Numerical statistical results of comparison experiments of moving pedestrian detection models
| Model | TP | TN | FN | FP | ACC/% | FNR/% | FPR/% |
|---|---|---|---|---|---|---|---|
| CDNN | 0 | 45 | 242 | 0 | 15.68 | 100.00 | 0.00 |
| CEBDNN | 0 | 45 | 242 | 0 | 15.68 | 100.00 | 0.00 |
| DSNN | 124 | 45 | 118 | 124 | 41.12 | 48.76 | 43.21 |
| STPDNN | 138 | 45 | 104 | 0 | 63.76 | 42.98 | 0.00 |
| MPDNN | 239 | 45 | 3 | 0 | 98.95 | 1.24 | 0.00 |
Tab. 9 Numerical statistical results of comparison experiments of homologous models
| 1 | BRUNETTI A, BUONGIORNO D, TROTTA G F, et al. Computer vision and deep learning techniques for pedestrian detection and tracking: a survey[J]. Neurocomputing, 2018, 300: 17-33. | 
| 2 | DiCARLO J J, ZOCCOLAN D, RUST N C. How does the brain solve visual object recognition?[J]. Neuron, 2012, 73(3): 415-434. | 
| 3 | ELDER J H. Shape from contour: computation and representation[J]. Annual Review of Vision Science, 2018, 4: 423-450. | 
| 4 | AYZENBERG V, LOURENCO S. Perception of an object’s global shape is best described by a model of skeletal structure in human infants[J]. eLife, 2022, 11: No.e74943. | 
| 5 | BAKER N, LU H, ERLIKHMAN G, et al. Deep convolutional networks do not make classifications based on global object shape[J]. Journal of Vision, 2018, 18(10): No.904. | 
| 6 | WOOD J N, WOOD S M W. The development of invariant object recognition requires visual experience with temporally smooth objects[J]. Cognitive Science, 2018, 42(4): 1391-1406. | 
| 7 | EL-SHAMAYLEH Y, PASUPATHY A. Contour curvature as an invariant code for objects in visual area V4[J]. Journal of Neuroscience, 2016, 36(20): 5532-5543. | 
| 8 | HU B, ZHANG Z, LI L. LGMD-based visual neural network for detection crowd escape behavior[C]// Proceedings of the 5th IEEE International Conference on Cloud Computing and Intelligence Systems. Piscataway: IEEE, 2018: 772-778. | 
| 9 | FU Q, WANG H, HU C, et al. Towards computational models and applications of insect visual systems for motion perception: a review[J]. Artificial Life, 2019, 25(3): 263-311. | 
| 10 | HU B, ZHANG Z. Bio-inspired visual neural network on spatio-temporal depth rotation perception[J]. Neural Computing and Applications, 2021, 33(16): 10351-10370. | 
| 11 | HUANG X, QIAO H, LI H, et al. Bioinspired approach-sensitive neural network for collision detection in cluttered and dynamic backgrounds[J]. Applied Soft Computing, 2022, 122: No.108782. | 
| 12 | ZHANG B K, HU B. Neural network for moving small target pedestrian detection based on episodic memory[J]. Computer Engineering and Applications, 2022, 58(15): 169-183. |
| 13 | LAD B V, HASHMI M F, KESKAR A G. Parameter adaptive pulse coupled neural network-based saliency map fusion strategy for salient object detection[J]. Neural Computing and Applications, 2023, 35(21): 15743-15757. | 
| 14 | CAO J, PANG Y, XIE J, et al. From handcrafted to deep features for pedestrian detection: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(9): 4913-4934. | 
| 15 | VARGA D, HAVASI L, SZIRÁNYI T. Pedestrian detection in surveillance videos based on CS-LBP feature[C]// Proceedings of the 2015 International Conference on Models and Technologies for Intelligent Transportation Systems. Piscataway: IEEE, 2015: 413-417. | 
| 16 | NAN M, LI C, HU J, et al. Pedestrian detection based on HOG features and SVM realizes vehicle-human-environment interaction[C]// Proceedings of the 15th International Conference on Computational Intelligence and Security. Piscataway: IEEE, 2019: 287-291. | 
| 17 | DONG E, JING C, ZHANG Z. A multi-feature fusion based pedestrian detection method[C]// Proceedings of the 2020 IEEE International Conference on Mechatronics and Automation. Piscataway: IEEE, 2020: 176-180. | 
| 18 | REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. | 
| 19 | CAI Z, VASCONCELOS N. Cascade R-CNN: high quality object detection and instance segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(5): 1483-1498. | 
| 20 | Ultralytics. YOLOv5[EB/OL]. [2024-07-10]. |
| 21 | Ultralytics. YOLOv8[EB/OL]. [2024-07-10]. |
| 22 | ZHOU S, QIU J. Enhanced SSD with interactive multi-scale attention features for object detection[J]. Multimedia Tools and Applications, 2021, 80(8): 11539-11556. | 
| 23 | GAWANDE U, HAJARI K, GOLHAR Y. SIRA: scale illumination rotation affine invariant mask R-CNN for pedestrian detection[J]. Applied Intelligence, 2022, 52: 10398-10416. | 
| 24 | KOLLURI J, DAS R. Intelligent multimodal pedestrian detection using hybrid metaheuristic optimization with deep learning model[J]. Image and Vision Computing, 2023, 131: No.104628. | 
| 25 | KHAN A H, MUNIR M, VAN ELST L, et al. F2DNet: fast focal detection network for pedestrian detection[C]// Proceedings of the 26th International Conference on Pattern Recognition. Piscataway: IEEE, 2022: 4658-4664. | 
| 26 | ZHAO K, DENG J, CHENG D. Real-time moving pedestrian detection using contour features[J]. Multimedia Tools and Applications, 2018, 77(23): 30891-30910. | 
| 27 | JIANG Y, WANG J, LIANG Y, et al. Combining static and dynamic features for real-time moving pedestrian detection[J]. Multimedia Tools and Applications, 2019, 78(3): 3781-3795. | 
| 28 | CHENG G, ZHENG J Y. Semantic segmentation for pedestrian detection from motion in temporal domain[C]// Proceedings of the 2020 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2021: 6897-6903. | 
| 29 | KILICARSLAN M, ZHENG J Y. DeepStep: direct detection of walking pedestrian from motion by a vehicle camera[J]. IEEE Transactions on Intelligent Vehicles, 2023, 8(2): 1652-1663. | 
| 30 | YANG P, ZHANG G, WANG L, et al. A part-aware multi-scale fully convolutional network for pedestrian detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2021, 22(2): 1125-1137. | 
| 31 | KIM M, ILYAS N, KIM K. AMSASeg: an attention-based multi-scale atrous convolutional neural network for real-time object segmentation from 3D point cloud[J]. IEEE Access, 2021, 9: 70789-70796. | 
| 32 | HE Y, HE N, YU H, et al. From macro to micro: rethinking multi-scale pedestrian detection[J]. Multimedia Systems, 2023, 29(3): 1417-1429. | 
| 33 | LIU M, JIANG J, ZHU C, et al. VLPD: context-aware pedestrian detection via vision-language semantic self-supervision[C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 6662-6671. | 
| 34 | HASAN I, LIAO S, LI J, et al. Generalizable pedestrian detection: the elephant in the room[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 11323-11332. | 
| 35 | SHEN G, YU Y, TANG Z R, et al. HQA-Trans: an end-to-end high-quality-awareness image translation framework for unsupervised cross-domain pedestrian detection[J]. IET Computer Vision, 2022, 16(3): 218-229. |
| 36 | CAI Y, ZHANG B, LI B, et al. Rethinking cross-domain pedestrian detection: a background-focused distribution alignment framework for instance-free one-stage detectors[J]. IEEE Transactions on Image Processing, 2023, 32: 4935-4950. | 
| 37 | ROUPA I, SILVA M R DA, MARQUES F, et al. On the modeling of biomechanical systems for human movement analysis: a narrative review[J]. Archives of Computational Methods in Engineering, 2022, 29(7): 4915-4958. | 
| 38 | AUBRET A, TEULIÈR C, TRIESCH J. Toddler-inspired embodied vision for learning object representations[C]// Proceedings of the 2022 IEEE International Conference on Development and Learning. Piscataway: IEEE, 2022: 81-87. | 
| 39 | ISIK L, MEYERS E M, LEIBO J Z, et al. The dynamics of invariant object recognition in the human visual system[J]. Journal of Neurophysiology, 2014, 111(1): 91-102. | 
| 40 | ZOCCOLAN D. Invariant visual object recognition and shape processing in rats[J]. Behavioural Brain Research, 2015, 285: 10-33. | 
| 41 | WOOD J N, WOOD S M W. One-shot learning of view-invariant object representations in newborn chicks[J]. Cognition, 2020, 199: No.104192. | 
| 42 | PRASAD A, WOOD S M W, WOOD J N. Using automated controlled rearing to explore the origins of object permanence[J]. Developmental Science, 2019, 22(3): No.e12796. | 
| 43 | JIAO L, YANG Y, LIU F, et al. The new generation brain-inspired sparse learning: a comprehensive survey[J]. IEEE Transactions on Artificial Intelligence, 2022, 3(6): 887-907. | 
| 44 | JOHANSSON G. Visual perception of biological motion and a model for its analysis[J]. Perception and Psychophysics, 1973, 14(2): 201-211. | 
| 45 | CARLSON E T, RASQUINHA R J, ZHANG K, et al. A sparse object coding scheme in area V4[J]. Current Biology, 2011, 21(4): 288-293. | 
| 46 | VLASITS A L, EULER T, FRANKE K. Function first: classifying cell types and circuits of the retina[J]. Current Opinion in Neurobiology, 2019, 56: 8-15. | 
| 47 | HAHN J, MONAVARFESHANI A, QIAO M, et al. Evolution of neuronal cell classes and types in the vertebrate retina[J]. Nature, 2023, 624(7991): 415-424. | 
| 48 | SETYOKO B H, NOERSASONGKO E, SHIDIK G F, et al. Gaussian mixture model in dynamic background of video sequences for human detection[C]// Proceedings of the 5th International Conference on Research of Information Technology and Intelligent Systems. Piscataway: IEEE, 2022: 595-600. | 
| 49 | GAYNES J A, BUDOFF S A, GRYBKO M J, et al. Classical center-surround receptive fields facilitate novel object detection in retinal bipolar cells[J]. Nature Communications, 2022, 13: No.5575. | 
| 50 | LIU C, HU B. Bio-inspired neural network for perceiving suddenly localized crowd gathering[J]. Computer Engineering and Applications, 2022, 58(16): 164-174. |
| 51 | GARCIA-MOLLA V M, ALONSO-JORDÁ P. Parallel border tracking in binary images for multicore computers[J]. The Journal of Supercomputing, 2023, 79(9): 9915-9931. | 
| 52 | PENG P, TIAN Y, WANG Y, et al. Robust multiple cameras pedestrian detection with multi-view Bayesian network[J]. Pattern Recognition, 2015, 48(5): 1760-1772. | 
| 53 | LU C, SHI J, JIA J. Abnormal event detection at 150 FPS in MATLAB[C]// Proceedings of the 2013 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2013: 2720-2727. | 
| 54 | VIDEEZ Y. Shot of a seagull flying with blue sky on background in 4K[EB/OL]. [2024-07-10]. |
| 55 | WANG Y, LI H, ZHENG Y, et al. A directionally selective collision-sensing visual neural network based on fractional-order differential operator[J]. Frontiers in Neurorobotics, 2023, 17: No.1149675. | 