Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (8): 2537-2545. DOI: 10.11772/j.issn.1001-9081.2022070972
Dongying ZHU1, Yong ZHONG2, Guanci YANG1,3,4, Yang LI3
Received: 2022-07-06
Revised: 2022-09-19
Accepted: 2022-09-19
Online: 2023-01-15
Published: 2023-08-10
Contact: Guanci YANG
About author: ZHU Dongying, born in 1996 in Hangzhou, Zhejiang, is an M. S. candidate and CCF member. His research interests include intelligent autonomous systems.
Supported by:
Abstract: In dynamic environments, visual localization and mapping systems are disturbed by moving objects in the scene: localization and mapping errors increase while robustness decreases. Motion segmentation of the input images can significantly improve the performance of visual localization and mapping systems in dynamic environments. Dynamic objects can be divided into moving objects and potentially moving objects, and current methods for identifying them suffer from confusion about which subject is actually moving and from poor real-time performance. Therefore, motion segmentation strategies for visual localization and mapping systems in dynamic environments were reviewed. First, starting from the assumed preconditions on the scene, motion segmentation strategies were divided into methods based on the assumption that the image subject is static, methods based on prior semantic knowledge, and assumption-free multi-sensor fusion methods. Then, these three categories of methods were summarized, and the accuracy and real-time performance of each were analyzed. Finally, in view of the difficulty of balancing the accuracy and real-time performance of motion segmentation strategies for visual localization and mapping in dynamic environments, the development trends of motion segmentation methods in dynamic environments were discussed and forecast.
Dongying ZHU, Yong ZHONG, Guanci YANG, Yang LI. Research progress on motion segmentation of visual localization and mapping in dynamic environment[J]. Journal of Computer Applications, 2023, 43(8): 2537-2545.
Tab. 1 Commonly used visual SLAM datasets in dynamic environments

| Dataset | Scene | Acquisition platform | Data types | Dynamic objects |
| --- | --- | --- | --- | --- |
| TUM[14] | Indoor | Handheld/robot | RGB-D images | People |
| Bonn[15] | Indoor | Handheld | RGB-D images | People, objects |
| OpenLORIS[16] | Indoor | Robot | RGB-D images, stereo images, 2D/3D LiDAR point clouds, IMU, wheel odometry | People, objects |
| ADVIO[17] | Indoor/outdoor | Handheld | Monocular images, IMU | People, vehicles, elevators |
| KITTI[18] | Outdoor roads | Car | Stereo images, LiDAR point clouds, IMU | Pedestrians, cars |
| Oxford[19] | Outdoor roads | Car | Monocular/stereo images, LiDAR point clouds, GPS/INS | Pedestrians, cars |
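The TUM and Bonn sequences in Tab. 1 provide the RGB and depth streams as separate timestamped file lists, so any evaluation pipeline must first pair frames by nearest timestamp. The following is a minimal sketch of that greedy association step, written independently of the benchmark's official tooling; the function name and the 0.02 s tolerance are illustrative choices, not part of the datasets' specification.

```python
def associate(rgb, depth, max_diff=0.02):
    """Greedily pair two {timestamp: filename} streams by nearest timestamp."""
    # All cross-stream pairs within tolerance, sorted by time difference.
    candidates = sorted(
        (abs(ta - tb), ta, tb)
        for ta in rgb for tb in depth
        if abs(ta - tb) < max_diff
    )
    matched, used_rgb, used_depth = [], set(), set()
    for _, ta, tb in candidates:
        # Take each timestamp at most once, best (smallest) difference first.
        if ta not in used_rgb and tb not in used_depth:
            matched.append((ta, tb))
            used_rgb.add(ta)
            used_depth.add(tb)
    return sorted(matched)
```

For example, `associate({0.00: 'rgb/0.png', 0.05: 'rgb/1.png'}, {0.01: 'depth/0.png', 0.06: 'depth/1.png'})` pairs the frames 0.01 s apart and leaves nothing unmatched.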
Tab. 2 Performance comparison of three types of motion segmentation methods

| Method category | Environmental assumption | Sensor installation requirement | Segmentation accuracy | Real-time performance | Identifies potentially moving objects |
| --- | --- | --- | --- | --- | --- |
| Motion segmentation based on the static-image-subject assumption | Scene dominated by static content | Low | Low | Medium | No |
| Motion segmentation based on prior semantic information | Prior semantic annotations | Low | High | Poor | Yes |
| Motion segmentation based on multi-sensor information compensation | None | High | Medium | Good | No |
Tab. 3 Advantages and disadvantages of motion segmentation methods based on the static-image-subject assumption

| Category | Advantages | Disadvantages |
| --- | --- | --- |
| Geometric constraints based on camera motion | Good accuracy | Poor accuracy in highly dynamic scenes |
| Direct geometric constraints | Low computational cost | Relatively poor segmentation accuracy |
| Static-background construction | Good accuracy in highly dynamic scenes | Computationally complex |
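The geometric-constraint family in Tab. 3 typically scores each feature match by its distance to the epipolar line induced by the estimated camera motion, and treats matches with large residuals as dynamic. The following is a minimal numpy sketch of that test; the fundamental matrix `F` is assumed to be given (e.g. from a RANSAC fit on the match set), and the one-pixel threshold is an illustrative choice.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Distance of each matched point in image 2 to its epipolar line l = F x1."""
    n = pts1.shape[0]
    x1 = np.hstack([pts1, np.ones((n, 1))])   # homogeneous points, frame 1
    x2 = np.hstack([pts2, np.ones((n, 1))])   # homogeneous points, frame 2
    lines = x1 @ F.T                          # one epipolar line (a, b, c) per row
    num = np.abs(np.sum(x2 * lines, axis=1))  # |x2^T F x1|
    den = np.hypot(lines[:, 0], lines[:, 1])  # line normalisation sqrt(a^2 + b^2)
    return num / den

def segment_dynamic(F, pts1, pts2, thresh=1.0):
    """True where the epipolar constraint is violated (match likely dynamic)."""
    return epipolar_residuals(F, pts1, pts2) > thresh
```

Static points satisfy the constraint up to noise, so their residuals stay near zero; independently moving points generally leave the epipolar line, which is what the threshold detects. As the table notes, this breaks down in highly dynamic scenes where moving points dominate the estimate of `F` itself.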
Tab. 4 Segmentation methods based on the static-image-subject assumption

| Method | Absolute trajectory RMSE/m | Camera type | Hardware platform | Base framework | Per-frame tracking time/ms |
| --- | --- | --- | --- | --- | --- |
| Ref. [ | 0.0371 | RGB-D | E3-1230+12 GB | Semi-direct SLAM | 15 |
| Ref. [ | 0.0932 | RGB-D | i3+4 GB | DVO-SLAM | 500 |
| Ref. [ | 0.0354 | RGB-D | i5-4200H+8 GB | ORB-SLAM2 | 37 |
| Ref. [ | 0.2433 | RGB-D | i5+4 GB+GPU | Line-feature odometry | 50 |
| Ref. [ | 0.3103* | Stereo | i7-4720HQ+8 GB | ORB-SLAM | 114 |
| Ref. [ | 0.0799 | RGB-D | i7-6700+20 GB | ORB-SLAM2 | — |
| Ref. [ | 0.0360 | RGB-D | i7+16 GB | ORB-SLAM2 | 57 |
| Ref. [ | 0.1608 | RGB-D | i5-3470+8 GB | ORB-SLAM2 | 30 |
| Ref. [ | 0.5186 | RGB-D | i7+8 GB | DVO-SLAM | 43 |
| Ref. [ | 0.0657 | RGB-D | i7+16 GB | DVO-SLAM | 7000 |
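The accuracy column in Tab. 4 is the root-mean-square of the absolute trajectory error (ATE): the RMSE of the translational differences between the estimated trajectory and ground truth after the two have been aligned. A minimal sketch of the metric on already-aligned N×3 position arrays (the alignment step itself, e.g. a Umeyama similarity fit, is omitted):

```python
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """RMSE of translational errors between two aligned N x 3 trajectories."""
    diff = np.asarray(gt_xyz, float) - np.asarray(est_xyz, float)
    # Squared Euclidean error per pose, averaged over the trajectory, then rooted.
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```

Because the per-pose errors are squared before averaging, a few large excursions (e.g. tracking loss while a dynamic object crosses the view) dominate the score, which is why this metric separates the methods in the table so sharply.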
Tab. 5 Dynamic SLAM motion segmentation methods based on semantic information

| Method | Semantic frame selection | Motion segmentation method | Camera type | Year | Hardware platform | Per-frame tracking time/ms |
| --- | --- | --- | --- | --- | --- | --- |
| DS-SLAM[40] | Every frame | SegNet | RGB-D | 2018 | i7+P4000 | 59 |
| DynaSLAM[41] | Every frame | Mask R-CNN + geometric constraints | Monocular, stereo, RGB-D | 2018 | Tesla M40 | 700 |
| Dynamic-SLAM[42] | Every frame | SSD | Monocular | 2019 | i5-7300HQ+GTX1050Ti | 100 |
| SDF-SLAM[36] | Initialization only | SegNet + static background | RGB-D | 2020 | i9-9940X+TITAN RTX | — |
| DDL-SLAM[48] | Every frame | DUNet | RGB-D | 2020 | i7+NVIDIA TITAN | Non-real-time |
| Fan et al.[44] | Every frame | BlitzNet | RGB-D | 2020 | — | — |
| Miao et al.[47] | Every frame | PWC-Net + DeepLabv3 | RGB-D | 2021 | — | — |
| RS-SLAM[49] | Every frame | PSPNet | RGB-D | 2021 | i5-7500+GTX 1060 | 180 |
| YOLO-SLAM[50] | Every frame | YOLO | RGB-D | 2021 | i5-4288U | 696 |
| OC-SLAM[51] | Every frame | YOLO-Fastest + geometric constraints | RGB-D | 2021 | — | — |
| Detect-SLAM[52] | ORB-SLAM keyframes | SSD + motion probability propagation | RGB-D | 2018 | i7-4700+GTX960M | Real-time |
| Zhang et al.[53] | ORB-SLAM keyframes | Tiny YOLO | RGB-D | 2018 | GT730 | 20 (YOLO only) |
| Wen et al.[54] | Every 5 ORB-SLAM keyframes | Mask R-CNN + three error terms | RGB-D | 2021 | — | Non-real-time |
| DE-SLAM[55] | ORB-SLAM keyframes | MobileNet V2 | RGB-D | 2022 | i5 | 37 |
| DGS-SLAM[57] | Independent semantic-frame selection strategy | YOLACT++ | RGB-D | 2022 | AMD 5900HX+GTX3070 | 38 |
| RDS-SLAM[58] | First/last-keyframe priority strategy | Mask R-CNN or SegNet + motion probability | RGB-D | 2021 | RTX 2080Ti | 30 |
| RDMO-SLAM[60] | First/last-keyframe priority strategy | Mask R-CNN + optical flow | RGB-D | 2021 | RTX 2080Ti | 22~35 |
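Whatever network they run, most semantic pipelines in Tab. 5 ultimately use the per-pixel mask to discard feature points that fall on (potentially) dynamic objects before pose estimation. The following is a minimal sketch of that filtering step, assuming a boolean H×W mask (True on dynamic pixels) and keypoints given as (x, y) pixel pairs; the function name is illustrative, not taken from any of the listed systems.

```python
import numpy as np

def filter_keypoints(keypoints, dynamic_mask):
    """Drop keypoints lying on pixels the segmentation marks as dynamic."""
    kp = np.asarray(keypoints, dtype=int)     # (x, y) pixel coordinates
    keep = ~dynamic_mask[kp[:, 1], kp[:, 0]]  # mask is indexed [row=y, col=x]
    return kp[keep]
```

Discarding points this way removes both moving and merely movable objects (e.g. a parked car), which is why the semantic family in Tab. 2 is the only one credited with handling potentially moving objects.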
Tab. 6 Dynamic SLAM motion segmentation methods based on multi-sensor information compensation

| Method | Absolute trajectory RMSE/m | Sensors | Segmentation method | Hardware platform | Per-frame tracking time/ms |
| --- | --- | --- | --- | --- | --- |
| Ref. [ | 1.8140* | Monocular + IMU | Motion segmentation after camera pose estimation | i7-7700HQ+16 GB | — |
| DRE-SLAM[66] | 0.0171 | RGB-D + wheel odometry | Motion segmentation after camera pose estimation | i7-8700+16 GB | 33±12 |
| AcousticFusion[68] | 0.1276 | RGB-D + microphone array | Dynamic regions determined directly from sound source direction | i7-10875H+64 GB | 71 |
1. SUN C Y, WU G Z, WANG Z H, et al. On challenges in automation science and technology[J]. Acta Automatica Sinica, 2021, 47(2): 464-474. 10.16383/j.aas.c200904
2. CHEN H Y, AI H, WANG X, et al. Analysis and perception of social signals in social transportation[J]. Acta Automatica Sinica, 2021, 47(6): 1256-1272. 10.16383/j.aas.c200055
3. FUENTES-PACHECO J, RUIZ-ASCENCIO J, RENDÓN-MANCHA J M. Visual simultaneous localization and mapping: a survey[J]. Artificial Intelligence Review, 2015, 43(1): 55-81. 10.1007/s10462-012-9365-8
4. YANG G C, WANG X Y, JIANG Y W, et al. Review of SLAM technologies based on visual and inertial sensor fusion[J]. Journal of Guizhou University (Natural Sciences), 2020, 37(6): 1-12. 10.15958/j.cnki.gdxbzrb.2020.06.01
5. YANG G C, CHEN Z J, LI Y, et al. Rapid relocation method for mobile robot based on improved ORB-SLAM2 algorithm[J]. Remote Sensing, 2019, 11(2): No.149. 10.3390/rs11020149
6. LI Y T, MU R J, SHAN Y Z. A survey of visual SLAM in unmanned systems[J]. Control and Decision, 2021, 36(3): 513-522. 10.13195/j.kzyjc.2019.1149
7. ELVIRA R, TARDÓS J D, MONTIEL J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[C]// Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2019: 6253-6259. 10.1109/iros40897.2019.8967572
8. CAO F K, ZHUANG Y, YAN F, et al. Long-term autonomous environment adaptation of mobile robots: state-of-the-art methods and prospects[J]. Acta Automatica Sinica, 2020, 46(2): 205-221.
9. XU X S, AN Z S. Vision SLAM method in indoor dynamic scene based on deep learning[J]. Journal of Chinese Inertial Technology, 2020, 28(4): 480-486. 10.13695/j.cnki.12-1222/o3.2020.04.010
10. LIU Q, DUAN F H, SANG Y, et al. A survey of loop-closure detection method of visual SLAM in complex environments[J]. Robot, 2019, 41(1): 112-123. 10.13973/j.cnki.robot.180004
11. WANG Z L, LI W Y. Moving objects tracking and SLAM method based on point cloud segmentation[J]. Robot, 2021, 43(2): 177-192.
12. WANG K S, YAO X F, HUANG Y, et al. Review of visual SLAM in dynamic environment[J]. Robot, 2021, 43(6): 715-732. 10.13973/j.cnki.robot.200468
13. LAN F C, LI J W, CHEN J Q. DG-SLAM algorithm for dynamic scene compound deep learning and parallel computing[J]. Journal of Jilin University (Engineering and Technology Edition), 2021, 51(4): 1437-1446.
14. STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]// Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2012: 573-580. 10.1109/iros.2012.6385773
15. PALAZZOLO E, BEHLEY J, LOTTES P, et al. ReFusion: 3D reconstruction in dynamic environments for RGB-D cameras exploiting residuals[C]// Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2019: 7855-7862. 10.1109/iros40897.2019.8967590
16. SHI X S, LI D J, ZHAO P P, et al. Are we ready for service robots? The OpenLORIS-Scene datasets for lifelong SLAM[C]// Proceedings of the 2020 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2020: 3139-3145. 10.1109/icra40945.2020.9196638
17. CORTÉS S, SOLIN A, RAHTU E, et al. ADVIO: an authentic dataset for visual-inertial odometry[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11214. Cham: Springer, 2018: 425-440.
18. GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]// Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2012: 3354-3361. 10.1109/cvpr.2012.6248074
19. MADDERN W, PASCOE G, LINEGAR C, et al. 1 year, 1000 km: the Oxford RobotCar dataset[J]. The International Journal of Robotics Research, 2017, 36(1): 3-15. 10.1177/0278364916679498
20. LI S X, LI G Y, WANG L, et al. LiDAR/IMU tightly coupled real-time localization method[J]. Acta Automatica Sinica, 2021, 47(6): 1377-1389. 10.16383/j.aas.c190424
21. YU Y F, ZHAO H J. Off-road localization using monocular camera and nodding LiDAR[J]. Acta Automatica Sinica, 2019, 45(9): 1791-1798.
22. WANG S, ZHU H J, LI H P, et al. Relative pose calibration between a range sensor and a camera using two coplanar circles[J]. Acta Automatica Sinica, 2020, 46(6): 1154-1165.
23. AI Q L, LIU G J, XU Q N. An RGB-D SLAM algorithm for robot based on the improved geometric and motion constraints in dynamic environment[J]. Robot, 2021, 43(2): 167-176.
24. GAO C Q, ZHANG Y Z, WANG X Z, et al. Semi-direct RGB-D SLAM algorithm for dynamic indoor environments[J]. Robot, 2019, 41(3): 372-383.
25. SUN Y X, LIU M, MENG M Q H. Improving RGB-D SLAM in dynamic environments: a motion removal approach[J]. Robotics and Autonomous Systems, 2017, 89: 110-122. 10.1016/j.robot.2016.11.012
26. ZHANG H J, FANG Z J, YANG G L. RGB-D visual odometry in dynamic environments using line features[J]. Robot, 2019, 41(1): 75-82.
27. WEI T, LI X. Binocular vision SLAM algorithm based on dynamic region elimination in dynamic environment[J]. Robot, 2020, 42(3): 336-345.
28. YANG S Q, FAN G H, BAI L L, et al. Geometric constraint-based visual SLAM under dynamic indoor environment[J]. Computer Engineering and Applications, 2021, 57(16): 203-212.
29. WEI H Y, ZHANG T, ZHANG L. GMSK-SLAM: a new RGB-D SLAM method with dynamic areas detection towards dynamic environments[J]. Multimedia Tools and Applications, 2021, 80(21/22/23): 31729-31751. 10.1007/s11042-021-11168-5
30. DAI W C, ZHANG Y, LI P, et al. RGB-D SLAM in dynamic environments using point correlations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 373-389. 10.1109/tpami.2020.3010942
31. KIM D H, KIM J H. Effective background model-based RGB-D dense visual odometry in a dynamic environment[J]. IEEE Transactions on Robotics, 2016, 32(6): 1565-1573. 10.1109/tro.2016.2609395
32. SUN Y X, LIU M, MENG M Q H. Motion removal for reliable RGB-D SLAM in dynamic environments[J]. Robotics and Autonomous Systems, 2018, 108: 115-128. 10.1016/j.robot.2018.07.002
33. ZHANG Y, SUN S Y, HU Y J, et al. Random sample consensus algorithm based on feature distance and inliers[J]. Journal of Electronics and Information Technology, 2018, 40(4): 928-935. 10.11999/JEIT170703
34. ALCANTARILLA P F, YEBES J J, ALMAZÁN J, et al. On combining visual SLAM and dense scene flow to increase the robustness of localization and mapping in dynamic environments[C]// Proceedings of the 2012 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2012: 1290-1297. 10.1109/icra.2012.6224690
35. BARBER C B, DOBKIN D P, HUHDANPAA H. The Quickhull algorithm for convex hulls[J]. ACM Transactions on Mathematical Software, 1996, 22(4): 469-483. 10.1145/235815.235821
36. CUI L Y, MA C W. SDF-SLAM: semantic depth filter SLAM for dynamic environments[J]. IEEE Access, 2020, 8: 95301-95311. 10.1109/access.2020.2994348
37. REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788. 10.1109/cvpr.2016.91
38. BADRINARAYANAN V, KENDALL A, CIPOLLA R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495. 10.1109/tpami.2016.2644615
39. HE K M, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2980-2988. 10.1109/iccv.2017.322
40. YU C, LIU Z X, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]// Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2018: 1168-1174. 10.1109/iros.2018.8593691
41. BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083. 10.1109/lra.2018.2860039
42. XIAO L H, WANG J G, QIU X S, et al. Dynamic-SLAM: semantic monocular visual localization and mapping based on deep learning in dynamic environment[J]. Robotics and Autonomous Systems, 2019, 117: 1-16. 10.1016/j.robot.2019.03.012
43. SUN L Y, KANEHIRO F, KUMAGAI I, et al. Multi-purpose SLAM framework for dynamic environment[C]// Proceedings of the 2020 IEEE/SICE International Symposium on System Integration. Piscataway: IEEE, 2020: 519-524. 10.1109/sii46433.2020.9026299
44. FAN Y C, ZHANG Q C, LIU S F, et al. Semantic SLAM with more accurate point cloud map in dynamic environments[J]. IEEE Access, 2020, 8: 112237-112252. 10.1109/access.2020.3003160
45. ZHAO X, ZUO T, HU X Y. OFM-SLAM: a visual semantic SLAM for dynamic indoor environments[J]. Mathematical Problems in Engineering, 2021, 2021: No.5538840. 10.1155/2021/5538840
46. FAN Y C, ZHANG Q C, TANG Y L, et al. Blitz-SLAM: a semantic SLAM in dynamic environments[J]. Pattern Recognition, 2022, 121: No.108225. 10.1016/j.patcog.2021.108225
47. MIAO S, LIU X X, WEI D Z, et al. A visual SLAM robust against dynamic objects based on hybrid semantic-geometry information[J]. ISPRS International Journal of Geo-Information, 2021, 10(10): No.673. 10.3390/ijgi10100673
48. AI Y B, RUI T, LU M, et al. DDL-SLAM: a robust RGB-D SLAM in dynamic environments combined with deep learning[J]. IEEE Access, 2020, 8: 162335-162342. 10.1109/access.2020.2991441
49. RAN T, YUAN L, ZHANG J B, et al. RS-SLAM: a robust semantic SLAM in dynamic environments based on RGB-D sensor[J]. IEEE Sensors Journal, 2021, 21(18): 20657-20664. 10.1109/jsen.2021.3099511
50. WU W X, GUO L, GAO H L, et al. YOLO-SLAM: a semantic SLAM system towards dynamic environment with geometric constraint[J]. Neural Computing and Applications, 2022, 34(8): 6011-6026. 10.1007/s00521-021-06764-3
51. WU Z Y, DENG X Y, LI S M, et al. OC-SLAM: steadily tracking and mapping in dynamic environments[J]. Frontiers in Energy Research, 2021, 9: No.803631. 10.3389/fenrg.2021.803631
52. ZHONG F W, WANG S, ZHANG Z Q, et al. Detect-SLAM: making object detection and SLAM mutually beneficial[C]// Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2018: 1001-1010. 10.1109/wacv.2018.00115
53. ZHANG L, WEI L Q, SHEN P Y, et al. Semantic SLAM based on object detection and improved Octomap[J]. IEEE Access, 2018, 6: 75545-75559. 10.1109/access.2018.2873617
54. WEN S H, LI P J, ZHAO Y J, et al. Semantic visual SLAM in dynamic environment[J]. Autonomous Robots, 2021, 45(4): 493-504.
55. XING Z W, ZHU X R, DONG D C. DE-SLAM: SLAM for highly dynamic environment[J]. Journal of Field Robotics, 2022, 39(5): 528-542. 10.1002/rob.22062
56. SANDLER M, HOWARD A, ZHU M L, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4510-4520. 10.1109/cvpr.2018.00474
57. YAN L, HU X, ZHAO L, et al. DGS-SLAM: a fast and robust RGBD SLAM in dynamic environments combined by geometric and semantic information[J]. Remote Sensing, 2022, 14(3): 795-819. 10.3390/rs14030795
58. LIU Y B, MIURA J. RDS-SLAM: real-time dynamic SLAM using semantic segmentation methods[J]. IEEE Access, 2021, 9: 23772-23785. 10.1109/access.2021.3050617
59. CAMPOS C, ELVIRA R, GÓMEZ RODRÍGUEZ J J, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial, and multimap SLAM[J]. IEEE Transactions on Robotics, 2021, 37(6): 1874-1890. 10.1109/tro.2021.3075644
60. LIU Y B, MIURA J. RDMO-SLAM: real-time visual SLAM for dynamic environments using semantic label prediction with optical flow[J]. IEEE Access, 2021, 9: 106981-106997. 10.1109/access.2021.3100426
61. FU D, XIA H, QIAO Y Y. Monocular visual-inertial navigation for dynamic environment[J]. Remote Sensing, 2021, 13(9): No.1610. 10.3390/rs13091610
62. YAO E L, ZHANG H X, SONG H T, et al. Fast and robust visual odometry with a low-cost IMU in dynamic environments[J]. Industrial Robot, 2019, 46(6): 882-894. 10.1108/ir-01-2019-0001
63. KIM D H, HAN S B, KIM J H. Visual odometry algorithm using an RGB-D sensor and IMU in a highly dynamic environment[M]// KIM J H, YANG W M, JO J, et al. Robot Intelligence Technology and Applications 3: Results from the 3rd International Conference on Robot Intelligence Technology and Applications, AISC 345. Cham: Springer, 2015: 11-26. 10.1007/978-3-319-16841-8_2
64. BLOESCH M, OMARI S, HUTTER M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]// Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2015: 298-304. 10.1109/iros.2015.7353389
65. NAM D V, GON-WOO K. Robust stereo visual inertial navigation system based on multi-stage outlier removal in dynamic environments[J]. Sensors, 2020, 20(10): No.2922. 10.3390/s20102922
66. YANG D S, BI S S, WANG W, et al. DRE-SLAM: dynamic RGB-D encoder SLAM for a differential-drive robot[J]. Remote Sensing, 2019, 11(4): No.380. 10.3390/rs11040380
67. GE Z H, WANG J K, WANG P, et al. Detection and removal of moving objects in indoor environment SLAM[C]// Proceedings of the 18th CCSSTA. Hefei: University of Science and Technology of China Press, 2017: 259-265.
68. ZHANG T W, ZHANG H Y, LI X F, et al. AcousticFusion: fusing sound source localization to visual SLAM in dynamic environments[C]// Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2021: 6868-6875. 10.1109/iros51168.2021.9636585
69. YUAN Z A, XU K, ZHOU X Y, et al. SVG-Loop: semantic-visual-geometric information-based loop closure detection[J]. Remote Sensing, 2021, 13(17): No.3520. 10.3390/rs13173520
70. HU M Y, LI S, WU J Y, et al. Loop closure detection for visual SLAM fusing semantic information[C]// Proceedings of the 38th Chinese Control Conference. Piscataway: IEEE, 2019: 4136-4141. 10.23919/chicc.2019.8866283