Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (2): 432-438. DOI: 10.11772/j.issn.1001-9081.2023020193

• Artificial Intelligence •


Multi-robot reinforcement learning path planning method based on request-response communication mechanism and local attention mechanism

Fuqin DENG1,2,3, Huifeng GUAN1, Chaoen TAN1, Lanhui FU1, Hongmin WANG1, Tinlun LAM2, Jianmin ZHANG1()   

  1. School of Intelligent Manufacturing, Wuyi University, Jiangmen, Guangdong 529000, China
     2. Shenzhen Institute of Artificial Intelligence and Robotics for Society, The Chinese University of Hong Kong (Shenzhen), Shenzhen, Guangdong 518000, China
     3. Shenzhen 3irobotix Company Limited, Shenzhen, Guangdong 518000, China
  • Received:2023-02-28 Revised:2023-05-26 Accepted:2023-05-29 Online:2024-02-22 Published:2024-02-10
  • Contact: Jianmin ZHANG
  • About authors: DENG Fuqin, born in 1982 in Chenzhou, Hunan, Ph. D., senior engineer. His research interests include machine learning, mobile robotic systems, multi-robot systems.
    GUAN Huifeng, born in 1998 in Shaoguan, Guangdong, M. S. candidate. His research interests include multi-robot path planning.
    TAN Chaoen, born in 1999 in Shunde, Guangdong, M. S. candidate. His research interests include multi-robot path planning.
    FU Lanhui, born in 1987 in Xinxiang, Henan, Ph. D., lecturer. Her research interests include machine learning, image information processing.
    WANG Hongmin, born in 1981 in Chengde, Hebei, Ph. D., associate professor. His research interests include robotics, bionic robots, robot motion control and teleoperation.
    LAM Tinlun, born in 1984 in Hong Kong, Ph. D., assistant professor, CCF member. His research interests include modularized self-reconfigurable robots, multi-robot systems.
  • Supported by:
National Key Research and Development Program of China (2020YFB1313300); Shenzhen Science and Technology Plan Project (KQTD2016113010470345); Shenzhen Institute of Artificial Intelligence and Robotics for Society Exploratory Research Project (AC01202101103); Wuyi University Horizontal Project (33520098)


Abstract:

To reduce the blocking rate of multi-robot path planning in dynamic environments, a Distributed Communication and local Attention based Multi-Agent Path Finding (DCAMAPF) method was proposed on the basis of the Actor-Critic deep reinforcement learning framework, using a request-response communication mechanism and a local attention mechanism. In the Actor network, based on the request-response communication mechanism, each robot requested the local observation and action information of the other robots in its field of view, and then planned a coordinated action strategy. In the Critic network, based on the local attention mechanism, each robot dynamically allocated attention weights to the local observation and action information of the other robots that had successfully responded within its field of view. Experimental results show that, compared with the traditional dynamic path planning method D* Lite, the latest distributed reinforcement learning method MAPPER, and the latest centralized reinforcement learning method AB-MAPPER (Attention and BicNet based MAPPER), DCAMAPF reduced the mean blocking rate by approximately 6.91, 4.97, and 3.56 percentage points, respectively, in a discrete initialization environment; in a centralized initialization environment, it avoided blocking more efficiently, reducing the mean blocking rate by approximately 15.86, 11.71, and 5.54 percentage points, respectively, while also reducing the occupied computing cache. Therefore, the proposed method ensures the efficiency of path planning and is applicable to multi-robot path planning tasks in different dynamic environments.
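As a rough illustration only (not the paper's implementation), the Critic-side local attention described above can be sketched as scaled dot-product attention: the ego robot's encoded observation-action feature forms a query, and the features of the neighbors that successfully responded form the keys and values, so that one attention weight is dynamically assigned to each responder. All function names, weight matrices, and shapes below are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

def local_attention_context(ego_feat, responder_feats, W_q, W_k, W_v):
    """Aggregate responding neighbors' observation-action features.

    ego_feat:        (d,)   ego robot's encoded observation-action feature
    responder_feats: (n, d) features of the n neighbors that responded
    W_q, W_k: (d, d_k); W_v: (d, d_v)  projection matrices (hypothetical)

    Returns the attention-weighted context vector (d_v,) and the
    per-neighbor weights (n,), which sum to 1.
    """
    q = ego_feat @ W_q                     # query from the ego robot
    k = responder_feats @ W_k              # keys from responders
    v = responder_feats @ W_v              # values from responders
    scores = k @ q / np.sqrt(k.shape[-1])  # scaled dot-product scores, (n,)
    weights = softmax(scores)              # one weight per responder
    return weights @ v, weights
```

In this sketch, robots outside the field of view (or that failed to respond) are simply absent from `responder_feats`, so they receive no weight; the returned context vector would then be concatenated with the ego feature before being fed to the Critic's value head.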

Key words: multi-agent path finding, deep reinforcement learning, attention mechanism, communication, dynamic environment

CLC number: