Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (1): 202-208.DOI: 10.11772/j.issn.1001-9081.2021111886

Special Issue: Advanced Computing

• Advanced Computing •

Improved QMIX algorithm incorporating communication and exploration for multi-agent reinforcement learning

DENG Huiyi1,2, LI Yongzhen1, YIN Qiyue3   

  1. School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
    2.Department of Automation, Xiamen University, Xiamen Fujian 361002, China
    3.Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2021-11-08  Revised: 2022-05-26  Online: 2023-01-12
  • Contact: LI Yongzhen, born in 1983 in Beijing, Ph.D., senior experimentalist. His research interests include software theory and artificial intelligence. E-mail: liyongzhen@bucea.edu.cn
  • About author: DENG Huiyi, born in 1999 in Wuyishan, Fujian, M.S. candidate. His research interests include reinforcement learning and deep learning. YIN Qiyue, born in 1990 in Nanyang, Henan, Ph.D., research associate, CCF member. His research interests include machine learning and game AI.

  • Supported by:
    High-Level Talent Cross-Training "Practical Training Program" of Beijing Municipal Universities; 2022 Research Ability Improvement Program for Young Teachers of Beijing University of Civil Engineering and Architecture (X22022).

Abstract: Non-stationarity, which breaks the Markov assumption followed by most single-agent reinforcement learning algorithms, is one of the main challenges in multi-agent environments: during learning, each agent may be caught in an endless loop induced by the environment created by the other agents. To address this problem, the implementation of the Centralized Training with Decentralized Execution (CTDE) structure in reinforcement learning was studied, and the QMIX algorithm was improved from the two perspectives of inter-agent communication and exploration by introducing a Variance Based Control (VBC) communication model and a curiosity mechanism. The proposed algorithm was validated on micromanagement scenarios of the StarCraft Ⅱ Learning Environment (SC2LE). Experimental results show that, compared with the QMIX algorithm, the proposed algorithm achieves better performance and yields a training model with faster convergence.
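To make the two ingredients named in the abstract concrete, the sketch below is a minimal, assumption-laden PyTorch illustration (not the authors' implementation): a QMIX-style monotonic mixing network whose weights are produced by state-conditioned hypernetworks, plus an ICM-style forward-model curiosity bonus standing in for the exploration mechanism. All class names, layer sizes, and the choice of forward-model curiosity are illustrative assumptions only.

    # Minimal sketch (assumed, illustrative only): QMIX-style mixing + curiosity bonus.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class QMixer(nn.Module):
        """Mixes per-agent Q-values into Q_tot; abs() on the hypernetwork
        outputs keeps the mixing monotonic in each agent's Q-value, which is
        the core QMIX constraint."""
        def __init__(self, n_agents, state_dim, embed_dim=32):
            super().__init__()
            self.n_agents = n_agents
            self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
            self.hyper_b1 = nn.Linear(state_dim, embed_dim)
            self.hyper_w2 = nn.Linear(state_dim, embed_dim)
            self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                          nn.ReLU(),
                                          nn.Linear(embed_dim, 1))

        def forward(self, agent_qs, state):
            # agent_qs: (batch, n_agents), state: (batch, state_dim)
            bs = agent_qs.size(0)
            w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, -1)
            b1 = self.hyper_b1(state).view(bs, 1, -1)
            hidden = F.elu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
            w2 = torch.abs(self.hyper_w2(state)).view(bs, -1, 1)
            b2 = self.hyper_b2(state).view(bs, 1, 1)
            return (torch.bmm(hidden, w2) + b2).view(bs, 1)  # Q_tot

    class Curiosity(nn.Module):
        """ICM-style forward model (an assumed stand-in for the paper's
        curiosity mechanism): the prediction error on the next state is used
        as an intrinsic reward that encourages visiting poorly modelled states."""
        def __init__(self, state_dim, action_dim, hidden=64):
            super().__init__()
            self.forward_model = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim))

        def intrinsic_reward(self, state, action_onehot, next_state):
            pred = self.forward_model(torch.cat([state, action_onehot], dim=-1))
            return 0.5 * (pred - next_state).pow(2).mean(dim=-1)  # per-sample bonus

    if __name__ == "__main__":
        mixer = QMixer(n_agents=3, state_dim=48)
        curiosity = Curiosity(state_dim=48, action_dim=9)
        qs = torch.randn(8, 3)                     # per-agent chosen Q-values
        s, s_next = torch.randn(8, 48), torch.randn(8, 48)
        a = F.one_hot(torch.randint(0, 9, (8,)), 9).float()
        q_tot = mixer(qs, s)
        r_int = curiosity.intrinsic_reward(s, a, s_next)
        print(q_tot.shape, r_int.shape)            # (8, 1) and (8,)

In such a setup, the intrinsic reward would typically be scaled and added to the environment reward before the TD target is computed, so that exploration pressure decays as the forward model improves; the VBC-style communication component described in the abstract is not sketched here.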

Key words: multi-agent environment, deep reinforcement learning, Centralized Training with Decentralized Execution (CTDE) structure, curiosity mechanism, agent communication

