Journal of Computer Applications ›› 2017, Vol. 37 ›› Issue (12): 3493-3497.DOI: 10.11772/j.issn.1001-9081.2017.12.3493


Improved grey wolf optimizer algorithm using dynamic weighting and probabilistic disturbance strategy

CHEN Chuang1, Ryad CHELLALI1, XING Yin2   

  1. College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing Jiangsu 211816, China;
    2. College of Geomatics and Geoinformation, Guilin University of Technology, Guilin Guangxi 541004, China
  • Received:2017-06-23 Revised:2017-09-07 Online:2017-12-10 Published:2017-12-18


  • Corresponding author: Ryad CHELLALI
  • About the authors: CHEN Chuang (1992-), male, born in Suqian, Jiangsu, M.S. candidate; research interests: optimization theory, speech signal processing. Ryad CHELLALI (1964-), male, born in France, Ph.D., professor; research interests: robotics, virtual and augmented reality, machine learning. XING Yin (1992-), female, born in Nantong, Jiangsu, M.S. candidate; research interests: machine learning, global navigation satellite systems.

Abstract: The basic Grey Wolf Optimizer (GWO) algorithm tends to fall into local optima, which leads to low search precision. To address this problem, an Improved GWO (IGWO) algorithm was proposed. On the one hand, the position-vector update equation was dynamically adjusted by introducing a weighting factor derived from the coefficient vectors of the GWO algorithm. On the other hand, a probabilistic disturbance strategy was adopted to increase the population diversity of the algorithm in the later stage of iteration, thereby enhancing the algorithm's ability to jump out of local optima. Simulation experiments were carried out on multiple benchmark test functions. The experimental results show that, compared with the GWO algorithm, the Hybrid GWO (HGWO) algorithm, the Gravitational Search Algorithm (GSA) and the Differential Evolution (DE) algorithm, the proposed IGWO effectively escapes local convergence and has obvious advantages in search precision, algorithm stability and convergence speed.
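The two modifications described in the abstract can be illustrated with a minimal sketch of a GWO-style optimizer. Note that the paper's exact weighting and disturbance formulas are not given in this abstract, so the forms below (a weight proportional to the norm of each leader's coefficient vector C, and a Gaussian perturbation applied with fixed probability in the second half of the run) are plausible assumptions for illustration only, not the authors' method.

```python
import numpy as np

def igwo_sketch(f, dim, n_wolves=30, max_iter=200, lb=-100.0, ub=100.0,
                disturb_prob=0.1, seed=0):
    """Hypothetical IGWO-style optimizer: GWO update with a dynamic
    weighting of the three leader positions plus a probabilistic
    disturbance in the later iterations. Assumed forms, see lead-in."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))          # initial wolf pack
    fit = np.apply_along_axis(f, 1, X)
    for t in range(max_iter):
        order = np.argsort(fit)
        # alpha, beta, delta: three best wolves (copies, not views)
        leaders = [X[order[k]].copy() for k in range(3)]
        a = 2.0 * (1 - t / max_iter)                  # decreases 2 -> 0
        for i in range(n_wolves):
            attract, weights = [], []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                    # standard GWO coefficients
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                attract.append(leader - A * D)
                # ASSUMED weighting factor derived from coefficient vector C
                weights.append(np.linalg.norm(C))
            w = np.array(weights) / np.sum(weights)
            cand = w[0] * attract[0] + w[1] * attract[1] + w[2] * attract[2]
            # ASSUMED probabilistic disturbance in the later half of the run
            if t > max_iter // 2 and rng.random() < disturb_prob:
                cand += rng.normal(0.0, 0.01 * (ub - lb), dim)
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < fit[i]:                           # greedy replacement
                X[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return X[best], fit[best]

# usage: minimize the sphere benchmark function in 10 dimensions
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = igwo_sketch(sphere, dim=10)
```

The greedy replacement step keeps the best-so-far position of each wolf, so the disturbance can only help diversity without degrading the population's recorded fitness.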

Key words: meta-heuristic algorithm, Grey Wolf Optimizer (GWO) algorithm, function optimization, weighting factor, disturbance strategy

