Abstract: The grey wolf optimizer (GWO) is a global search algorithm based on the hunting behavior of grey wolves. However, the traditional GWO is prone to falling into local optima, which degrades the performance of the algorithm. To address this problem, an equalized grey wolf optimizer with refraction opposite learning (REGWO) is proposed in this study. In REGWO, the low population diversity of GWO in late iterations is alleviated by refraction-based opposition learning. In addition, an equilibrium pool strategy reduces the likelihood of the wolves converging to local extrema. To investigate the effectiveness of REGWO, it is evaluated on 21 widely used benchmark functions and the IEEE CEC 2019 test functions. Experimental results show that REGWO performs better than the other competitors on most benchmarks.
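As a rough illustration of the refraction-based opposition idea mentioned above, the sketch below uses the refraction-learning formula commonly found in the opposition-based-learning literature, where a candidate solution is mapped to an "opposite" point around the midpoint of the search bounds with a refraction scale factor k. The function name, the default value of k, and the exact formula are assumptions for illustration; the abstract does not specify the operator used in REGWO.

```python
import numpy as np

def refraction_opposition(position, lower, upper, k=1000.0):
    """Hypothetical refraction-based opposition operator (illustrative only).

    Maps a candidate solution to an opposite point around the midpoint of
    the search bounds, scaled by the refraction factor k. This follows the
    refraction-learning formula x' = (lb+ub)/2 + (lb+ub)/(2k) - x/k often
    used in the literature; REGWO's exact operator may differ.
    """
    mid = (lower + upper) / 2.0
    return mid + mid / k - position / k

# Illustrative use: generate the refracted opposite of one wolf's position
rng = np.random.default_rng(0)
lower = np.full(5, -100.0)
upper = np.full(5, 100.0)
wolf = rng.uniform(lower, upper)
opposite = refraction_opposition(wolf, lower, upper)
print("wolf:    ", wolf)
print("opposite:", opposite)
```

In a typical opposition-learning scheme, the original and opposite candidates are both evaluated and the fitter one is retained, which is one plausible way such an operator could counteract the late-iteration diversity loss described in the abstract.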