TY - JOUR
T1 - A framework for co-evolutionary algorithm using Q-learning with meme
AU - Jiao, Keming
AU - Chen, Jie
AU - Xin, Bin
AU - Li, Li
AU - Zhao, Zhixin
AU - Zheng, Yifan
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/9/1
Y1 - 2023/9/1
N2 - A large number of metaheuristic algorithms have been proposed over the last three decades, but no metaheuristic algorithm is superior to all others on every optimization problem. Selecting an appropriate metaheuristic algorithm for a given problem is laborious, especially for non-experts. In this paper, a framework for a co-evolutionary algorithm using Q-learning with memes, called QLMA, is proposed. The solution generation method of a metaheuristic algorithm is termed a meme, which is also viewed as an action for the Q-learning agent, and multiple memes form the action set. In the initialization stage, the tent map and opposition-based learning are employed to obtain the initial population. In the updating stage, a new population is generated by an action chosen from the action set by the Q-learning agent; a disruption operation is then applied to avoid excessive aggregation of solutions around the current global best solution and to improve the balance between exploration and exploitation. QLMA is compared with sixteen algorithms on 23 classical benchmark functions and the CEC 2017 and CEC 2019 benchmark functions. The experimental results demonstrate that QLMA outperforms the peer algorithms and achieves a good balance between exploration and exploitation.
AB - A large number of metaheuristic algorithms have been proposed over the last three decades, but no metaheuristic algorithm is superior to all others on every optimization problem. Selecting an appropriate metaheuristic algorithm for a given problem is laborious, especially for non-experts. In this paper, a framework for a co-evolutionary algorithm using Q-learning with memes, called QLMA, is proposed. The solution generation method of a metaheuristic algorithm is termed a meme, which is also viewed as an action for the Q-learning agent, and multiple memes form the action set. In the initialization stage, the tent map and opposition-based learning are employed to obtain the initial population. In the updating stage, a new population is generated by an action chosen from the action set by the Q-learning agent; a disruption operation is then applied to avoid excessive aggregation of solutions around the current global best solution and to improve the balance between exploration and exploitation. QLMA is compared with sixteen algorithms on 23 classical benchmark functions and the CEC 2017 and CEC 2019 benchmark functions. The experimental results demonstrate that QLMA outperforms the peer algorithms and achieves a good balance between exploration and exploitation.
KW - Chaos
KW - Disruption
KW - Metaheuristic algorithm
KW - Opposition-based learning
KW - Q-learning
UR - http://www.scopus.com/inward/record.url?scp=85153564749&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2023.120186
DO - 10.1016/j.eswa.2023.120186
M3 - Article
AN - SCOPUS:85153564749
SN - 0957-4174
VL - 225
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 120186
ER -