Evaluating cooperative-competitive dynamics with deep Q-learning (2023)

Abstract

We model cooperative-competitive social group dynamics with multi-agent environments, specializing in cases with a large number of agents drawn from only a few distinct types. The resulting multi-agent optimization problems are addressed with multi-agent reinforcement learning algorithms to obtain flexible and robust solutions. We analyze the effectiveness of centralized and decentralized training using three variants of deep Q-networks in these cooperative-competitive environments: first, decentralized training via independent learning with deep Q-networks; second, centralized training via monotonic value function factorization (QMIX); and third, multi-agent variational exploration (MAVEN). We test the algorithms in two simulated predator–prey multi-agent environments: adversary pursuit and simple tag. The experiments highlight the performance of the different deep Q-learning methods, and we conclude that decentralized training of deep Q-networks accumulates higher episode rewards during training and evaluation than the selected centralized learning approaches.
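The core of the decentralized approach the abstract favors is independent Q-learning: each agent maintains its own value estimates and updates them from its own reward signal, treating all other agents as part of the environment. The sketch below illustrates that update rule on a toy single-state predator–prey payoff; it is a minimal tabular stand-in (the paper itself uses deep Q-networks as function approximators), and all hyperparameters and reward values here are illustrative, not taken from the paper.

```python
import random

# Minimal tabular sketch of independent (decentralized) Q-learning.
# Each agent keeps its OWN Q-values and updates them only from its own
# reward, treating the other agent as part of the environment.
# Hyperparameters are illustrative, not the paper's settings.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [0, 1]  # e.g. 0 = stay/evade, 1 = pursue/tag (hypothetical toy actions)


def select_action(q_row, rng):
    """Epsilon-greedy over one agent's own Q-values."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_row[a])


def td_update(q_row, action, reward, next_q_row):
    """Standard Q-learning update using only this agent's own table."""
    target = reward + GAMMA * max(next_q_row.values())
    q_row[action] += ALPHA * (target - q_row[action])


def run(episodes=500, seed=0):
    """Train two independent learners on a one-state competitive payoff:
    the predator is rewarded when both agents choose action 1, and the
    prey receives the negated reward (zero-sum)."""
    rng = random.Random(seed)
    q_pred = {a: 0.0 for a in ACTIONS}
    q_prey = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a_pred = select_action(q_pred, rng)
        a_prey = select_action(q_prey, rng)
        r_pred = 1.0 if a_pred == 1 and a_prey == 1 else 0.0
        r_prey = -r_pred
        # Independent updates: neither agent sees the other's action or reward.
        td_update(q_pred, a_pred, r_pred, q_pred)
        td_update(q_prey, a_prey, r_prey, q_prey)
    return q_pred, q_prey
```

Because each agent's update ignores the others, the environment is non-stationary from any single agent's point of view; the centralized methods the paper compares against (QMIX, MAVEN) address this by conditioning a joint value function on all agents during training.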

Citation

@Article{Kopacz2023EvaluatingCD,
  author  = {Anikó Kopacz and Lehel Csató and Camelia Chira},
  journal = {Neurocomputing},
  title   = {Evaluating cooperative-competitive dynamics with deep Q-learning},
  year    = {2023}
}
