Abstract
Cooperative-competitive social group dynamics can be modelled as multi-agent environments containing a large number of agents drawn from a few distinct agent types. Even the simplest games modelling social interactions are suitable for analyzing emergent group dynamics. In many cases, the underlying computational problem is NP-hard, so various machine learning techniques are applied to accelerate the optimization process. Multi-agent reinforcement learning provides an effective framework for training adaptive autonomous agents. We analyze the performance of centralized and decentralized training with Deep Q-Networks on cooperative-competitive environments from the MAgent library. Our experiments demonstrate that sensible policies can be constructed with both centralized and decentralized reinforcement learning methods, as measured by the mean rewards accumulated during training episodes.
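The decentralized setting described above can be illustrated with a minimal sketch of independent Q-learning on a toy cooperative matrix game. This is an illustrative assumption, not the paper's MAgent experiments: each agent keeps its own Q-values and applies the tabular update rule that a Deep Q-Network approximates with a neural network, and the agents are rewarded only when their actions coincide.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): two decentralized agents
# repeatedly play a one-step coordination game and learn independently.
rng = np.random.default_rng(0)
n_actions = 2
alpha, eps = 0.1, 0.1  # learning rate and epsilon-greedy exploration

def reward(a0, a1):
    # Cooperative payoff: both agents are rewarded iff their actions match.
    return 1.0 if a0 == a1 else 0.0

# One Q-value vector per agent (the game is stateless and single-step,
# so there is no bootstrapped next-state term in the update).
q = [np.zeros(n_actions), np.zeros(n_actions)]

for _ in range(5000):
    # Each agent acts epsilon-greedily on its own Q-values only
    # (decentralized: no shared critic, no joint observations).
    acts = [
        int(rng.integers(n_actions)) if rng.random() < eps
        else int(np.argmax(q[i]))
        for i in range(2)
    ]
    r = reward(*acts)
    for i in range(2):
        # Tabular Q-learning update toward the observed reward.
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

# After training, both agents prefer the same action: a coordinated policy.
print(int(np.argmax(q[0])) == int(np.argmax(q[1])))
```

Centralized training would instead condition a single learner on the joint observations and actions of all agents; the decentralized variant above scales more easily to the large agent populations that MAgent targets, at the cost of non-stationarity from the agents learning simultaneously.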
Citation
@inproceedings{Kopacz2022ApplyingDQ,
  author    = {Anikó Kopacz and L. Csató and Camelia Chira},
  title     = {Applying Deep Q-learning for Multi-agent Cooperative-Competitive Environments},
  booktitle = {Soft Computing Models in Industrial and Environmental Applications},
  year      = {2022}
}
