Multiple Types of AI and Their Performance in Video Games

  • I. Prajescu, Department of Computer Science, Babes-Bolyai University, 1, M. Kogalniceanu Street, 400084, Cluj-Napoca, Romania
  • A.-D. Calin, Department of Computer Science, Babes-Bolyai University, 1, M. Kogalniceanu Street, 400084, Cluj-Napoca, Romania

Abstract

In this article, we present a comparative study of Artificial Intelligence training methods in the context of a racing video game. The algorithms Proximal Policy Optimization (PPO), Generative Adversarial Imitation Learning (GAIL) and Behavioral Cloning (BC), available in the Machine Learning Agents (ML-Agents) toolkit, were used in several scenarios. We measured their learning capability and performance in terms of speed, correct level traversal, and the number of training steps required, and we explored ways to improve their performance. These algorithms prove to be suitable for racing games and are highly accessible through the ML-Agents toolkit.
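For readers unfamiliar with the trainers compared above: PPO is an on-policy reinforcement learning method that maximizes a clipped surrogate objective, while GAIL and BC learn from recorded demonstrations. The snippet below is a minimal, illustrative PyTorch sketch of the PPO clipped loss only; it is not code from this study or from the ML-Agents toolkit, and the function name and arguments are hypothetical.

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio r(theta) = pi_theta(a|s) / pi_theta_old(a|s),
    # computed in log space for numerical stability.
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    # Clipping keeps each policy update close to the previous policy.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the minimum of the two terms; negate for gradient descent.
    return -torch.min(unclipped, clipped).mean()
```

In the ML-Agents toolkit itself, none of this is written by hand: PPO hyperparameters, a GAIL reward signal and a behavioral-cloning term are enabled through the trainer configuration file together with a recorded demonstration file.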

Published
2022-07-03
How to Cite
PRAJESCU, I.; CALIN, A.-D. Multiple Types of AI and Their Performance in Video Games. Studia Universitatis Babeș-Bolyai Informatica, [S.l.], v. 67, n. 1, p. 21-36, July 2022. ISSN 2065-9601. Available at: <https://www.cs.ubbcluj.ro/~studia-i/journal/journal/article/view/76>. doi: https://doi.org/10.24193/subbi.2022.1.02.
Section
Articles