Competitive Influence Maximization in Trust-Based Social Networks with Deep Q-Learning

  • A. Kopacz, Department of Computer Science, Babeș-Bolyai University, 1, M. Kogălniceanu Street, 400084, Cluj-Napoca, Romania

Abstract

Social network analysis is a rapidly evolving research area with several real-life applications, e.g., digital marketing, epidemiology, and the spread of misinformation. Influence maximization aims to select a subset of nodes such that the information propagated over the network is maximized. Competitive influence maximization, which describes the phenomenon of multiple actors competing for resources within the same infrastructure, can be solved with a greedy approach that selects the seed nodes based on the influence strength between nodes. Recently, deep reinforcement learning methods have been applied to estimate this influence strength. We train a controller with reinforcement learning to select a node list of a given length as the initial seed set for the information spread. Our experiments show that deep Q-learning methods are suitable for analyzing competitive influence maximization on trust- and distrust-based social networks.
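The greedy baseline mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's deep Q-learning controller, only the classic simulation-based greedy seed selection under the independent cascade model; the graph structure, the uniform activation probability `p`, and the function names are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, runs=200):
    """Estimate the expected spread of `seeds` under the independent
    cascade model. `graph` maps each node to its out-neighbours; `p` is
    a uniform edge activation probability (a simplifying assumption)."""
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    # each newly active node gets one chance per neighbour
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def greedy_seeds(graph, k, p=0.1):
    """Pick k seeds one at a time, each maximizing the estimated
    marginal spread — the greedy strategy the abstract refers to."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_ic(graph, seeds + [n], p))
        seeds.append(best)
    return seeds
```

A reinforcement-learning controller, as studied in the paper, would replace the inner Monte Carlo evaluation with a learned value estimate, selecting the next seed from the current network state instead of re-simulating the cascade for every candidate.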

Published
2024-06-05
How to Cite
KOPACZ, A. Competitive Influence Maximization in Trust-Based Social Networks with Deep Q-Learning. Studia Universitatis Babeș-Bolyai Informatica, [S.l.], v. 69, n. 1, p. 57-69, June 2024. ISSN 2065-9601. Available at: <https://www.cs.ubbcluj.ro/~studia-i/journal/journal/article/view/97>. Date accessed: 15 Oct. 2024. doi: https://doi.org/10.24193/subbi.2024.1.04.