A Review and Analysis of The Existing Literature on Monochromatic Photography Colorization Using Deep Learning

  • A.M. Adăscăliței, Department of Computer Science, Faculty of Mathematics and Computer Science, Babeș-Bolyai University, Mihail Kogălniceanu 1, 400084, Cluj-Napoca, Romania

Abstract

Colorization aims to convert a monochrome image into a color one, usually because the original was captured with the limited technology of earlier decades. This work introduces the problem, summarizes the main deep learning solutions, and discusses experimental results obtained from open-source repositories. Although the surveyed methods can be applied to other fields, only photographic content is considered here. Our contribution lies in an analysis of colorization in photography that examines the datasets used, the evaluation methodologies, the data processing involved, and the infrastructure these systems demand. We curated some of the most promising papers published between 2016 and 2021 and centered our observations on software reliability and on key advancements in solutions employing Generative Adversarial Networks and Neural Networks.
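A formulation shared by many of the surveyed methods (e.g., Zhang et al. [51]) operates in the CIE Lab color space: the lightness channel L is essentially the grayscale input, so the network only needs to predict the two chrominance channels a and b. The following Python sketch illustrates that decomposition; it uses scikit-image, and the `model` callable is a hypothetical stand-in for any of the surveyed networks, not code from a specific repository.

```python
# Minimal sketch of the Lab-space colorization setup common in the surveyed
# literature: the model receives lightness (L) and predicts chrominance (ab).
import numpy as np
from skimage import color  # pip install scikit-image

def split_lab(rgb):
    """Split an RGB image (H, W, 3), floats in [0, 1], into CIE Lab channels."""
    lab = color.rgb2lab(rgb)
    L = lab[..., :1]    # lightness in [0, 100] -- the model input
    ab = lab[..., 1:]   # chrominance, roughly [-110, 110] -- the training target
    return L, ab

def recombine(L, ab_pred):
    """Merge the known L channel with predicted ab channels back into RGB."""
    lab = np.concatenate([L, ab_pred], axis=-1)
    return color.lab2rgb(lab)  # clips out-of-gamut values into [0, 1] RGB

# Hypothetical usage: `model` is any network mapping L -> ab.
# L, ab_true = split_lab(reference_rgb)
# rgb_colorized = recombine(L, model(L))
```

Because the ground-truth L channel is reused at reconstruction time, the output keeps the structure of the input exactly; only the two color channels are ever hallucinated, which is what makes the problem tractable for both the CNN- and GAN-based families reviewed here.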


 

References

[1] Antic, J. jantic/deoldify: A deep learning based project for colorizing and restoring old images (and video!). github.com/jantic/DeOldify, accessed on Dec 4, 2020.
[2] Ardizzone, L., Lüth, C., Kruse, J., Rother, C., and Köthe, U. Guided image generation with conditional invertible neural networks, 2019.
[3] Bahng, H., Yoo, S., Cho, W., Park, D. K., Wu, Z., Ma, X., and Choo, J. Coloring with words: Guiding image colorization through text-based palette generation, 2018.
[4] Baldassarre, F., Morín, D. G., and Rodés-Guirao, L. Deep koalarization: Image colorization using cnns and inception-resnet-v2, 2017.
[5] Caesar, H., Uijlings, J., and Ferrari, V. Coco-stuff: Thing and stuff classes in context, 2018.
[6] Cao, Y., Zhou, Z., Zhang, W., and Yu, Y. Unsupervised diverse colorization via generative adversarial networks, 2017.
[7] Chen, J., Shen, Y., Gao, J., Liu, J., and Liu, X. Language-based image editing with recurrent attentive models, 2018.
[8] Cheng, Z., Yang, Q., and Sheng, B. Deep colorization, 2016.
[9] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. The cityscapes dataset for semantic urban scene understanding, 2016.
[10] Deshpande, A., Lu, J., Yeh, M.-C., Chong, M. J., and Forsyth, D. Learning diverse image colorization. github.com/aditya12agd5/divcolor, 2017.
[11] Dynamichrome. Showcase. dynamichrome.com, accessed on Dec 4, 2020.
[12] El Helou, M., and Süsstrunk, S. BIGPrior: Towards decoupling learned prior hallucination and data fidelity in image restoration. arXiv preprint arXiv:2011.01406 (2020).
[13] Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results, 2012.
[14] Gross, R., Matthews, I., Cohn, J., Kanade, T., and Baker, S. Multi-pie. In 2008 8th IEEE International Conference on Automatic Face Gesture Recognition (2008), pp. 1–8.
[15] Guadarrama, S., Dahl, R., Bieber, D., Norouzi, M., Shlens, J., and Murphy, K. Pixcolor: Pixel recursive colorization, 2017.
[16] He, M., Chen, D., Liao, J., Sander, P. V., and Yuan, L. Deep exemplar-based colorization, 2018.
[17] Hu, R., Rohrbach, M., and Darrell, T. Segmentation from natural language expressions, 2016.
[18] Iizuka, S., Simo-Serra, E., and Ishikawa, H. Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Transactions on Graphics 35 (07 2016), 1–11.
[19] Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks, 2018.
[20] Kiani, L., Saeed, M., and Nezamabadi-pour, H. Image colorization using generative adversarial networks and transfer learning. In 2020 International Conference on Machine Vision and Image Processing (MVIP) (2020), pp. 1–6.
[21] Kodak. Chronology of film. www.kodak.com/en/motion/page/chronology-of-film.
[22] Krizhevsky, A. Learning multiple layers of features from tiny images. Tech. rep., 2009.
[23] Krizhevsky, A., Nair, V., and Hinton, G. Cifar-10 dataset. https://www.cs.toronto.edu/~kriz/cifar.html.
[24] Krizhevsky, A., Sutskever, I., and Hinton, G. Imagenet classification with deep convolutional neural networks. Neural Information Processing Systems 25 (01 2012).
[25] Kumar, M., Weissenborn, D., and Kalchbrenner, N. Colorization transformer. github.com/google-research/google-research/tree/master/coltran, 2021.
[26] Larsson, G., Maire, M., and Shakhnarovich, G. Colorization as a proxy task for visual understanding. github.com/gustavla/self-supervision, 2017.
[27] Larsson, G., Maire, M., and Shakhnarovich, G. Learning representations for automatic colorization. github.com/gustavla/autocolorize, 2017.
[28] Li, Y., Zhuo, J., Fan, L., and Wang, H. J. Cys: Chinese youth subculture dataset. https://github.com/tezignlab/subculture-colorization/tree/main/CYS_dataset.
[29] Li, Y., Zhuo, J., Fan, L., and Wang, H. J. Culture-inspired multi-modal color palette generation and colorization: A chinese youth subculture case, 2021.
[30] Liao, J., Yao, Y., Yuan, L., Hua, G., and Kang, S. B. Visual attribute transfer through deep image analogy, 2017.
[31] Lin, T.-Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C. L., and Dollár, P. Microsoft coco: Common objects in context, 2015.
[32] Loreto, V., Mukherjee, A., and Tria, F. On the origin of the hierarchy of color names. Proceedings of the National Academy of Sciences of the United States of America 109 (04 2012), 6819–24.
[33] Manjunatha, V., Iyyer, M., Boyd-Graber, J., and Davis, L. Learning to color from language. github.com/superhans/colorfromlanguage, 2018.
[34] Markle, W., and Hunt, B. Coloring black and white signal using motion detection. Canadian Patent No. 1291260 (01 1988).
[35] Nazeri, K., Ng, E., and Ebrahimi, M. Image colorization using generative adversarial networks. Lecture Notes in Computer Science (2018), 85–94.
[36] Nilsback, M.-E., and Zisserman, A. Automated flower classification over a large number of classes. In Indian Conference on Computer Vision, Graphics and Image Processing (Dec 2008).
[37] Perez, E., Strub, F., de Vries, H., Dumoulin, V., and Courville, A. Film: Visual reasoning with a general conditioning layer, 2017.
[38] Royer, A., Kolesnikov, A., and Lampert, C. H. Probabilistic image colorization. github.com/ameroyer/PIC, 2017.
[39] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. Imagenet large scale visual recognition challenge, 2015.
[40] Santhanam, V., Morariu, V. I., and Davis, L. S. Generalized deep image to image regression. github.com/venkai/RBDN, 2016.
[41] Schanda, J. CIE 1931 and 1964 Standard Colorimetric Observers: History, Data, and Recent Assessments. Springer New York, New York, NY, 2016, pp. 125–129.
[42] Su, J.-W., Chu, H.-K., and Huang, J.-B. Instance-aware image colorization. github.com/ericsujw/InstColorization, 2020.
[43] Timofte, R., Agustsson, E., Gu, S., Wu, J., Ignatov, A., and Gool, L. V. Div2k dataset: Diverse 2k resolution high quality images.
[44] Tyleček, R., and Šára, R. Spatial pattern templates for recognition of objects with regular structure. In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013, pp. 364–374.
[45] Vitoria, P., Raad, L., and Ballester, C. Chromagan: Adversarial picture colorization with semantic class distribution, 2020.
[46] Wang, L., Guo, S., Huang, W., Xiong, Y., and Qiao, Y. Knowledge guided disambiguation for large-scale scene classification with multi-resolution cnns. IEEE Transactions on Image Processing 26, 4 (Apr 2017), 2055–2068.
[47] Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., and Torralba, A. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010), pp. 3485–3492.
[48] Xiao, Y., Zhou, P., and Zheng, Y. Interactive deep colorization with simultaneous global and local inputs, 2018.
[49] Xu, Z., Wang, T., Fang, F., Sheng, Y., and Zhang, G. Stylization-based architecture for fast deep exemplar colorization. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020), pp. 9360–9369.
[50] Yang, Z., Liu, H., and Cai, D. On the diversity of realistic image synthesis. github.com/ZJULearning/diverse_image_synthesis, 2017.
[51] Zhang, R., Isola, P., and Efros, A. A. Colorful image colorization, 2016.
[52] Zhang, R., Zhu, J.-Y., Isola, P., Geng, X., Lin, A. S., Yu, T., and Efros, A. A. Real-time user-guided image colorization with learned deep priors, 2017.
[53] Zhao, J., Han, J., Shao, L., and Snoek, C. G. M. Pixelated semantic colorization, 2019.
[54] Zhou, B., Khosla, A., Lapedriza, A., Torralba, A., and Oliva, A. Places: An image database for deep scene understanding, 2016.
[55] Zhou, B., Lapedriza, A., Xiao, J., Torralba, A., and Oliva, A. Learning deep features for scene recognition using places database. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1 (Cambridge, MA, USA, 2014), NIPS’14, MIT Press, pp. 487–495.