Abstract
Image style transfer extracts the style of a style image and applies it to a content image. Since the introduction of neural style transfer, the field has developed rapidly and many new methods have been proposed. Several of these methods are based on feed-forward networks, but they can usually transfer only one style or a small set of styles, and they typically compute the style loss from the Gram matrix of features. Because the Gram matrix captures only global statistics, local details are often stylized poorly, producing distortions and artifacts. In this work, we propose an arbitrary style transfer method based on a feed-forward network, in which the Gram matrix and feature similarity are used together to compute the style loss. Minimizing the loss derived from the similarity between features (called the contextual loss in this paper) produces stylized images with better details and fewer artifacts. Experimental results and a user study show that our method achieves state-of-the-art performance compared with existing arbitrary style transfer methods.
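The two loss ingredients named in the abstract can be sketched as follows. This is an illustrative NumPy reimplementation, not the authors' code: `gram_matrix` is the standard global style statistic, and `contextual_loss` is a simplified version of the feature-similarity loss of Mechrez et al. (2018); the bandwidth `h`, the normalization constants, and the function names are assumptions made for this sketch.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a feature map with shape (C, H, W): a global
    statistic that discards spatial layout."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def contextual_loss(feat_x, feat_y, h=0.5, eps=1e-5):
    """Simplified contextual loss: each feature vector in feat_x is
    matched to its most similar vector in feat_y, so local detail is
    compared point-to-point rather than through global statistics."""
    # Flatten (C, H, W) maps into sets of C-dimensional feature vectors.
    x = feat_x.reshape(feat_x.shape[0], -1).T
    y = feat_y.reshape(feat_y.shape[0], -1).T
    # Center by the mean of y, then compute cosine distances.
    y_mu = y.mean(axis=0, keepdims=True)
    xc, yc = x - y_mu, y - y_mu
    xn = xc / (np.linalg.norm(xc, axis=1, keepdims=True) + eps)
    yn = yc / (np.linalg.norm(yc, axis=1, keepdims=True) + eps)
    d = 1.0 - xn @ yn.T                                 # pairwise cosine distance
    d_norm = d / (d.min(axis=1, keepdims=True) + eps)   # distances relative to best match
    w_ = np.exp((1.0 - d_norm) / h)                     # turn distances into similarities
    cx = w_ / w_.sum(axis=1, keepdims=True)             # normalized contextual similarity
    return -np.log(cx.max(axis=1).mean() + eps)
```

Identical feature maps yield a near-zero contextual loss, while unrelated maps yield a larger one, which is the behavior the combined objective exploits to keep local details faithful.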
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, P., Zhang, Y., Huang, J., Liu, Z. (2021). Using Feed-Forward Network for Fast Arbitrary Style Transfer with Contextual Loss. In: Ning, L., Chau, V., Lau, F. (eds) Parallel Architectures, Algorithms and Programming. PAAP 2020. Communications in Computer and Information Science, vol 1362. Springer, Singapore. https://doi.org/10.1007/978-981-16-0010-4_22
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-0009-8
Online ISBN: 978-981-16-0010-4