Abstract
Semantic inpainting, or image completion, refers to the task of inferring arbitrarily large missing regions of an image from its semantics. Because predicting missing pixels requires high-level context, it is significantly harder than classical inpainting, which is typically concerned with repairing small data corruptions or removing entire objects from the input image. Image enhancement, on the other hand, aims to remove unwanted noise and blur while preserving most of the image detail. An efficient image completion and enhancement model should therefore first recover the corrupted and masked regions of an image and then refine the result to improve the quality of the output. Generative Adversarial Networks (GANs) have proven effective for image completion tasks. In this chapter, we discuss the underlying GAN architecture and how it can be used for image completion.
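The recovery step described above can be sketched as a simple blending operation: pixels outside the hole are kept from the input, while pixels inside the hole are filled from a generator's output. The snippet below is a minimal, hypothetical illustration in NumPy; in a real GAN-based inpainting pipeline the `generated` array would come from a trained generator, not a constant.

```python
import numpy as np

def complete_image(corrupted, generated, mask):
    """Blend known pixels with generator output in the masked holes.

    mask: 1.0 where the pixel is known, 0.0 where it is missing.
    In GAN-based semantic inpainting, `generated` would be the output
    of a trained generator conditioned to match the known context.
    """
    return mask * corrupted + (1.0 - mask) * generated

# Toy example: a 4x4 grayscale image with a 2x2 hole in the centre.
image = np.ones((4, 4))           # known pixels, all 1.0
mask = np.ones((4, 4))
mask[1:3, 1:3] = 0.0              # mark the missing region
generated = np.full((4, 4), 0.5)  # stand-in for generator output

completed = complete_image(image, generated, mask)
```

After blending, `completed` keeps the original values outside the hole and the generated values inside it; a subsequent enhancement stage would then denoise and sharpen this composite.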
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this chapter
Saxena, P., Gupta, R., Maheshwari, A., Maheshwari, S. (2020). Semantic Image Completion and Enhancement Using GANs. In: Nanda, A., Chaurasia, N. (eds) High Performance Vision Intelligence. Studies in Computational Intelligence, vol 913. Springer, Singapore. https://doi.org/10.1007/978-981-15-6844-2_11
DOI: https://doi.org/10.1007/978-981-15-6844-2_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-6843-5
Online ISBN: 978-981-15-6844-2