Alternative Data Augmentation for Industrial Monitoring Using Adversarial Learning

  • Conference paper
Deep Learning Theory and Applications (DeLTA 2020, DeLTA 2021)

Abstract

Visual inspection software has become a key factor in the manufacturing industry for quality control and process monitoring. Semantic segmentation models have gained importance since they allow for more precise examination. These models, however, require large image datasets to achieve a fair accuracy level. In some cases, training data is sparse or lacks sufficient annotation, which especially applies to highly specialized production environments. Data augmentation is a common strategy to extend the dataset, but it only varies the images within a narrow range. In this article, a novel strategy is proposed to augment small image datasets. The approach is applied to surface monitoring of carbon fibers, a specific industry use case. We apply two different methods to create binary labels: a problem-tailored trigonometric function and a WGAN model. Afterwards, the labels are translated into color images using pix2pix and used to train a U-Net. The results suggest that the trigonometric function is superior to the WGAN model. However, a closer examination of the resulting images indicates that WGAN and image-to-image translation achieve good segmentation results and deviate only slightly from traditional data augmentation. In summary, this study examines an industry application of data synthesis using generative adversarial networks and explores its potential for monitoring systems in production environments.
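
To make the label-synthesis step concrete, the following Python snippet is a minimal sketch of how wavy, fiber-like binary defect masks could be drawn with a simple sinusoidal curve. It is illustrative only: the exact problem-tailored trigonometric function used in the paper is not reproduced here, so the function form, its parameters (amplitude, period, thickness), and the image size are hypothetical stand-ins. In the complete pipeline, masks of this kind (or masks sampled from a WGAN generator) would then be translated into color images with pix2pix, and the resulting image/mask pairs would extend the U-Net training set.

    # Illustrative sketch only: a hypothetical sinusoidal stand-in for the
    # problem-tailored trigonometric label function described in the paper.
    import numpy as np

    def synthetic_defect_mask(height=256, width=256, amplitude=8.0,
                              period=64.0, thickness=3, rng=None):
        """Draw one wavy, fiber-like defect as a binary mask (1 = defect pixel)."""
        rng = np.random.default_rng() if rng is None else rng
        phase = rng.uniform(0.0, 2.0 * np.pi)   # random phase varies each sample
        mask = np.zeros((height, width), dtype=np.uint8)
        rows = np.arange(height)
        # Horizontal position of the defect follows a sine curve down the image.
        centers = width / 2.0 + amplitude * np.sin(2.0 * np.pi * rows / period + phase)
        for r, c in zip(rows, centers.astype(int)):
            lo, hi = max(c - thickness, 0), min(c + thickness + 1, width)
            mask[r, lo:hi] = 1                  # paint a short horizontal segment
        return mask

    if __name__ == "__main__":
        # A handful of synthetic label masks that could be fed to a pix2pix model.
        masks = [synthetic_defect_mask(rng=np.random.default_rng(i)) for i in range(4)]
        print(masks[0].shape, int(masks[0].sum()))  # mask size and defect area in pixels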

Acknowledgements

The authors would like to thank the Administration of Swabia and the Bavarian Ministry of Economic Affairs and Media, Energy and Technology for funding and supporting this research as part of the Competence Expansion program of Fraunhofer IGCV, formerly the Fraunhofer Project Group for "Functional Lightweight Design" (FIL) of ICT.

Author information

Corresponding author

Correspondence to Andreas Margraf.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mertes, S., Margraf, A., Geinitz, S., André, E. (2023). Alternative Data Augmentation for Industrial Monitoring Using Adversarial Learning. In: Fred, A., Sansone, C., Madani, K. (eds.) Deep Learning Theory and Applications. DeLTA 2020, DeLTA 2021. Communications in Computer and Information Science, vol. 1854. Springer, Cham. https://doi.org/10.1007/978-3-031-37320-6_1

  • DOI: https://doi.org/10.1007/978-3-031-37320-6_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-37319-0

  • Online ISBN: 978-3-031-37320-6

  • eBook Packages: Computer Science, Computer Science (R0)
