
An optimized visual measurement method for cell parallelism based on edge-aware dynamic re-weighted U-Net (EADRU-Net)

Original Paper · Signal, Image and Video Processing

Abstract

The reflective surfaces of battery cells introduce ambiguities during visual inspection, making it difficult to segment light-strip edges accurately and to extract the light-strip center, both of which are crucial for measuring the parallelism between cells. To tackle this problem, this paper introduces a novel neural network model, the Edge-Aware Dynamic Re-weighted U-Net (EADRU-Net). The model significantly improves edge detection and segmentation by incorporating an Edge Emphasis Loss, and it integrates a Context-Aware Cross-Dimensional Adaptive Attention mechanism that optimizes the capture and expression of key light-strip features through context-aware layers and cross-dimensional learning strategies. EADRU-Net also features a dynamic re-weighting mechanism that adaptively adjusts the weight of each pixel, improving the recognition and segmentation of reflective light strips on cell surfaces. Experimental results demonstrate EADRU-Net's superior performance in noise suppression and precise edge segmentation of light strips: it achieves a Mean Intersection over Union of 90.95% and a Mean Pixel Accuracy of 93.89%, a 3.94% improvement over the enhanced U-Net, confirming its effectiveness in detecting and segmenting light strips on cell surfaces.
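The Edge Emphasis Loss and the per-pixel dynamic re-weighting mentioned above are not spelled out in this preview. The following is a minimal PyTorch sketch of how such a loss could be realized, assuming a binary light-strip mask and an edge band derived from the ground truth via a morphological gradient; the function name `edge_emphasis_loss` and the parameters `edge_weight` and `kernel_size` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def edge_emphasis_loss(logits, target, edge_weight=4.0, kernel_size=3):
    """Binary cross-entropy with extra weight on pixels near mask edges.

    logits: (N, 1, H, W) raw network outputs.
    target: (N, 1, H, W) binary ground-truth light-strip mask in {0, 1}.
    Note: a hypothetical sketch, not the paper's actual loss.
    """
    # Approximate the ground-truth edge band with a morphological
    # gradient: dilation minus erosion of the binary mask.
    pad = kernel_size // 2
    dilated = F.max_pool2d(target, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-target, kernel_size, stride=1, padding=pad)
    edges = (dilated - eroded).clamp(0.0, 1.0)

    # Dynamic per-pixel re-weighting: weight 1 everywhere,
    # boosted by edge_weight inside the edge band.
    weights = 1.0 + edge_weight * edges

    per_pixel = F.binary_cross_entropy_with_logits(
        logits, target, reduction="none")
    return (weights * per_pixel).sum() / weights.sum()
```

Normalizing by the sum of the weights keeps the loss on a comparable scale as `edge_weight` varies, so the edge term can sharpen boundaries without destabilizing training.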



Data availability

The dataset consists of 2000 images in total, of which 400 were collected by us and 1600 were generated through data augmentation. The training, testing, and validation sets follow an 8:1:1 split, as sketched below. The datasets used in the current study can be obtained from the corresponding authors upon reasonable request.
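As a reproducibility aid, here is a minimal sketch of the 8:1:1 split described above, assuming the 2000 images sit as PNG files in a single directory; the helper name `split_dataset`, the directory name, and the seed are illustrative assumptions rather than the authors' exact procedure.

```python
import random
from pathlib import Path

def split_dataset(image_dir, seed=0):
    """Shuffle image paths and split them 8:1:1 into train/test/validation."""
    paths = sorted(Path(image_dir).glob("*.png"))
    random.Random(seed).shuffle(paths)
    n_train = int(0.8 * len(paths))   # 1600 of 2000 images
    n_test = int(0.1 * len(paths))    # 200 images
    return (paths[:n_train],
            paths[n_train:n_train + n_test],
            paths[n_train + n_test:])  # remaining 200 for validation

train_set, test_set, val_set = split_dataset("strip_images")
```

Fixing the shuffle seed makes the split reproducible across runs.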


Funding

This work was supported by the Program for Innovative Research Team in University of Tianjin (No. TD13-5036) and the Tianjin Science and Technology Popularization Project (No. 22KPXMRC00090).

Author information


Contributions

LS and QH completed the main manuscript text and experiments. WS and YY prepared Table 2. All authors reviewed the manuscript.

Corresponding author

Correspondence to Limei Song.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Song, L., Hu, Q., Shu, W. et al. An optimized visual measurement method for cell parallelism based on edge-aware dynamic re-weighted U-Net (EADRU-Net). SIViP (2024). https://doi.org/10.1007/s11760-024-03308-9

