1 Introduction

Intestinal parasitic diseases are among the most widespread infectious diseases, affecting millions of people globally, and they are particularly prevalent in underdeveloped regions where individuals live in unsanitary conditions. The World Health Organization reported that around 1.5 billion people were afflicted with soil-transmitted helminth infections in 2020 [1]. Human intestinal parasites, which cause diseases such as diarrhea, malnutrition, and anemia that particularly impact children and impede their growth, can be categorized into three groups: helminths, protozoa, and ectoparasites [2]. These infections also affect physical and mental development, job performance, and education, potentially influencing the quality of the future population and a country’s long-term growth [3]. The physical similarity of parasites and the presence of impurities in samples make it difficult to manually distinguish between different types of parasite eggs under a microscope [4, 5]. As a result, significant training is required to develop skilled experts who can perform such diagnoses. This manual evaluation is both labor-intensive and time-consuming, taking an experienced technician an average of 30 min to analyze a single sample [6]. The development of an automated diagnostic faecal examination for parasitic diseases is therefore essential to overcome the limitations of traditional diagnostic methods. Furthermore, although most infected people exhibit no or only mild symptoms, parasitic infections acquired during pregnancy may result in severe nerve damage and, in some cases, infant mortality [7]. Leishmaniasis, a neglected tropical disease spread by female phlebotomine sandflies, affects over 700,000 people annually [8]. Moreover, trichomonad parasites found in the intestines and oral cavity cause the human disease trichomoniasis [9].

Machine learning methods have been used in several studies to analyze microscopic images containing parasite eggs/cysts; Support Vector Machines (SVM) [10, 11] and Artificial Neural Networks (ANN) [12, 13] are examples of such systems. Prior attempts at automating the detection and estimation of intestinal parasites [14, 15] involved complex pipelines combining image processing with machine learning classification. These methods generally rely on features extracted from a set of measurements, especially intensity, dimension, and surface texture, so considerable work is needed during the feature extraction stage to fine-tune the features. Despite these efforts, none of these methods has achieved widespread acceptance, owing to generalizability issues as well as difficulties in replication, comparison, and extension. Over the last decade, deep learning-based algorithms have improved substantially as a result of advances in computer performance and the availability of image datasets [16]. Deep learning has proven extremely effective across a wide range of problems in many disciplines, including text recognition, computer-aided diagnosis, face identification, and drug development [17, 18].

We adopted the Faster-RCNN detector [19] as the foundation for our research, as it has exhibited a favourable balance of precision and speed on images compared to other deep models. Medical image analysis, however, presents distinct obstacles. Supervised deep learning requires large training datasets, which can be difficult to obtain for medical images due to their high acquisition costs and the labour-intensive nature of manual annotation. To overcome these limitations, we propose expanding the baseline training dataset through data augmentation. While many data augmentation approaches use image transformations such as rotations and translations [20], we adopt a different, CycleGAN-based approach [21]: an unsupervised system capable of generating images based on annotated source images from a different modality. Our findings show that combining CycleGAN and Faster-RCNN provides an efficient and effective method for augmenting datasets and recognizing intestinal parasites in microscopy images.

Our work makes several major contributions, which are summarized below:

  • To provide a fully automated pipeline for dealing with low-quality intestinal parasite images captured with portable devices in clinical practice.

  • To provide an oversampling strategy that does not require a paired dataset, effectively capturing domain variability and improving dataset representativeness.

  • To provide a robust methodology for detecting parasites in data-scarce contexts, significantly improving on existing state-of-the-art methods.

  • To validate our methodology through extensive experimentation, demonstrating its suitability, robustness, and uniqueness in augmenting intestinal parasite images with CycleGAN architectures and detecting parasites with Faster-RCNN.

The rest of the manuscript is organized as follows. Section 2, “Related Work,” reviews prior work on object detection and data augmentation relevant to our study. Section 3, “Methodology,” details the proposed strategy, the experimental setup, and the specific parameters of each experiment. Section 5, “Evaluation,” describes the metrics used to validate the work. Section 6, “Results and Discussion,” presents the findings and a detailed analysis obtained after validating the proposed method. Finally, Sect. 7, “Conclusions,” succinctly summarizes the contributions and noteworthy aspects, emphasizing the importance of our findings as validated through extensive experimentation.

2 Related work

2.1 Object detection

Various architectural designs that perform well on object detection tasks have inspired the development of deep convolutional neural networks, which underpin modern methodologies for detection, classification, and segmentation in medical images. Here, we present an overview of current methodologies used to detect parasite eggs/cysts in microscopy images. Waithe et al. [22] evaluated how well state-of-the-art neural network designs detect luminous cells in microscope images. von Chamier et al. [23] introduced ZeroCostDL4Mic, which allows researchers with no coding expertise to train and apply key deep learning networks to tasks including segmentation, object detection, and denoising. Kumar et al. [18] proposed an efficient and effective framework for intestinal parasite egg detection using YOLOv5, which achieved a mean average precision of approximately 97%. Deep learning-based detection methods are broadly classified into two approaches: two-stage and one-stage methods. In the former, models are trained separately for two distinct tasks: detecting regions of interest, and classifying and localizing objects. The Region-Based Convolutional Neural Network (R-CNN) family is among the best in this area [13, 24]. These approaches use modules for feature extraction, classification, and regression, with region proposals handled by a distinct convolutional network in [4]. In the field of medical image analysis, regression forests have typically been the most effective statistical detection methods [2]. As observed in [25, 26], these methodologies have been deployed in a cascaded fashion, going from a global to a local context. The AIMIC platform [25] enables non-programmers to use AI for microscope image processing; among the models it evaluates, ResNeXt-50 (32×4d) performs best with 96.83% accuracy and an F1-score of 96.82%, while MobileNet-V2 strikes a balance between 95.72% accuracy and computational cost. Deep learning methods are rapidly gaining popularity in this domain: Faster RCNN has been used to recognize objects in parasite images [27], while a customized Faster-RCNN has been used to detect parasites in thick blood smear images [28]. Our proposed framework applies deep learning in two steps. The first step enhances the images before they are input to the object detection model; this enhancement is achieved through a Cycle Generative Adversarial Network (CycleGAN) model trained to convert low-resolution images into high-resolution ones. Object detection is then performed using a Faster-RCNN model with ResNet50 as its backbone.

2.2 Data augmentation

Research groups have explored CycleGAN, an unsupervised technique for synthesizing unpaired images from one domain to another [29]. CycleGAN has frequently been used to create synthetic image datasets; its primary strength is its ability to handle unpaired data, which is extremely useful in our situation, since acquiring images of multiple modalities for the same subject under identical conditions is usually not possible. CycleGAN has been used in prior studies, including [30], which used it to produce chest X-ray images for pneumonia detection, and [31], which used it to augment melanoma images for classification. In these settings, CycleGAN produces target-modality images from labeled source images, and the source labels are then translated to the target domain. Additionally, several proposals have explored synthetic image creation for over-sampling the original sample collection; these techniques, as demonstrated in [32, 33] and by Motamed et al. [34], make use of distinct GAN frameworks in similar contexts.

3 Methodology

The proposed approach is divided into two stages, meant for data augmentation and object detection respectively, as shown in Fig. 1. The first stage performs synthetic image synthesis with the CycleGAN algorithm; Sect. 3.2 provides more information about this stage. The second stage detects intestinal parasites in microscopy images with a customized Faster-RCNN algorithm; Sect. 3.3 explores its workings.

Fig. 1 Experimental setup of the proposed framework

3.1 Datasets

To evaluate the proposed framework, we obtained the intestinal parasite image dataset from Chulalongkorn University in Thailand. The number of parasite images and their dimensions are shown in Table 1. The collection contains images obtained with different devices under different environmental conditions. The dataset includes 2,500 images categorized into 5 classes: 500 images of Ascaris lumbricoides (AL), 500 of Hookworm (HW), 509 of Fasciolopsis buski (FB), 500 of Taenia spp. (TS), and 500 of Hymenolepis nana (HN). These images, shown in Fig. 2, display distinct characteristics: some show clear definition, while others are blurry or vary in lighting conditions. Furthermore, the resolution, color saturation, and contrast differ depending on the microscope used, and the amount of background debris also varies significantly among images. We therefore propose a framework that transforms the input data so that the architecture suffers no reduction in performance or model generalization. To train the CycleGAN model, images from [35] were utilized. We standardized the image size to 416 × 416 for compatibility with the Faster-RCNN algorithm, and we divided the dataset into training, validation, and testing sets, with proportions of 70%, 20%, and 10%, respectively.
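To make the preprocessing concrete, the sketch below shows one way the resize-and-split step could be implemented; the class-per-folder layout, the file extension, and the random seed are illustrative assumptions rather than details taken from our pipeline.

```python
# Sketch of the preprocessing described above: resize each image to
# 416 x 416 and split the collection 70/20/10 into train/val/test.
# Directory layout, extension, and seed are assumptions.
import random
from pathlib import Path

from PIL import Image

random.seed(42)  # assumed seed, for a reproducible split

def resize_and_split(src_dir: str, dst_dir: str) -> None:
    images = sorted(Path(src_dir).glob("*/*.jpg"))  # assumes one folder per class
    random.shuffle(images)
    n = len(images)
    splits = {
        "train": images[: int(0.7 * n)],            # 70%
        "val": images[int(0.7 * n): int(0.9 * n)],  # 20%
        "test": images[int(0.9 * n):],              # 10%
    }
    for split, files in splits.items():
        for path in files:
            out = Path(dst_dir) / split / path.parent.name
            out.mkdir(parents=True, exist_ok=True)
            # 416 x 416 is the input size we standardize on for Faster-RCNN
            Image.open(path).convert("RGB").resize((416, 416)).save(out / path.name)
```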

Table 1 The dataset includes different species of parasite eggs in varying sizes and resolutions
Fig. 2 View of parasitic cysts/eggs under the microscope

3.2 Model architectures and training details

3.2.1 Augmentation methods

In this experiment, we implemented the deep neural network based CycleGAN algorithm to generate synthetic intestinal parasite images. The cyclic nature of this algorithm employs a reverse transformation, i.e., the architecture is capable of converting generated images back into the original images. CycleGAN architectures are widely employed in medical image analysis for image-to-image generation due to their robustness, flexibility, and encouraging results on related problems. The CycleGAN model has two generators, each paired with a discriminator. The key concept in CycleGAN is the cycle consistency loss function, which is used to optimize the framework. It works as follows: the output of the first generator serves as the input to the second generator, and the resulting image should match the original input image; similarly, the output of the second generator can be fed to the first generator, and the result should again match the original image, as shown in Fig. 3.
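As a rough illustration of this two-generator, two-discriminator arrangement, the following PyTorch sketch wires up the forward and backward cycles; the concrete generator and discriminator architectures (here plain `nn.Module` placeholders) are assumptions, not the exact networks used in our experiments.

```python
import torch.nn as nn

# Minimal sketch of the CycleGAN wiring described above. G maps X -> Y and
# F maps Y -> X; D_X and D_Y judge realism in their respective domains.
class CycleGANSketch(nn.Module):
    def __init__(self, G: nn.Module, F: nn.Module, D_X: nn.Module, D_Y: nn.Module):
        super().__init__()
        self.G, self.F, self.D_X, self.D_Y = G, F, D_X, D_Y

    def forward(self, real_x, real_y):
        fake_y = self.G(real_x)   # first generator: X -> Y
        rec_x = self.F(fake_y)    # second generator: back to X, should match real_x
        fake_x = self.F(real_y)   # second generator: Y -> X
        rec_y = self.G(fake_x)    # first generator: back to Y, should match real_y
        return fake_y, rec_x, fake_x, rec_y
```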

Fig. 3 A representation of the CycleGAN architecture, modified in this work to detect intestinal parasites

CycleGAN operates at the batch level: it is given a set of images in domain X and another set of images in domain Y. The goal is to learn a mapping G: X → Y such that the distribution of translated images G(X) closely approaches the distribution of images in domain Y, making the generated images indistinguishable from the original dataset. Like a regular Generative Adversarial Network, it applies an adversarial loss to the mapping function. Equation (1) describes this loss and its associated discriminator DY.

$${L}_{GAN}\left(G,{D}_{Y},X,Y\right)= {E}_{y\sim Pdata(y)}\left[{\text{log}}\,{D}_{Y}\left(y\right)\right]+{E}_{x\sim Pdata(x)}\left[{\text{log}}\left(1-{D}_{Y}\left(G\left(x\right)\right)\right)\right]$$
(1)

In this context, the generator G aims to produce images similar to those in domain Y, while the discriminator DY tries to distinguish generated images G(x) from genuine images y as effectively as possible. When the parameters of G are updated, G attempts to minimize this objective, whereas DY aims to maximize it when its own parameters are updated. However, if the network's capacity is high enough, it may map the same collection of input images to any arbitrary arrangement of images in the destination domain; the adversarial loss alone does not guarantee that each input x and output y are properly matched, since many different mappings G can induce the same distribution over y, rendering the loss insufficient on its own. To overcome this problem, CycleGAN combines the original and inverse mappings and employs a cycle consistency loss to enforce a meaningful link in both directions.

This study's CycleGAN model incorporates two mapping functions, G: X → Y and F: Y → X, together with the associated adversarial discriminators DY and DX. CycleGAN introduces two cycle consistency losses to further regularize the mapping process: the forward cycle loss ensures that when an image travels from one domain to the other and back, it recovers its initial state, as represented by x → G(x) → F(G(x)) ≈ x. Similarly, the backward cycle loss ensures that an image is closely reconstructed when it travels y → F(y) → G(F(y)) ≈ y. In the CycleGAN network, the overall loss is composed of several components, including the discriminator loss for X → Y, as indicated in Eq. (2).

$${L}_{GAN}\left(G,{D}_{Y},X,Y\right)= {E}_{y\sim Pdata(y)}\left[{\text{log}}\,{D}_{Y}\left(y\right)\right]+{E}_{x\sim Pdata(x)}\left[{\text{log}}\left(1-{D}_{Y}\left(G\left(x\right)\right)\right)\right]$$
(2)

The discriminator loss for Y → X is indicated in Eq. (3):

$${L}_{GAN}\left(F,{D}_{X},Y,X\right)= {E}_{x\sim Pdata(x)}\left[{\text{log}}\,{D}_{X}\left(x\right)\right]+{E}_{y\sim Pdata(y)}\left[{\text{log}}\left(1-{D}_{X}\left(F\left(y\right)\right)\right)\right]$$
(3)

The cycle consistency loss imposed on the generators is indicated in Eq. (4):

$${L}_{cyc}\left(G,F\right)= {E}_{x\sim Pdata(x)}\left[{\Vert F\left(G\left(x\right)\right)-x\Vert }_{1}\right]+{E}_{y\sim Pdata(y)}\left[{\Vert G\left(F\left(y\right)\right)-y\Vert }_{1}\right]$$
(4)

The CycleGAN network's final loss is given by Eq. (5):

$$L\left(G,F,{D}_{X},{D}_{Y}\right)= {L}_{GAN}\left(G,{D}_{Y},X,Y\right)+{L}_{GAN}\left(F,{D}_{X},Y,X\right)+\lambda {L}_{cyc}\left(G,F\right)$$

(5)

The goal is to solve:

$${G}^{*},{F}^{*}={\text{arg}}\,\underset{G,F}{{\text{min}}}\,\underset{{D}_{X},{D}_{Y}}{{\text{max}}}\,L\left(G,F,{D}_{X},{D}_{Y}\right)$$
(6)
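For readers who prefer code to notation, here is a hedged PyTorch sketch of the loss terms in Eqs. (2)–(5). It assumes discriminators that output raw logits and uses binary cross-entropy to realise the log terms, which is one common implementation choice rather than the exact code used in our experiments.

```python
import torch
import torch.nn.functional as Fn  # aliased to avoid clashing with the generator F

LAMBDA_CYC = 10.0  # cycle-consistency weight (lambda_A = lambda_B = 10.0, see Table 2)

def discriminator_loss(D, real, fake):
    # Eqs. (2)/(3): E[log D(real)] + E[log(1 - D(fake))], which D maximises;
    # minimising BCE against labels 1 (real) and 0 (fake) is equivalent.
    pred_real, pred_fake = D(real), D(fake.detach())
    loss_real = Fn.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
    loss_fake = Fn.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    return loss_real + loss_fake

def generator_adv_loss(D, fake):
    # The generator tries to make D label its outputs as real.
    pred = D(fake)
    return Fn.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))

def cycle_loss(rec_x, real_x, rec_y, real_y):
    # Eq. (4): L1 reconstruction error in both directions.
    return Fn.l1_loss(rec_x, real_x) + Fn.l1_loss(rec_y, real_y)

# Eq. (5), generator side: adversarial terms plus the weighted cycle term, e.g.
# total_G = generator_adv_loss(D_Y, fake_y) + generator_adv_loss(D_X, fake_x) \
#           + LAMBDA_CYC * cycle_loss(rec_x, real_x, rec_y, real_y)
```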

Regarding the training configuration, the following parameters were applied for the CycleGAN setup. The CycleGAN was trained for 200 epochs, with a fixed learning rate of 0.0002 for the first 100 epochs and a linear decay to zero over the remaining epochs. Training used the Adam optimizer (Kingma & Ba, 2014) with decay rates β1 = 0.5 and β2 = 0.999. Loss weights were set as follows: λA = 10.0, λB = 10.0, and λidt = 0.5. Table 2 outlines the hyper-parameter settings used when training the CycleGAN model. Hyper-parameter tuning is a crucial and iterative process that requires several rounds of experimentation, and it is essential to strike a balance between exploring new configurations and refining promising ones. In this regard, we observed that CycleGAN converges quickly, which mitigates the risk of mode collapse, a common concern in GANs; additionally, the Adam optimizer required less hyper-parameter tuning than SGD.
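The stated schedule (a fixed learning rate of 0.0002 for the first 100 epochs, then a linear decay to zero over the remaining 100) can be expressed as follows; this is a minimal sketch assuming PyTorch, with `params` standing in for the generator or discriminator parameters.

```python
import torch

def make_optimizer_and_scheduler(params, n_epochs=200, n_epochs_fixed=100):
    # Adam with beta1 = 0.5, beta2 = 0.999 and base lr 2e-4, as in Table 2
    opt = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))

    def lr_lambda(epoch):
        # Multiplier 1.0 for the first 100 epochs, then linear decay to zero
        if epoch < n_epochs_fixed:
            return 1.0
        return 1.0 - (epoch - n_epochs_fixed) / float(n_epochs - n_epochs_fixed)

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lr_lambda)
    return opt, sched
```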

Table 2 Hyper-parameter settings for the CycleGAN model

3.3 Detection module

As mentioned in the section above, Faster RCNN is our detector, since it broadly shows a good balance between speed and accuracy. In this method, a Region Proposal Network (RPN) slides over the shared convolutional feature map and, at each location, predicts refined bounding-box coordinates and a confidence (objectness) score for a set of anchor boxes; the resulting proposals are then assigned class probabilities and regressed coordinates by the detection head. We use Faster RCNN with ResNet50 as the backbone in our research, as shown in Fig. 4. We use scale-dependent box priors (anchors), which we learn from the training set, to improve prediction accuracy. The network also incorporates cross-layer connections between each pair of prediction layers, except for the output layer. Specifically, the dataset is randomly partitioned into three subsets, with 70% of the samples allocated for training, 20% for validation, and 10% for testing. The trained model is initialized with weights from a model previously trained on the ImageNet dataset [36]; these weights were then optimized over 200 epochs.
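A minimal sketch of how such a detector could be instantiated with torchvision is shown below; note that torchvision's ResNet-50 Faster R-CNN is the FPN variant and ships with COCO-pretrained weights, so the exact initialization and heads here are assumptions rather than a reproduction of our training code.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_parasite_classes: int = 5):
    # Faster R-CNN with a ResNet-50 backbone (torchvision's FPN variant),
    # starting from pretrained weights.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the classification head: 5 parasite classes + 1 background class.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_parasite_classes + 1)
    return model
```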

Data availability

The datasets analyzed during the current study are available in the ICIP 2022 Challenge: Parasitic Egg Detection and Classification in Microscopic Images (https://icip2022challenge.piclab.ai/).

References

  1. Viet NQ, Tuyen DTT, Hoang TH. Parasite worm egg automatic detection in microscopy stool image based on Faster R-CNN. In: ACM International Conference Proceeding Series. Association for Computing Machinery; 2019. p. 197–202. https://doi.org/10.1145/3310986.3311014.

  2. Kumar S, Arif T, Alotaibi AS, Malik MB, Manhas J. Advances towards automatic detection and classification of parasites microscopic images using deep convolutional neural network: methods, models and research directions. Arch Comput Methods Eng. 2022. https://doi.org/10.1007/s11831-022-09858-w.

  3. Zhang C, et al. Deep learning for microscopic examination of protozoan parasites. Comput Struct Biotechnol J. 2022;20:1036–43. https://doi.org/10.1016/j.csbj.2022.02.005.

  4. Pho K, Mohammed Amin MK, Yoshitaka A. Segmentation-driven hierarchical RetinaNet for detecting protozoa in micrographs. Int J Semant Comput. 2019;13(3):393–413. https://doi.org/10.1142/S1793351X19400178.

  5. Zibaei M, Bahadory S, Saadati H, Pourrostami K, Firoozeh F, Foroutan M. Intestinal parasites and diabetes: a systematic review and meta-analysis. New Microbes New Infect. 2023. https://doi.org/10.1016/j.nmni.2022.101065.

  6. Holmström O, et al. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium. Glob Health Action. 2017. https://doi.org/10.1080/16549716.2017.1337325.

  7. Attias M, Teixeira DE, Benchimol M, Vommaro RC, Crepaldi PH, De Souza W. The life-cycle of Toxoplasma gondii reviewed using animations. Parasit Vectors. 2020. https://doi.org/10.1186/s13071-020-04445-z.

  8. Tomiotto-Pellissier F, et al. Macrophage polarization in leishmaniasis: broadening horizons. Front Immunol. 2018. https://doi.org/10.3389/fimmu.2018.02529.

  9. Chen X, et al. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal. 2022;79:102444. https://doi.org/10.1016/j.media.2022.102444.

  10. Suykens JAK, Vandewalle J. Least squares support vector machine classifiers. Neural Process Lett. 1999;9(3):293–300. https://doi.org/10.1023/A:1018628609742.

  11. Borba VH, Martin C, Machado-Silva JR, Xavier SCC, de Mello FL, Iñiguez AM. Machine learning approach to support taxonomic species discrimination based on helminth collections data. Parasit Vectors. 2021;14(1):1–15. https://doi.org/10.1186/s13071-021-04721-6.

  12. Delas Penas KE, Villacorte EA, Rivera PT, Naval PC. Automated detection of helminth eggs in stool samples using convolutional neural networks. In: IEEE Region 10 Annual International Conference (TENCON). 2020. p. 750–5. https://doi.org/10.1109/TENCON50793.2020.9293746.

  13. Rosado L, da Costa JMC, Elias D, Cardoso JS. Mobile-based analysis of malaria-infected thin blood smears: automated species and life cycle stage determination. Sensors. 2017;17(10):2167. https://doi.org/10.3390/s17102167.

  14. Larsson J, Hedberg R. Development of machine learning models for object identification of parasite eggs using microscopy. 2000. http://www.teknat.uu.se/student

  15. Alva A, et al. Mathematical algorithm for the automatic recognition of intestinal parasites. PLoS ONE. 2017;12(4):e0175646. https://doi.org/10.1371/journal.pone.0175646.

  16. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;25.

  17. Farooq MU, Ullah Z, Khan A, Gwak J. DC-AAE: dual channel adversarial autoencoder with multitask learning for KL-grade classification in knee radiographs. Comput Biol Med. 2023;167:107570. https://doi.org/10.1016/j.compbiomed.2023.107570.

  18. Kumar S, Arif T, Ahamad G, Chaudhary AA, Khan S, Ali MAM. An efficient and effective framework for intestinal parasite egg detection using YOLOv5. Diagnostics. 2023;13(18):2978. https://doi.org/10.3390/diagnostics13182978.

  19. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2015;39(6):1137–49. https://doi.org/10.1109/TPAMI.2016.2577031.

  20. Correa I, Drews P, Botelho S, De Souza MS, Tavano VM. Deep learning for microalgae classification. In: Proceedings of the 16th IEEE International Conference on Machine Learning and Applications (ICMLA). 2017. p. 20–5. https://doi.org/10.1109/ICMLA.2017.0-183.

  21. Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision. 2017. p. 2242–51. https://doi.org/10.1109/ICCV.2017.244.

  22. Waithe D, Brown JM, Reglinski K, Diez-Sevilla I, Roberts D, Eggeling C. Object detection networks and augmented reality for cellular detection in fluorescence microscopy. J Cell Biol. 2020. https://doi.org/10.1083/jcb.201903166.

  23. von Chamier L, Laine RF, Henriques R. Artificial intelligence for microscopy: what you should know. Biochem Soc Trans. 2019;47(4):1029–40. https://doi.org/10.1042/BST20180391.

  24. Seo Y, Park B, Hinton A, Yoon SC, Lawrence KC. Identification of Staphylococcus species with hyperspectral microscope imaging and classification algorithms. J Food Meas Charact. 2016;10(2):253–63. https://doi.org/10.1007/s11694-015-9301-0.

  25. Liu R, Dai W, Wu T, Wang M, Wan S, Liu J. AIMIC: deep learning for microscopic image classification. Comput Methods Programs Biomed. 2022;226:107162. https://doi.org/10.1016/j.cmpb.2022.107162.

  26. Pullan RL, Smith JL, Jasrasaria R, Brooker SJ. Global numbers of infection and disease burden of soil transmitted helminth infections in 2010. Parasit Vectors. 2014. https://doi.org/10.1186/1756-3305-7-37.

  27. Li S, Du Z, Meng X, Zhang Y. Multi-stage malaria parasite recognition by deep learning. Gigascience. 2021;10(6):1–11. https://doi.org/10.1093/gigascience/giab040.

  28. Yang F, Yu H, Silamut K, Maude RJ, Jaeger S, Antani S. Parasite detection in thick blood smears based on customized faster-RCNN on smartphones. In: Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop. 2019. https://doi.org/10.1109/AIPR47015.2019.9174565.

  29. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal. 2019;58:101552. https://doi.org/10.1016/j.media.2019.101552.

  30. Motamed S, Rogalla P, Khalvati F. Data augmentation using generative adversarial networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images. Inform Med Unlocked. 2021;27:100779. https://doi.org/10.1016/j.imu.2021.100779.

  31. Chen Y, Zhu Y, Chang Y. CycleGAN based data augmentation for melanoma images classification. In: ACM International Conference Proceeding Series. 2020. p. 115–9. https://doi.org/10.1145/3430199.3430217.

  32. Mayo P, Anantrasirichai N, Chalidabhongse TH, Palasuwan D, Achim A. Detection of parasitic eggs from microscopy images and the emergence of a new dataset.

  33. Bouteldja N, Hölscher DL, Bülow RD, Roberts ISD, Coppo R, Boor P. Tackling stain variability using CycleGAN-based stain augmentation. J Pathol Inform. 2022. https://doi.org/10.1016/j.jpi.2022.100140.

  34. Motamed S, Rogalla P, Khalvati F. Data augmentation using generative adversarial networks (GANs) for GAN-based detection of Pneumonia and COVID-19 in chest X-ray images. Inform Med Unlocked. 2021. https://doi.org/10.1016/j.imu.2021.100779.

  35. Naing KM, et al. Automatic recognition of parasitic products in stool examination using object detection approach. PeerJ Comput Sci. 2022. https://doi.org/10.7717/peerj-cs.1065.

  36. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009. https://doi.org/10.1109/CVPR.2009.5206848.

  37. Cai Z. SA-GD: improved gradient descent learning strategy with simulated annealing. arXiv preprint. 2021. https://doi.org/10.48550/arXiv.2107.07558.


Funding

The authors extend their appreciation to the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia, for funding and supporting this research through partnership program no. PR-21–09-86.

Author information


Contributions

S Kumar: Supervision, Writing—review & editing; T Arif: Data curation; G Ahamad: Methodology; A Ahmad: Investigation, Validation; Mohamad Ali: Software; Asimul Islam: Formal analysis.

Corresponding author

Correspondence to Satish Kumar.

Ethics declarations

Ethics approval

No ethical permission was needed.

Consent to participate

All authors consent to participate in this publication.

Consent for publication

All authors consent to publish the manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kumar, S., Arif, T., Ahamad, G. et al. Improving faster R-CNN generalization for intestinal parasite detection using cycle-GAN based data augmentation. Discov Appl Sci 6, 261 (2024). https://doi.org/10.1007/s42452-024-05941-y

