1 Introduction

The Coronavirus Disease 2019 (COVID-19) is a pulmonary disease caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). The pandemic has imposed a tremendous loss of life so far. The World Health Organization (WHO) is continuously monitoring the outbreak and publishing reports on it for various countries. COVID-19 is a respiratory ailment that spreads largely through airborne droplets. The infection is transmitted predominantly via close contact and through respiratory droplets produced when an infected individual coughs or sneezes. Its symptoms include coughing, difficulty in breathing and fever. Trouble in breathing is an indication of possible pneumonia and requires prompt clinical attention. No vaccine or specific treatment for COVID-19 infection is available. Hospitals provide isolation wards for infected individuals. Transmission is most common when individuals are symptomatic, yet spread may be possible before symptoms appear. The virus can survive on surfaces for up to 72 hours. Symptoms of COVID-19 start to appear between 2 and 14 days after exposure, with a mean of 7 days. The standard diagnostic technique is real-time Reverse Transcription Polymerase Chain Reaction (RT-PCR) performed on a nasopharyngeal swab sample. The disease can likewise be diagnosed from a combination of symptoms, risk factors and a chest CT showing features of pneumonia. Many countries are unable to limit the rapid spread of COVID-19 due to insufficient testing kits. Research is being carried out across the globe to handle the pandemic, and many deep learning models have been proposed to detect COVID-19 at the earliest stage to control the spread. We propose a transfer learning model over the deep learning model to further accelerate the prediction process.

The literature survey of the proposed work is presented in Section 2. The proposed model is explored in Section 3. Experiments and results are discussed in Section 4. Discussion and conclusion are presented in Section 5 and Section 6 respectively.

2 Related Work

To limit the transmission of COVID-19 [11], large numbers of suspected cases must be screened, followed by proper medication and quarantine. RT-PCR testing is considered the gold standard, yet it yields a significant number of false negative outcomes. Efficient and rapid diagnostic techniques are therefore urgently needed to fight the disease. In view of the radiographic differences COVID-19 produces in CT images, we propose to develop a deep learning model that mines the distinctive features of COVID-19 to give a clinical determination ahead of the pathological test, thereby saving crucial time for disease control.

Understanding the basic biology of COVID-19 and its subtypes and variants is a continual challenge, and such findings should be produced and shared worldwide. Baidu Research [14] released its LinearFold algorithm and services, which can be used for whole-genome secondary structure prediction for SARS-CoV-2 and is reportedly several times faster than other algorithms. Pathological findings of COVID-19 associated with acute respiratory distress syndrome are exhibited in [4]. The investigations in [29] included: 1) a summary of patient characteristics; 2) an assessment of age distributions; 3) computation of case fatality and death rates; 4) geo-temporal analysis of viral spread; 5) epidemiological curve construction; and 6) subgroup analysis.

Another study [6] examined the extent of COVID-19 infection in selected populations, as determined through positive antibody tests in the general population. Chen H [5] noted that limited information is available for pregnant women with COVID-19; that study aimed to evaluate the clinical characteristics of COVID-19 during pregnancy. Shan F [19] suggested that CT screening is vital for diagnosis, assessment and staging of COVID-19, with follow-up scans every 3 to 5 days recommended to track disease progression; it concluded that peripheral and bilateral Ground Glass Opacification (GGO) with consolidation is more prevalent in COVID-19 infected patients. Gozes [8] created AI-based automated CT image analysis tools for detection, quantification and tracking of the coronavirus. Wang S [23] proposed a deep learning system that uses CT scan images to detect COVID-19. Wang Y [27] identified tachypnea, abnormally rapid respiration, as a critical symptom of COVID-19; the approach can be used to recognize different respiratory conditions and the device can be used as a handheld. Rajpurkar P [17] presented the CheXNet system, which detects pneumonia with high accuracy. Xu X [28] showed that the traditional method of identifying COVID-19 has a low positive rate during the initial stages, and developed a model for early screening of CT images. Yoon SH [31] found that COVID-19 pneumonia in patients in Korea exhibits characteristics similar to those of patients in China. Singh R [22] evaluated the impact of social distancing on the pandemic while taking age structure into account. These papers provide solutions for the pandemic caused by the novel coronavirus. Much research on COVID-19 has also been carried out using image processing techniques.
Recently, Subhankar Roy [2] proposed a deep learning technique for the classification of Lung Ultrasonography (LUS) images. Classification of COVID-19 from chest CT images with the help of an evolution-based CNN network was carried out in [18]. In [3], Harrison X. Bai proposed that the main features discriminating COVID-19 from viral pneumonia are peripheral distribution, ground-glass opacity and vascular thickening; that study can distinguish COVID-19 from viral pneumonia with high specificity using CT scan images and CT features. The same study [3] also found that both COVID-19 patients and viral pneumonia patients develop central-plus-peripheral distribution, air bronchogram, pleural thickening, pleural effusion and lymphadenopathy with no significant differences. The general similarities between the two viruses are, first, that both cause respiratory disease that can be asymptomatic or mild but can also cause severe disease and death, and second, that both are transmitted by contact or droplets. These predominant similarities with pneumonia viruses (influenza and SARS-CoV-2) urged us to proceed with the proposed work. The primary limitations analyzed so far are as follows:

  1. Use of CT and CXR images: Since the COVID-19 virus attacks cells in the respiratory tract, predominantly the lung tissues, thorax images can be used to detect the virus without any test kit. Chest X-rays are of little value in the initial stages, whereas chest CT scans are valuable even before symptoms appear.

  2. Long testing period: The tests used to identify COVID-19 are not fast enough, and it is especially hard to assess patients during the initial stages of the infection. Manual analysis of the CXR and CT scans of many patients by radiologists requires tremendous time, so an automated system is needed to save the radiologists' valuable time.

  3. Reusing an existing model: In this paper, an existing system is reused in a novel way to identify COVID-19 from CT scan and chest X-ray images. It can precisely detect the abnormal features present in the images.

To resolve the limitations stated above, a transfer learning model is proposed in this paper.

3 Proposed work

In this section, the proposed transfer learning model, along with the Haralick features used in it, is discussed.

3.1 Transfer learning

In this work, a novel system is presented to identify COVID-19 infection using deep learning techniques. First, transfer learning is performed on images from the NIH [21, 26, 30] Chest X-Ray-14 dataset to identify which disease has maximal similarity to COVID-19; pneumonia is found to be the most similar. Fig. 1 shows chest X-ray images for various lung conditions.

Fig. 1
figure 1

(a) Normal lungs, (b) bacterial-pneumonia-affected lungs, (c) viral-pneumonia-affected lungs, (d) COVID-19-affected lungs

Transfer learning [9, 24] is a method in which the knowledge gained by a model trained for one task (here, viral pneumonia detection) is transferred to a similar task (COVID-19 detection). In transfer learning, an initial training pass is carried out on a large dataset to perform classification. The architecture of the proposed transfer learning model is delineated in Fig. 2. CXR and CT images of various lung diseases, including COVID-19, are fed to the model. First, the images are preprocessed to obtain quality images.

Fig. 2
figure 2

Proposed Architecture of the Transfer Learning Model

Histogram equalization and Wiener filtering are applied as image enhancement techniques to increase contrast and remove noise, respectively. Histogram equalization enhances image quality without loss of information. The Wiener filter estimates the target image by filtering the noisy regions; deconvolution of blurred images is performed by minimizing the Mean Square Error (MSE). The region of interest is chosen using the ITK-SNAP software, and image resizing is done automatically with the Python PIL library. The images obtained after applying these enhancement techniques are presented in Fig. 3.
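The two enhancement steps above can be sketched in Python. This is a minimal illustration with NumPy and SciPy, not the paper's exact pipeline; the ITK-SNAP region selection and PIL resizing steps are omitted.

```python
import numpy as np
from scipy.signal import wiener


def enhance(img):
    """Histogram-equalize then Wiener-filter an 8-bit grayscale image."""
    # Histogram equalization: map intensities through the normalized CDF
    # so the output histogram is approximately uniform (no information loss)
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    equalized = lut[img]

    # Wiener filter: local adaptive denoising that minimizes mean square error
    denoised = wiener(equalized.astype(float), mysize=5)
    return np.clip(denoised, 0, 255).astype(np.uint8)
```

In practice the filter window size (`mysize=5` here) would be tuned to the noise level of the CT or CXR scans.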

Fig. 3
figure 3

(a) Chest CT scan image, (b) histogram-equalized image with increased contrast, (c) Wiener-filtered image with noise removed

Haralick texture features are obtained from the enhanced images, and these images are then fed into various pre-defined CNN models. The Haralick features are discussed in the following section.

In pre-defined CNN models such as ResNet50, VGG16 and InceptionV3, the convolutional layers extract image features and the max pooling layers down-sample the images for dimensionality reduction; intermediate results are shown in Fig. 4.

Fig. 4
figure 4

Intermediate output of COVID-19 image for first two layers from VGG16

Regularization is done by dropout layers, which speed up execution by dropping neurons whose contribution to the output is low. The weights and biases are initialized randomly, and the input image size is chosen as 226x226. The Adam optimizer updates the weights and biases. The sample images are trained in batches of size 250, and early stopping is employed to avoid over-fitting.
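The dropout regularization described above can be sketched in NumPy. This shows the common inverted-dropout variant, an illustrative assumption rather than the paper's exact implementation:

```python
import numpy as np


def dropout(x, rate=0.2, training=True, seed=0):
    """Inverted dropout: zero a fraction `rate` of activations during training."""
    if not training or rate == 0.0:
        # At inference time all neurons are kept unchanged
        return x
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= rate  # keep each neuron with prob. 1 - rate
    # Rescale survivors so the expected activation matches inference behaviour
    return x * mask / (1.0 - rate)
```

With `rate=0.2`, roughly one in five activations is zeroed on each pass, forcing the network not to rely on any single neuron.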

Five different CNN models are also built with different configurations to analyze the results, as shown in Fig. 5.

Fig. 5
figure 5

CNN models with different configurations

The stride and dilation values are chosen to be 1, the default. Since these models perform one-class classification (i.e., a sample either belongs to the class or not), the task is equivalent to binary classification, so the sigmoid function is used as the activation in the fully connected layer, as given in Eq (1).

$$ F(x)=\frac{1}{1+e^{-x}} $$
(1)

For the convolution and max-pooling layers, the ReLU function is used to activate the neurons; it is defined in Eq (2).

$$ F(x)=max(0,x) $$
(2)
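Both activations can be written directly in NumPy, using the standard sigmoid 1/(1+e^{-x}) of Eq (1) and the ReLU of Eq (2):

```python
import numpy as np


def sigmoid(x):
    # Eq (1): squashes any real input into (0, 1) for the binary output
    return 1.0 / (1.0 + np.exp(-x))


def relu(x):
    # Eq (2): passes positive activations through, zeroes negative ones
    return np.maximum(0.0, x)
```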

Each model is trained with a dropout of 0.2 or 0.3. The transfer learning model is applied to predict COVID-19 images instead of developing a new deep learning model from scratch, which would take more training time. The different pre-trained models and differently configured CNN models are trained and tested with images of different lung diseases; the one with the lowest misclassification on viral pneumonia, VGG16, is then used by the proposed transfer learning model to predict COVID-19 cases.
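The freeze-and-retrain setup described above can be sketched with Keras. The head layer sizes are hypothetical, and `weights=None` stands in for the pretrained weights to keep the sketch self-contained, whereas the actual pipeline starts from models pretrained on the lung-disease data:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base: reused feature extractor (pretrained in practice)
base = VGG16(weights=None, include_top=False, input_shape=(226, 226, 3))
base.trainable = False  # freeze: only the new head is trained

# New classification head for the one-class (COVID-19 vs not) task
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(128, activation="relu"),  # hypothetical head width
    layers.Dropout(0.2),                   # dropout of 0.2 as in the paper
    layers.Dense(1, activation="sigmoid"), # Eq (1) at the output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` with the preprocessed images, batch size 250 and an early-stopping callback.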

3.2 Haralick Texture Feature Extraction

The Haralick features [16] are extracted from the images resized as shown in Fig. 2. Haralick features describe the relation between the intensities of adjacent pixels very well. Haralick presented fourteen texture metrics, all obtained from the co-occurrence matrix, which captures how the intensity at a given pixel position relates to that of its neighbours. The Gray-Level Co-occurrence Matrix (GLCM) is an N x N matrix, where N is the number of intensity levels in the image. A GLCM is constructed for each image to evaluate the fourteen features, whose calculation can reveal new interrelationships between biological features in the images. These relationships among pixels carry information about the spatial distribution of tone and texture variations in an image. The homogeneity of the image (F1), the similarity between pixels, is given by Eq (3), where p(k,l) is the (k,l) entry of the matrix.

$$ F_{1}=\sum\limits_{k=0}^{N}\sum\limits_{l=0}^{N}\frac{p(k,l)}{1+(k-l)^{2}} $$
(3)

The contrast (F2), the measure of difference between the maximum and minimum pixel values, is given by Eq (4), where m = |k-l| and the inner sum runs over pairs with |k-l| = m.

$$ F_{2}=\sum\limits_{m=0}^{N-1}m^{2}\sum\limits_{\substack{k,l=0\\|k-l|=m}}^{N} p(k,l) $$
(4)

The correlation (F3), which measures the dependency between the gray levels of adjacent pixels, is given by Eq (5), where μk, μl and σk, σl are the means and standard deviations of the marginal probability distributions of p(k,l).

$$ F_{3}=\frac{{\sum}_{k}^{}{\sum}_{l}^{}p(k,l)*(k-\mu_{k})* (l-\mu_{l})}{\sigma_{k} \sigma_{l}} $$
(5)

The variance, or sum of squares (F4), averages the squared differences from the mean of the image, as given in Eq (6), where μ is the mean value.

$$ F_{4}=\sum\limits_{k=1}^{N}\sum\limits_{l=1}^{N}(k-\mu)^{2} p(k,l) $$
(6)

The local homogeneity of an image (F5), also called the Inverse Difference Moment (IDM), is given by Eq (7).

$$ F_{5}=\sum\limits_{k}^{}\sum\limits_{l}^{}\frac{p(k,l)}{1+(k-l)^{2}} $$
(7)

The sum average (F6) is given by Eq (8), where a and b are the row and column indices of the co-occurrence matrix and p_{a+b}(k) is the probability that they sum to k.

$$ F_{6}=\sum\limits_{k=2}^{2N}kp_{a+b}(k) $$
(8)

The sum variance (F7) is exhibited in Eq (9).

$$ F_{7}=\sum\limits_{k=2}^{2N}(k-F_{8})^{2} p_{a+b}(k) $$
(9)

The sum entropy (F8), the total amount of information that must be coded for an image, is given by Eq (10).

$$ F_{8}= -\sum\limits_{k=2}^{2N}p_{a+b}(k)log(p_{a+b}(k)) $$
(10)

The entropy (F9), the amount of information that must be coded for an image, is given by Eq (11).

$$ F_{9}= - \sum\limits_{k=1}^{N}{\sum}_{l=1}^{N}p(k,l)log(p(k,l)) $$
(11)

The difference variance (F10) is presented in Eq (12), where p_{a-b}(k) is the probability that the row and column indices differ by k.

$$ F_{10}= \sum\limits_{k=0}^{N}(k-\mu)^{2}p_{a-b}(k) $$
(12)

The difference entropy (F11) is delineated in Eq (13).

$$ F_{11}=-\sum\limits_{k=0}^{N}p_{a-b}(k)log(p_{a-b}(k)) $$
(13)

Eq (14) shows the information measure of correlation 1 (F12), where HX and HY are the entropies of the marginal distributions p_a and p_b.

$$ F_{12}=\frac{HXY-HXY1}{max(HX,HY)} $$
(14)

Eq (15) shows the information measure of correlation 2 (F13).

$$ F_{13}=(1-e^{-2(HXY2-HXY)})^{1/2} $$
(15)

In the above equations, HXY, HXY1 and HXY2 are as defined in Eq (16), Eq (17) and Eq (18).

$$ HXY=- \sum\limits_{k}^{}\sum\limits_{l}^{}p(k,l) log(p(k,l)) $$
(16)
$$ HXY1=- {\sum}_{k}^{}{\sum}_{l}^{}p(k,l) log(p_{a}(k)p_{b}(l)) $$
(17)
$$ HXY2=- \sum\limits_{k}^{}\sum\limits_{l}^{}p_{a}(k)p_{b}(l) log(p_{a}(k)p_{b}(l)) $$
(18)

Eq (19) measures the linear interdependence of pixels in an image (F14), where p(k,i) and p(l,i) are entries of the co-occurrence matrix and p_a, p_b its marginals.

$$ F_{14}=\sum\limits_{i}^{}\frac {p(k,i)p(l,i)}{p_{a}(k)p_{b}(i)} $$
(19)

4 Experiments and results

In this section, the datasets used for the experiments are described, and all results and statistical analyses are presented.

4.1 Dataset

The COVID-19 data are collected from various resources: the GitHub open repository, RSNA and Google Images. The data collected from these resources are presented in Table 1.

Table 1 COVID-19 image dataset

The chest X-ray pulmonary disease data [21, 26, 30] are obtained from the NIH, with a total of 81,176 labelled observations from 30,805 unique patients, as shown in Table 2. The images are of size 1024x1024.

Table 2 Chest X-Ray 14 pulmonary disease dataset

The viral pneumonia, bacterial pneumonia and normal images [12] are obtained from Mendeley, with a total of 5,232 images, as shown in Table 3.

Table 3 Mendeley dataset for types of pneumonia

4.2 Results

The misclassification rate is calculated for all the pre-trained models, namely VGG16, ResNet50 and InceptionV3. The misclassification rate in Eq (20) is used to find the disease models most similar to COVID-19, where N is the total number of images, Fn is the number of COVID-19 samples wrongly classified as not COVID-19 and Fp is the number of non-COVID-19 samples wrongly classified as COVID-19.

$$ Misclassification=\frac{F_{n}+F_{p}}{N} $$
(20)

From Table 4, we can see that architecture 1 has the lowest misclassification rate when compared with the other architectures. These architectures are shown in Fig. 5.

Table 4 Misclassification rate when COVID-19 data is tested with models built using Mendeley dataset with 14 pulmonary lung diseases

From Table 5, we can identify that the COVID-19 data are most similar to pneumonia, consolidation and effusion. It is evident that the COVID-19 data, when tested on the pneumonia-trained model, produce the lowest misclassification rate.

Table 5 Misclassification Rate When COVID-19 data is tested with models built using NIH dataset with 14 Pulmonary lung diseases

Table 6 shows that transfer learning produces better accuracy (ACCURACY1) than traditional learning (ACCURACY2). This is because the COVID-19 data are similar to pneumonia: when the model is trained for pneumonia and tested with COVID-19 data, the accuracy is better. VGG16 takes less time because it is only 16 layers deep, while ResNet50 and InceptionV3 are 50 and 48 layers deep respectively, yet it still achieves better accuracy than the other models. The models are trained on NVIDIA TESLA P100 GPUs provided by Kaggle.

Table 6 Testing the pneumonia models with COVID-19 dataset

Analyzing the pneumonia images further, transfer learning is performed for the two types of pneumonia. COVID-19 is found to be most similar to viral pneumonia: the VGG16 model correctly identifies the COVID-19 data with a misclassification rate of 0.012, as shown in Table 7.

Table 7 Misclassification rate when COVID-19 data is tested with models built using NIH dataset

From Table 8, out of 407 COVID-19 and normal images, 385 COVID-19 images are correctly classified as COVID-19 and 22 images are falsely classified under the non-viral-pneumonia class. This shows that COVID-19 is very similar to viral pneumonia. In all, 28 images are misclassified, which gives a viral pneumonia misclassification rate of 0.012.

Table 8 Confusion matrix when COVID-19 data is tested with VGG16 viral pneumonia model

From Fig. 6 we can see that the pre-trained VGG16 model has correctly classified the chest CT scan image as COVID-19. The top-right image shows ground-glass opacity in the right lower lobe, while the top-left and bottom-left images are normal chest CT images. In the bottom-right image, extensive ground-glass opacities involve almost the entire left and right lower lobes, indicating COVID-19.

Fig. 6
figure 6

Output of COVID-19 CT scan image classified correctly by VGG16

From Fig. 7, we can see that the VGG16 model has classified the chest X-ray images correctly. The images on the right side of Fig. 7 show increased patchy opacity in the right lower lobe, while the images on the left appear clearer; these are normal lung images, correctly classified by VGG16.

Fig. 7
figure 7

Output of COVID-19 CXR image classified correctly by VGG16

A CT scan of one patient has peculiar features: the patient has no nodules or consolidation-like reticular opacities as in the previous images, and the scan shows small patchy ground-glass opacity in the centre of the lungs, developed from the periphery. These images are also classified precisely, with a high similarity percentage to an image from the training set. This shows that the model is consistent even in peculiar cases like the one shown in Fig. 8, which is why the model is trained using both chest CXR and CT scan images.

Fig. 8
figure 8

Output of COVID-19 CT scan image classified correctly by VGG16 With more similarity percentage with the images trained

The loss and accuracy graphs are shown in Figs. 9 and 10 respectively. The steady increase in accuracy and steady decrease in loss during training and testing show that the model is effective and efficient.

Fig. 9
figure 9

Accuracy graphs for ResNet50, InceptionV3 and VGG16 while finding the similarity to the COVID-19 model

Fig. 10
figure 10

Loss graphs for ResNet50, InceptionV3 and VGG16 while finding the similarity to the COVID-19 model

After finding the similar models, the final classification is carried out and confusion matrices are computed for model evaluation. Tables 9, 10 and 11 show the confusion matrices for the conventional and transfer learning models when COVID-19 data are tested against the viral pneumonia models. Tables 12, 13 and 14 show the classification reports for all three models with precision, recall and F1-scores, where C1 denotes the normal class, C2 the bacterial pneumonia class, C3 the viral pneumonia class and C4 the COVID-19 class. The precision and recall for all classes are found to be promising. The F1-score in Eq (21) is calculated using the precision in Eq (22) and the recall in Eq (23), where Tp is the number of COVID-19 samples correctly classified as COVID-19. This shows the model classifies the images precisely, and transfer learning gives better outcomes than conventional classification.

$$ F1\text{-}score= 2 \cdot \frac {(precision \cdot recall)} {(precision+recall)} $$
(21)
$$ precision= \frac{T_{p}}{T_{p}+F_{p}} $$
(22)
$$ recall=\frac{T_{p}}{T_{p}+F_{n}} $$
(23)
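Eqs (20)-(23) can be computed directly from the confusion-matrix counts:

```python
def metrics(tp, fp, fn, n):
    """Evaluation metrics from confusion-matrix counts.

    tp: COVID-19 samples correctly classified as COVID-19
    fp: non-COVID-19 samples wrongly classified as COVID-19
    fn: COVID-19 samples wrongly classified as not COVID-19
    n:  total number of test images
    """
    miscls = (fn + fp) / n                 # Eq (20)
    precision = tp / (tp + fp)             # Eq (22)
    recall = tp / (tp + fn)                # Eq (23)
    f1 = 2 * precision * recall / (precision + recall)  # Eq (21)
    return miscls, precision, recall, f1
```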
Table 9 Confusion matrix for VGG16 model for conventional classification and transfer learning models
Table 10 Confusion matrix for Resnet50 model for conventional classification and transfer learning models
Table 11 Confusion matrix for InceptionV3 model for conventional classification and transfer learning models
Table 12 Classification report for VGG16 model for conventional classification and transfer learning models
Table 13 Classification report for Resnet50 model for conventional classification and transfer learning models
Table 14 Classification report for InceptionV3 model for conventional classification and transfer learning models

A sample of the 14 Haralick features for 10 sample images is given in Tables 15, 16 and 17 for normal, viral pneumonia and COVID-19 images respectively. The Haralick features of 200 normal, viral pneumonia and COVID-19 images are then analyzed. From this analysis, it is concluded from Tables 18, 19 and 20 that the features F4, F6 and F7 must lie within certain ranges, while the other features exhibit only small deviations between classes. The range of values for an image to be detected as normal is shown in Eq (24), Eq (25) and Eq (26).

$$ 23.6780 \leq F_{4} \leq 33.8134 $$
(24)
$$ 8.1234 \leq F_{6} \leq 11.8791 $$
(25)
$$ 100.7445 \leq F_{7} \leq 132.4556 $$
(26)

For an image to be identified as viral-pneumonia-affected, the feature values must lie within the ranges shown in Eq (27), Eq (28) and Eq (29).

$$ 35.7826 \leq F_{4} \leq 47.1890 $$
(27)
$$ 10.8714 \leq F_{6} \leq 15.6770 $$
(28)
$$ 147.7801 \leq F_{7} \leq 205.5645 $$
(29)

For an image to be identified as COVID-19-affected, the feature values must lie within the ranges shown in Eq (30), Eq (31) and Eq (32).

$$ 33.7812\leq F_{4} \leq 57.3214 $$
(30)
$$ 10.1552 \leq F_{6} \leq 19.0101 $$
(31)
$$ 132.8199 \leq F_{7} \leq 210.29901 $$
(32)
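The ranges in Eqs (24)-(32) can be applied as a coarse rule-based screen. Since the COVID-19 and viral pneumonia bands overlap, more than one class can match a given feature triple, so this is a filter rather than a classifier:

```python
# Feature bands from Eqs (24)-(32): (low, high) for F4, F6 and F7 per class
RANGES = {
    "normal": {"F4": (23.6780, 33.8134),
               "F6": (8.1234, 11.8791),
               "F7": (100.7445, 132.4556)},
    "viral_pneumonia": {"F4": (35.7826, 47.1890),
                        "F6": (10.8714, 15.6770),
                        "F7": (147.7801, 205.5645)},
    "covid19": {"F4": (33.7812, 57.3214),
                "F6": (10.1552, 19.0101),
                "F7": (132.8199, 210.29901)},
}


def candidate_classes(f4, f6, f7):
    """Return every class whose F4, F6 and F7 bands all contain the values."""
    hits = []
    for label, r in RANGES.items():
        bands = (r["F4"], r["F6"], r["F7"])
        if all(lo <= v <= hi for v, (lo, hi) in zip((f4, f6, f7), bands)):
            hits.append(label)
    return hits
```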
Table 15 Haralick features extracted from ten normal lung images
Table 16 Haralick features extracted from ten viral pneumonia lung images
Table 17 Haralick features extracted from ten COVID-19 affected lung images
Table 18 Statistical analysis of Haralick features extracted for normal, COVID-19 and viral pneumonia lung images (200 images)
Table 19 Statistical Analysis of Haralick features that are extracted for normal and viral pneumonic lung images
Table 20 Statistical Analysis of Haralick features that are extracted for normal and COVID-19 lung images

4.3 Performance Comparison

The efficacy of the proposed model is compared with other recent conventional COVID-19 classification studies in Table 21. This performance analysis shows that the proposed transfer learning model outperforms the existing models.

Table 21 Comparison of studies on COVID-19 classification

4.4 Visualisation

The infected regions of the lung images are identified using GradCAM. The images in Fig. 11 show the heatmap visualization based on the predictions of the best-performing transfer learning model (VGG-16). Using GradCAM we can visually validate where the proposed network is looking, verifying that it indeed attends to the correct patterns in the image and activates around the patterns the model uses for classification. The generated heatmaps highlight the infected regions correctly identified by our model. To summarize GradCAM: the images are passed through the fully trained model and features are extracted from the last convolutional layer. Let fi be the i-th feature map and wf,i the weight in the final classification layer connecting feature map i to class f. A map Mf of the most salient features used in categorizing the image as class f is obtained by calculating the weighted sum of the feature maps using the assigned weights, as given in Eq (33).

$$ M_{f}={\sum}_{i}^{}w_{f,i}{f_{i}} $$
(33)
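Eq (33) is a weighted sum over the final convolutional feature maps, which can be written directly in NumPy (assuming channel-first feature maps of shape (C, H, W)):

```python
import numpy as np


def saliency_map(feature_maps, weights):
    """Eq (33): M_f = sum_i w_{f,i} * f_i.

    feature_maps: array of shape (C, H, W), the last conv layer's maps f_i
    weights:      array of shape (C,), the classification weights w_{f,i}
    Returns the (H, W) saliency map M_f.
    """
    # Contract the channel axis: each spatial location gets the weighted
    # sum of its C feature activations
    return np.tensordot(weights, feature_maps, axes=([0], [0]))
```

In the full GradCAM procedure, the map would then be ReLU-ed, upsampled to the input resolution and overlaid on the image as a heatmap.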
Fig. 11
figure 11

Output of Gradient-weighted Class Activation Map** (GradCAM) generated heatmap visualization for images

5 Discussion

The World Health Organization (WHO) has recommended RT-PCR testing for suspected cases, but many countries cannot follow this due to a shortage of testing kits. Here the transfer learning technique can provide a quick alternative to aid the diagnostic process and thereby limit the spread. The primary purpose of this work is to provide radiologists with a less complex model that can aid in the early diagnosis of COVID-19. The proposed model achieves a precision of 91%, a recall of 90% and an accuracy of 93% with VGG-16 using transfer learning, outperforming other existing models during this pandemic period.

6 Conclusion

This COVID-19 detection model has been developed keeping in mind the challenges prevailing in the field of COVID-19 detection, using data assimilated from multiple sources. Detection of this viral infection requires analysis of unusual features in the images, and the earlier the infection is detected, the more lives can be saved. This paper takes a holistic approach to the critical issues in the domain, and the results are fairly consistent across all peculiar cases. We hope the outcomes discussed here serve as a small step toward constructing a sophisticated COVID-19 detection model using CXR and CT images. In future work, more data can be assimilated for better results, which would further strengthen the proposed model.