Introduction

Estimating changes in land use/land cover is an active research area. Researchers investigate changes in land patterns through satellite data, microsatellite data, drone data, unmanned aerial vehicle (\(\mathrm{UAV}\)) data, terrain analysis, etc. (Chen et al. 2018). Several space agencies have conducted a series of successful space exploration missions, such as the celebrated Apollo missions (Papanastassiou and Wasserburg 1971), the Hubble mission (Baker et al. 2020), the Voyager missions (Cohen and Rymer 2020), the Cassini-Huygens mission (Sotin et al. 2021), and the Chandra mission (Tomsick et al. 2021) of the National Aeronautics and Space Administration (\(\mathrm{NASA}\)). Aryabhata (Damle et al. 1976), Chandrayaan-2 carried by GSLV Mark 3 (Chandrashekar 2016), Mangalyaan (Haider and Pandya 2015), and the launch of 104 satellites in a single mission (Muraleedharan et al. 2019) are prominent successful space missions conducted by the Indian Space Research Organisation (\(\mathrm{ISRO}\)). These missions demonstrate the space-exploration capability of each agency. Through such missions, space agencies generate extensive data, either to analyze a specific situation or to archive records for future analysis (Mathieu et al. 2017). The data used in this research work are a good example of this scenario: the pre-COVID image of the study area was acquired for general purposes, whereas the post-COVID image was acquired specifically to study the consequences of the lockdown. The combination of pre-COVID and post-COVID data therefore offers a valuable scenario for remote sensing professionals, scientists, and researchers to explore.

Nowadays, small aircraft-like devices popularly known as “drones” and \(\mathrm{UAVs}\) are also used for data collection and day-to-day purposes (Otto et al. 2018). These devices are operated by a human expert or by an onboard computer (Jiang et al. 2020). They are used in several applications related to medical diagnostics, defense, transportation, film making, scientific research, firefighting, emergency services, etc. (Kerle et al. 2020). \(\mathrm{UAVs}\) have now been introduced for satellite mapping of areas affected by landslides (Niethammer et al. 2012), assessment of crop damage caused by natural phenomena (Maimaitijiang et al. 2020), mapping of disputed territory (a defense application) (Li et al. 2020), \(3\mathrm{D}\) terrain model development, etc. Today, the world is afflicted by the novel coronavirus (Nascimento et al. 2020; Wang et al. 2020a, b, c). In this situation, \(\mathrm{UAVs}\) have found new application areas such as spraying disinfectant, scanning body temperature, broadcasting messages at dangerous COVID hotspots, cargo delivery, \(\mathrm{QR}\) codes, connectivity, mapping, etc. (d’Italie 2020). Thus, besides satellites, \(\mathrm{UAVs}\), drones, high-resolution optical cameras, etc., are some of the primary sources through which high-quality imagery can be obtained. Figure 1 presents the application areas of drones. It can be expected that, in the coming future, many Earth exploration activities will be performed with the assistance of drones.

Fig. 1
figure 1

Application areas of drones (d’Italie 2020)

In satellite remote sensing, change detection methodologies are broadly classified into two categories (Woodcock et al. 2020): “pixel-based change detection (\(\mathrm{PBCD}\))” and “object-based change detection (\(\mathrm{OBCD}\))” (Hussain et al. 2013). Pre-classification \(\mathrm{PBCD}\) techniques provide information about the study area in a binary (change/no-change) format. Another popular approach is \(\mathrm{OBCD}\), in which information about the study area is obtained by analyzing the differences that develop between classification classes. In \(\mathrm{OBCD}\), when the pre- and post-event images are classified, the comparison between them is performed by analyzing the same category in both images. \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) methodologies are further categorized into the several techniques presented in Table 1.

Table 1 Classification of various \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) techniques

Some notable work in satellite remote sensing based on the fusion of two algorithms, techniques, or classification schemes is reviewed here. Garg and Dhiman (2021) proposed a fusion of “grey level co-occurrence matrix (\(\mathrm{GLCM}\))” features and the “local binary pattern (\(\mathrm{LBP}\))” to develop a novel “content based image retrieval (\(\mathrm{CBIR}\))” system. They used three different classification approaches in their experiment, i.e., the support vector machine (\(\mathrm{SVM}\)), the decision tree (\(\mathrm{DT}\)) algorithm, and the k-nearest neighbor algorithm, and concluded that their proposed algorithm performs better, with superior recall, precision, and accuracy. Iqbal et al. (2021) fused \(\mathrm{GLCM}\) features with a “machine learning (\(\mathrm{ML}\))” approach to measure the similarity of different crop fields. The investigation was performed on \(\mathrm{UAV}\)-based low-altitude high-resolution data. They obtained excellent results with this merger of the two techniques; the overall accuracy of their system increased by \(13.65\%.\) Caballero et al. (2020) used \(\mathrm{C}\)-band \(\mathrm{SAR}\) imagery to differentiate between onion and sunflower crops. In their classification technique, they combined \(\mathrm{GLCM}\) with the \(\mathrm{SVM}\) approach and obtained an “overall accuracy (\(\mathrm{OA}\))” of \(95.35\%\) and a “Kappa coefficient (Kp)” of \(0.89\) while differentiating the onion crop from the sunflower crop. Singh and Singh (2020) used SCATSAT-1 data to distinguish “multi-year ice” from “first-year ice” in the Arctic region using “maximum likelihood classification (\(\mathrm{MLC}\))” and obtained an overall classification accuracy of \(92\%\). Rimal et al. (2020) used Landsat imagery of the Kathmandu valley of Nepal between 1988 and 2016 to compare the efficiency of the object-based “\(\mathrm{SVM}\)” and “\(\mathrm{ML}\)” image classification algorithms; their experimental results suggest that \(\mathrm{SVM}\) performs better than the \(\mathrm{ML}\) classification algorithm. Thus, scientists and researchers are working to develop new methodologies that combine two or more techniques to obtain maximum accuracy and complete information from image classification and feature quantification.

In this research work, a model is presented that combines \(\mathrm{PBCD}\) (texture analysis based on \(\mathrm{GLCM}\)) and \(\mathrm{OBCD}\) (classified object change detection, \(\mathrm{COCD}\)) techniques by analyzing the pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, India. The pixel-based \(\mathrm{GLCM}\) texture analysis technique is used for texture classification and quantification of the study area. \(\mathrm{GLCM}\) provides information about the statistical and spectral behavior of the image pixels through mathematical analysis. Quantifying the \(\mathrm{GLCM}\) features for the pre-COVID and post-COVID images produces a new relationship among the \(\mathrm{GLCM}\) features. Histogram signature plotting represents the changes in the frequency of the intensity values of the study area. The \(\mathrm{OBCD}\) technique provides information about the study area in a different manner. In this technique, for the pre-COVID and post-COVID images, “regions of interest (\(\mathrm{ROIs}\))” are selected by allotting pixel values to the \(\mathrm{ROIs}\). These \(\mathrm{ROIs}\) behave as “regions” based on which classification of the study area is performed. Another set of \(\mathrm{ROIs}\) is also created, which assists in the accuracy assessment. The advantage of the \(\mathrm{OBCD}\) technique over the \(\mathrm{PBCD}\) technique lies in the “visual point of view.” In this classification, it is also possible to compare only a “particular class,” leaving out the remaining classes.

The article is divided into six sections. “Background of PBCD (GLCM) and OBCD techniques” provides detailed background information about the \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) techniques. “Background of Skysat satellite program and details of the study area” presents a brief report on the Skysat satellite program and the study area. “Experimental results” details the experimental results of texture quantification and image classification. “Discussion” presents the discussion and outcomes of the proposed research work. Finally, “Conclusion” offers concluding remarks on the research work.

Background of PBCD (GLCM) and OBCD techniques

GLCM-based texture classification technique

Texture is an essential aspect of gathering information from remote sensing images. Through texture analysis, both spectral and spatial information about the study area is obtained, and the technique is extensively used in various remote sensing applications. Haralick et al. (1973) introduced the \(\mathrm{GLCM}\) and presented a set of fourteen features to classify image texture. Considerable work was later done on these features; Gotlieb and Kreyszig (1990) organized the fourteen features into four categories. Visual texture features are considered the most important in remote sensing applications because they directly impact human visual perception. The visual texture features include contrast, correlation, the angular second moment (\(\mathrm{ASM}\)), and the inverse difference moment (\(\mathrm{IDM}\)) (Haralick et al. 1973). The formation of the \(\mathrm{GLCM}\) from an input image is presented in Fig. 2: the pixel positions of the input image are illustrated in Fig. 2a, the \(5\times 5\) input image is presented in Fig. 2b, the \(\mathrm{GLCM}\) of the input image is shown in Fig. 2c, and the normalized \(\mathrm{GLCM}\) is presented in Fig. 2d.

Fig. 2
figure 2

a Pixel position of the input image, b input image of dimension \(5\times 5\), c \(\mathrm{GLCM}\) of the input image, and d normalized representation of the \(\mathrm{GLCM}\) image

The \(\mathrm{GLCM}\) of any input image depends on two critical parameters: “distance” and “angle of orientation.” The distance is the spacing between the “pixel of interest” and the “neighboring pixel,” and it can be varied (\(d=1\), \(d=2\), and so on) to obtain different values of the texture features. The orientation angle specifies the direction of the neighboring pixel relative to the pixel of interest at the given distance and can vary from 0° to 315°. This can be understood from Fig. 3, where different combinations of distances and orientations around a “pixel of interest (\(\mathrm{POI}\))” are presented.

Fig. 3
figure 3

\(\mathrm{GLCM}\) pixel, distance, and the orientation arrangement from the center image pixel
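To make the distance and orientation parameters concrete, the following minimal sketch (assuming scikit-image ≥ 0.19 and NumPy are available; the image array is a hypothetical placeholder, not the study-area data) computes the \(\mathrm{GLCM}\) for four orientations and eight distances and averages the visual features over the angles:

```python
# A minimal sketch of GLCM computation for several distances and
# orientations, assuming scikit-image (>= 0.19) and NumPy are installed.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical 8-bit grey-level image standing in for the study area.
image = np.random.randint(0, 256, size=(180, 240), dtype=np.uint8)

distances = [1, 2, 3, 4, 5, 6, 7, 8]              # d = 1 ... d = 8
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees

# glcm has shape (levels, levels, n_distances, n_angles); normed=True
# yields the normalized co-occurrence probabilities p(k, l).
glcm = graycomatrix(image, distances=distances, angles=angles,
                    levels=256, symmetric=True, normed=True)

# Visual texture features: contrast, correlation, ASM, and homogeneity
# (scikit-image's analogue of the inverse difference moment, IDM).
for prop in ("contrast", "correlation", "ASM", "homogeneity"):
    values = graycoprops(glcm, prop)   # shape: (n_distances, n_angles)
    # Averaging over angles makes the features direction independent.
    print(prop, values.mean(axis=1))
```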

Let us assume an image with \({N}_{x}\) resolution cells in the “horizontal direction” and \({N}_{y}\) resolution cells in the “vertical direction,” with the grey tones appearing in the image quantized to \({N}_{g}\) levels. The “horizontal spatial domain,” the “vertical spatial domain,” and the “set of quantized grey levels” are denoted by \({l}_{x}\), \({l}_{y}\), and \(G\), respectively. The set \({l}_{x}\times {l}_{y}\) is the set of resolution cells, and the image is a function assigning a grey tone from \(G\) to each resolution cell, \(I:{l}_{x}\times {l}_{y}\to G\). The co-occurrence counts for the angles quantized from 0° to 315° are expressed by Eqs. (1)–(8), where \((k,l)\) is the pair of grey levels being counted, \((m,n)\) and \((o,p)\) are image pixels, \(D\) is the pixel distance, and \(\#\) denotes the number of elements in the set.

$$ {\text{pix}}\left( {k,l,D,0^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = 0,\;n - p = D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(1)
$$ {\text{pix}}\left( {k,l,D,45^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = -D,\;n - p = D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(2)
$$ {\text{pix}}\left( {k,l,D,90^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = -D,\;n - p = 0,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(3)
$$ {\text{pix}}\left( {k,l,D,135^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = -D,\;n - p = -D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(4)
$$ {\text{pix}}\left( {k,l,D,180^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = 0,\;n - p = -D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(5)
$$ {\text{pix}}\left( {k,l,D,225^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = D,\;n - p = -D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(6)
$$ {\text{pix}}\left( {k,l,D,270^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = D,\;n - p = 0,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(7)
$$ {\text{pix}}\left( {k,l,D,315^\circ } \right) = \#\left\{ {\left( {\left( {m,n} \right),\left( {o,p} \right)} \right) \in \left( {l_{x} \times l_{y} } \right) \times \left( {l_{x} \times l_{y} } \right) : m - o = D,\;n - p = D,\;I\left( {m,n} \right) = k,\;I\left( {o,p} \right) = l} \right\} $$
(8)

Texture features developed by Haralick et al. (1973) other than the visual texture features are presented in Eqs. (9)–(18). These features are based on information theory, statistical measures, and information measures of correlation.

\(\mathrm{GLCM}\) features based on the “information theory,” in particular, entropy

$$ 1. \quad {\text{Sum}}\;{\text{entropy}} = - \mathop \sum \limits_{k = 2}^{{2N_{g} }} p_{{\left( {x + y} \right)}} \left( k \right)\log \left\{ {p_{x + y} \left( k \right)} \right\} $$
(9)
$$ 2. \quad {\text{Entropy}} = - \mathop \sum \limits_{k} \mathop \sum \limits_{l} p\left( {k,l} \right)\log \left\{ {p\left( {k,l} \right)} \right\} $$
(10)
$$ 3.\quad {\text{Difference}}\;{\text{entropy}} = - \mathop \sum \limits_{k = 0}^{{N_{g} - 1}} p_{{\left( {x - y} \right)}} \left( k \right)\log \left\{ {p_{{\left( {x - y} \right)}} \left( k \right)} \right\} $$
(11)

where \(x\) and \(y\) denote the row and column indices of the co-occurrence matrix, \({p}_{\left(x+y\right)}(k)\) represents the “probability of the co-occurrence matrix” corresponding to \(x+y=k\), and, similarly, \({p}_{\left(x-y\right)}(k)\) represents the “probability of the co-occurrence matrix” corresponding to \(x-y=k\).
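As a hedged illustration of Eqs. (9)–(11), the sketch below (assuming a normalized co-occurrence matrix `p` of shape \(N_g \times N_g\); the function name is hypothetical) computes the three entropy-based features via the marginal distributions \(p_{(x+y)}\) and \(p_{(x-y)}\):

```python
import numpy as np

def entropy_features(p, eps=1e-12):
    """Sum entropy, entropy, and difference entropy (Eqs. 9-11) from a
    normalized GLCM p of shape (Ng, Ng). eps guards log(0)."""
    ng = p.shape[0]
    k = np.arange(ng)
    # p_{x+y}(m): probability that the two grey levels sum to m.
    sums = k[:, None] + k[None, :]
    p_xpy = np.array([p[sums == m].sum() for m in range(2 * ng - 1)])
    # p_{x-y}(m): probability that the grey levels differ by m.
    diffs = np.abs(k[:, None] - k[None, :])
    p_xmy = np.array([p[diffs == m].sum() for m in range(ng)])

    sum_entropy = -np.sum(p_xpy * np.log(p_xpy + eps))
    entropy = -np.sum(p * np.log(p + eps))
    diff_entropy = -np.sum(p_xmy * np.log(p_xmy + eps))
    return sum_entropy, entropy, diff_entropy
```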

\(\mathrm{GLCM}\) features based on “statistical measures.”

$$ 4.\quad {\text{Inverse}}\;{\text{difference}}\;{\text{moment}} = \mathop \sum \limits_{k} \mathop \sum \limits_{l} \frac{1}{{1 + \left( {k - l} \right)^{2} }}p\left( {k,l} \right) $$
(12)
$$ 5. \quad {\text{Sum}}\;{\text{average}} = \mathop \sum \limits_{k = 2}^{{2N_{g} }} k \times p_{{\left( {x + y} \right)}} \left( k \right) $$
(13)
$$ 6.\quad {\text{Sum}}\;{\text{variance}} = \mathop \sum \limits_{k = 2}^{{2N_{g} }} \left( {k - f_{s} } \right)^{2} p_{{\left( {x + y} \right)}} \left( k \right) $$
(14)
$$ 7.\quad {\text{Difference}}\;{\text{variance}} = \mathop \sum \limits_{k = 0}^{{N_{g} - 1}} k^{2} \times p_{{\left( {x - y} \right)}} \left( k \right) $$
(15)

\(\mathrm{GLCM}\) features based on the “information measure of correlation.”

$$ 8.\quad {\text{Information}}\;{\text{measures}}\,{\text{of}}\;{\text{Correlation}} \;1 = \frac{{{\text{HXY}} - {\text{HXY}}1}}{{{\text{MAX}}\left\{ {{\text{HX}},{\text{HY}}} \right\}}} $$
(16)
$$ 9.\quad {\text{Information}}\;{\text{measures}}\;{\text{of}}\;{\text{Correlation}}\;2 = \left( {1 - \exp \left[ { - 2\left( {{\text{HXY}}2 - {\text{HXY}}} \right)} \right]} \right)^{1/2} $$
(17)

Where \(\mathrm{HXY}=-\sum_{k}\sum_{l}p\left(k,l\right)\mathrm{ log }(p\left(k,l\right))\), \(\mathrm{HX},\mathrm{ HY}\) are the entropies of px and py, \(\mathrm{HXY}1=-\sum_{k}\sum_{l}p\left(k,l\right)\mathrm{log}\left\{{p}_{x}(k){p}_{y}(l)\right\}\), \(\mathrm{HXY}2=-\sum_{k}\sum_{l}{p}_{x}(k){p}_{y}(l)\mathrm{log}\left\{{p}_{x}(k){p}_{y}(l)\right\}\)
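Under the same assumption of a normalized \(\mathrm{GLCM}\) `p`, Eqs. (16)–(17) can be sketched as follows (hypothetical helper; a small `eps` guards the logarithms and rounding):

```python
import numpy as np

def information_measures_of_correlation(p, eps=1e-12):
    """IMC1 and IMC2 (Eqs. 16-17) from a normalized GLCM p."""
    px = p.sum(axis=1)                 # marginal distribution p_x
    py = p.sum(axis=0)                 # marginal distribution p_y
    hx = -np.sum(px * np.log(px + eps))
    hy = -np.sum(py * np.log(py + eps))
    hxy = -np.sum(p * np.log(p + eps))
    outer = px[:, None] * py[None, :]  # p_x(k) * p_y(l)
    hxy1 = -np.sum(p * np.log(outer + eps))
    hxy2 = -np.sum(outer * np.log(outer + eps))
    imc1 = (hxy - hxy1) / max(hx, hy)
    # HXY2 >= HXY, so the argument below lies in [0, 1); clip guards rounding.
    imc2 = np.sqrt(max(0.0, 1.0 - np.exp(-2.0 * (hxy2 - hxy))))
    return imc1, imc2
```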

$$ 10.\quad {\text{Maximum}}\;{\text{correlation}}\;{\text{coefficient}} = \left({\text{second-largest}}\;{\text{eigenvalue}}\;{\text{of}}\;Q\right)^{1/2} \quad {\text{where}}\;Q(k,l) = \sum\nolimits_{m} {\frac{p(k,m)\,p(l,m)}{{p_{x} (k)\,p_{y} (m)}}} $$
(18)

\(\mathrm{GLCM}\) features representing visual texture features are presented in Table 2, which explains their mathematical notation, range, and discussion of these features.

Table 2 Discussion of the visual texture features

The pixel-based \(\mathrm{GLCM}\) techniques have several advantages and shortcomings listed as follows.

Advantages of GLCM

  • In the \(\mathrm{GLCM}\) based change detection technique, spectral and spatial information of the study area is obtained.

  • GLCM offers two different procedures for quantifying the image pixels: first, a moving window of dimensions such as \(3\times 3\) or \(5\times 5\) can be passed over the complete image; second, the texture features can be quantified numerically over all the image pixels at once.

  • GLCM can offer information about the image features in four categories based on human visual perception, statistical measures, entropy measures, and correlation information.

  • The future prospects of \(\mathrm{GLCM}\) can be understood from the fact that, whereas \(\mathrm{GLCM}\) could earlier provide information only for two-dimensional (\(2\mathrm{D}\)) surfaces, researchers and scientists have now developed procedures to calculate the \(\mathrm{GLCM}\) across three dimensions, known as 3D GLCM.

  • GLCM can calculate the pixel brightness of the image through different combinations of the image pixels.

  • GLCM originated in remote sensing applications, but today the \(\mathrm{GLCM}\) technique is also used in the analysis of Earth-scattering data to predict the possibility of “earthquakes” and “tsunamis.”

  • An essential advantage of the \(\mathrm{GLCM}\) is that its features can be obtained for the “single orientation and distance” along with “combination of directions and distances.”

Shortcoming of GLCM

Computation of the \(\mathrm{GLCM}\) is time consuming. The main problem during \(\mathrm{GLCM}\) calculation is the computational cost of forming pixel-to-pixel combinations over the image. This issue can be mitigated by combining \(\mathrm{GLCM}\) with the Sobel operator.

Background of the image classification technique

Image classification techniques are used to classify an image into several small objects or classes. These objects can be classes such as soil, urban, agriculture, plants, trees, or water. When an image is segregated using an image classification technique, all the essential areas of the image can be assigned to objects or classes. Which areas of the image are selected as objects depends upon the type of study. For example, Fig. 4a consists of a band-fused “phased array \(L\)-band synthetic aperture radar (\(\mathrm{PALSAR}\))” image of the Roorkee region of Uttarakhand, India. This image is classified using three different classification techniques and into four classes, i.e., bare soil (black), water (blue), urban (red), and agriculture (green). Thus, if only two classes (water and agriculture) are of interest, Fig. 4d suffices; information about three classes can be obtained from Fig. 4c, and about four classes from Fig. 4b. Performing an image classification and choosing the number of objects therefore depends entirely upon the application.

Fig. 4
figure 4

a Band fused \(\mathrm{PALSAR}\) data, b classified image with four prominent class, c classified image with three prominent class, and d classified image with two prominent class (Data Courtesy: Japanese Aerospace Exploration Agency, JAXA)

A comparison can be made when the pre- and post-event images of a natural phenomenon are classified into several classes; this type of image classification approach falls under object-based change detection (\(\mathrm{OBCD}\)) (Zhang et al. 2018). When two images are compared using \(\mathrm{OBCD}\) methodologies, they are compared based on standard classification measures, i.e., “user accuracy (\(\mathrm{UA}\)) (Tong and Feng 2020)”, “producer accuracy (\(\mathrm{PA}\)) (Tong and Feng 2020)”, “commission error (\(\mathrm{CE}\)) (Agariga et al. 2021),” “omission error (\(\mathrm{OE}\)) (Agariga et al. 2021),” “overall accuracy (\(\mathrm{OA}\)) (Tong and Feng 2020)”, and the “kappa coefficient (Kp) (Tong and Feng 2020).” Some of the prominent image classification techniques that use object formation to classify an image are “maximum likelihood classification (\(\mathrm{MLC}\))” (Soni et al. 2021), the “spectral angle mapper (\(\mathrm{SAM}\))” (Wang et al. 2021), the “support vector machine (\(\mathrm{SVM}\))” (Leonga et al. 2021), “minimum distance classification (\(\mathrm{MDC}\))” (Nie et al. 2021), “parallelepiped classification (PC)” (Kundu et al. 2021), and “spectral information divergence (SID)” (Hunt 2021). Brief details of these classification techniques and their methodologies are presented in the following subsections.

Parallelepiped classification (PC)

\(\mathrm{PC}\) uses a “decision rule” to classify multispectral, hyperspectral, or \(\mathrm{RGB}\) image data. Boundaries in the image data space are created through “n-dimensional parallelepipeds.” Figure 5 represents the classification of image data through parallelepiped classification; here, the image pixels are classified into four different groups, i.e., Class A, Class B, Class C, and unclassified pixels. While performing such classifications, pixels of one class may unintentionally merge with the pixels of another class. In this classification scheme, the user estimates the minimum and maximum pixel value (a range) corresponding to each band, expressed in terms of the “standard deviation” on either side of the “mean” of each feature. These values determine the extent of each parallelepiped, i.e., for Band A the range of the class is \(\mu A-2S\le PC\le \mu A+2S,\) and for Band B the range is \(\mu B-2S\le PC\le \mu B+2S,\) where \(S\) is the standard deviation of the image.

Fig. 5
figure 5

Pixel allotment to a particular class according to mean value
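A minimal sketch of the parallelepiped decision rule described above (hypothetical helper; per-band class means and standard deviations are assumed to come from training ROIs, and \(k=2\) mirrors the \(\mu \pm 2S\) range):

```python
import numpy as np

def parallelepiped_classify(pixels, means, stds, k=2.0):
    """pixels: (N, B) array of N pixels with B bands.
    means, stds: (C, B) per-class, per-band statistics from training ROIs.
    A pixel falls in class c when mean - k*std <= pixel <= mean + k*std
    in every band; pixels outside every box get label -1 (unclassified),
    and ties go to the first matching class."""
    lo = means - k * stds                      # (C, B) lower box corners
    hi = means + k * stds                      # (C, B) upper box corners
    inside = np.all((pixels[:, None, :] >= lo) &
                    (pixels[:, None, :] <= hi), axis=2)   # (N, C)
    return np.where(inside.any(axis=1), inside.argmax(axis=1), -1)
```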

Advantage of parallelepiped classification

  • This technique performs fast image classification.

  • This technique is suitable for non-normal distribution.

  • This technique can be applied over a limited land cover area.

Shortcoming of parallelepiped classification

  • Overlapping of classified classes is allowed in this classification technique, resulting in less accurate results.

  • In this classification technique, not all the pixels get classified.

  • One of the main problems with this algorithm arises when pixels lie spectrally far from the signature mean, which affects their classification.

Minimum distance classification (MDC)

The \(\mathrm{MDC}\) technique is employed to classify “unknown image data” into separate object classes, as presented in Fig. 6. The main objective of this approach is to minimize the distance between the unknown image data and the object classes. In other words, distance is used as the measure of similarity, so the minimum distance between two observations corresponds to the maximum similarity. The distance can be calculated with the assistance of Eq. (19).

$${d}_{{x}_{tp}{\mu }_{c}}=\sqrt{{({x}_{tp}-{\mu }_{c})}^{2}}$$
(19)

where \(({x}_{tp})\) is the test pixel and \(({\mu }_{c})\) represents the mean value of the classified object; for multiband data, the squared differences are summed over the bands.
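A corresponding minimum-distance sketch, generalizing Eq. (19) to \(B\) bands through the Euclidean norm (array names are hypothetical; class means are assumed to come from training ROIs):

```python
import numpy as np

def minimum_distance_classify(pixels, means):
    """pixels: (N, B); means: (C, B) class mean vectors.
    Each pixel is assigned to the class whose mean is nearest,
    i.e., Eq. (19) generalized to B bands via the Euclidean norm."""
    # (N, C) matrix of distances between every pixel and every class mean.
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)
```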

Fig. 6
figure 6

Unknown data allotment to a particular class

Advantage of minimum distance classification

  • All the regions of the n-dimensional space get classified under this classification scheme.

  • The main advantage of this classification scheme is that no overlapping of the image classes occurs during the image classification.

Shortcoming of minimum distance classification

  • In this classification scheme, spectral variability is assumed to be the same in all directions, producing false results.

Maximum likelihood classification (MLC)

This classification technique assumes that the statistics of each class in each band are normally distributed, as represented in Fig. 7. The methodology calculates the “probability that a particular pixel belongs” to each category, and the pixel is assigned to the object class having the “maximum probability” or “maximum likelihood.”

Fig. 7
figure 7

Pixel allotment in “Maximum likelihood image classification”

The mathematical relationship for establishing the maximum likelihood between the image pixels is expressed by Eq. (20).

$$ L_{p} \left( x \right) = \frac{1}{{\left( {2\pi } \right)^{n/2} \left| {\Sigma_{p} } \right|^{1/2} }}\exp \left[ { - \frac{1}{2}\left( {x - \mu_{p} } \right)^{T} \Sigma_{p}^{ - 1} \left( {x - \mu_{p} } \right)} \right] $$
(20)

where \(x\) represents the \(n\)-band image data, \({L}_{p}\left(x\right)\) is the “likelihood” that pixel \(x\) belongs to class \(p\), \({\mu }_{p}\) represents the “mean vector” of class \(p\), and \(\Sigma_{p}\) is the “variance-covariance matrix” of class \(p\).
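Equation (20) can be sketched with SciPy's Gaussian log-density, assuming per-class means and covariance matrices estimated from training ROIs (comparing log-likelihoods is equivalent and numerically safer):

```python
import numpy as np
from scipy.stats import multivariate_normal

def maximum_likelihood_classify(pixels, means, covs):
    """pixels: (N, B); means: list of C mean vectors; covs: list of C
    (B, B) covariance matrices. Assigns each pixel to the class with
    the highest Gaussian log-likelihood, i.e., the rule of Eq. (20)."""
    log_l = np.column_stack([
        multivariate_normal.logpdf(pixels, mean=m, cov=c)
        for m, c in zip(means, covs)
    ])                                  # (N, C) log-likelihoods
    return log_l.argmax(axis=1)
```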

Advantage of maximum likelihood classification

  • This scheme is assumed to be the most sophisticated as under this image classification scheme, good separation between different classes is obtained.

Shortcoming of maximum likelihood classification

  • Accurate classification requires intensive training data to describe the covariance and mean structure of the classes.

Spectral angle mapper (SAM)

In this classification technique, the spectrum of the created object is compared with an already known “reference spectrum.” As a result of \(\mathrm{SAM}\) classification, an image is obtained with the best match for each image pixel. The spectra are treated as “vectors originating from the origin,” and the similarity between them is analyzed through the angle they form. The length of a vector usually represents the reflection intensity, as illustrated in Fig. 8.

Fig. 8
figure 8

Representation of the spectral angle mapper corresponding to “Band A” and “Band B”

The spectral angle describes the difference between the spectra of Band A and Band B. An image is classified into several classes by evaluating the angle formed between the reference spectrum and the object’s spectrum. The angle between the two vectors is obtained from the inverse cosine of their normalized dot product and is expressed by Eq. (21).

$$ \alpha = \cos^{ - 1} \left[ {\frac{{\mathop \sum \nolimits_{k = 1}^{{{\text{nb}}}} t_{k} \,{\text{rp}}_{k} }}{{\left( {\mathop \sum \nolimits_{k = 1}^{{{\text{nb}}}} t_{k}^{2} } \right)^{1/2} \left( {\mathop \sum \nolimits_{k = 1}^{{{\text{nb}}}} {\text{rp}}_{k}^{2} } \right)^{1/2} }}} \right] $$
(21)

where \(\alpha \) (alpha) is the angle between the “two vectors,” \(\mathrm{nb}\) represents the “total number of spectral bands,” \({t}_{k}\) is the “target pixel” spectrum, and \({\mathrm{rp}}_{k}\) is the “reference pixel” spectrum.
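A minimal sketch of Eq. (21) (hypothetical function name; `target` and `reference` are spectra over the \(\mathrm{nb}\) bands):

```python
import numpy as np

def spectral_angle(target, reference):
    """Angle (radians) between a target and a reference spectrum, Eq. (21).
    Smaller angles indicate more similar spectra."""
    cos_alpha = np.dot(target, reference) / (
        np.linalg.norm(target) * np.linalg.norm(reference))
    # clip guards tiny floating-point excursions outside [-1, 1].
    return np.arccos(np.clip(cos_alpha, -1.0, 1.0))
```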

Advantage of spectral angle mapper

  • This method is considered a user-friendly and quick method to map the spectral similarity of the image spectra against a reference spectrum.

  • This technique produces good classification results even in the presence of the scaling noise.

Shortcoming of spectral angle mapper

  • Because this technique considers only the angle between vectors and ignores their magnitude, it fails to use information carried by the vector magnitude in many instances.

Spectral information divergence (SID)

\(\mathrm{SID}\) uses a “spectral classification approach” to compare image pixels with a reference spectrum, using a divergence measure as the comparison tool. A small divergence indicates that the pixels are similar; pixels whose divergence values lie above a predefined threshold are not classified under this approach. Spectra of the end members can be extracted directly from an image. The \(\mathrm{SID}\) method computes the “spectral similarity” based on the divergence between the probability distributions of two spectra. Let us assume a reference spectrum (\(R\)) and a test spectrum (\(T\)).

The distribution value for the reference spectra can be expressed by Eq. (22).

$$ d_{{{\text{vr}}}} = \frac{{R_{k} }}{{\mathop \sum \nolimits_{k = 1}^{C} R_{k} }} $$
(22)

The distribution value for the test spectra can be expressed by Eq. (23).

$$ d_{{{\text{vt}}}} = \frac{{T_{k} }}{{\mathop \sum \nolimits_{k = 1}^{C} T_{k} }} $$
(23)

The \(\mathrm{SID}\) corresponding to the reference and the test spectra is expressed by Eq. (24).

$$ {\text{SID}} = \mathop \sum \limits_{k = 1}^{C} d_{{{\text{vt}}}} \log \left( {\frac{{d_{{{\text{vt}}}} }}{{d_{{{\text{vr}}}} }}} \right) + \mathop \sum \limits_{k = 1}^{C} d_{{{\text{vr}}}} \log \left( {\frac{{d_{vr} }}{{d_{vt} }}} \right) $$
(24)
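Equations (22)–(24) can be sketched as follows (assuming strictly positive spectra so the logarithms are defined; the function name is hypothetical):

```python
import numpy as np

def spectral_information_divergence(r, t):
    """SID between a reference spectrum r and a test spectrum t (Eq. 24).
    Spectra are first normalized to probability distributions (Eqs. 22-23)."""
    d_vr = r / r.sum()   # distribution of the reference spectrum, Eq. (22)
    d_vt = t / t.sum()   # distribution of the test spectrum, Eq. (23)
    return (np.sum(d_vt * np.log(d_vt / d_vr)) +
            np.sum(d_vr * np.log(d_vr / d_vt)))
```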

Figure 9 represents the stacking of bands from Band 1 to Band 8 to develop a multiband image model using the \(\mathrm{SID}\) technique.

Fig. 9
figure 9

Band fusion in the spectral information divergence

Advantage of spectral information divergence

  • SID measures the amount of deviation by analyzing the probabilistic behavior of the pixels’ spectral signatures. This comparison is grounded in information theory, which is considered more effective in retaining spectral properties; through this methodology, the spectral similarity between two image pixels can also be measured.

Shortcoming of spectral information divergence

  • Although SID is considered an efficient image processing technique, a critical drawback is the variation in the “output results” due to changes in the “light intensity,” which affects the classification results.

Support vector machine (SVM)

The “support vector machine (\(\mathrm{SVM}\))” classification technique is based on “supervised learning for data analysis,” i.e., it uses a “machine learning approach,” and supports both image classification and regression analysis. \(\mathrm{SVM}\) follows the kernel principle to perform linear and nonlinear classification and regression. In this algorithm, two classes are separated by a hyperplane whose margin is defined by “support vector 1” and “support vector 2”; the pixels to be classified are situated on either side of the hyperplane, as represented in Fig. 10. This classification algorithm was developed by Hava Siegelmann and Vladimir Vapnik (Tiwari et al. 2021). It was initially developed for computer vision and pattern recognition but was later adopted in satellite remote sensing and image processing applications.

Fig. 10
figure 10

Hyperplane diagram for the support vector machine

The hypothesis function is expressed by Eq. (25).

$$H\left({L}_{x}\right)=\begin{cases}+1 & \text{if } z\cdot x+l\ge 0\\ -1 & \text{if } z\cdot x+l<0\end{cases}$$
(25)

Thus, the pixels “above the hyperplane” are classified as \(+1\), and the pixels “below the hyperplane” are classified as \(-1\).
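As a hedged sketch of \(\mathrm{SVM}\)-based pixel classification (assuming scikit-learn is available; the arrays below are labeled placeholders standing in for ROI training pixels, not the study-area data):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: ROI pixels (rows) with three band values each.
X_train = np.random.rand(500, 3)               # hypothetical training pixels
y_train = np.random.randint(0, 6, size=500)    # six classes, as in this study

# Kernel, C, and gamma are the parameters noted in the shortcomings below
# as needing careful tuning.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

labels = clf.predict(np.random.rand(1000, 3))  # classify placeholder pixels
```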

Advantage of the SVM

  • Performs effective classification in high-dimensional spaces compared to the \(K\)-nearest neighbor algorithm.

  • It remains effective in cases where the number of dimensions is greater than the number of samples.

  • SVM is a versatile technique having different functions to be specified for the decision function.

  • When the separation margin between the classes is clear, this methodology works most efficiently.

Shortcoming of SVM

  • Several parameters, such as the kernel, \(C\), and gamma, need to be set correctly in the \(\mathrm{SVM}\) classification approach to obtain better classification results, so several parameters must be tuned simultaneously for a good classification.

  • SVM does not provide probability estimates directly; these require a “fivefold cross-validation process” to compute.

  • A good classification result is challenging to obtain through this methodology when the dataset contains extreme noise.

Background of Skysat satellite program and details of the study area

In today’s world, several governments and private industries compete to extract massive and vital information about the Earth. Several government space agencies are actively researching the origin of the novel coronavirus and its impact on the daily activity of ordinary human beings. Currently, six different space agencies, the “European Space Agency (\(\mathrm{ESA}\)) (Wörner 1975),” “National Aeronautics and Space Administration (\(\mathrm{NASA}\)) (Dunbar 1958),” “Japan Aerospace Exploration Agency (\(\mathrm{JAXA}\)) (Yamakawa 2003),” “Russian Federal Space Agency (\(\mathrm{RFSA}\) or Roscosmos) (Government 1992),” “China National Space Administration (\(\mathrm{CNSA}\)) (Kejian 1993),” and “Indian Space Research Organisation (\(\mathrm{ISRO}\)) (Sarabhai 1969),” are working in the field of satellite launch and satellite recovery; these agencies have their own satellite launch capacities. Besides these, several other government-sponsored space agencies, such as the “Canadian Space Agency (François-Philippe 1989),” “UK Space Agency (Annett 2010),” and “Australian Space Agency (Palermo 2018),” are actively working in the field of Earth exploration and remote sensing. Some prominent private players in space exploration include “SpaceX (Musk 2002),” “Boeing (Calhoun 1916),” “Sierra Nevada Corporation (Corporation 1963),” and “Orbital (Thompson 1982).” Skysat is a commercial microsatellite series from Skybox Imaging. These Earth observation satellites were developed to collect high-resolution multispectral and panchromatic images of the Earth's surface. The constellation consists of 21 satellites dedicated to Earth imaging, owned by the private firm Planet. The first satellite, Skysat-1, was launched on 21 November 2013, and Skysats 19–21 were launched on 18 August 2020 (Marshall et al. 2010). The orbits of Skysats 1–15 are sun-synchronous, whereas those of Skysats 16–21 are non-sun-synchronous. Skysats 1–2 have an orbital altitude of 600 km, Skysats 3–15 of 500 km, and Skysats 16–18 of 400 km. The sensors installed on these satellites operate at spectral bandwidths of blue 450–515 nm, green 515–595 nm, red 605–695 nm, NIR 740–900 nm, and \(\mathrm{PAN}\) 450–900 nm (Marshall et al. 2010).

In this research work, two panchromatic images, pre-COVID and post-COVID (during lockdown), were obtained from the Skysat image database under the “research and training program” (Marshall et al. 2010). The pre-COVID and post-COVID images of Connaught Place, New Delhi, investigated in this research work are represented in Fig. 11d, e; they were acquired on 30 April 2019 and 14 April 2020, respectively (Marshall et al. 2010). The study area, Connaught Place, is popularly known as “Rajiv Chowk” (Hazarika et al. 2015). It is the leading financial and commercial center of the national capital of India and is located at the heart of the capital, at latitude 28°37′58′ N and longitude 77°13′11′ E (Hazarika et al. 2015). The area of Connaught Place is 2.36 km2. Rapid urbanization in Connaught Place has increased energy consumption and traffic density. Moreover, the \({\mathrm{PM}}_{2.5}\) level of Connaught Place is the highest in the national capital region (Shukla et al. 2020), and its concentration has touched 999 μg/m3 during the worst periods (Mukherjee et al. 2020). The temperature of Connaught Place rises to 45 °C during the summer months of April-June and falls to 8 °C during the winter months of December-January. All these features significantly affect image classification and texture quantification.

Fig. 11
figure 11

Earthly location of the Connaught Place, New Delhi (Study area)

Experimental results

Change estimation through PBCD GLCM based technique

In this investigation, three different band images of the study area are fused with the layer-stacking technique available in ENVI 5.2. The layer-stacking approach generates a band-fused \(\mathrm{RGB}\) image of the study area. The final image of the study area contains all the essential bands, and the \(\mathrm{GLCM}\) method is then applied. Figure 12a presents the \(\mathrm{RED}\)-band pre-COVID image of the study area, Fig. 12b the \(\mathrm{GREEN}\)-band image, and Fig. 12c the \(\mathrm{BLUE}\)-band image. Finally, all the bands are fused to create the \(\mathrm{RGB}\) image of the study area presented in Fig. 12d.

Fig. 12
figure 12

Pre-COVID image of the study area a \(\mathrm{RED}\) Band, b \(\mathrm{GREEN}\) Band, c \(\mathrm{BLUE}\) Band, and d Stacked (\(\mathrm{RGB}\))

The \(\mathrm{RGB}\) image of the study area is converted to a grey-level image, represented in Fig. 13, to identify the changes in the study area. The pixel count is \(2400\) pixels along the X-axis and \(1800\) pixels along the Y-axis, so the total pixel count in the image is \(4{,}320{,}000\) pixels. Through the \(\mathrm{GLCM}\) approach, changes can be identified only for the image as a whole; no separate object-based information is provided by \(\mathrm{GLCM}\). Figure 13a presents the grey-level representation of the band-fused pre-COVID image of the study area. The histogram signature plot of the band-fused image represents the frequency distribution of the intensity values over the \(4.32\times {10}^{6}\) image pixels, illustrated in Fig. 13b. The visual texture features for the pre-COVID image are quantified and presented in Table 3; the feature values are calculated over the total image pixels. Texture features are quantified for four orientation angles, i.e., 0°, 45°, 90°, and 135°, and eight different distances, \(d=1\) to \(d=8\). The texture features are then averaged to make the \(\mathrm{GLCM}\) “direction independent.”

Fig. 13
figure 13

Pre-COVID image. a Gray level representation. b Histogram signature plot

Table 3 Quantification of the \(\mathrm{GLCM}\) features for pre-COVID image

The texture features contrast, correlation, \(\mathrm{ASM}\), and \(\mathrm{IDM}\) are plotted to observe their specific behavior. For 0° and the eight distances \(d=1\) to \(d=8\), it is observed that “contrast” increases sharply with distance. The correlation feature presents a sharp decline in value. The energy (\(\mathrm{ASM}\)) feature also shows a reduction in value, but the decline is very slow. Finally, the \(\mathrm{IDM}\) feature also presents a sharp decline in value, similar to the decline rate of the correlation feature, as represented in Fig. 14a.

Fig. 14
figure 14

Comparison of the texture features for pre-COVID image at different distances. a Pre-COVID (0°). b Pre-COVID (45°). c Pre-COVID (90°). d Pre-COVID (135°)

Similarly, the \(\mathrm{GLCM}\) feature behavior is presented for 45° in Fig. 14b, for 90° in Fig. 14c, and for 135° in Fig. 14d. It is observed that the quantified \(\mathrm{GLCM}\) features follow the same pattern for the different orientation angles. This analysis reveals additional information about \(\mathrm{GLCM}\): the approach can also be used for pattern recognition. \(\mathrm{GLCM}\) is mainly considered a texture classification approach, but it provides information about the image texture as a whole; specific details about any classified object cannot be identified from \(\mathrm{GLCM}\). Similarly, the post-COVID image of the study area is explored, and texture feature quantification and feature plotting are performed to obtain new information.

The post-COVID image analysis of the study area is presented in Fig. 15. Figure 15a represents the \(\mathrm{RED}\)-band image, Fig. 15b the \(\mathrm{GREEN}\)-band image, and Fig. 15c the \(\mathrm{BLUE}\)-band image of the study area. All the images are fused to obtain a multiband \(\mathrm{RGB}\) image of the study area. A general expectation for the post-COVID image is that little or no traffic will be observed on the streets; owing to the continuous lockdown, the air quality is also expected to have improved. It is therefore interesting to observe the behavior of the texture features for the post-COVID image. The layer-stacking technique available in ENVI 5.2 is adopted to perform band fusion and develop the \(\mathrm{RGB}\) image of the study area. Figure 15d represents the band-fused post-COVID image.

Fig. 15
figure 15

Post-COVID image of the study area. a \(\mathrm{RED}\) Band, b \(\mathrm{GREEN}\) Band, c \(\mathrm{BLUE}\) Band, and d Stacked (\(\mathrm{RGB}\)) band

The grey-level representation of the post-COVID study area is presented in Fig. 16. Figure 16a represents the post-COVID image of the study area, which looks visually similar to the pre-COVID image; no significant change can be identified visually. Differences in the study area can nevertheless be obtained by quantifying the texture features and by examining the changes in the histogram signature plot. The histogram signature plot in Fig. 16b visually represents the changing pattern of the surface of the study area. The quantification of the texture features for the post-COVID image is presented in Table 4.

Fig. 16
figure 16

Post-COVID image. a Gray level representation. b Histogram signature plot

Table 4 Quantification of the \(\mathrm{GLCM}\) feature for post-COVID image

The texture features for the post-COVID image appear similar to those of the pre-COVID image, as shown in Fig. 17. The texture feature “contrast” shows an increasing pattern for 0°, 45°, 90°, and 135°. The correlation feature shows a decreasing pattern for all four orientations, and its rate of decrease is high compared to the remaining texture features. The \(\mathrm{ASM}\) feature shows the smallest rate of decrease. Finally, the \(\mathrm{IDM}\) feature also decreases for all four directions. When \(\mathrm{GLCM}\) is used to compare two multiband images, the change developed between the two images may sometimes be identified visually, and histogram signature plotting is one method to determine whether changes have occurred; however, the most important procedure for detecting the developed changes is the comparison of the averaged texture features. As discussed earlier, the \(\mathrm{GLCM}\) features are averaged to make the \(\mathrm{GLCM}\) direction independent. Figure 18a compares the texture features for the pre-COVID and post-COVID images. It can be seen that “contrast” attains the highest peak value: contrast is highest for the post-COVID image at 0°, 45°, and 135°, while at 90° the contrast of the pre-COVID image is higher. Correlation attains a higher value for the pre-COVID image at all orientations. \(\mathrm{ASM}\) attains a higher value for the post-COVID image in all directions, and \(\mathrm{IDM}\) likewise attains a higher value for the post-COVID image at all orientations. Thus, the texture features attain higher values for the post-COVID image in most cases. Figure 18b represents the change in the texture features toward the positive or negative end: the differences between the pre-COVID and post-COVID texture features are positive on five occasions and negative on eleven occasions. This also suggests that the post-COVID image texture features attain higher values than those of the pre-COVID image.

Fig. 17
figure 17

Comparison of the texture features for post-COVID image at different distances. a Post-COVID (0°). b Post-COVID (45°). c Post-COVID (90°). d Post-COVID (135°)

Fig. 18
figure 18

a Change in the texture features corresponding to pre-COVID and post-COVID image. b Change pattern analysis of the feature values

Change analysis through object-based image classification techniques

When image classification is performed through object-based techniques, a particular area is classified into several objects. These objects are geographical features of the study area such as water, land, soil, and trees. Some of the prominent measures associated with object-based image classification are \(\mathrm{UA}\), \(\mathrm{PA}\), \(\mathrm{CE}\), \(\mathrm{OE}\), \(\mathrm{OA}\), and Kp. The procedure for calculating these classification measures can be understood through the example confusion matrix represented in Fig. 19. Here, a confusion matrix with an arbitrary image classification is assumed, in which a random image is classified into four different classes: water, vegetation, urban, and soil.

Fig. 19
figure 19

Test data for the image classification

Now different parameters corresponding to the image classification are computed as follows.

Commission error (\(\mathrm{CE}\)) is generated when pixels of other classes are wrongly included in the class under observation. \(\mathrm{CE}\) for the classified categories is expressed by Eqs. (26)–(29).

$$ {\text{CE}}\;\left( {{\text{water}}} \right) = \left( {9 + 9 + 11} \right)/64 = 29/64 = 0.4531 = 45.31\% $$
(26)
$$ {\text{CE}}\;\left( {{\text{vegetation}}} \right) = \left( {8 + 7 + 9} \right)/68 = 24/68 = 0.3529 = 35.29\% $$
(27)
$$ {\text{CE}}\; \left( {{\text{urban}}} \right) = \left( {9 + 5 + 14} \right)/190 = 38/190 = 0.20 = 20\% $$
(28)
$$ {\text{CE}}\;\left( {{\text{soil}}} \right) = \left( {11 + 8 + 7} \right)/85 = 26/85 = 0.3058 = 30.58\% $$
(29)

Omission error (\(\mathrm{OE}\)) refers to pixels of the investigated class that are wrongly omitted from it. \(\mathrm{OE}\) for the classified classes is expressed by Eqs. (30)–(33).

$$ {\text{OE}}\;\left( {{\text{water}}} \right) = \left( {8 + 9 + 11} \right)/63 = 28/63 = 0.4444 = 44.44\% $$
(30)
$$ {\text{OE}}\;\left( {{\text{vegetation}}} \right) = \left( {9 + 15 + 8} \right)/76 = 32/76 = 0.4210 = 42.10\% $$
(31)
$$ {\text{OE}}\;\left( {{\text{urban}}} \right) = \left( {9 + 7 + 7} \right)/175 = 23/175 = 0.1314 = 13.14\% $$
(32)
$$ {\text{OE}}\;\left( {{\text{soil}}} \right) = \left( {11 + 9 + 14} \right)/93 = 34/93 = 0.3655 = 36.55\% $$
(33)

User accuracy (\(\mathrm{UA}\)) is defined as the “accuracy” from the perspective of the “map user,” not from the perspective of the “map maker.” \(\mathrm{UA}\) for the classified classes is expressed by Eqs. (34)–(38).

$$ {\text{User}}\;{\text{accuracy}} = 100\% - {\text{commission}}\;{\text{error}} $$
(34)
$$ {\text{UA}}\;\left( {{\text{water}}} \right) = 100 - {\text{CE}}\;\left( {{\text{water}}} \right) = 100 - 45.31 = 54.69\% $$
(35)
$$ {\text{UA}}\;\left( {{\text{vegetation}}} \right) = 100 - {\text{CE}}\;\left( {{\text{vegetation}}} \right) = 100 - 35.29 = 64.71\% $$
(36)
$$ {\text{UA}}\;\left( {{\text{urban}}} \right) = 100 - {\text{CE}}\;\left( {{\text{urban}}} \right) = 100 - 20 = 80.00\% $$
(37)
$$ {\text{UA}}\;\left( {{\text{soil}}} \right) = 100 - {\text{CE}}\;\left( {{\text{soil}}} \right) = 100 - 30.58 = 69.42\% $$
(38)

Producer accuracy (\(PA\)) is defined as the “accuracy” from the perspective of the “map maker,” not from the perspective of the “map user.” \(\mathrm{PA}\) concerning the classified classes is expressed by Eqs. (39)–(43).

$$ {\text{Producer}}\;{\text{accuracy}} = 100\% - {\text{omission}}\;{\text{error}} $$
(39)
$$ {\text{PA}}\;\left( {{\text{water}}} \right) = 100 - {\text{OE}}\;\left( {{\text{water}}} \right) = 100 - 44.44 = 55.56\% $$
(40)
$$ {\text{PA}}\;\left( {{\text{vegetation}}} \right) = 100 - {\text{OE}}\;\left( {{\text{vegetation}}} \right) = 100 - 42.10 = 57.90\% $$
(41)
$$ {\text{PA}}\;\left( {{\text{urban}}} \right) = 100 - {\text{OE}}\;\left( {{\text{urban}}} \right) = 100 - 13.14 = 86.86\% $$
(42)
$$ {\text{PA}}\;\left( {{\text{soil}}} \right) = 100 - {\text{OE}}\;\left( {{\text{soil}}} \right) = 100 - 36.55 = 63.45\% $$
(43)

Overall accuracy (\(OA\)) can be understood as a ratio between the “correctly classified pixels” to the “total number of pixels” present in the image. It is expressed by Eq. (44).

$$ {\text{Overall accuracy}}\;({\text{OA}}) = \frac{{{\text{Correctly}}\;{\text{classified}}\;{\text{pixels}}}}{{{\text{Total}}\;{\text{numbers}}\;{\text{of}}\;{\text{the}}\;{\text{image}}\;{\text{pixels}}}} $$
(44)

\(\mathrm{OA}\) for the confusion matrix presented in Fig. 19 is calculated as \(\mathrm{OA}=\frac{35+44+152+59}{407}=\frac{290}{407}=0.7125=71.25\%\); thus, the overall classification accuracy for the presented confusion matrix is \(71.25\%.\)

Finally, Kp is calculated to assess the classification accuracy; it measures how well the classification of the study area dataset has been performed. The range of Kp is \([-1, 1]\). Mathematically, it is expressed by Eq. (45).

$$ {\text{Kappa}}\;{\text{coefficient}} = \frac{{{\text{probability}} \left( O \right) - {\text{probability}} \left( e \right)}}{{1 - {\text{probability}} \left( e \right)}} = 1 - \frac{{1 - {\text{probability}} \left( O \right)}}{{1 - {\text{probability}} \left( e \right)}} $$
(45)

where \(\mathrm{probability}\left(O\right)\) is the “observed agreement” among the classification raters, which is identical to the overall accuracy, and \(\mathrm{probability}\left(e\right)\) is the “theoretical probability of chance agreement.” While performing image classification using Kp, the following assumptions are kept in consideration.

If Kp lies close to \(-1\), the classification is assumed to be very poor.

If Kp lies close to 0, the classification is random.

If Kp lies close to \(+1\), the classification is assumed to be highly realistic and close to accurate.
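All of Eqs. (26)–(45) follow mechanically from a confusion matrix; the sketch below (NumPy only, assuming rows are reference classes and columns are classified classes; the function name is hypothetical) reproduces the per-class accuracies, \(\mathrm{OA}\), and Kp:

```python
import numpy as np

def accuracy_metrics(cm):
    """cm: (C, C) confusion matrix, rows = reference, columns = classified.
    Returns user/producer accuracy per class, overall accuracy, and kappa."""
    cm = cm.astype(float)
    total = cm.sum()
    diag = np.diag(cm)
    user_acc = diag / cm.sum(axis=0)   # 1 - commission error, per class
    prod_acc = diag / cm.sum(axis=1)   # 1 - omission error, per class
    overall = diag.sum() / total       # Eq. (44): probability(O)
    # Chance agreement probability(e) from the row and column marginals.
    p_e = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2
    kappa = (overall - p_e) / (1 - p_e)  # Eq. (45)
    return user_acc, prod_acc, overall, kappa
```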

In early \(2020\), the coronavirus was assumed to have spread across the entire world. In the \(\mathrm{USA}\), during early January \(2020\), the coronavirus was considered to have limited spread (Jorden et al. 2020). In China, cases related to the novel coronavirus began to be reported from late 2019 (Xu et al. 2020). Issues related to the coronavirus were highlighted in early \(2020\), and the government of South Africa imposed a lockdown by March \(2020\) (Atangana and Araz 2020). Figure 20 presents the geographical locations of some prominent places before and during the lockdown. The pre-lockdown images represent the normal day-to-day activities of the people, whereas a sudden stop in everyday activities is observed during the lockdown, i.e., no people on the roads, empty streets, no industrial activities, etc.

Fig. 20
figure 20

Pre- and post-COVID Skysat images of various locations, pre-COVID. a Beijing, China (12 April 2020), c Omdurman, Sudan (23 April 2020), e Tyson Foods, Washington, USA (30 April 2020), g Johannesburg, South Africa (27 September 2019); post-COVID. b Beijing, China (12 April 2020), d Omdurman, Sudan (23 April 2020), f Tyson Foods, Washington, USA (30 April 2020), h Johannesburg, South Africa (27 September 2019)

The pre- and post-COVID images of the study area are classified using six different image classification techniques, i.e., \(\mathrm{PC}\), \(\mathrm{MDC}\), \(\mathrm{MLC}\), \(\mathrm{SAM}\), \(\mathrm{SID}\), and \(\mathrm{SVM}\). The image classification is performed based on six different classes, i.e., buildings, trees, roads, grasslands, metro, and cars. Two different sets of \(\mathrm{ROIs}\) are created for classifying the images: \(\mathrm{ROI}\) set 1 contains image pixels for training, and \(\mathrm{ROI}\) set 2 contains image pixels for the accuracy assessment. Table 5 presents details of the pixel counts collected for training and accuracy assessment of the image data. The pre-COVID classification results of the study area are shown in Fig. 21.

Table 5 Pixel counts for training and accuracy assessment of study area
Fig. 21
figure 21

Pre-COVID image classification. a “Parallelepiped classification”, b “Minimum distance classification”, c “Maximum likelihood classification”, d “Spectral angle mapper”, e “Spectral Information Divergence”, f “Support Vector Machine”

The image classification features for different classification schemes are tabulated in Table 6.

Table 6 Classification features for the pre-COVID classified image

The image classification of the post-COVID study area is performed and presented in Fig. 22. The image classification features for different classification schemes obtained for the post-COVID image are tabulated in Table 7.

Fig. 22
figure 22

Post-COVID image classification. a “Parallelepiped classification”, b “Minimum distance classification”, c “Maximum likelihood classification”, d “Spectral angle mapper”, e “Spectral Information Divergence”, f “Support Vector Machine”

Table 7 Classification features for the post-COVID classified image

The classification of the pre-COVID and post-COVID images through the different object-based techniques visually presents the change in the study area. For the pre-COVID image, the \(\mathrm{MLC}\) scheme shows the most satisfactory results, both visually and numerically, and for the post-COVID image the \(\mathrm{MLC}\) scheme again produces the most reliable results: in both cases, most of the classes appear in the \(\mathrm{MLC}\) classification. In the \(\mathrm{PC}\) technique, the building, tree, and road classes are most dominant, while the other classes remain inactive during the classification. In the pre-COVID image classified through the \(\mathrm{MDC}\) technique, four classes are prevalent, i.e., trees, cars, roads, and buildings.

In contrast, three classes are dominant in the post-COVID classified image, i.e., trees, roads, and buildings. In the pre-COVID image classified through the \(\mathrm{SAM}\) technique, four classes are most prevalent, i.e., buildings, grassland, trees, and roads, whereas only three classes are dominant in the post-COVID classified image, i.e., buildings, trees, and roads. The \(\mathrm{SID}\) technique has produced the most erratic results: in the pre-COVID image the class “car” is most dominant, whereas in the post-COVID image the class “building” is dominant. This scheme has produced the worst classification results overall. Finally, through the \(\mathrm{SVM}\) classification scheme, buildings, trees, and roads are the most prevalent classes in the pre-COVID classified image, while trees, cars, and roads appear as the most dominant classes in the post-COVID image.

The linear fit of degree (1) between the kappa coefficient (Kp) and the overall accuracy (\(\mathrm{OA}\)) is presented in Fig. 23. The goodness of fit is \(R^{2}\ge 0.99\) for both the pre-COVID and post-COVID images, suggesting that the overall classification accuracy is directly proportional to Kp. One important conclusion derived from this experiment is that \(\mathrm{MLC}\) has emerged as the most reasonable classification scheme, with superior accuracy and kappa coefficient.

Fig. 23

Degree (1) linear relationship between kappa coefficient and overall accuracy. a Pre-COVID. b Post-COVID
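
To make this degree (1) fit concrete, the short sketch below fits \(\mathrm{OA}\) against Kp with numpy.polyfit and computes \(R^{2}\); the six (Kp, \(\mathrm{OA}\)) pairs are hypothetical placeholders, not the values plotted in Fig. 23.

import numpy as np

# Hypothetical (Kp, OA) pairs for six classifiers; placeholders only,
# not the values plotted in Fig. 23.
kp = np.array([0.48, 0.57, 0.79, 0.60, 0.33, 0.71])
oa = np.array([61.2, 68.0, 83.7, 70.1, 49.5, 78.4])  # overall accuracy, %

slope, intercept = np.polyfit(kp, oa, 1)  # linear fit of degree (1)
residuals = oa - (slope * kp + intercept)
r2 = 1.0 - residuals.var() / oa.var()     # coefficient of determination
print(f"OA = {slope:.1f} * Kp + {intercept:.1f}, R^2 = {r2:.3f}")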

Discussion

In this research work, an innovative combination of \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) techniques is presented. \(\mathrm{GLCM}\) has emerged as an effective technique that captures the “statistical and spectral arrangement” of the image pixels; in simple terms, \(\mathrm{GLCM}\) offers information about the changes occurring inside the image, i.e., the spectral and spatial arrangement of its pixels. The \(\mathrm{OBCD}\) techniques have emerged as an ideal method to represent visually the changes developed in the study area. Here, users can create an object (class) of interest and compare the pre- and post-image specifications based on the same object. Through the \(\mathrm{OBCD}\) technique, not only can classes be created, but the accuracy of the classification can also be quantified. Thus, it is observed that both the \(\mathrm{PBCD}\)-based \(\mathrm{GLCM}\) approach and the \(\mathrm{OBCD}\) techniques provide useful information for image classification; whether one or both techniques are used depends on the application. In this research, a fusion methodology of the \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) techniques is presented to extract maximum information from the study area. Finally, based on the experimental results, a model is developed to extract a full report of the study area, as shown in Fig. 24. It is also expected that the proposed model will work efficiently with other types of images, including multispectral and hyperspectral images.

Fig. 24

Proposed \(\mathrm{PBCD}\) and \(\mathrm{OBCD}\) fusion model

The presented image analysis model includes the \(\mathrm{PBCD}\)-based \(\mathrm{GLCM}\) approach, which provides complete information about the “spectral and spatial arrangement” of the image. In the \(\mathrm{OBCD}\) process, the classification methodology with superior accuracy and maximum \(Kp\) is selected. Thus, all the features obtained from \(\mathrm{GLCM}\) and \(\mathrm{OBCD}\) together assist in understanding an event.
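
A minimal sketch of this selection step, assuming the per-scheme accuracy figures have already been computed; the (\(\mathrm{OA}\), Kp) numbers in the dictionary are hypothetical placeholders, not the results of Tables 6 and 7.

# Hypothetical (OA %, Kp) results per OBCD scheme; placeholders only.
results = {
    "PC":  (61.2, 0.48), "MDC": (68.0, 0.57), "MLC": (83.7, 0.79),
    "SAM": (70.1, 0.60), "SID": (49.5, 0.33), "SVM": (78.4, 0.71),
}

# Keep the scheme with the maximum kappa coefficient, as the model
# prescribes; ties on Kp could be broken by OA.
best = max(results, key=lambda scheme: results[scheme][1])
oa, kp = results[best]
print(f"Selected OBCD scheme: {best} (OA = {oa}%, Kp = {kp})")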

The texture classification of the study area presented in this work provides information about the specific changes caused by the COVID lockdown. These changes are represented in the histogram signature plots of the pre-COVID and post-COVID images. The \(\mathrm{GLCM}\) features have also revealed a pattern of statistical variation in which \(\mathrm{ASM}\) has shown an exponential increase in its feature values, whereas correlation, energy, and \(\mathrm{IDM}\) have shown a fall in theirs. The role of the \(\mathrm{OBCD}\) technique in change identification is remarkable; however, selecting an appropriate classification algorithm depends upon several factors, such as the nature of the study area. If the spatial resolution of the image is low (the study area has a small number of image pixels), the user has a clear view of the classification classes, and good classification accuracy is expected from approaches such as \(\mathrm{SVM}\) and \(\mathrm{MLC}\); likewise, the Kp will also have a high value. If, instead, the study area image has high resolution with a large number of image pixels, selecting an appropriate number of pixels for a particular class is difficult, so a lower overall classification accuracy along with a lower Kp will be obtained. The \(\mathrm{OBCD}\) technique is quite effective in representing the visual difference of the developed changes: all the classified objects are visible and easily distinguishable through \(\mathrm{OBCD}\) schemes. Thus, the fusion of the \(\mathrm{GLCM}\) and \(\mathrm{OBCD}\) techniques is desirable for cases where both the internal and external information of the study area is required.
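
For the \(\mathrm{PBCD}\) side of the fusion, the sketch below quantifies the \(\mathrm{GLCM}\) features over eight distances and four orientations using scikit-image; note that scikit-image exposes \(\mathrm{IDM}\) under the name “homogeneity,” and this toolchain is an assumption for illustration rather than the software used in this study.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_8bit):
    # Texture features for 8 distances x 4 orientations on an 8-bit image.
    distances = list(range(1, 9))                      # 8 pixel offsets
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 deg
    glcm = graycomatrix(image_8bit, distances, angles,
                        levels=256, symmetric=True, normed=True)
    # Each property returns an (8, 4) array: one value per (distance, angle).
    # scikit-image's "homogeneity" corresponds to IDM.
    props = ("contrast", "correlation", "ASM", "energy", "homogeneity")
    return {p: graycoprops(glcm, p) for p in props}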

Conclusion

This research work presents two different methodologies fused to obtain maximum information from the data of interest. Firstly, quantification of texture features based on the “grey level co-occurrence matrix (\(\mathrm{GLCM}\))” technique is performed. In the second step, image classification based on “object-based change detection (\(\mathrm{OBCD}\))” methods visually represents the transformation developed in the study area due to the COVID lockdown. Pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, are analyzed to develop an accurate model for the study area. Texture classification of the images is performed based on visual texture features for eight distances and four orientations. Six image classification methodologies are used for classifying the study area: “Parallelepiped classification (\(\mathrm{PC}\)),” “Minimum distance classification (\(\mathrm{MDC}\)),” “Maximum likelihood classification (\(\mathrm{MLC}\)),” “Spectral angle mapper (\(\mathrm{SAM}\)),” “Spectral Information Divergence (\(\mathrm{SID}\)),” and “Support Vector Machine (\(\mathrm{SVM}\)).” Quantification of the \(\mathrm{GLCM}\) features, i.e., contrast, correlation, \(\mathrm{ASM}\), and \(\mathrm{IDM}\), has revealed a distinctive pattern of texture variation. The \(\mathrm{OBCD}\)-based \(\mathrm{MLC}\) technique has provided maximum classification accuracies of \(83.68\%\) and \(73.65\%\) for the pre-COVID and post-COVID image data, respectively. Finally, a model based on the above investigation is presented for analyzing before- and after-COVID images. The model follows a two-step methodology with a final fusion of the obtained information to produce complete numerical and visual information about the study area.