1 Introduction

The collateral effects brought forth by rapid digitisation have had far-reaching and permeating consequences. Digitisation has led to the appropriation of technology across many industries in an effort to modernise, improving production efficiency and the quality of services rendered to customers. In addition to revamping existing industries, it has also led to the rise and development of new industries, a prominent one being telemedicine. The multitude of restrictions brought forth by COVID-19 has necessitated the effective rendering of services while minimising physical contact. While initially presenting itself as an unavoidable requirement, telemedicine is slowly positioning itself as a service of convenience, and it has bloomed to meet these requirements. Telemedicine can be regarded as the remote dispensation of medical diagnosis [6]. Medical information, usually condensed in the form of a patient record, is exchanged between medical staff in order to ascertain a patient’s condition. This must be done in a secure manner so that the privacy and security of the patient are maintained. Cryptography is one mechanism that can be used to secure information during transmission and storage, preventing unauthorised third parties from reading the data [13]. Cryptography prescribes a series of steps, in the form of mathematical algorithms, which encrypt the data. Many such algorithms have risen to prominence, and information security as a whole was bolstered by their inception; a few prominent ones are the Advanced Encryption Standard (AES) [24], the Data Encryption Standard (DES) [17] and Triple DES [25]. These algorithms have been widely used to encrypt data, particularly textual data, with satisfactory results. For images, they perform somewhat worse than for textual data, requiring several rounds of operation. In the context of medical images in particular, the algorithms do not perform as admirably, owing to the high prevalence of redundancy, general bulkiness and high correlation of such images.

In order to combat these inadequacies presented by the more ‘traditional’ approaches, several other techniques have been developed to secure medical images. Another consequence of digitisation is the rise in the processing power of computers, which has led to the development of more complex, efficient and powerful encryption schemes, extending support to bulky data such as medical images. A well-tested mechanism to impart security by encryption is to exploit the pseudo-random nature of chaotic maps and attractors. Chaos maintains a robust degree of randomness, complemented by its high sensitivity to initial conditions, and chaotic systems also exhibit a certain degree of ergodicity and quasi-randomness [2].

M. Benssalah et al. have proposed a chaos-based elliptic curve cryptosystem for the encryption of medical images in particular. The elliptic curve cryptosystem was revealed to perform similarly to traditional systems such as RSA, with significantly smaller key sizes, rendering it suitable for resource-constrained devices like FPGAs [5]. Wenting Yuan et al. proposed a digital hyper-chaos system as part of a double encryption scheme, where an image is encrypted in both its time and frequency domains, resulting in a significantly large key space [27]. Y. Wang et al. were successful in implementing a modified RSA approach, through the use of a semiconductor chaos circuit, for a voice encryption system [26]. A combination of wavelet transforms and a chaos-based system was proposed in [4]. Following this theme is the implementation, developed by M. A. Al-Jabbar et al., for securing data pertaining to robotic movement. This is done by simulating and shuffling keyframe points as defined by a cellular automaton, yielding sufficient strength against static and brute-force attacks [16]. Cellular automata (CA) based schemes have also seen use as a reliable security mechanism in niche fields such as IoT: a lightweight CA is used in medical IoT devices to encrypt data, as it does not generate redundant data, thereby lifting the additional constraint placed on memory- and resource-limited devices [12]. Similar to the proposed work, but with a varying degree of complexity, is a hardware-based implementation of a hybridised cellular automata scheme, offered as an alternative to the traditional AES with an emphasis on comparing operational frequency, logic element utilisation and overall power consumption [9].

Software implementations of the various encryption schemes mentioned above tend to be easier to implement, considering the wide array of software development kits available. However, in terms of execution complexity, such algorithms may require higher power and longer processing times than their hardware-based counterparts. A worthy alternative to a purely software-based solution lies in the Application Specific Integrated Circuit (ASIC). These devices possess a decisive advantage over software implementations due to their inherent capacity for parallel processing. Their main disadvantage lies in the inflexibility to alter and improve upon an existing design as seamlessly as one would in software. As such, to overcome the roadblocks posed by these existing infrastructures, the proposed scheme for medical image encryption has been implemented on a Field Programmable Gate Array (FPGA). FPGAs are flexible in terms of their reconfigurability and generally consume less design time. They also feature access to certain proprietary IP cores, which can significantly bolster the tools available on hand. The inherent nature of a hardware mechanism is that it serves as an additional security mechanism by itself, limiting access and reducing the chance of tampering. These are some of the reasons that a hardware-based FPGA deployment of an encryption scheme proves advantageous over other existing methods.

2 Proposed methodology

This section illustrates the proposed encryption scheme through the block diagram in Fig. 1. The scheme proposes three stages of diffusion across different mechanisms, followed by a confusion stage which occurs simultaneously; this constitutes the ‘on the fly’ process of confusion. The scheme is proposed for 256 × 256 16-bit grayscale DICOM images and may be scaled up as requirements arise, in tandem with more sophisticated hardware execution environments. The fractal structure, whose intrinsic randomness is incorporated in the scheme, is illustrated in Fig. 2. The Finite State Machine (FSM) representation of the overall flow is presented in Fig. 3.

Fig. 1
figure 1

Block diagram of the proposed encryption scheme

Fig. 2
figure 2

Detailed overview of the fractal structure

Fig. 3
figure 3

FSM structure for the proposed encryption scheme

The encryption flow as illustrated in Fig. 1 is defined below:

Data Assignment

  • Step 1: Load 4 consecutive pixels from the pixel array into 4 variables. Each pixel is of length 16 bits.

    • Split each pixel into 4 quartets and shuffle them. The shuffled pixels are now ready for processing.

Diffusion

  • Step 2: The chaos attractor is generated. Its three planes are used for the diffusion process.

    • The lower nibble of the x-plane is used to generate 4 keys for the primary diffusion.

    • The lower nibble of the z-plane is used as the seed for Rule 42 based CA generation.

  • Step 3: Once the primary diffusion is executed, the secondary diffusion is performed after CA generation.

  • Step 4: The tertiary diffusion is done using the lower nibbles of the y-plane and the z-plane.

Rotation

  • Step 5: The arms themselves are now rotated, in a direction and by an amount determined by a certain control variable in place.

On the fly Confusion

  • Step 6: As the FPGA allows for parallel processing, the encrypted image is written to the BRAM segment simultaneously, without having to instantiate another stage; this makes up the ‘on the fly’ confusion.
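For clarity, the data-assignment step (Step 1) can be modelled in software. The sketch below is illustrative only, since the design itself is realised in Verilog on the FPGA; it mirrors the nibble interleave defined in State 1 of the FSM in Section 2.5, and the function and variable names are ours.

```python
# Software model of Step 1: split four 16-bit pixels into 4-bit quartets
# and interleave them across the four fractal arms.

def split_shuffle(dicom1, dicom2, dicom3, dicom4):
    """Return [p1, p2, p3, p4], the quartet-shuffled 16-bit words."""
    def quartet(pixel, n):
        # Bits [4n+3 : 4n] of a 16-bit pixel.
        return (pixel >> (4 * n)) & 0xF

    pixels = (dicom1, dicom2, dicom3, dicom4)
    shuffled = []
    for n in range(4):          # p1 takes bits [3:0], p2 bits [7:4], ...
        word = 0
        for p in pixels:        # concatenation {dicom1[..], ..., dicom4[..]}
            word = (word << 4) | quartet(p, n)
        shuffled.append(word)
    return shuffled
```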

The fractalized approach to the encryption scheme is illustrated by the square fractal structure. The intrinsic structural properties of the fractal are exploited in implementing the scheme, which adds another layer of complexity, with a particular emphasis on resisting quantum-based attacks. The fractalized structure boasts a significant structural key sensitivity, as elaborated in detail in a later section.

The proposed encryption scheme for medical images takes advantage of the inherently sensitive nature of the fractal structure. The scheme is a tri-layer scheme, comprising three distinct diffusion stages followed by an ‘on the fly’ confusion performed as the encrypted image is written into the BRAM of the FPGA. The encryption architecture has been constructed for a 256 × 256 DICOM medical image, and can scale up to meet other image size requirements by making the necessary changes at the hardware level.

The proposed encryption architecture, which performs the encryption of 16-bit DICOM images, is comprised of several sub-modules. These sub-modules, discussed in detail below, are accessed and operated via an FSM. The sub-modules are ordered as follows with their usage:

  • Fractal framework

  • Cellular Automata for diffusion

  • Lorenz attractor for diffusion

  • LFSR for confusion

2.1 Fractal framework

Though the square fractal structure is not a physical component in the strict sense, it heavily influences the way in which the encryption scheme is implemented, particularly how the pixels are physically arranged and encrypted. Fractals can be regarded as self-replicating structures: from a simple base shape, complex structures may arise when it is arranged and stacked in a certain manner. Examples of more complex fractals include the Sierpinski triangle []. For the purposes of this work, a relatively simple square fractal structure was used. The advantage lies in the fact that the structure is highly sensitive to change. Even if an attacker were to work out the base structure of the overall fractal, without the nature and direction of its evolution only a brute-force approach could suffice to overcome the scheme. This is the novelty of the proposed work, and this kind of approach is regarded as resistant to even quantum-based attacks.

2.2 Cellular automata rule 42

$$ b(n)=\left(\sim \left(b(n)\right)\&b\left(n+1\right)\right)\oplus \left(b(n)\&b\left(n+1\right)\right) $$
(1)

This unit is responsible for the key generation with which the secondary diffusion is performed. CAs are sets of relatively simple expressions which may be used to model more complex situations [7, 19]. To put it formally, CAs are sets of cells whose state transition functions depend on their neighbouring cells/states. The nature of these dependencies is defined by a set of rules, which mathematically specify the relationship between a cellular automaton unit and its neighbouring cells, as in Eq. (1). For the purposes of this work, the Rule 42 based CA was employed owing to its excellent random number generation capabilities: it is capable of generating a maximal length sequence, which is used to perform the secondary diffusion [22]. The structure of the 8-bit CA with rule 42 is depicted in Fig. 4 and the functional simulation of the CA is presented in Fig. 5.
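The CA update can be sketched in software using the generic Wolfram rule-number formulation. The snippet below is a minimal model, assuming a 16-bit register with periodic (wrap-around) boundaries; the seed value shown is a placeholder, since in the design the seed comes from the z-plane of the chaotic attractor.

```python
# Minimal sketch of a Wolfram elementary CA step, shown with rule 42,
# assuming periodic boundaries on a 16-bit register.

def ca_step(state, rule=42, width=16):
    """One synchronous update of an elementary CA held in an integer."""
    new_state = 0
    for i in range(width):
        left = (state >> ((i + 1) % width)) & 1
        center = (state >> i) & 1
        right = (state >> ((i - 1) % width)) & 1
        neighbourhood = (left << 2) | (center << 1) | right
        if (rule >> neighbourhood) & 1:   # rule bit selects the new cell value
            new_state |= 1 << i
    return new_state

# Example: iterate from a seed to build the 16-word key schedule used
# for the secondary diffusion (seed value is a placeholder).
state = 0xACE1
ca_array = []
for _ in range(16):
    state = ca_step(state)
    ca_array.append(state)
```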

Fig. 4
figure 4

Illustration of 8-bit CA attractor with rule 42

Fig. 5
figure 5

Functional simulation of CA attractor with rule 42

2.3 Lorenz attractor

Attractors have the capacity to describe physical systems in a deterministic fashion and yet give rise to inherently unpredictable solutions. The Lorenz attractor is most famously associated with the “butterfly effect”, owing to the butterfly-like shape that the attractor traces out [3]. This chaotic attractor is derived from the Lorenz system of ordinary differential equations: the attractor is essentially the set of chaotic solutions of the Lorenz system, which arise when appropriate initial conditions and parameter values are used. The chaotic nature of these solutions is exploited in the generation of pseudorandom sequences for the purposes of image encryption. The solutions exist in three planes, x, y and z, which originally corresponded to the rate of convection and the vertical and horizontal temperature variations. In the context of the proposed scheme, each of these planes yields a 64-bit output, from which different subsections are derived as input and control for other sections of the architecture. The attractor is described through the following equations (Eqs. 2, 3 and 4). In addition, it was verified with a simulator, the output of which is shown in Fig. 6.

$$ \frac{dx}{dt}=\sigma \left(y-x\right), $$
(2)
$$ \frac{dy}{dt}=x\left(\rho -z\right)-y $$
(3)
$$ \frac{dz}{dt}= xy-\beta z. $$
(4)
Fig. 6
figure 6

Functional simulation of Lorenz chaotic system
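A minimal software model of Eqs. (2)–(4) is given below, using a forward-Euler step and the classical parameter values σ = 10, ρ = 28, β = 8/3. The step size, fixed-point number format and exact bit-extraction of the hardware design are not reproduced here; the bit extraction shown, and the function names, are purely illustrative.

```python
# Sketch of the Lorenz system integrated with forward Euler, from which
# key bits can be drawn for the diffusion stages.

def lorenz_stream(x=0.1, y=0.0, z=0.0, sigma=10.0, rho=28.0,
                  beta=8.0 / 3.0, dt=0.005, n=1000):
    """Yield successive (x, y, z) states of the Lorenz system."""
    for _ in range(n):
        dx = sigma * (y - x)            # Eq. (2)
        dy = x * (rho - z) - y          # Eq. (3)
        dz = x * y - beta * z           # Eq. (4)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        yield x, y, z

def plane_bits(value, bits=16):
    """Extract low-order key bits from the fractional part of a state."""
    frac = abs(value) - int(abs(value))
    return int(frac * (1 << bits)) & ((1 << bits) - 1)

# Example: derive one key word per plane from each new state.
for x, y, z in lorenz_stream(n=4):
    kx, ky, kz = plane_bits(x), plane_bits(y), plane_bits(z)
```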

2.4 Linear feedback shift register

The LFSR is capable of generating pseudorandom sequences, and is incorporated into the encryption architecture for this purpose. It can produce maximal length sequences of length 2^n − 1. The LFSR is a mathematically defined circuit specified by a Fibonacci polynomial, of degree 16 as required for a 256 × 256 DICOM image. The LFSR pseudorandom sequence contributes to the ‘on the fly’ confusion, which makes up the third stage of the encryption scheme. The ‘on the fly’ aspect refers to the advantage of this particular confusion approach: the diffused pixel is confused and written into the BRAM of the FPGA in a single clock cycle, which improves the execution speed. The functional verification of the LFSR is depicted in Fig. 7. Since four pixels are encrypted at a time with the fractal structure, a 14-bit LFSR is sufficient to confuse the 16,384 pixel quartets ([256 × 256] / 4 = 16,384).
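A software sketch of a 16-bit Fibonacci LFSR is shown below. The feedback polynomial is not specified in the text, so the well-known maximal-length tap set (16, 14, 13, 11) is assumed here purely for illustration; the seed value is likewise a placeholder.

```python
# Sketch of a 16-bit Fibonacci LFSR with assumed maximal-length taps
# (16, 14, 13, 11); visits all 2**16 - 1 non-zero states.

def lfsr16(seed):
    """Generate successive states of a 16-bit Fibonacci LFSR."""
    assert seed != 0, "an all-zero seed locks the LFSR"
    state = seed & 0xFFFF
    while True:
        # XOR of tap bits 16, 14, 13 and 11.
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

addresses = lfsr16(0xACE1)      # non-zero placeholder seed
bram_address = next(addresses)  # one pseudorandom write address per step
```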

Fig. 7
figure 7

Functionality verification of LFSR

The first stage of the encryption process is to shuffle the quartets of image pixel bits to resemble the arm-like structure of the fractal, which is the first step in the fractal implementation. Four pixels are processed in tandem, and the pixels are shuffled between themselves to increase the overall pixel independency. As processing is done on four pixels, 4 distinct 8-bit keys are constructed from the generated 64-bit chaos sequence. These keys are concatenated in a certain order to create four 16-bit encryption keys, used to perform the primary diffusion of each of the 16-bit pixels. Another control is sourced from the chaos map to determine the nature of the diffusion process, i.e., whether it is an XOR-based or an XNOR-based diffusion.

The secondary diffusion is made possible with the Rule 42 based 16-bit cellular automata generator, which yields 16 unique keys, of which four are extracted to perform the secondary diffusion of the diffused pixels. The seed for the CA is extracted from a subsection of the chaos map generator. The mechanism for assigning a 16-bit CA key from the list of 16 unique values is also sourced from a section of the chaos map key. Once the respective 16-bit CA key has been selected, the secondary diffusion occurs.

A secondary shuffling then takes place, where the order of the pixels is shuffled among one another, which is akin to rotating the arms of the fractal in either direction by one step. The control for the direction of rotation is sourced from the seed of the CA used in the secondary diffusion: a reduction operation is performed on the 16-bit seed value, the outcome of which determines the direction of fractal arm rotation. The third and final step in the encryption framework consists of the rather unique ‘on the fly’ confusion, for which a 16-bit Fibonacci LFSR with a non-zero seed value is used. The confusion process is deemed ‘on the fly’ because the confused information is written directly to the BRAM of the FPGA, without having to confuse the image separately and load it into memory afterwards.

2.5 FSM state operations

  • Initial State

    • All the variables are initialised:

      • pixel count i = 0.

      • Lorenz attractor initial conditions defined.

      • Linear Feedback Shift Register initialised and its feedback defined.

      • The DICOM array dicom_array, containing the entire DICOM image as pixels in array form, is loaded.

      • next_state = State 1

  • State 1: Data Assignment

    • Pixels are loaded 4 at a time, each representing one arm of the fractal.

      • dicom1 = dicom_array[i];

      • dicom2 = dicom_array[i + 1];

      • dicom3 = dicom_array[i + 2];

      • dicom4 = dicom_array[i + 3];

    • Pixels are shuffled amongst the arms and stored on pN

      • p1 = {dicom1[3:0], dicom2[3:0], dicom3[3:0], dicom4[3:0]};

      • p2 = {dicom1[7:4], dicom2[7:4], dicom3[7:4], dicom4[7:4]};

      • p3 = {dicom1[11:8], dicom2[11:8], dicom3[11:8], dicom4[11:8]};

      • p4 = {dicom1[15:12], dicom2[15:12], dicom3[15:12], dicom4[15:12]};

    • next_state = State 2

  • State 2: Chaos generation and primary Diffusion

    • The Lorenz attractor is generated across the x, z and y planes.

      • X-plane is used to generate four 8-bit keys. Pixels are diffused with the keys.

        • dN = pN ⊕ {kn, kn+1} or dN = pN ⊙ {kn, kn+1}, based on control

      • Z-plane is used as seed for Cellular Automata generation.

      • Z-plane is used for CA address selection

      • Y-plane is reserved for an alternate operation.

    • next_state = State 3

  • State 3: Cellular Automata Generation and secondary Diffusion

    • Seed from Z-plane is used for Rule 42 based CA generation and is stored in array

      • reg[15:0] ca_array[0:15]

    • Address from Z-plane is used to select the specific 16-bit component from ca_array

      • ddN = dN ⊕ ca_array[z-plane address]

    • next_state = State 4

  • State 4: Arm rotation

    • The arms (pixels) are rotated, i.e., they are made to shift and take up the position of the adjacent pixel.

    • Rotation is done once in either left or right direction, based on control value.

      • temp_reg = dd1;

      • dd1 = dd2;

      • dd2 = dd3;

      • dd3 = dd4;

      • dd4 = temp_reg;

    • next_state = State 5

  • State 5: Chaos based tertiary Diffusion

    • Sections of Z-plane and Y-plane serve as keys for the tertiary diffusion of pixels.

      • The results are stored in ans_array

      • ans_array[N] = ddN ⊕ y-plane ⊕ z-plane

    • This constitutes the final diffusion operation.

    • next_state = State 6

  • State 6: Linear Feed Shift Register Confusion

    • This process occurs simultaneously with the diffusion, which constitutes the ‘on the fly’ confusion. The 16-bit LFSR generates 2^16 − 1 random values, used as addresses while writing the image into the BRAM of the FPGA.

      • BRAM_address = lfsr_output

    • Each pixel is written to a random location as specified by BRAM_address.

    • This concludes the Encryption operation.

    • If the pixel count is less than 65,536, the operation is repeated for the next quartet of pixels.

      • if (pixel_count < 65536)

        • next_state = Data Assignment

      • else if (pixel_count >= 65536)

        • next_state = Final State

  • State = Final State

    • As a process on Verilog-coded hardware cannot be physically halted, it is halted programmatically. This is achieved through a self-loop, which can be interpreted as the end of the operation.

      • Next State = Final State
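Putting the states together, one round of the FSM (States 2 to 6) for a single shuffled quartet can be modelled in software as sketched below. This is a behavioural sketch only: the exact bit fields extracted from each Lorenz plane for keys, addresses and control bits are design-specific and are therefore passed in as parameters, and all names are ours.

```python
# Behavioural model of one encryption round (States 2-6) for one quartet.

MASK16 = 0xFFFF

def encrypt_quartet(p, chaos_keys, xor_control, ca_array, ca_addr,
                    y_key, z_key, rotate_left, lfsr_addr, bram):
    """p: list of 4 shuffled 16-bit pixels (the State 1 output)."""
    # State 2: primary diffusion with four chaos-derived 16-bit keys;
    # XOR or XNOR is selected by a chaos-derived control bit.
    if xor_control:
        d = [pi ^ ki for pi, ki in zip(p, chaos_keys)]
    else:
        d = [pi ^ ki ^ MASK16 for pi, ki in zip(p, chaos_keys)]   # XNOR

    # State 3: secondary diffusion with the CA key selected by the
    # z-plane address.
    ca_key = ca_array[ca_addr & 0xF]
    dd = [di ^ ca_key for di in d]

    # State 4: rotate the arms by one position; direction from control.
    dd = dd[1:] + dd[:1] if rotate_left else dd[-1:] + dd[:-1]

    # State 5: tertiary diffusion with y- and z-plane key material.
    ans = [ddi ^ y_key ^ z_key for ddi in dd]

    # State 6: 'on the fly' confusion -- the quartet is written to a
    # pseudorandom BRAM address supplied by the LFSR.
    bram[lfsr_addr] = ans
    return ans
```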

The decryption flow may be regarded as simply the reverse of the encryption flow and, as such, has been omitted to prevent redundancy.

3 Results and experimental validation

This section outlines the performance analysis of the encryption scheme, conducted over a suite of statistical and hardware tests. Five test DICOM images, designated DICOM1 to DICOM5, were used to evaluate the proposed algorithm. Sample test images are depicted in Fig. 8(a – c) and the corresponding encrypted images are presented in Fig. 9(a – c). The images are first converted into their hexadecimal equivalent values, which are used as such in the Verilog script. The strength of the scheme is elaborated further in this section. The proposed scheme was implemented on a Cyclone IV EP2C35F672C6 FPGA through the Quartus II Electronic Design Automation (EDA) tool.

Fig. 8
figure 8

Test DICOM images

Fig. 9
figure 9

Encrypted DICOM images

3.1 Histogram analysis

This analysis allows one to perceive the distribution of the pixel intensities of an image. Each image has a unique pattern, with which the image may be reconstructed. A flat and even distribution indicates that there exists an equal distribution of pixel weights in the image [19]. Such uniform scattering of the pixel depths across the entire range of the histogram is a good indicator of the performance of the encryption scheme. This is outlined in Table 1.

Table 1 Results of histogram analysis between original and encrypted images

The histogram analysis confirms that the encrypted images yield an approximately flat response, close to the ideal. This equi-distribution of the pixel values confirms that adequate diffusion has been employed.
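As an indication of how such a check can be performed, the sketch below measures the worst-case departure of an encrypted image’s histogram from the ideal uniform level; it assumes the image is available as a NumPy array of 16-bit values, and the function name is ours.

```python
# Sketch of a histogram flatness check for a 16-bit encrypted image.
import numpy as np

def histogram_flatness(cipher_img, bins=65536):
    hist, _ = np.histogram(cipher_img, bins=bins, range=(0, bins))
    expected = cipher_img.size / bins        # ideal (uniform) bin height
    return np.max(np.abs(hist - expected))   # worst-case departure
```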

3.2 Correlation analysis

This is another statistical metric used to ascertain the performance of the encryption algorithm. Correlation can be broadly defined as the relationship between two variables; in other words, it may be used to compute the degree of similarity between two variables [22]. A cryptosystem can be regarded as good if the encryption scheme is able to efficiently mask all distinctive features, such that the encrypted image is randomised and highly uncorrelated. The correlation coefficient describes the relationship between neighbouring pixels along the horizontal, vertical and diagonal directions. For an encrypted image to be considered different from the original plaintext image, these values should ideally be zero, or in practice as close to zero as possible. The correlation values are normalised from −1 to 1. Should the coefficient be 1, the algorithm is said to have failed, with the plaintext and encrypted images being perfectly correlated. In the case of 0, the images are perfectly uncorrelated. When the correlation coefficient is −1, the encrypted image is said to be the negative of the original image. This information is also presented in the form of correlation diagrams, where the responses along the aforementioned three directions are expected to be sparse for the encrypted image. These values have been computed with the following equations:

$$ C.C=\frac{Cov\left(x,y\right)}{\sigma_x\ast {\sigma}_y} $$
(5)
$$ {\sigma}_x=\sqrt{VAR(x)} $$
(6)
$$ {\sigma}_y=\sqrt{VAR(y)} $$
(7)
$$ VAR(x)=\frac{1}{N}{\sum}_{i=1}^N{\left({x}_i-E(x)\right)}^2 $$
(8)
$$ Cov\left(x,y\right)=\frac{1}{N}{\sum}_{i=1}^N\left({x}_i-E(x)\right)\left({y}_i-E(y)\right) $$
(9)

Where:

  • x and y are the pixel values of the same pixel location in the encrypted and plaintext images

  • C.C is the correlation coefficient

  • Cov is covariance of x and y

  • Var(x) is the variance of x

The correlation analysis has been conducted for the original and encrypted images. Table 2 depicts the correlation in the horizontal, vertical and diagonal directions, and Table 3 presents the correlation coefficients.
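For reference, the directional correlation of Eqs. (5)–(9) can be computed as sketched below, assuming the image is held as a NumPy array; np.corrcoef evaluates Cov(x, y)/(σx · σy) directly. The function name is ours.

```python
# Sketch of adjacent-pixel correlation along the three directions.
import numpy as np

def adjacent_correlation(img):
    img = img.astype(np.float64)
    pairs = {
        "horizontal": (img[:, :-1], img[:, 1:]),
        "vertical":   (img[:-1, :], img[1:, :]),
        "diagonal":   (img[:-1, :-1], img[1:, 1:]),
    }
    # np.corrcoef implements Cov(x, y) / (sigma_x * sigma_y), Eqs. (5)-(9).
    return {k: np.corrcoef(x.ravel(), y.ravel())[0, 1]
            for k, (x, y) in pairs.items()}
```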

Table 2 Correlation analysis
Table 3 Results of correlation analysis

3.3 Information entropy analysis

The entropy of a random process is a measure of the information it yields, and information entropy is a primary identifier of uncertainty. This metric helps determine the inherent randomness of the algorithm by observing an encrypted image [3]. For a 16-bit depth grayscale DICOM image the ideal value is 16, but in general the entropy of a source is smaller than this ideal value, which in practice is not achievable. Whenever the entropy is less than ideal, there exists a certain degree of predictability; hence, a value closer to the ideal indicates a good encryption scheme that minimises the chances of prediction. The entropy values for the 5 test images were calculated and are illustrated in Table 4. From Table 4 it may be observed that the entropy values reach as high as 15.1726, which is close to the ideal value of 16, suggesting that the proposed encryption scheme is sound in nature.
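The entropy figures can be reproduced with the standard Shannon formula H = −Σ pᵢ log₂ pᵢ taken over the 2^16 intensity levels, as sketched below for an image held as a NumPy array; the function name is ours.

```python
# Sketch of Shannon entropy for a 16-bit image (ideal value: 16 bits).
import numpy as np

def shannon_entropy(img, levels=65536):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist[hist > 0] / img.size    # probabilities of occupied levels
    return -np.sum(p * np.log2(p))
```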

Table 4 Results of entropy analysis

3.4 Chosen plain text attack analysis

The proposed encryption scheme utilises the modulo-2 addition (XOR) operation for the diffusion in the primary and secondary encryption stages. This necessitates a test to validate resistance toward the chosen plaintext attack [21]. This resistance may be verified by applying the following equation:

$$ d1\oplus d2=E(d1)\oplus E(d2) $$
(10)

If the equation holds true, the encrypted image is vulnerable to the attack. However, when applied to the test images and their corresponding encrypted images, the relation does not hold, implying that the algorithm displays significant resistance toward chosen plaintext attacks. This experiment was conducted between each original image and its corresponding encrypted image, and it was found that the proposed encryption is resistant to the chosen plaintext attack.
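The check of Eq. (10) amounts to the comparison sketched below, here for images held as NumPy integer arrays; resistance requires the equality to fail. The function name is ours.

```python
# Sketch of the chosen-plaintext check of Eq. (10): a vulnerable XOR-only
# scheme satisfies d1 ^ d2 == E(d1) ^ E(d2).
import numpy as np

def cpa_vulnerable(d1, d2, e1, e2):
    return np.array_equal(np.bitwise_xor(d1, d2), np.bitwise_xor(e1, e2))
```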

3.5 Encryption quality analysis

An encryption scheme is considered effective if it is able to mask the distinctive features of an image. The prior tests relied on the visual inspection of images, which is not a sufficient indicator of exactly how many features are hidden. As such, to evaluate the scheme in more detail, the deviation of pixel values between the plaintext image and the encrypted one is examined to get a better idea of the quality of encryption. To that effect, maximum deviation, irregular deviation and deviation from uniform histogram are used as deviation parameters [20].

Maximum deviation measures the maximum deviation between the plaintext image and its encrypted counterpart, calculated using their histogram responses. The process is as follows:

  • Let p be the absolute difference between the plaintext and encrypted histogram responses

  • Let pi be the amplitude of the histogram at point i. The maximum deviation can then be calculated as:

    $$ {M}_d=\frac{p_0+{p}_{65535}}{2}+{\sum}_{i=1}^{65534}{p}_i $$
    (11)

The higher the value of Md, the greater the deviation of the encrypted image with respect to the plaintext image. Using Eq. (11), the maximum deviation can be estimated.

Irregular deviation measures how close the statistical distribution of the encrypted image pixels is to the uniform distribution. An encryption algorithm is expected to randomise all the pixels in an image, which would ideally result in a uniform distribution. As such, if the irregular deviation indicates a distribution close to uniform, then the encryption scheme can be regarded as performing as intended. The irregular deviation may be calculated as:

$$ {I}_D={\sum}_{i=0}^{65535}{H}_{D_i} $$
(12)

Where:

  • ID is the irregular deviation

  • \( {H}_{D_i} \) is the absolute histogram deviation from the mean value

As a rule of thumb, the smaller the value of the irregular deviation, the better the strength of the encryption scheme, as this value indicates how close the histogram distribution of the encrypted image is to the ideal uniform distribution. The histogram response of an encrypted image must ideally be uniform. Deviation from histogram gives insight into the deviation from this ideal uniform histogram distribution, and is given by the equation:

$$ {D}_h=\frac{\sum_{i=0}^{65535}\mid {H}_{C_i}-{H}_C\mid }{M\times N} $$
(13)

Where:

  • HC is the histogram of the ciphertext image

  • \( {H}_{C_i} \) is the value of the frequency of occurrence at index i

The encryption quality metrics were applied to the encrypted images and the obtained values are listed in Table 5. From the results, it is evident that the proposed encryption scheme achieves adequate values in this evaluation.
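For reproducibility, the three deviation metrics of Eqs. (11)–(13) can be computed as sketched below, assuming 16-bit images held as NumPy arrays of M × N pixels; the function names are ours.

```python
# Sketches of the three deviation metrics (Eqs. 11-13) for 16-bit images.
import numpy as np

LEVELS = 65536

def hist(img):
    h, _ = np.histogram(img, bins=LEVELS, range=(0, LEVELS))
    return h

def maximum_deviation(plain, cipher):            # Eq. (11)
    p = np.abs(hist(plain) - hist(cipher))
    return (p[0] + p[-1]) / 2 + p[1:-1].sum()

def irregular_deviation(cipher):                 # Eq. (12)
    h = hist(cipher)
    return np.abs(h - h.mean()).sum()            # deviation from the mean

def deviation_from_histogram(cipher):            # Eq. (13)
    h = hist(cipher)
    uniform = cipher.size / LEVELS               # ideal uniform level
    return np.abs(h - uniform).sum() / cipher.size
```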

Table 5 Results of encryption quality analysis

3.6 Keyspace analysis

Keyspace is a vital parameter in encryption, deciding the strength of the algorithm against brute-force attack; two raised to the power of the key size (in bits) constitutes the keyspace. In general, a keyspace of at least 2^128 is recommended to overcome key-oriented attacks. In the proposed work, the keyspace depends not only on the key generators but also on the positions of the pixel bits within the square tree fractal. Since the square tree fractal develops many self-similar structures on evolution, the inputs can be kept in any of the squares, which adds complexity for the attacker: it would be difficult to guess the positions of the inputs, and trying all possible input combinations on the fractal structure requires considerable time. In this work, 80 squares have been used to formulate the algorithm, in which 64 bits of input (4 pixels of 16 bits each) can be stored for encryption. These bits can be placed in the 80 squares of the fractal in 80! (80 × 79 × 78 × ... × 1) ways, which generates a huge number of input combinations. Hence, the keyspace has been calculated as: Keyspace = Cellular Automata (16 bits) × Lorenz chaotic system (3 initial conditions and 3 control parameters in IEEE 754 single precision) × LFSR (14 bits) × fractal positions for the input pixels (80 squares) = 2^16 × (2^32 × 2^32 × 2^32 × 2^32 × 2^32 × 2^32) × 2^14 × 80!, which gives a large span of keyspace to resist brute-force attack. Besides, this fractal-based encryption is resistant toward quantum computing attacks because of the growth of the square tree fractal in terms of input positions. This work outperforms the earlier works of [3, 7, 19, 20, 21, 22] in terms of keyspace.
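The keyspace figure can be checked with a few lines of arithmetic, as sketched below; the component sizes are those listed above.

```python
# Worked check of the keyspace: 2^16 (CA) x (2^32)^6 (Lorenz initial
# conditions and parameters in IEEE 754 single precision) x 2^14 (LFSR)
# x 80! (fractal input positions).
import math

keyspace = (2**16) * (2**32)**6 * (2**14) * math.factorial(80)
print(f"keyspace ~ 2^{math.log2(keyspace):.1f}")
# Roughly 2^617, far above the commonly recommended 2^128.
```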

3.7 Hardware analysis

Hardware analysis is a prime concern when estimating the efficiency of the algorithm in terms of area, power and speed requirements. It covers three parameters, namely hardware utility in terms of LEs and registers, power dissipation (static, dynamic and Input/Output (I/O) power) and throughput. Hardware utility comprises Look Up Tables (LUTs), otherwise called combinational functions, which create the replica of the encryption algorithm inside the FPGA; a collection of LUTs is termed an LE. The consumption of LEs reflects the utilisation of FPGA area by the specified algorithm, through which the area calculation has been carried out. In addition, power dissipation analysis provides the amount of power dissipated when the algorithm is executed on the FPGA. Among the three power dissipation metrics, dynamic power dissipation directly reflects the power dissipated by the encryption algorithm, whereas static and I/O power are FPGA-dependent measures. Further, the throughput of the algorithm has been calculated through timing analysis, determined with a Zeroplus digital logic analyzer interfaced with the target FPGA. Table 6 presents the hardware analysis of the proposed encryption architecture, which in turn is compared with an existing FPGA-based medical image encryption work [21].

Table 6 Hardware analysis for the proposed encryption architecture

On analysing the results, the proposed encryption architecture consumes far less time than the existing work [21], owing to the adoption of the four-arm fractal structure, which encrypts 4 DICOM pixels in one clock cycle at 50 MHz. Hence, the throughput is substantially increased. Conversely, LE consumption is higher (approximately 7% extra) than the existing work because of the concurrent operations on pixels. Since throughput is the priority, it is expected that FPGA area increases when hardware concurrency is exploited. The highlights of the proposed encryption architecture are given below:

  • Fractal structure adopted encryption method to resist brute force attack

  • FSM based architecture that suits for any type of FPGA (Device independent)

  • Dissipates a very low dynamic power of 2.85 mW on a Cyclone EP4CE115F29C7 FPGA

  • Inclusion of “on the fly” confusion helps in faster execution time

  • Requires only 12.13 ms to encrypt a 256 × 256 × 16 bit DICOM image

  • Resistant towards side channel attacks with the adoption of on-chip memory (BRAM)

  • Achieves an average entropy of 15.17156 with near-zero correlation
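As a consistency check on the timing highlights above, the throughput follows directly from the image size and the measured encryption time:

$$ \text{Throughput}=\frac{256\times 256\times 16\ \text{bits}}{12.13\ \text{ms}}\approx 86.44\ \text{Mbps} $$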

4 Conclusion

A fractalized tri-layer encryption scheme for 16-bit grayscale DICOM images was implemented successfully on a Cyclone IV EP2C35F672C6 FPGA. The novel fractalized approach has yielded a formidable encryption scheme, which provides a decisive advantage over other similar implementations. In addition, the hardware implementation itself constitutes an additional layer of security as a wider response to attack vectors. The fractalized approach gives rise to structural evolution, presenting an additional blanket of security: even in the worst-case scenario of the primitive fractal element being compromised, the random evolution pattern retains the security of the algorithm. To that effect, the various performance metrics calculated have revealed the robust nature of the encryption architecture. The hardware analysis has revealed that the architecture consumes a modest 14.28% of the logic elements on the FPGA board, with a maximum throughput of 86.44 Mbps. A comparative analysis of image cryptosystems with different fractal structures would be a continuation of this work.