Introduction

Two view reconstruction of a three dimensional object is a well studied but still challenging problem in computer vision [1, 2]. The main task in the reconstruction process is to obtain the geometrical structure of a scene from a given pair of two dimensional images. The two view reconstruction process is generally treated as an ill-posed problem because of its geometrically inverse nature; hence a very small approximation error may cause a very large deviation in the final result. This is the main reason why researchers have shown comparatively little interest in multi view reconstruction. The basic approach for reconstructing the locations of corresponding three dimensional points is known as point-based reconstruction [3, 4]. Its core principle is as follows: a point is located in the two view planes, and its three dimensional position is found as the intersection of the two emitted projection rays. The three dimensional shape of an object may be recovered by repeating this principle for several points. This two view configuration requires complete prior knowledge of the camera positions, the camera parameters and their relative orientation.

For algebraic curves, each pair of corresponding points in the two views yields the same three dimensional point, but this is not possible for free-form shapes or objects. The reasons include difficulties in locating the correct correspondences between points, complex shapes, geometrical errors, a large amount of corrupted data, etc. To reduce the complexity of free-form reconstruction, the problem may be formulated as: find the locations of three dimensional points whose back projections fit the measured data points in all the view planes. This formulation involves a very sensitive part, namely fitting a curve through the measured data points. Such fitting is a crucial problem in its own right across a wide range of science and engineering, and it is generally expressed as an error function over the given data. Approaches such as least-squares fitting [5, 13] have used orthogonal perspective views to model a related task as a NURBS-snake energy minimization, but the two view reconstruction problem considered here was not addressed in those approaches either.

Over the past few decades, researchers have frequently used evolutionary algorithms and physics-based algorithms for the reconstruction process. Ning et al. [14] applied a simulated annealing approach to solve the three dimensional reconstruction problem using two infinite-source orthogonal projections. Ogura et al. [15] used a simulated annealing technique to obtain accurate a posteriori angular assignments for the structure of protein projections. Chen et al. [16] presented a three dimensional image reconstruction from a limited number of projections by simulated annealing; the volume image was represented by a set of 3D Gaussian functions and the reconstruction was carried out by estimating the parameters of the kernel functions. Voisin et al. [17] developed Genetic Algorithm techniques to reconstruct three dimensional shapes from cloud data points; to handle noisy data, they defined a fitness function with a tolerance threshold. Koch et al. [18] proposed an evolutionary algorithm for three dimensional panoramic reconstruction using an uncalibrated stereovision system. Their algorithm consists of two steps: first, the points of interest are extracted in pairs of images captured by two consecutive cameras of the system; second, an evolutionary algorithm computes the transformation matrix between the two images and the respective depth of the points of interest. Singh et al. [19] proposed a non-linear optimization model to reconstruct three dimensional curves and surfaces using the Gravitational Search Algorithm (GSA); the objective function was modeled as an error function between the given measured points and the points on the generated NURBS curve or surface. Another optimization approach for curve reconstruction is given by Alazzam et al. [20], where an Average Uniform Algorithm (AUA) was used. Their algorithm principally uses a uniform distribution to generate random solutions and then averages the best solutions to obtain the optimal value of the objective function. Although evolutionary and physics-based approaches are popular, easy to implement and able to solve complex optimization problems, they have a high implementation cost and usually require a huge number of iterations. The major drawback of most of the evolutionary approaches discussed above is that they are purely three dimensional reconstruction techniques that use three dimensional data directly; the goal of three dimensional reconstruction from two views is neglected throughout their work because of its complex nature.

In the recent scenario, nature inspired swarm based meta-heuristic optimization techniques have also proven effective in many fields of computing and engineering. These techniques seem more suitable than traditional methods for solving optimization problems in large search spaces [21,22,23]. The inspiration behind swarm based optimization techniques is the foraging behavior of species such as real ants. At the initial step of the food search, ants move randomly in their surroundings. As soon as an ant finds a food source, it collects information about the food quality and quantity. On its return to the nest, it deposits a pheromone trail on the ground. If the quantity of deposited pheromone is high, the probability that other ants follow that path is also high. This indirect communication among the ants by means of pheromone trails helps them find the shortest paths between the food source and the nest. This behavior of real ants has been transferred into an algorithm known as Ant Colony Optimization (ACO), introduced by Dorigo [24, 25] in the early 1990s. Various ACO variants exist in the literature, such as Max-Min, Ant Q, Ant-tabu and the Fast ant system [26, 27]. These approaches have also been widely used for data fitting and hence for reconstruction purposes. In [28], the authors showed a NURBS fitting process in three dimensional space based on the ACO algorithm; they used a modified ant colony optimization algorithm to estimate the weight vectors by minimizing the sum of squared residual errors between the fitted and target three dimensional surfaces. It must be noted that their optimization model does not include the control points as optimization variables. In reverse engineering problems, control points play an important role: they belong to the set of points that determine the shape of the curve, the surface or, more generally, any higher dimensional NURBS structure. Moreover, the ACO algorithm used for their optimization covered only the local search space using Tabu search and then refined the solutions of all iterations; the global search was not given priority in the search space. Chrysostomou et al. [29] presented an approach to model three dimensional complex structures using multiple calibrated photos of the same scene. Their work presented some characteristics of the space carving algorithm. However, complications with space carving techniques generally arise when images are segmented incorrectly: when a voxel is incorrectly removed at an early stage, it can appear as a hole in the final 3D model which was not present in the real input images. In the first part of their algorithm, they reconstructed the structures using a lightness-compensating image comparison method over several input images, while in the second part they used an ACO algorithm for further refinement of the reconstructed models. Some modified forms of the ACO algorithm, such as Simple ant colony optimization (SACO) [30] and Global ant colony optimization (GCO) [31, 32], have also been presented to overcome the complexities of the earlier form [24]. A comparison of the proposed reconstruction algorithm with other relevant approaches is presented in Table 1. In this table, all reconstruction methodologies are compared based on their objectives, number of views, method and other reconstruction highlights.

Table 1 Comparison with existing approaches in terms of objective and implementation details

In this study, we use a nature inspired meta-heuristic optimization approach to reconstruct three dimensional shapes from two views. The main focus of the proposed algorithm is the comprehensive NURBS reconstruction of three dimensional shapes based on the GACO model [26]. The most notable novelty of the proposed algorithm lies in applying the ant colony optimization framework to reverse engineering problems in order to reconstruct three dimensional models from their two dimensional stereo images, whereas other researchers reconstruct the three dimensional object directly from three dimensional cloud data points. This work shows the potential of nature inspired methods to provide more accurate three dimensional objects from stereo views for reverse engineering problems. The model is a hybridization of two optimization algorithms, the SACO and GCO approaches. This new hybrid algorithm is based on a distance matrix, new colony generation, pure foraging behavior and the continuous effort of ants. We use this model to obtain the optimal values of the NURBS parameters in the two views. The optimized parameters are then used in a triangulation procedure [7] to reconstruct the shape in three dimensional space. The rest of the paper is organized as follows. “Preliminaries” gives a short note on NURBS, the parametrization process and the modified Generalized Ant Colony Optimizer (GACO) model. Our proposed algorithm is described in “Curve reconstruction by GACO model”; it includes a detailed explanation of the curve fitting problem, the selection of parameters and the methodology used to determine the three dimensional parameters for the reconstruction process via the GACO model. “Experimental results and discussions” presents our experimental models with a detailed error analysis; this section also includes a comparison of the proposed algorithm with some previously existing approaches. Finally, the paper ends with the conclusion, the main contribution and the future scope.

Preliminaries

NURBS function

Let us consider a sequence \((t_{0},t_{1},\ldots,t_{s-1},t_{s})\) of real numbers, where \(t_{j}\le t_{j+1},~~j=0,1,\ldots,s-1\). This sequence is known as the knot sequence or knot vector, and each term \(t_{j}\) in the sequence is called a knot. The number of times a knot is repeated in the sequence is called its multiplicity, and this multiplicity plays an important role in the shape of the NURBS curve. There are basically two groups of knot vectors: uniform and non-uniform. In a uniform knot vector, the knots are equally spaced and each knot appears only once. In a non-uniform knot vector, the knots are unequally spaced and/or a knot may appear more than once. For non-uniform knot sequences, we generally use the non-periodic (clamped) case, in which the end knots are repeated according to the order of the NURBS curve and the inner knots appear only once.

The B-spline basis functions of degree \(l-1\) (order l), described by the Cox-de Boor recurrence relations [9], are given as

$$\begin{aligned} B_{j,1}(\zeta )=\left\{ \begin{array}{ll} 1 &{} \text{ if } t_{j-1} \le \zeta < t_{j}, \\ 0 &{} \text{ otherwise }.\\ \end{array} \right. , \end{aligned}$$
(1)

and for \(l>1\),

$$\begin{aligned} B_{j, l}(\zeta )=\frac{\zeta -t_j}{t_{j+l}-t_j}B_{j, l-1}(\zeta )+\frac{t_{j+l+1}-\zeta }{t_{j+l+1}-t_{j+1}}B_{j+1, l-1}(\zeta ), \end{aligned}$$
(2)

where Eq. (1) shows that the B-spline basis function \(B_{j,1}\) is a piecewise constant function taking the value 1 on one knot span and 0 elsewhere; this knot span is known as the support of the function. If the two knots bounding that span coincide, the span is empty and the basis function vanishes identically. Equation (2) shows that the B-spline basis function \(B_{j,l}(\zeta )\) of order l is a combination of basis functions of the previous order with coefficients that are linear in \(\zeta \), and its support is the union of the supports of the two basis functions of order \(l-1\). Note that the convention \(\frac{0}{0}=0\) is applied in Eq. (2) wherever necessary.
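To make the recurrence concrete, the following minimal sketch evaluates Eqs. (1) and (2) directly (Python is used here purely for illustration; the paper's implementation is in MATLAB, and the function name, the order-based indexing with support \([t_j,t_{j+l})\) and the example knot vector are our own assumptions):

```python
def bspline_basis(j, l, knots, zeta):
    """Cox-de Boor recurrence: B-spline basis function B_{j,l} of order l
    (degree l-1) at parameter zeta, with the convention 0/0 = 0."""
    if l == 1:
        # Eq. (1): piecewise-constant function supported on one knot span
        return 1.0 if knots[j] <= zeta < knots[j + 1] else 0.0
    # Eq. (2): combination of two basis functions of order l-1
    left_den = knots[j + l - 1] - knots[j]
    right_den = knots[j + l] - knots[j + 1]
    left = 0.0 if left_den == 0 else \
        (zeta - knots[j]) / left_den * bspline_basis(j, l - 1, knots, zeta)
    right = 0.0 if right_den == 0 else \
        (knots[j + l] - zeta) / right_den * bspline_basis(j + 1, l - 1, knots, zeta)
    return left + right

# Clamped knot vector for order 4 and five basis functions
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
values = [bspline_basis(j, 4, knots, 0.3) for j in range(5)]
print(values, sum(values))   # partition of unity: the sum is 1
                             # (the right endpoint zeta = 1 would need special handling)
```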

Now, define the rational B-spline basis function as

$$\begin{aligned} r_{j,l}(\zeta )= \frac{w_{j} B_{j, l}(\zeta )}{ \sum \nolimits _{j=0}^{n}w_{j} B_{j, l}(\zeta )},~~~~j=0,\ldots ,n, \end{aligned}$$
(3)

where \(B_{j,l}\) is the B-spline basis function of order l and the \(w_{j}\) are the weights. Hence, for a given set of two dimensional control points \(p_j\) and a knot vector \((t_{0},t_{1},\ldots,t_{s-1},t_{s})\), the non-uniform rational B-spline (NURBS) curve of order l is defined as

$$\begin{aligned} c(\zeta )={\sum \limits _{j=0}^{n}} p_{j} r_{j, l}(\zeta ), \end{aligned}$$
(4)

where the B-spline basis functions are given by Eqs. (1) and (2). The NURBS function has many important properties, for example non-negativity, partition of unity, local support, the strong convex hull property and affine invariance, and it can be reduced to a simple B-spline or Bezier curve [8, 9].
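As an illustration of Eqs. (3) and (4), the sketch below evaluates a NURBS curve through the usual homogeneous-coordinate trick: one B-spline built from the coefficients \(w_j p_j\) is divided by one built from the weights \(w_j\). It uses SciPy's BSpline with purely illustrative control points, weights and knots; it is not the authors' code:

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_curve(ctrl_pts, weights, knots, degree, zeta):
    """Evaluate the NURBS curve of Eq. (4): a B-spline with coefficients w_j*p_j
    divided by a B-spline with coefficients w_j (Eq. (3))."""
    ctrl_pts = np.asarray(ctrl_pts, dtype=float)   # (n+1, dim) control points p_j
    weights = np.asarray(weights, dtype=float)     # (n+1,) positive weights w_j
    numerator = BSpline(knots, weights[:, None] * ctrl_pts, degree)(zeta)
    denominator = BSpline(knots, weights, degree)(zeta)
    return numerator / denominator[..., None]

# Illustrative quadratic NURBS curve with four control points (degree = order - 1)
knots = [0, 0, 0, 0.5, 1, 1, 1]
pts = [[0, 0], [1, 2], [3, 2], [4, 0]]
w = [1.0, 0.8, 1.2, 1.0]
print(nurbs_curve(pts, w, knots, 2, np.linspace(0, 1, 5)))
```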

Parametrization process

Another important ingredient related to NURBS curve fitting is the parametrization process. It assigns a numerical parameter value to each data point, and this value describes the location of the corresponding point on the NURBS curve. The chord length parametrization is given by

$$\begin{aligned}&\zeta _0=0,~~~~ \zeta _{j}=\zeta _{j-1}+\frac{|q_j-q_{j-1}|}{\sum \nolimits _{k=1}^{n}|q_k-q_{k-1}|}\nonumber \\&~~~~\text{ for }~~~j=1,\ldots,n,~~~~\text{ so } \text{ that }~~~\zeta _{n}=1. \end{aligned}$$
(5)

Here \(q_0,\ldots,q_n\) denote the points being parametrized. The chord length parameters lie in the range [0, 1], i.e. the parameter \(\zeta \) takes values within [0, 1]. Besides chord length parametrization, several other methods exist, such as uniform parametrization [9, 33], centripetal parametrization [9] and hybrid parametrization [34].
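A minimal sketch of the cumulative chord length parametrization of Eq. (5); the function name and the sample points are illustrative only:

```python
import numpy as np

def chord_length_parameters(points):
    """Assign a parameter in [0, 1] to each data point: zeta_0 = 0, zeta_n = 1,
    spacing proportional to the distances between consecutive points (Eq. (5))."""
    points = np.asarray(points, dtype=float)
    dists = np.linalg.norm(np.diff(points, axis=0), axis=1)    # |q_j - q_{j-1}|
    total = dists.sum()
    if total == 0:                       # degenerate data: fall back to uniform spacing
        return np.linspace(0.0, 1.0, len(points))
    return np.concatenate(([0.0], np.cumsum(dists) / total))   # last value is exactly 1

print(chord_length_parameters([[0, 0], [1, 0], [1, 2], [4, 2]]))  # [0, 1/6, 1/2, 1]
```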

GACO model

The Generalized Ant Colony Optimizer (GACO) model mainly consists of two approaches: the simple ant colony optimizer and the global ant colony optimizer. In this model the foraging behavior of ants plays an important role: with this behavior, ants are able to approach the optimal path without any central coordination, guidance or cooperation. Dorigo et al. [30] presented the foraging behavior of ants by means of a formal binary path selection model. According to that model, consider two paths \(E_1\) and \(E_2\) carrying \(N_{E_1}(t)\) and \(N_{E_2}(t)\) ants respectively at time t. At time \((t+1)\), the probability that a newly arriving ant chooses path \(E_1\) as its target path is given by

$$\begin{aligned} p_{E_1}(t+1)= & {} \frac{(N_{E_1}(t)+k)^{\alpha }}{(N_{E_1}(t)+k)^{\alpha }+(N_{E_2}(t)+k)^{\alpha }},\nonumber \\= & {} 1-p_{E_2}(t+1), \end{aligned}$$
(6)

where the exponent \(\alpha \) controls the influence of the deposited pheromone: as \(\alpha \) increases, the probability of choosing the more heavily marked path increases. The constant k determines the degree of attraction of a path that has received little pheromone. According to the foraging model, when the ants move in search of food, each ant leaves a certain amount of pheromone behind. A shorter path receives more deposited pheromone than a longer one, and the following ants quickly adopt that shorter path. The main algorithm can be divided into three stages: (a) ant based solution construction; (b) pheromone update; and (c) daemon actions. Let us consider a colony of artificial ants \((a_1,a_2,\ldots,a_n)\).
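As a small illustration of Eq. (6), the following sketch repeatedly sends ants across two paths; the values \(k=20\) and \(\alpha =2\) are illustrative choices, not taken from the paper. The path that attracts ants early quickly comes to dominate:

```python
import random

def choose_path(n_e1, n_e2, k=20, alpha=2):
    """Return 1 or 2 according to the binary choice probability of Eq. (6)."""
    p1 = (n_e1 + k) ** alpha / ((n_e1 + k) ** alpha + (n_e2 + k) ** alpha)
    return 1 if random.random() < p1 else 2

n1 = n2 = 0
for _ in range(1000):                  # send 1000 ants one after another
    if choose_path(n1, n2) == 1:
        n1 += 1
    else:
        n2 += 1
print(n1, n2)                          # one path typically attracts most of the ants
```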

Ant based solution construction stage: The solution construction starts with a randomly chosen partial solution. At each construction step, the partial solution is extended by adding a solution component from the set of feasible solutions \(V^i_n\). The choice of the solution component is made probabilistically at each construction step. Suppose ant n is currently at point i; its probability of moving to the next point \(j \in V^{i}_{n}\) is given by

$$\begin{aligned} p_{n}^{i,j}(t)=\left\{ \begin{array}{ll} \frac{z_{i,j}^{\alpha }(t)}{\sum \nolimits _{k \in V_{n}^{i}}z_{i,k}^{\alpha }(t)},~~~ &{} j \in V^{i}_{n}; \\ 0, &{} j \notin V^{i}_{n}. \end{array} \right. \end{aligned}$$
(7)

Each ant has its own decision policy for selecting the next point. The value \(z_{i,j}\) denotes the pheromone on the path (i, j), and \(\alpha \) denotes the impact of the pheromone deposit; it must be a positive constant. Each ant builds a complete path linking its source point to its destination point. Cycles are then removed from the individual ants' paths, but the paths can still be retraced from their sources because every traversed path (i, j) carries a deposited pheromone amount. In this way the pheromone table is constructed. This table contains a real-valued entry for each destination point and each adjacent path; the entry can be thought of as a learned preference for traveling over that adjacent path on the way to the destination point.
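The construction step of Eq. (7) amounts to a pheromone-weighted roulette-wheel selection over the feasible points. A minimal sketch, with illustrative variable names and the initial pheromone value \(z_{i,j}=0.1\) used later in the paper:

```python
import numpy as np

def next_point(i, feasible, pheromone, alpha=1.0, rng=None):
    """Transition rule of Eq. (7): choose the next point j from the feasible set
    V_n^i with probability proportional to z_{i,j}^alpha (roulette-wheel selection)."""
    rng = rng or np.random.default_rng()
    weights = np.array([pheromone[i, j] ** alpha for j in feasible])
    return rng.choice(feasible, p=weights / weights.sum())

pheromone = np.full((4, 4), 0.1)   # initial pheromone z_{i,j} = 0.1
pheromone[0, 2] = 0.5              # one edge has accumulated more pheromone
print(next_point(0, feasible=[1, 2, 3], pheromone=pheromone))
```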

Pheromone update stage: At the beginning of the process, all ants select their nearest points randomly, and the ants that successfully reach their destination points are allowed to update the total path length \(\mathcal {L}\) in the pheromone table. Let \(\mathcal {L}^n(t)\) be the path length traveled by ant n. The total pheromone deposited by ant n on this path is \(\varDelta z_{i,j}^{n}(t)\), which satisfies

$$\begin{aligned} \varDelta z_{i,j}^{n}(t) \propto \frac{1}{\mathcal {L}^n(t)}. \end{aligned}$$

The pheromone table is continuously updated by the ants during the search for possible solutions. The pheromone update is performed by the rule

$$\begin{aligned} z_{i,j}(t+1)=z_{i,j}(t)+ \sum \limits _{n=1}^{a_n}\varDelta z_{i,j}^{n}(t+1). \end{aligned}$$
(8)

The upcoming ants use this updated pheromone table in the search for their destinations.
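A minimal sketch of the deposit rule of Eq. (8); the tour and length values are illustrative only:

```python
import numpy as np

def update_pheromone(pheromone, tours, lengths):
    """Pheromone update of Eq. (8): every ant that completed a tour deposits
    Delta z = 1/L^n on each edge (i, j) of its own tour."""
    for tour, length in zip(tours, lengths):
        deposit = 1.0 / length
        for i, j in zip(tour[:-1], tour[1:]):   # consecutive points form the edges
            pheromone[i, j] += deposit
    return pheromone

pheromone = np.full((4, 4), 0.1)
update_pheromone(pheromone, tours=[[0, 2, 3]], lengths=[5.0])
print(pheromone[0, 2], pheromone[2, 3])         # both edges gained 1/5 = 0.2
```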

Daemon action stage: Let us denote the optimal solution by \(\mathcal {O}^n(t)\) and its quality by \(f(\mathcal {O}^n(t))\). In this algorithm, both the local and the global search space are taken into consideration. The local pheromone amount \(\varDelta z_{i,j}^{n}(t+1)\) is calculated by the rule

$$\begin{aligned} \varDelta z_{i,j}^{n}(t+1)=\left\{ \begin{array}{ll} \frac{1}{\mathcal {L}^n(t+1)}, &{} \hbox {if the } n\text {th} \hbox { ant follows the path }(i,j); \\ 0, &{} {\text{ otherwise }.} \end{array} \right. \end{aligned}$$
(9)

In Eq. (9), \(\mathcal {L}^n(t+1)\) is the total path length traveled by ant n. The role of the local pheromone update is to make the transition paths attractive by dynamically changing the path tours. The transition of paths by the ants also occurs after every iteration, based on the deposited pheromone amount; thus the local search is always performed in order to obtain shorter paths. The global pheromone update is performed with a rule similar to that of the local search: once all the ants have completed their paths, the global search is performed to obtain the best path as follows:

$$\begin{aligned} \varDelta z_{i,j}^{n}(t+1)=\left\{ \begin{array}{ll} \frac{1}{\mathcal {L}_{\text {best}}(t+1)}, &{} \hbox {if } (i,j) \hbox { is the best path;} \\ 0, &{} {\text{ otherwise },} \end{array} \right. \end{aligned}$$
(10)

where \(\mathcal {L}_{\text {best}}\) is the shortest path length. Let \(a_l\) ants and \(a_g\) ants perform the local and global search respectively. Denote the target points by r(i) and the fitness function by f(r(i)). Each target point is assigned a quality value, and a certain amount of pheromone \(z_i\) is also initialized for each target point. It is observed that almost all ant colony optimization algorithms fall into local optima easily; hence crossover and mutation [35] play an important role in avoiding premature convergence, as they increase the diversity of the ants. In this study, \(95 \%\) of the \(a_{g}\) ants perform crossover and mutation while the remaining \(5 \%\) are involved in pheromone trail diffusion. The probability that a local ant n of \(a_l\) selects the target point r(i), biased towards the good target points, is given by

$$\begin{aligned} p^{n}_{i}(t)= \frac{z^{\alpha }_{i}(t)\nu ^{\beta }_{i}(t)}{\sum \nolimits _{j \in N_{i}^{n}}z^{\alpha }_{j}(t)\nu ^{\beta }_{j}(t)}, \end{aligned}$$
(11)

where \(\nu ^{\beta }_{i}(t)\) represents the attractiveness of the move from the source to the destination and \(N_{i}^{n}\) denotes the feasible set. We then calculate the fitness function for the target points. If a target point attains a better fitness, the upcoming ants follow the same path; otherwise they choose a new direction randomly and the age of the target point, i.e. the weakness of that particular solution, is incremented.
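The daemon actions and the target selection rule can be sketched as follows (Eqs. (9)-(11)); the helper names and example values are illustrative, and the shortest tour is assumed to be the best one:

```python
import numpy as np

def daemon_deposits(tours, lengths, shape):
    """Local deposit of Eq. (9): 1/L^n on each ant's own edges.
    Global deposit of Eq. (10): 1/L_best on the edges of the best tour only."""
    local = np.zeros(shape)
    for tour, length in zip(tours, lengths):
        for i, j in zip(tour[:-1], tour[1:]):
            local[i, j] += 1.0 / length
    best = int(np.argmin(lengths))               # shortest tour taken as the best one
    glob = np.zeros(shape)
    for i, j in zip(tours[best][:-1], tours[best][1:]):
        glob[i, j] += 1.0 / lengths[best]
    return local, glob

def target_probabilities(z, nu, alpha=1.0, beta=1.0):
    """Eq. (11): probability of selecting each target point, combining the
    pheromone values z_i with the attractiveness values nu_i."""
    scores = z ** alpha * nu ** beta
    return scores / scores.sum()

print(target_probabilities(np.array([1.0, 1.0, 2.0]), np.array([0.5, 1.0, 1.0])))
```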

Curve reconstruction by GACO model

Curve fitting

Let us assume that we want to fit a given set of two dimensional data points \(q=\{q_{\alpha }\}=(x_{\alpha },y_{\alpha })_{\alpha =0,\ldots,p}\) with a NURBS curve \(c(\zeta )\) of order l (with \(s<p\)) such that

$$\begin{aligned} q_{\alpha }=\sum \limits _{j=0}^{n}p_{j}r_{j,l}(\zeta _{\alpha }) \quad \text{ for } \text{ all }~ \alpha =0,...,p. \end{aligned}$$
(12)

For this fitting we must assign a parameter value \(\zeta _{\alpha }\) to each of the measured data points \(\{q_{\alpha }\}\) and make a suitable choice of the weights \(w_{j}\). The problem thus reduces to finding the curve which minimizes the following (weighted least-squares) expression:

$$\begin{aligned} f=\sum \limits _{\alpha =0}^{p}[q_{\alpha }-c(\zeta _{\alpha })]^{2}. \end{aligned}$$
(13)

We may restate the above problem as follows. The data points are arranged in a vector \(\{q_{\xi }=(x_{\xi },y_{\xi })\}\) with \(\xi =0,\ldots,D\), where D indicates the number of given data points. After assigning a parameter value \(\zeta _{\xi }\) to each data point \(q_{\xi }\) and choosing the weights \(w_{j}\) of the control points, the fitting problem can be expressed as

$$\begin{aligned} f= & {} \sum \limits _{\xi =0}^{D}[q_{\xi }-c(\zeta _{\xi })]^{2}\nonumber \\= & {} \sum \limits _{\xi =0}^{D}\left[ q_{\xi }-\frac{\sum \nolimits _{j=0}^{n} w_{j}p_{j}B_{j,l}(\zeta _{\xi })}{\sum \nolimits _{j=0}^{n}w_{j}B_{j,l} (\zeta _{\xi })}\right] ^{2}. \end{aligned}$$
(14)

Equation (14) yields an over-constrained system of equations with the control points \(p_{j}\) (and the weights) as unknowns. A least-squares solution [5, 6] is widely chosen for such systems. However, the expression is highly non-linear in the weights and parameter values through the B-spline basis functions, and for large data sets the number of unknowns is huge. In view of these challenges, the problem is better suited to a non-linear optimization process.
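The quantity minimized by the optimizer is exactly the residual of Eq. (14). A compact sketch, again using SciPy's BSpline with illustrative inputs (not the authors' MATLAB code); in the proposed setting the GACO model searches over the control points, weights and, where needed, the parameter values that minimize this value:

```python
import numpy as np
from scipy.interpolate import BSpline

def fitting_error(data_pts, params, ctrl_pts, weights, knots, degree):
    """Objective of Eq. (14): sum of squared distances between the data points
    q_xi and the NURBS curve evaluated at the assigned parameters zeta_xi."""
    ctrl_pts = np.asarray(ctrl_pts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    num = BSpline(knots, weights[:, None] * ctrl_pts, degree)(params)
    den = BSpline(knots, weights, degree)(params)
    curve_pts = num / den[:, None]
    return float(np.sum((np.asarray(data_pts, dtype=float) - curve_pts) ** 2))

knots = [0, 0, 0, 0.5, 1, 1, 1]
data = np.array([[0.0, 0.0], [2.0, 1.5], [4.0, 0.0]])
params = np.array([0.0, 0.5, 1.0])
print(fitting_error(data, params, [[0, 0], [1, 2], [3, 2], [4, 0]],
                    [1.0, 0.8, 1.2, 1.0], knots, 2))
```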

The matrix representation of Eq. (12) is

$$\begin{aligned} \left( \begin{array}{c} q_{0} \\ q_{1} \\ \vdots \\ q_{p} \\ \end{array} \right)= & {} \left( \begin{array}{cccc} r_{0,l}(\zeta _{0}) &{} r_{1,l}(\zeta _{0}) &{} \cdots &{} r_{n,l}(\zeta _{0}) \\ r_{0,l}(\zeta _{1}) &{} r_{1,l}(\zeta _{1}) &{} \cdots &{} r_{n,l}(\zeta _{1}) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ r_{0,l}(\zeta _{p}) &{} r_{1,l}(\zeta _{p}) &{} \cdots &{} r_{n,l}(\zeta _{p}) \\ \end{array} \right) \nonumber \\&\left( \begin{array}{c} p_{0} \\ p_{1} \\ \vdots \\ p_{n} \\ \end{array} \right) ~~\text{ or }~~ \mathbf{Q} =\mathbf{R}\,\mathbf{P} . \end{aligned}$$
(15)

Pre-multiplying both sides by \(\mathbf{R}^{T}\) (the system above is over-determined), we get

$$\begin{aligned} \mathbf{R}^{T}\mathbf{Q} =\mathbf{R}^{T}\mathbf{R}\,\mathbf{P} . \end{aligned}$$
(16)

This is the classical least-squares system, which provides the coefficients that best fit the given data points in the weighted least-squares sense. Hence, in the fitting problem we generally have to perform the following tasks:

  • A suitable and careful selection of parameters (order, no. of control points, knot vectors) is required to generate the B-spline basis function first.

  • The positive weights corresponding to each control point must be obtained.

  • The proper parametrization of the given data has to be obtained.

  • At last, the control points must be obtained.

In the next sections, we describe the procedures to solve the above subproblems with the GACO model. The algorithm presents the solution of the fitting problem in a general and unified way.
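For fixed weights and parameter values, the linear step of Eqs. (15)-(16) is an ordinary least-squares solve. A sketch with an illustrative toy basis matrix; solving \(\mathbf{R}\mathbf{P}=\mathbf{Q}\) with a least-squares routine is mathematically equivalent to the normal equations of Eq. (16) but numerically better conditioned:

```python
import numpy as np

def solve_control_points(R, Q):
    """Least-squares solution of Eqs. (15)-(16): find P minimizing ||R P - Q||^2.
    R is the (p+1) x (n+1) matrix of rational basis values r_{j,l}(zeta_alpha)
    and Q stacks the data points row-wise."""
    P, *_ = np.linalg.lstsq(R, Q, rcond=None)   # avoids forming R^T R explicitly
    return P

R = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # toy basis matrix (illustrative)
Q = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(solve_control_points(R, Q))                     # recovers the two control points
```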

Selection of parameters

For the solution of the above subproblems, the inputs are:

  • The collection of two dimensional data points, \(\mathbf{Q} \).

  • The order of the NURBS curve l

  • The number of control points \(n+1\).

For the dimension of the search space, we have the following details:

  • The total number of control points to be determined is \((n+1)\); each has two coordinates plus a corresponding weight, which contributes \(3(n+1)\) dimensions to the search space.

  • The \((p+1)\) data points, each with one parameter value \(\zeta \), contribute a further \((p+1)\) dimensions.

Hence, the dimension of the search space for curve fitting is \(3(n+1)+(p+1)\). The order of the B-spline basis functions and the number of control points are chosen according to the nature of the given data points. Since NURBS are piecewise (rational) polynomial functions, the number and initial placement of the control points can be chosen freely for simple as well as complex shapes. The inner knots are set within the range [0, 1], and the weights are also normalized to the range (0, 1). The NURBS curve parameters (weights, control points), initially set to some values, are evaluated numerically by solving Eq. (16) with the two-step linear process described in [7]; hence the initial choice of control points is not critical. Similarly, the initial pheromone value is not critical and is set as \(z_{i,j}=0.1\), with \(\alpha =1, \beta =1\) and \(z_{k}(0)=1\).

Fig. 1
figure 1

Methodology of NURBS curve fitting using the GACO model

Methodology

In this section, the adopted methodology is described. Many conventional curve-fitting techniques [8, 9] always assign a constant value (typically 1) to the weights of the respective control points. This choice automatically reduces the NURBS curve to a plain B-spline curve, so some important properties of the NURBS curve [9] are lost; this is one of the major weaknesses of the conventional approaches. In the presented approach, at the start of the proposed algorithm we assign random values [7] to the control points and their corresponding weights, and the GACO model then optimizes them. The procedure of the proposed algorithm is shown in Fig. 1. The GACO algorithm is described in the following steps; a toy code illustration of the core ant-colony loop is given after the step list:

ALGORITHM

  1. Step 1

    Start

  2. Step 2

    Generate data points Q

  3. Step 3

    A NURBS curve of order l with n control points will be constructed to fit the data points Q. The first step is to calculate the knot vector, which is set as \((0,...,0,t_1,...,t_{n-p},1,...1)\), i.e., within the range [0, 1]

  4. Step 4

    From the generated data points, the values of the parameters are calculated using chord length parametrization method given by Eq. (5)

  5. Step 5

    Generate random control points and weight of the NURBS curve

  6. Step 6

    Evaluate the fitness of the objective function using Eq. (14)

  7. Step 7

    Start the GACO algorithm

  8. Step 8

    Create ants

  9. Step 9

    Put ants on an entry state

  10. Step 10

    Initialize an empty path list for each ant

  11. Step 11

    Select next state for each ant using Eq. (7)

  12. Step 12

    If the path linkage is not complete, go to Step 11

  13. Step 13

    For each ant add pheromone values in the pheromone table using Eq. (8)

  14. Step 14

    Calculate the objective function (Eq. (14)) and sort the fitness for all target points in decreasing order

  15. Step 15

    Send \(95 \%\) of the \(a_g\) ants for crossover and mutation

  16. Step 16

    Send the remaining \(5 \%\) for pheromone trail diffusion

  17. Step 17

    Update pheromone values by using the Eq. (10) and also update the weakness of \(a_g\)

  18. Step 18

    Send \(a_l\) ants to pick good target points by Eq. (11)

  19. Step 19

    For each \(a_l\) ant, calculate the fitness function; if the fitness improves, go to Step 20, otherwise go to Step 21

  20. Step 20

    Update the pheromone value by Eq. (9), and move to good target points

  21. Step 21

    Increase the age of target points and select new direction randomly

  22. Step 22

    Repeat the steps until the termination criterion (a predefined small error value) is reached

  23. Step 23

    End
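The core ant-colony loop of Steps 8-17 can be illustrated on a toy shortest-path problem as follows. This is only a simplified stand-in meant to show how the transition rule of Eq. (7), the deposit of Eq. (8) and the best-path reinforcement of Eq. (10) interact; it is not the full GACO curve-fitting algorithm, and the graph, parameters and function name are our own illustrative choices:

```python
import numpy as np

def toy_aco_shortest_path(dist, source, target, n_ants=20, n_iter=50,
                          alpha=1.0, z0=0.1, seed=0):
    """Toy ACO loop: ants build paths with Eq. (7), deposit pheromone per
    Eq. (8), and the best path is reinforced as in Eq. (10)."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    pheromone = np.full((n, n), z0)
    best_path, best_len = None, np.inf
    for _ in range(n_iter):
        tours, lengths = [], []
        for _ in range(n_ants):
            path, visited = [source], {source}
            while path[-1] != target:
                i = path[-1]
                feasible = [j for j in range(n)
                            if j not in visited and np.isfinite(dist[i][j])]
                if not feasible:
                    break
                w = np.array([pheromone[i, j] ** alpha for j in feasible])
                j = int(rng.choice(feasible, p=w / w.sum()))      # Eq. (7)
                path.append(j)
                visited.add(j)
            if path[-1] == target:
                tours.append(path)
                lengths.append(sum(dist[a][b] for a, b in zip(path[:-1], path[1:])))
        for path, L in zip(tours, lengths):                        # Eq. (8): deposit 1/L
            for a, b in zip(path[:-1], path[1:]):
                pheromone[a, b] += 1.0 / L
        if lengths and min(lengths) < best_len:
            best_len = min(lengths)
            best_path = tours[int(np.argmin(lengths))]
        if best_path is not None:                                  # Eq. (10): reinforce best path
            for a, b in zip(best_path[:-1], best_path[1:]):
                pheromone[a, b] += 1.0 / best_len
    return best_path, best_len

inf = np.inf
dist = [[inf, 2, 9, inf],
        [inf, inf, 6, 3],
        [inf, inf, inf, 1],
        [inf, inf, inf, inf]]
print(toy_aco_shortest_path(dist, source=0, target=3))  # typically finds [0, 1, 3], length 5
```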

Experimental results and discussions

Reconstruction results

In this section, the reconstruction results based on the GACO algorithm are discussed. The proposed approach is tested on synthetic objects (Helix, Testsurface) as well as real world objects (Vase, Tsukuba, Giraffe). The code for the reconstruction algorithm is developed in MATLAB, and a computer with a 2.53 GHz Intel(R) i3 processor and 3.0 GB RAM is used for computation. The two view reconstruction process mainly consists of the following steps:

  • Generate the digitized data points in both views by projecting the synthetic three dimensional data points with the help of projection matrices.

  • Start NURBS fitting [7] and obtain the initial parameter values of NURBS.

  • Perform GACO algorithm in both image planes to get NURBS control points and corresponding weights.

  • Use a third view [7] to establish the correspondence between the control points in the two images.

  • Reconstruct the control points of the NURBS shape (curve or surface) in three dimensional space using the triangulation procedure [7] (a generic triangulation sketch is given after this list). Finally, assign the appropriate weight to each reconstructed three dimensional control point to generate the complete NURBS shape.
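For reference, a generic linear (DLT) triangulation sketch is given below. It assumes the standard homogeneous pinhole convention in which the 2D point is obtained by dividing by the third coordinate; the exact triangulation procedure of [7] used by the authors may differ in detail:

```python
import numpy as np

def triangulate_point(x_left, x_right, T_left, T_right):
    """Generic linear (DLT) triangulation: recover a 3D point from its two
    image projections and the 3x4 projection matrices, via SVD of a 4x4 system."""
    def two_rows(x, T):
        # From x ~ T X (homogeneous): x0*T[2] - T[0] and x1*T[2] - T[1] annihilate X
        return np.array([x[0] * T[2] - T[0], x[1] * T[2] - T[1]])
    A = np.vstack([two_rows(np.asarray(x_left, float), np.asarray(T_left, float)),
                   two_rows(np.asarray(x_right, float), np.asarray(T_right, float))])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize to get the 3D point

T_L = np.array([[1, 0, 0, 50], [0, 1, 0, 20], [0, 0, 1, 0]], dtype=float)
T_R = np.array([[1, 0, 0, -20], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
X = np.array([53.0, 35.0, 12.0, 1.0])                     # homogeneous 3D test point
xl, xr = (T_L @ X)[:2] / (T_L @ X)[2], (T_R @ X)[:2] / (T_R @ X)[2]
print(triangulate_point(xl, xr, T_L, T_R))                # recovers [53, 35, 12]
```

Each matched pair of two dimensional control points can be passed through such a routine to obtain a three dimensional control point.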

Model 1: Helix The first example used to validate our approach is a helix of the form (shown in Fig. 2(i)): \(\left\{ \begin{array}{l} X = 50 + 3 \sin t \\ Y = 30+ 5 \cos t ~~~~t \in [0,2 \pi ]\\ Z = 10 + 0.5 ~t \\ \end{array} \right. \) We have projected this parametric curve onto the left image plane using the projection matrix \(T^L=\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 50 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 20 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ \end{array} \right) \) and onto the right image plane with the matrix \(T^R=\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad -20 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ \end{array} \right) \). The data points in the two dimensional image planes (shown by \(\cdot \) in Fig. 2a, b) are obtained with the help of these projection matrices. The inputs for NURBS fitting [7] are set as: (i) order of the fitted two dimensional NURBS curve: 4; (ii) non-uniform knot vector: \([0 ~0~ 0 ~0 ~\frac{1}{10} ~\frac{1}{10} ~\frac{2}{10} ~\frac{3}{10} ~\frac{5}{10} ~\frac{6}{10} ~\frac{8}{10} ~\frac{9}{10} ~\frac{9}{10} ~1~ 1~ 1~ 1]\); (iii) number of control points: 13. The search space in this case is a hyperspace of dimension 80. The algorithm [7] generates the control points and corresponding weights; through this NURBS fitting a two dimensional curve is fitted, and curve points are calculated for different parameter values. The error (objective) function is then formed as the error between the data points and the calculated curve points, and the objective is to minimize this error function with the GACO model. The algorithm parameters are set as \(z_{i,j}=0.1\), \(\alpha =1, \beta =1\), \(z_{k}(0)=1\). The termination criterion is set as 100 iterations or a minimum error threshold. Figure 2c, d show the initial random positions of the control points by \(\diamond \). Using the presented theory, the updated positions of the control points are shown by \(\diamond \) in Fig. 2e, f. Using a third view [12], the three dimensional control points (shown by red colored \(\diamond \) in Fig. 2g) and weights are obtained. Finally, the reconstructed NURBS curve with the obtained parameters is shown in Fig. 2h; it represents the best fit obtained with a NURBS curve of order 4 and 13 control points.
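As a complement to the triangulation sketch above, the following snippet shows how two dimensional data points for this model can be generated by projecting the sampled helix with the stated matrices, again assuming the homogeneous convention with division by the third coordinate (the paper's exact projection convention may differ):

```python
import numpy as np

def project(points_3d, T):
    """Project 3D points with a 3x4 matrix T in homogeneous coordinates and
    divide by the third coordinate to obtain 2D image points."""
    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])   # (N, 4) homogeneous points
    x = X @ T.T                                                 # (N, 3)
    return x[:, :2] / x[:, 2:3]

t = np.linspace(0, 2 * np.pi, 50)
helix = np.column_stack([50 + 3 * np.sin(t), 30 + 5 * np.cos(t), 10 + 0.5 * t])
T_L = np.array([[1, 0, 0, 50], [0, 1, 0, 20], [0, 0, 1, 0]], dtype=float)
T_R = np.array([[1, 0, 0, -20], [0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
left_pts, right_pts = project(helix, T_L), project(helix, T_R)
print(left_pts[0], right_pts[0])
```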

Fig. 2
figure 2

Reconstruction results of Helix: a, b Two dimensional projected data points in left and right views; c, d Randomly posed control points in both image planes; e, f Updated positions of control points in both views; g Reconstructed control points in space; h Reconstructed NURBS curve in space; i Helix

Model 2: Testsurface A NURBS surface differs from a NURBS curve in that it has two parametric directions instead of one, and the order and the knot vector must be defined for both parameters. With this concept, the presented theory can be extended to surfaces as well. For validation, we have considered the following testsurface (Fig. 3g):

$$\begin{aligned} \left\{ \begin{array}{l} X = \text {linspace}(-2,2,10)\\ Y = \text {linspace} (-2,2,10)\\ (X~Y) = \text {meshgrid} (X,Y)\\ Z = X (e^{-X^2-Y^2})\\ \text {surf} (X,Y,Z)\\ \end{array} \right. . \end{aligned}$$

For the generation of the data points (shown with blue colored \(\Box \) in Fig. 3a, b) in both image planes, we have taken the left and right projection matrices as \(T^L=\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 1 \\ \end{array} \right) \) and \(T^R=\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad -10 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 1 \\ \end{array} \right) \). The parameters for NURBS fitting are set as: (i) NURBS surface order: (3, 3); (ii) knot vectors: \([0~ 0 ~0 ~\frac{1}{4}~ \frac{1}{4}~ \frac{2}{4}~ \frac{2}{4}~ \frac{2}{4}~ \frac{3}{4}~ \frac{3}{4} ~1 ~1 ~1]\) and \([0 ~0 ~0~ \frac{1}{3}~ \frac{1}{3}~ \frac{1}{3}~ \frac{2}{3}~ \frac{2}{3} ~1 ~1 ~1]\); (iii) number of control points: \(10 \times 8\). The rest of the GACO parameters are the same as in the above model. With the proposed approach, the optimized locations of the two dimensional control points and their corresponding weights are obtained in both image planes (shown with blue \(*\) in Fig. 3c, d). Using a third view, the required parameters are evaluated in three dimensional space (shown by colored \(*\) in Fig. 3e). The NURBS surface generated from these parameters is shown in Fig. 3f.

Fig. 3
figure 3

Reconstruction results of surface: a, b Two dimensional projected data points in left and right views; c, d Updated control points by GACO model in both image planes; e Reconstructed control points in space; f Reconstructed NURBS surface; g Testsurface

Fig. 4
figure 4

Reconstruction results of Vase: a, b Two dimensional views of Vase; c Reconstructed NURBS Vase Model

Model 3: Vase Model To validate the algorithm on real examples, we first take two views (images) of the same vase (or flask), as shown in Fig. 4a, b. Both images are captured with a camera from different viewpoints. The objective is to construct the three dimensional model or surface corresponding to the provided two dimensional images. Before starting the GACO algorithm we set the following inputs: (i) order of the fitted NURBS surface: (4, 4); (ii) knot vectors: \([0~ 0 ~0 ~0~\frac{1}{4}~ \frac{1}{4}~ \frac{2}{4}~ \frac{2}{4}~ \frac{3}{4}~ 1~1 ~1 ~1]\) and \([0~0 ~0 ~0~ \frac{1}{3}~ \frac{1}{3}~ \frac{2}{3}~ \frac{2}{3}~ 1 ~1 ~1 ~1]\); (iii) number of control points: \(9 \times 8\). The rest of the GACO parameters are as mentioned in “Selection of parameters”. The three dimensional model corresponding to the two views, with the reconstructed control points (shown with black \(*\)) and corresponding weights, is shown in Fig. 4c. The main highlight of the presented algorithm in this case is the elimination of the third view: we have not used the concept of a third view here and still obtain a very good visual result.

Fig. 5
figure 5

Reconstruction results of Tsukuba statue: a, b Two dimensional views of Tsukuba model; c, d Updated control points by GACO model in both views; e Reconstructed NURBS Tsukuba Model

Model 4: Tsukuba Statue As another example, we take the Tsukuba image, which has been publicly available for several years. Again, for the two view reconstruction we take two subsequent frames (Fig. 5a, b) of the Tsukuba scene captured from slightly different viewpoints. The main objective here is to reconstruct the three dimensional curve of the boundary of the Tsukuba statue. The inputs are given in the following form: (i) order of NURBS: 4; (ii) knot vector: \([0 ~0~ 0 ~0 ~\frac{1}{10} ~\frac{1}{10} ~\frac{2}{10} ~\frac{3}{10} ~\frac{5}{10} ~\frac{6}{10} ~\frac{8}{10} ~\frac{9}{10} ~\frac{9}{10} ~1~ 1~ 1~ 1]\); (iii) number of control points: 13. The rest of the parameters are the same as before. For a total of 429 two dimensional data points used in the boundary curve reconstruction, the optimized locations of the NURBS parameters obtained by the GACO model in both views are shown by \(\diamond \) in Fig. 5c, d. Finally, the NURBS curve of the Tsukuba statue obtained by the GACO model is shown in Fig. 5e.

Model 5: Giraffe Image In the next example, we take two views of a giraffe (shown in Fig. 6a, b). In this example, a total of 1914 two dimensional data points are taken into consideration for the reconstruction of the outer boundary of the giraffe. With the proposed approach, the reconstructed outer shape of the body of the giraffe is shown in Fig. 6c.

Fig. 6
figure 6

Reconstruction results of Giraffe: a, b Two dimensional images of Giraffe; c Reconstructed boundary of NURBS curve

Error analysis

For the quantitative error analysis, we have evaluated the Mean, Median, Best, Worst and Standard deviation (SD, \(\sigma \)) of the objective function values obtained by the GACO algorithm. All the quantities are calculated over 100 runs and displayed in Table 2. It is observed from this table that the reconstruction errors are very small, almost negligible, which demonstrates the effectiveness of the proposed algorithm. However, the overall performance of any algorithm should also be tested in the presence of noise, so we have additionally checked the robustness of the presented method.

Table 2 Mean, median, best, worst and standard deviation (SD) of errors over 100 runs

The procedure to add noise in the reconstruction process is as follows: white Gaussian noise (mean 0 and different variances) is added to the data points of both views (the 2D image planes). This step perturbs the actual positions of the data points and induces some error in the whole reconstruction process, and we study how much it affects the reconstruction. The study is presented in Table 3. We have induced different noise levels, i.e., \(\sigma =0.1\) to 1. For each noise level, the mean, median, best, worst and SD are reported, and it is observed that the reconstruction errors remain small in the presence of the different noise levels as well. Figure 7a shows the robustness of the presented method for the helix, where the mean reconstruction errors are shown under different amounts of induced noise. The curve shows no deviation up to the noise level \(\sigma =0.6\); beyond that, small changes appear but there is still no rapid growth. A similar analysis has been carried out for the surface example (shown in Fig. 7b). From Fig. 7b, it may be concluded that introducing higher levels of noise does not produce any rapid increase in the reconstruction errors; hence this analysis demonstrates the robustness of the presented method very well. The convergence behavior of the presented study is demonstrated by means of error plots, where the errors (mean, median, SD) are observed over the iterations. The convergence plot for the first synthetic example is shown in Fig. 8a: the errors become almost constant after about the 60th iteration. For the surface example (Fig. 8b), the errors converge much faster, after about the 15th iteration.
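A minimal sketch of the noise-injection step described above (the sample points, the seed and the loop over noise levels are illustrative only):

```python
import numpy as np

def add_gaussian_noise(points_2d, sigma, seed=0):
    """Perturb 2D data points with zero-mean white Gaussian noise of level sigma,
    as done before re-running the reconstruction for the robustness study."""
    rng = np.random.default_rng(seed)
    return points_2d + rng.normal(0.0, sigma, np.shape(points_2d))

pts = np.array([[10.0, 20.0], [11.0, 21.5], [12.5, 22.0]])
for sigma in np.arange(0.1, 1.01, 0.3):
    print(round(float(sigma), 1), add_gaussian_noise(pts, sigma)[0])
```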

Table 3 Reconstruction errors in presence of various amounts of noise
Fig. 7
figure 7

Mean errors of reconstruction under different noise levels in case of a Helix; b Testsurface

Fig. 8
figure 8

a Convergence behaviour of Helix; b Convergence behaviour of Testsurface

Comparison study

As discussed in the sections above, the proposed GACO approach to the two view reconstruction process performs very well. To support this claim, a detailed comparison analysis is presented in this section. For this study, we compare the reconstruction results for a curve as well as a surface. The curve is taken in the form: \(\left\{ \begin{array}{ll} X = 2\cos t \\ Y = 2\sin t \\ Z = 2(t+1)\\ \end{array} \right. \)   \(t\in [0,\frac{5\pi }{4}].\)

Fig. 9
figure 9

Comparison analysis of a curve by: a Point-based Approach; b Our Algorithm

This curve is projected into the left and right image planes using the two projection matrices \(\left( \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 \\ \end{array}\right) \) and \(\left( \begin{array}{cccc} 1 &{} 0 &{} 0 &{} -20 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 \\ \end{array}\right) \). The number of data points used for the comparison is 31. The parameters for the GACO algorithm are set as follows: (i) NURBS order: 3; (ii) knot vector: \([0~ 0~ 0 ~\frac{1.45}{3} ~\frac{2.38}{3} ~1 ~1 ~1]\); (iii) number of control points: 5. Figure 9 presents the qualitative comparison: the left part of Fig. 9 shows the reconstruction result of the point-based methodology [3], and the right part shows the result of the proposed GACO approach. This comparison supports our claim of obtaining a smooth and flexible reconstruction.

Another example considered for the comparison is the following surface: \(\left\{ \begin{array}{l} X = \text {linspace} (-2,2,20)\\ Y = \text {linspace} (-2,2,20)\\ (X~Y) = \text {meshgrid} (X,Y)\\ R = (\sqrt{X^2+Y^2})\\ Z = \text {sin} (R)/ R \\ \text {surf}(X,Y,Z)\\ \end{array} \right. \)

Fig. 10
figure 10

Comparison analysis of a surface by: a Point-based Approach; b Our Algorithm

Table 4 Errors of reconstruction with corrupted data in case of curve (Fig. 9) and surface (Fig. 10)

This surface is projected into the left and right image planes using the projection matrices \(\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 40 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 20 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 1 \\ \end{array}\right) \) and \(\left( \begin{array}{cccc} 1 &{}\quad 0 &{}\quad 0 &{}\quad -20 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad -10 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 1 \\ \end{array}\right) \). The parameters are set for 400 data points: (i) NURBS order: (3, 3); (ii) knot vectors \([0~ 0 ~0 ~\frac{1}{3}~ \frac{1}{3}~ \frac{2}{3}~ \frac{2}{3}~ \frac{2}{3}~ \frac{2}{3}~ \frac{2}{3} ~1 ~1 ~1]\) and \([0 ~0 ~0~ \frac{1}{4}~ \frac{1}{4}~ \frac{2}{4}~ \frac{3}{4}~ \frac{3}{4} ~1 ~1 ~1]\); (iii) number of control points: 80. To illustrate the advantages of the proposed algorithm, we compare the reconstruction result for this surface with the ACO approach [28], in which a simple ACO algorithm was used for three dimensional NURBS fitting. To obtain the input for two view NURBS fitting, we projected the three dimensional data of [28] onto the view planes with the same projection matrices. The qualitative analysis is shown in Fig. 10: Fig. 10a is the reconstruction result from reference [28], while Fig. 10b corresponds to the proposed GACO approach. The obtained results again show better smoothness and flexibility. A quantitative analysis is also presented for both reference examples: the Mean, Median, Best and SD of the reconstruction errors are obtained under various levels of noise. Table 4 lists the obtained errors in both cases, where each block reports the Mean, Median, Best and SD in that order. From this table, it is observed that the reconstruction errors of the GACO approach are much smaller in both examples than those of the other considered methods. The computational efficiency is displayed in Table 5, which shows the reconstruction times for all the reference examples (Figs. 2, 3, 9, 10); the total computation time of the proposed algorithm is smaller than that of the other existing state-of-the-art algorithms. Hence, with all the presented analysis, we may conclude that our proposed algorithm outperforms the others in terms of accuracy, efficiency and flexibility.

Conclusion

In this study, we have presented a reconstruction methodology for free-form shapes in space using only two views. In particular, a NURBS curve of a given order is fitted to a given set of data points in the stereo views. The GACO model is used to minimize the fitting error between the data points and the fitted NURBS curve, and the optimization process yields the best values of the control points and their corresponding weights in both views. Finally, the triangulation method is applied to obtain the related parameters (control points and weight vectors) in space, which yields the reconstructed NURBS curve. The method has also been extended to NURBS surfaces. A detailed experimental study and error analysis has been conducted in support of the proposed approach. The main highlights of the presented algorithm are as follows:

  • Very Effective The proposed method has been tested on several synthetic as well as real images. We did not impose any restriction on the input images or data points, and the methodology proved effective in all cases. Even for images with missing data (as shown in Fig. 4), the performance is remarkable.

  • On the Mark All the examples taken for the analysis show that the measured errors are very small, which means that the method performs well in terms of numerical accuracy and reconstructs the shape of the objects with high accuracy.

  • General The method is very general and can be applied to a wide range of complex problems. For curve, surface and boundary reconstruction problems, the method gives accurate results.

  • Robust As discussed above, the reconstruction problem in three dimensional space is ill-posed, so a small error in the two dimensional fitting process can produce a large deviation in the reconstructed shape. The presented error analysis shows that the method also performs extremely well for heavily noisy input data.

Table 5 Average time in reconstruction process

In this work, we have used the optimized values of the NURBS control points and their corresponding weight vectors to obtain the required results. Some other parameters, such as the knot vectors and the parametric values of the data points, were kept fixed during our procedure. The type of parametrization method used for the data points is also a major concern here. In the near future, we plan to switch to other parametrization methods as well, such as centripetal and universal parametrization, and to place more emphasis on these parameters in the reconstruction process.