1 Introduction

With the development of space technology, an increasing number of space missions involve relative position measurement between two spacecraft [1,2,3,4], such as space assembly, satellite repair, fuel injection, satellite capture and tracking, and space interception. Measuring the relative position of a spacecraft is essential for maintaining or changing its orientation in space to complete a mission.

For relative position measurement, vision has advantages in terms of weight, volume, power consumption, and equipment cost [5,6,7,8]. In the SMART-OLEV mission [9, 10], the SMART-1 platform uses stereo cameras and lighting equipment to provide better measurement data within 5 m, but only pointing data at long distances. The Argon vision system is divided into long-range and short-range vision sensors [11, 12], which select sensors with different fields of view for different distances. The natural image feature recognition system developed by the Johnson Space Center generates a 3D model of the target under test to calculate the relative pose [13, 14]; its measurement accuracy is proportional to the relative distance. The measurement system is required to provide the relative pose of the two spacecraft at different distances to the control system or other systems, and the relative distance between the spacecraft varies significantly, by a factor of more than 20. Therefore, spacecraft pose estimation has the following characteristics.

(1) When the two spacecraft are relatively distant, the depths of the feature points on the target spacecraft and the distances between the feature points are small compared with the relative distance between the two spacecraft.

(2) Because the focal length of the camera is fixed, the accuracy of the feature point extraction decreases with an increase in the relative distance between the two spacecraft.

For the reasons above, the pose measurement accuracy decreases when the two spacecraft are far apart. At present, two main classes of algorithms are used for pose estimation.

(1) Cooperative space measurement. The first class consists of analytical algorithms based on the perspective projection camera model, such as perspective-n-point [15,16,17] and direct linear transformation [18,19,20]. These algorithms solve the spacecraft pose directly; however, the accuracy of the pose obtained with an analytical algorithm is often unsatisfactory. The second class consists of optimization algorithms based on nonlinear camera models, such as the Gauss–Newton, Levenberg–Marquardt, and orthogonal iteration algorithms.

2.1 Algorithm Model

To construct the spacecraft pose estimation algorithm model, four coplanar symmetric points are used to calculate the spacecraft pose. The target spacecraft coordinate system is Os-xyz, and the four points are denoted \({\varvec{P}}_{i}^{s} ,\) \(i = 1, \cdots ,4\). The four points in the target spacecraft coordinates are

$${\varvec{P}}_{1}^{s} = \left[ {\begin{array}{*{20}c} a \\ 0 \\ d \\ \end{array} } \right],\quad{\varvec{P}}_{2}^{s} = \left[ {\begin{array}{*{20}c} { - a} \\ 0 \\ d \\ \end{array} } \right],\quad{\varvec{P}}_{3}^{s} = \left[ {\begin{array}{*{20}c} c \\ b \\ d \\ \end{array} } \right],\quad{\varvec{P}}_{4}^{s} = \left[ {\begin{array}{*{20}c} { - c} \\ b \\ d \\ \end{array} } \right],$$
(1)

where a, b, c, d are known values and the relationship between \({\varvec{P}}_{i}^{s}\) and the points in the camera coordinate system \({\varvec{P}}_{i}^{c}\) is given by

$${\varvec{P}}_{i}^{c} = {\varvec{RP}}_{i}^{s} + {\varvec{T}},$$
(2)

where

$${\varvec{R}}_{3 \times 3} = \left[ {\begin{array}{*{20}c} I \\ J \\ K \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {r_{11} } & {r_{12} } & {r_{13} } \\ {r_{21} } & {r_{22} } & {r_{23} } \\ {r_{31} } & {r_{32} } & {r_{33} } \\ \end{array} } \right],\quad{\varvec{T}}_{3 \times 1} = \left[ {\begin{array}{*{20}c} {t_{x} } \\ {t_{y} } \\ {t_{z} } \\ \end{array} } \right],$$
(3)

where I is the first row of the rotation matrix \({\varvec{R}}_{3 \times 3}\), J is the second row, and K is the third row.

2.2 Camera Model

The fixed-focus-lens camera model can be simplified to a single-lens model. According to the optics principle, space points \(P_{i}^{c} (x_{i}^{c} ,\;y_{i}^{c} ,\;z_{i}^{c} )\), image points \(p_{i} (u_{i} ,\;v_{i} )\), and the camera origin \(O_{{\text{c}}}\) are located on the same line. Therefore, the camera model is called the pinhole camera model, which is also known as the perspective projection model.

$$u_{i} = \frac{{fx_{i}^{c} }}{{z_{i}^{c} }},\quad v_{i} = \frac{{fy_{i}^{c} }}{{z_{i}^{c} }},$$
(4)

where f is the camera focal distance. The spacecraft pose estimation model is

$$\left[ {\begin{array}{*{20}c} {{{u_{i} z_{i}^{c} } \mathord{\left/ {\vphantom {{u_{i} z_{i}^{c} } f}} \right. \kern-0pt} f}} \\ {{{v_{i} z_{i}^{c} } \mathord{\left/ {\vphantom {{v_{i} z_{i}^{c} } f}} \right. \kern-0pt} f}} \\ {z_{i}^{c} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {IP_{i}^{s} } \\ {JP_{i}^{s} } \\ {KP_{i}^{s} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {t_{x} } \\ {t_{y} } \\ {t_{z} } \\ \end{array} } \right].$$
(5)
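To make the model concrete, the following sketch (our illustration, not code from the paper) projects the four points of Eq. (1) through Eqs. (2) and (4); the geometry and pose values are placeholders:

```python
import numpy as np

def project_points(R, T, f, points_s):
    """Project target-frame points into the image plane (pinhole model, Eq. (4))."""
    points_c = (R @ points_s.T).T + T        # Eq. (2): camera-frame coordinates
    u = f * points_c[:, 0] / points_c[:, 2]
    v = f * points_c[:, 1] / points_c[:, 2]
    return np.column_stack([u, v])

# Placeholder geometry: a, b, c, d as in Eq. (1); values illustrative only
a, b, c, d = 0.075, 0.040, 0.030, 0.075     # metres (assumed units)
P_s = np.array([[a, 0, d], [-a, 0, d], [c, b, d], [-c, b, d]])
R = np.eye(3)                               # identity attitude for illustration
T = np.array([0.1, 0.1, 5.0])               # metres, illustrative
f = 0.012                                   # 12 mm focal distance (Section 3)
print(project_points(R, T, f, P_s))
```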

From Eq. (2), we can obtain the relationship between \(z_{i}^{c}\) and \(t_{z}\):

$$z_{i}^{c} = t_{z} (1 + \varepsilon_{i} ),\quad \varepsilon_{i} = {{KP_{i}^{s} } \mathord{\left/ {\vphantom {{KP_{i}^{s} } {t_{z} }}} \right. \kern-0pt} {t_{z} }}.$$
(6)

Finally, we have

$$\begin{aligned}& \left[ {\begin{array}{*{20}c} {\frac{{u_{1} }}{f}t_{z} (1 + \varepsilon_{1} )} & {\frac{{u_{2} }}{f}t_{z} (1 + \varepsilon_{2} )} & {\frac{{u_{3} }}{f}t_{z} (1 + \varepsilon_{3} )} & {\frac{{u_{4} }}{f}t_{z} (1 + \varepsilon_{4} )} \\ {\frac{{v_{1} }}{f}t_{z} (1 + \varepsilon_{1} )} & {\frac{{v_{2} }}{f}t_{z} (1 + \varepsilon_{2} )} & {\frac{{v_{3} }}{f}t_{z} (1 + \varepsilon_{3} )} & {\frac{{v_{4} }}{f}t_{z} (1 + \varepsilon_{4} )} \\ {t_{z} (1 + \varepsilon_{1} )} & {t_{z} (1 + \varepsilon_{2} )} & {t_{z} (1 + \varepsilon_{3} )} & {t_{z} (1 + \varepsilon_{4} )} \\ \end{array} } \right] \\ & \quad= \left[ {\begin{array}{*{20}c} { - ar_{11} + dr_{13} } & {ar_{11} + dr_{13} } & {cr_{11} + br_{12} + dr_{13} } & { - cr_{11} + br_{12} + dr_{13} } \\ { - ar_{21} + dr_{23} } & {ar_{21} + dr_{23} } & {cr_{21} + br_{22} + dr_{23} } & { - cr_{21} + br_{22} + dr_{23} } \\ { - ar_{31} + dr_{33} } & {ar_{31} + dr_{33} } & {cr_{31} + br_{32} + dr_{33} } & { - cr_{31} + br_{32} + dr_{33} } \\ \end{array} } \right] \\ & \qquad+ \left[ {\begin{array}{*{20}c} {t_{x} } \\ {t_{y} } \\ {t_{z} } \\ \end{array} } \right].\end{aligned}$$
(7)

According to the symmetry properties of points, we have

$$\left\{ {\begin{array}{*{20}c} {r_{11} = k_{1} t_{z} ,} \\ {r_{21} = k_{2} t_{z} ,} \\ \end{array} } \right.\left\{ {\begin{array}{*{20}c} {r_{12} = k_{3} t_{z} ,} \\ {r_{22} = k_{4} t_{z} ,} \\ \end{array} } \right.\left\{ {\begin{array}{*{20}c} {r_{13} = k_{5} t_{z} - {{t_{x} } \mathord{\left/ {\vphantom {{t_{x} } d}} \right. \kern-0pt} d},} \\ {r_{23} = k_{6} t_{z} - {{t_{y} } \mathord{\left/ {\vphantom {{t_{y} } d}} \right. \kern-0pt} d},} \\ \end{array} } \right.$$
(8)

where

$$\left\{ \begin{aligned} &k_{1} = \frac{{u_{2} (1 + \varepsilon_{2} ) - u_{1} (1 + \varepsilon_{1} )}}{2af}, \\ &k_{2} = \frac{{v_{2} (1 + \varepsilon_{2} ) - v_{1} (1 + \varepsilon_{1} )}}{2af}, \\ &k_{3} = \frac{{u_{3} (1 + \varepsilon_{3} ) + u_{4} (1 + \varepsilon_{4} ) - u_{2} (1 + \varepsilon_{2} ) - u_{1} (1 + \varepsilon_{1} )}}{2bf}, \\ &k_{4} = \frac{{v_{3} (1 + \varepsilon_{3} ) + v_{4} (1 + \varepsilon_{4} ) - v_{2} (1 + \varepsilon_{2} ) - v_{1} (1 + \varepsilon_{1} )}}{2bf}, \\ &k_{5} = \frac{{u_{2} (1 + \varepsilon_{2} ) + u_{1} (1 + \varepsilon_{1} )}}{2df}, \\ &k_{6} = \frac{{v_{2} (1 + \varepsilon_{2} ) + v_{1} (1 + \varepsilon_{1} )}}{2df}. \end{aligned} \right.$$
(9)

2.3 Simplified Model

When the two spacecraft are relatively distant, the accuracy of image feature extraction is low, and the depth of the feature points relative to the target spacecraft origin can be ignored. The camera model can then be approximated by a simplified perspective projection model [29,30,31]. Consequently, we obtain

$$z_{i}^{c} = t_{z} ,\quad i = 1,2,3,4.$$
(10)

Simplified perspective projection refers to projection onto a plane that passes through the origin of the target spacecraft and is parallel to the imaging plane. It therefore ignores the depth of each target point relative to the target spacecraft origin. When the two spacecraft are relatively distant, the resulting error is insignificant. Substituting Eq. (10) into Eq. (9), we have

$$\begin{gathered} k_{1} = \frac{{(u_{2} - u_{1} )}}{2af},\quad k_{2} = \frac{{(v_{2} - v_{1} )}}{2af},\quad k_{3} = \frac{{(u_{3} + u_{4} - u_{2} - u_{1} )}}{2bf}, \hfill \\ k_{4} = \frac{{(v_{3} + v_{4} - v_{2} - v_{1} )}}{2bf},\quad k_{5} = \frac{{(u_{2} + u_{1} )}}{2df},\quad k_{6} = \frac{{(v_{2} + v_{1} )}}{2df}, \hfill \\ \end{gathered}$$
(11)

where \(k_{i}\) can be calculated from the image points. Equation (7) contains nine variables and only six equations; thus, it cannot be solved directly. The rotation matrix R has the following constraints:

$$\left\{ \begin{aligned} &r_{11}^{2} + r_{12}^{2} + r_{13}^{2} = 1, \hfill \\ &r_{21}^{2} + r_{22}^{2} + r_{23}^{2} = 1, \hfill \\ &r_{31}^{2} + r_{32}^{2} + r_{33}^{2} = 1, \hfill \\ &r_{11} r_{21} + r_{12} r_{22} + r_{13} r_{23} = 0, \hfill \\ &r_{31} r_{21} + r_{32} r_{22} + r_{33} r_{23} = 0, \hfill \\& r_{11} r_{31} + r_{12} r_{32} + r_{13} r_{33} = 0. \hfill \\ \end{aligned} \right.$$
(12)

From the first, second, and fourth equations of Eq. (12), we can obtain

$$(r_{11} r_{22} - r_{12} r_{21} )^{2} - (r_{11}^{2} + r_{12}^{2} + r_{21}^{2} + r_{22}^{2} ) + 1 = 0.$$
(13)
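Eq. (13) follows by eliminating \(r_{13}\) and \(r_{23}\): the first two equations of Eq. (12) give \(r_{13}^{2} = 1 - r_{11}^{2} - r_{12}^{2}\) and \(r_{23}^{2} = 1 - r_{21}^{2} - r_{22}^{2}\), and squaring the fourth equation gives \((r_{11} r_{21} + r_{12} r_{22} )^{2} = r_{13}^{2} r_{23}^{2}\), so that

$$(1 - r_{11}^{2} - r_{12}^{2} )(1 - r_{21}^{2} - r_{22}^{2} ) = (r_{11} r_{21} + r_{12} r_{22} )^{2} ,$$

which expands to Eq. (13).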

From Eqs. (8) and (13), we obtain

$$(k_{1} k_{4} - k_{2} k_{3} )^{2} t_{z}^{4} - (k_{1}^{2} + k_{2}^{2} + k_{3}^{2} + k_{4}^{2} )t_{z}^{2} + 1 = 0.$$
(14)

Eq. (14) is a quartic equation in \(t_{z}\) and therefore has four roots. The two negative roots are discarded according to the relationship between the roots and the coefficients, and the two positive roots satisfy the following conditions:

$${\text{Condition 1}}: t_{z1}^{2} \le \frac{{k_{1}^{2} + k_{3}^{2} }}{{(k_{1} k_{4} - k_{2} k_{3} )^{2} }},$$
(15)
$${\text{Condition 2}}:t_{z2}^{2} \ge \frac{{k_{2}^{2} + k_{4}^{2} }}{{(k_{1} k_{4} - k_{2} k_{3} )^{2} }}.$$
(16)

Condition 2 can be satisfied only when the rotation angle is greater than 60°; therefore, the root satisfying Condition 1 is selected.
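Putting Eqs. (11), (14), and Condition 1 together, the closed-form recovery of \(t_{z}\) can be sketched as follows (an illustrative implementation; the variable names are ours, the image coordinates are metric as in Eq. (4), and \((k_{1} k_{4} - k_{2} k_{3} )^{2} \ne 0\) is assumed):

```python
import numpy as np

def tz_from_image_points(uv, a, b, f):
    """Recover t_z from the four image points via Eqs. (11) and (14)."""
    (u1, v1), (u2, v2), (u3, v3), (u4, v4) = uv
    k1 = (u2 - u1) / (2 * a * f)                 # Eq. (11)
    k2 = (v2 - v1) / (2 * a * f)
    k3 = (u3 + u4 - u2 - u1) / (2 * b * f)
    k4 = (v3 + v4 - v2 - v1) / (2 * b * f)
    # Quartic of Eq. (14): A*t^4 - B*t^2 + 1 = 0 (A assumed nonzero)
    A = (k1 * k4 - k2 * k3) ** 2
    B = k1 ** 2 + k2 ** 2 + k3 ** 2 + k4 ** 2
    roots = np.roots([A, 0.0, -B, 0.0, 1.0])
    positive = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    # Condition 1: keep the positive root with t_z^2 <= (k1^2 + k3^2) / A
    bound = (k1 ** 2 + k3 ** 2) / A
    candidates = [t for t in positive if t ** 2 <= bound + 1e-12]
    return candidates[0] if candidates else positive[0]
```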

Rotation matrix R can be described by four quaternion parameters \((q_{0} ,q_{1} ,q_{2} ,q_{3} )\):

$${\varvec{R}} = \left[ {\begin{array}{*{20}c} {q_{0}^{2} + q_{1}^{2} - q_{2}^{2} - q_{3}^{2} } & {2(q_{1} q_{2} + q_{3} q_{0} )} & {2(q_{1} q_{3} - q_{2} q_{0} )} \\ {2(q_{1} q_{2} - q_{3} q_{0} )} & {q_{0}^{2} - q_{1}^{2} + q_{2}^{2} - q_{3}^{2} } & {2(q_{2} q_{3} + q_{1} q_{0} )} \\ {2(q_{1} q_{3} + q_{2} q_{0} )} & {2(q_{2} q_{3} - q_{1} q_{0} )} & {q_{0}^{2} - q_{1}^{2} - q_{2}^{2} + q_{3}^{2} } \\ \end{array} } \right] \, {.}$$
(17)
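Eq. (17) can be transcribed directly for numerical checks (a minimal helper; the function name is ours):

```python
import numpy as np

def quat_to_rot(q0, q1, q2, q3):
    """Rotation matrix of Eq. (17) from a unit quaternion (q0, q1, q2, q3)."""
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q3*q0),             2*(q1*q3 - q2*q0)],
        [2*(q1*q2 - q3*q0),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 + q1*q0)],
        [2*(q1*q3 + q2*q0),             2*(q2*q3 - q1*q0),             q0**2 - q1**2 - q2**2 + q3**2],
    ])
```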

Assume that

$$\begin{aligned} \beta &= \frac{1}{2}(r_{32} - (r_{12} r_{31} - r_{11} r_{32} )) \\ &= \frac{1}{2}(k_{4} t_{y} - k_{3} k_{2} t_{z}^{2} - k_{1} k_{4} t_{z}^{2} ) = 2q_{1} q_{2} . \end{aligned}$$
(18)

From Eqs. (17) and (18), we obtain

$$\left\{ \begin{aligned} &q_{1}^{2} + q_{2}^{2} = \frac{1}{2}(1 + r_{11} ), \hfill \\ &q_{1}^{2} - q_{2}^{2} = - \frac{1}{2}\sqrt {(1 + r_{11} )^{2} - 4\beta^{2} } , \hfill \\ &q_{1} r_{12} + q_{2} r_{31} = 2q_{4} (q_{2}^{2} - q_{1}^{2} ), \hfill \\& q_{2} r_{12} + q_{1} r_{31} = 2q_{3} (q_{2}^{2} - q_{1}^{2} ). \hfill \\ \end{aligned} \right.$$
(19)

As a result, we have

$$\left\{ \begin{aligned} &q_{1} = \frac{1}{2}\sqrt {1 + k_{1} t_{z} - \sqrt {(1 + k_{1} t_{z} )^{2} - 4\beta^{2} } } , \\ &q_{2} = \frac{\beta }{{2q_{1} }}, \\& q_{3} = \frac{{q_{2} k_{3} + q_{1} k_{2} }}{{2(q_{2}^{2} - q_{1}^{2} )}}t_{z} , \\ &q_{4} = \frac{{q_{1} k_{3} + q_{2} k_{2} }}{{2(q_{2}^{2} - q_{1}^{2} )}}, \end{aligned} \right.$$
(20)
$$\left\{ \begin{aligned} t_{x} = \frac{{t_{z} }}{2f}(u_{1} (1 + \varepsilon_{1} ) + u_{2} (1 + \varepsilon_{2} )) - dr_{13} , \hfill \\ t_{y} = \frac{{t_{z} }}{2f}(v_{1} (1 + \varepsilon_{1} ) + v_{2} (1 + \varepsilon_{2} )) - dr_{23} . \hfill \\ \end{aligned} \right.$$
(21)
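Given \(t_{z}\), the rotation entries \(r_{13}\) and \(r_{23}\), and the image points, Eq. (21) yields the in-plane translation directly. A minimal sketch (our notation; under the simplified model of Eq. (10), the \(\varepsilon_{i}\) are zero):

```python
def txy_from_tz(uv, eps, tz, r13, r23, d, f):
    """In-plane translation from Eq. (21); uv are image coordinates, eps the
    depth ratios of Eq. (6) (zero under the simplified model of Eq. (10))."""
    (u1, v1), (u2, v2) = uv[0], uv[1]
    e1, e2 = eps[0], eps[1]
    tx = tz / (2 * f) * (u1 * (1 + e1) + u2 * (1 + e2)) - d * r13
    ty = tz / (2 * f) * (v1 * (1 + e1) + v2 * (1 + e2)) - d * r23
    return tx, ty
```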

2.4 Optimization Algorithm

The accuracy of spacecraft pose estimation based on simplified perspective projection alone is poor. Therefore, an iterative optimization algorithm is constructed to improve the solution accuracy, as shown in Figure 2.

Figure 2

Optimal algorithm to improve the accuracy of spacecraft pose estimation

In the algorithm, \(R^{j} = \left[ {\begin{array}{*{20}c} {I^{j} } & {J^{j} } & {K^{j} } \\ \end{array} } \right]^{{\text{T}}}\) and \(T^{j} = \left[ {\begin{array}{*{20}c} {t_{x}^{j} } & {t_{y}^{j} } & {t_{z}^{j} } \\ \end{array} } \right]^{{\text{T}}}\) denote the pose estimate at iteration j, where \(I^{\prime}\), \(K^{\prime}\), \(t^{\prime}_{x}\), and \(t^{\prime}_{y}\) are the intermediate quantities defined in Figure 2, and

$$I^{j} = \frac{{I^{\prime}}}{{\left\| {I^{\prime}} \right\|}},\quad K^{j} = \frac{{K^{\prime}}}{{\left\| {K^{\prime}} \right\|}},\quad J^{j} = I^{j} \times K^{j} ,$$
$$t_{z}^{j} = \frac{1}{2}\left(\frac{1}{{\left\| {I^{\prime}} \right\|}} + \frac{1}{{\left\| {K^{\prime}} \right\|}}\right),\quad t_{x}^{j} = dt_{z}^{j} t^{\prime}_{x} ,\quad t_{y}^{j} = dt_{z}^{j} t^{\prime}_{y} .$$
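The update step listed above can be transcribed directly. The following sketch takes the intermediate quantities \(I^{\prime}\), \(K^{\prime}\), \(t^{\prime}_{x}\), \(t^{\prime}_{y}\) computed in Figure 2 as inputs (a minimal transcription of the formulas above, not the full iteration loop):

```python
import numpy as np

def pose_update(I_prime, K_prime, tx_prime, ty_prime, d):
    """One pose-update step: normalize the row estimates and rescale the translation."""
    I_j = I_prime / np.linalg.norm(I_prime)
    K_j = K_prime / np.linalg.norm(K_prime)
    J_j = np.cross(I_j, K_j)                     # J^j = I^j x K^j, per the formula above
    R_j = np.vstack([I_j, J_j, K_j])
    tz_j = 0.5 * (1 / np.linalg.norm(I_prime) + 1 / np.linalg.norm(K_prime))
    tx_j = d * tz_j * tx_prime
    ty_j = d * tz_j * ty_prime
    return R_j, np.array([tx_j, ty_j, tz_j])
```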

3 Experimental Section

The simulation experiment parameters were set as follows. The focal distance of the camera was 12 mm, and the pixel size was 7.4 μm × 7.4 μm. The rotation matrix and the translation vector were \([\varphi ,\;\theta ,\;\psi ] = [30^\circ ,\;30^\circ ,\;30^\circ ]\) and \(T = [0.5t_{z} ,\;0.5t_{z} ,\;t_{z} ]\) (m), respectively, where \(t_{z}\) varies from 1 m to 20 m. The four points in the target spacecraft coordinates were

$${\varvec{P}}_{1}^{s} = \left[ {\begin{array}{*{20}c} {75} \\ 0 \\ {75} \\ \end{array} } \right],\quad {\varvec{P}}_{2}^{s} = \left[ {\begin{array}{*{20}c} { - 75} \\ 0 \\ {75} \\ \end{array} } \right],\quad{\varvec{P}}_{3}^{s} = \left[ {\begin{array}{*{20}c} {30} \\ {40} \\ {75} \\ \end{array} } \right],\quad {\varvec{P}}_{4}^{s} = \left[ {\begin{array}{*{20}c} { - 30} \\ {40} \\ {75} \\ \end{array} } \right].$$

Simulation experiments verified the proposed algorithm in three respects: 1) the optimization algorithm was analyzed without noise; 2) the relationship between estimation accuracy and distance was analyzed under zero-mean Gaussian noise with a standard deviation of 0.1 pixel; 3) the same relationship was analyzed under zero-mean Gaussian noise with a standard deviation of 1 pixel.
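For reference, the noisy image measurements used in cases 2) and 3) can be generated along these lines (our reconstruction of the setup; the Euler-angle convention and the millimetre units of the target points are assumptions, as the paper does not state them):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_to_rot(phi, theta, psi):
    """Z-Y-X Euler angles to a rotation matrix (assumed convention)."""
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi),  np.cos(phi)]])
    return Rz @ Ry @ Rx

f = 0.012       # focal distance, 12 mm
pixel = 7.4e-6  # pixel size, 7.4 um
# Target points of Section 3; millimetres -> metres (units assumed)
P_s = 1e-3 * np.array([[75, 0, 75], [-75, 0, 75], [30, 40, 75], [-30, 40, 75]])
R = euler_to_rot(*np.radians([30.0, 30.0, 30.0]))

measurements = []
for tz in range(1, 21):
    T = np.array([0.5 * tz, 0.5 * tz, float(tz)])
    Pc = (R @ P_s.T).T + T                              # Eq. (2)
    uv = f * Pc[:, :2] / Pc[:, 2:3]                     # Eq. (4)
    uv += rng.normal(0.0, 0.1 * pixel, size=uv.shape)   # case 2): 0.1 pixel std
    measurements.append((tz, uv))
```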

The simulation results are shown in Figures 3 and 4. The pose estimation error based on the simplified perspective projection alone is large, and the optimization algorithm based on the full camera model effectively reduces the measurement error: after 10 iterations, the attitude errors are less than 0.42°, and the position errors are less than 4 mm.
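The attitude and position errors reported here can be evaluated, for instance, as the angle of the residual rotation \(R_{est} R_{true}^{\text{T}}\) and the norm of the translation difference (a standard metric choice; the paper does not specify its exact definition):

```python
import numpy as np

def pose_errors(R_est, T_est, R_true, T_true):
    """Attitude error (deg) as the angle of the residual rotation; position error as a norm."""
    dR = R_est @ R_true.T
    angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)))
    return angle, np.linalg.norm(T_est - T_true)
```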

Figure 3

Attitude accuracy analysis using optimization

Figure 4

Position accuracy analysis using optimization

Figures 5 and 6 show the estimation accuracy under zero-mean Gaussian noise with a standard deviation of 0.1 pixel. When \(t_{z}\) is 10 m, the attitude error is less than 0.36°, and the position error is less than 19.5 mm. When \(t_{z}\) is 20 m, the attitude error is less than 0.65°, and the position error is less than 117 mm. The maximum pose error occurs when \(t_{z} = 1\) m, mainly because the initial relative position accuracy of the simplified perspective projection model is low at close range.

Figure 5

Attitude accuracy analysis with 0 mean and 0.1 pixel standard deviation noise

Figure 6

Position accuracy analysis with 0 mean and 0.1 pixel standard deviation noise

Figures 7 and 8 show the estimation accuracy under zero-mean Gaussian noise with a standard deviation of 1 pixel. When \(t_{z}\) is 10 m, the attitude errors are less than 3°, and the position errors are less than 0.35 m. When \(t_{z}\) is 20 m, the attitude errors are less than 7.5°, and the position errors are less than 1 m.

Figure 7

Attitude accuracy analysis with 0 mean and 1 pixel standard deviation noise

Figure 8

Position accuracy analysis with 0 mean and 1 pixel standard deviation noise

4 Conclusions

To meet the pose estimation accuracy requirements as the relative distance between spacecraft changes from far to near, we propose a method based on two different camera models. In this method, the initial value of the spacecraft pose is calculated with a simplified perspective projection model, and the result is then refined with the full perspective projection model. The simulation results show that the pose estimation errors are less than 0.8° and 117 mm when the image point noise has zero mean and a standard deviation of 0.1 pixel, and less than 7.5° and 1 m when the standard deviation is 1 pixel. This estimation accuracy can satisfy the requirements of spacecraft missions.