1 Introduction

Oblivious transfer (OT) was first introduced by Rabin [35] in 1981 to establish an exchange-of-secrets protocol based on the factoring problem. When the sender holds two messages, oblivious transfer allows the receiver to learn one of them while the sender remains oblivious to which message was received; the unchosen message remains unknown to the receiver.

Oblivious transfer has been shown to be an important cryptographic building block: it can be used to construct many other cryptographic primitives [15, 22, 32]. Several oblivious transfer protocols have been proposed based on Diffie-Hellman-related problems [22] or obtained via the transformation of [18].

2 Preliminaries

Notation

Let \(\{X(a,n)\} =_c \{Y(a,n)\}\) denote computationally indistinguishable probability ensembles X, Y; that is, for any PPT non-uniform distinguisher D there exists a negligible function f such that for all a and all \(n\in \mathbb {N}\) we have \(|\Pr [D(X(a,n))=1]-\Pr [D(Y(a,n))=1]| \le f(n)\). Let \(\{X(a,n)\} =_s \{Y(a,n)\}\) denote statistically indistinguishable probability ensembles X, Y over the same set, which means the statistical distance between X and Y is negligible. The notation \(a \leftarrow S\) means that a is sampled uniformly at random from the set S. For simplicity, we often omit the security parameter n, but it is implicit in the indistinguishability and in the negligible function.

2.1 CSIDH

For a given prime p and an elliptic curve E defined over \(\mathbb {F}_p\), \({End}_p(E)\) denotes the subring of the endomorphism ring End(E) consisting of the endomorphisms defined over \(\mathbb {F}_p\).

Let \(\mathcal {O}\) be an order in an imaginary quadratic field and \(\pi \in \mathcal {O}\) an element of norm p. Define \(\mathcal {E}\ell \ell _p(\mathcal {O},\pi )\) to be the set of \(\mathbb {F}_p\)-isomorphism classes of elliptic curves E defined over \(\mathbb {F}_p\) with \({End}_p(E)=\mathcal {O}\) such that \(\pi \) corresponds to the \(\mathbb {F}_p\)-Frobenius map of E. For any ideal \(\mathfrak {a} \subseteq \mathcal {O}\) and \(E \in \mathcal {E}\ell \ell _p(\mathcal {O},\pi )\), an action can be defined by \(\mathfrak {a}*E=E'\), where \(E'\) is the codomain of an isogeny \(\phi :E \rightarrow E'\) with \(\ker (\phi )=\bigcap _{\alpha \in \mathfrak {a}} \{P \in E(\bar{\mathbb {F}}_p) \mid \alpha (P)=0 \}\). The image curve \(\mathfrak {a}*E\) is well-defined up to \(\mathbb {F}_p\)-isomorphism. Moreover, the ideal class group \(Cl(\mathcal {O})\) acts freely and transitively on \(\mathcal {E}\ell \ell _p(\mathcal {O},\pi )\).

Castryck et al. specified the prime to be \(p=4 \times \ell _1 \times \cdots \times \ell _n -1\) where the \(\ell _i\) are small odd primes. In the case \(p \equiv 3 \pmod 8\), for any supersingular elliptic curve E defined over \(\mathbb {F}_p\), the restricted endomorphism ring \({End}_p(E) = \mathbb {Z}[\pi ] \cong \mathbb {Z}[\sqrt{-p}]\) if and only if E is \(\mathbb {F}_p\)-isomorphic to \(E_A: y^2=x^3+Ax^2+x\) for some unique \(A \in \mathbb {F}_p\). The quadratic twist of a given elliptic curve \(E:y^2=f(x)\) is \(E^t:dy^2=f(x)\) where \(d \in \mathbb {F}_p\) has Legendre symbol \(-1\). When \(p \equiv 3 \pmod 4\), let \(E_0\) be the curve with \(j(E_0)=1728\); then \(E_0\) and \(E_0^t\) are \(\mathbb {F}_p\)-isomorphic. The quadratic twist can be computed efficiently in the CSIDH setting [12]: since \(p \equiv 3 \pmod 4\), the curve \(E':-y^2=x^3+Ax^2+x\) is the quadratic twist of \(E_A:y^2=x^3+Ax^2+x\), and \(E'\) is \(\mathbb {F}_p\)-isomorphic to \(E_{-A}\) via \((x,y)\mapsto (-x,y)\). Further, \((a*E_0)^t=a^{-1}*E_0\). Therefore, for any curve \(E\in \mathcal {E}\ell \ell _p(\mathcal {O},\pi )\), the transitivity of the action gives

$$(a*E)^t=a^{-1}*E^t.$$

Throughout this paper, we concentrate on supersingular curves defined over \(\mathbb {F}_p\). Denote the ideal class group \(Cl({End}_p(E))\) by Cl and the set of elliptic curves \(\mathcal {E}\ell \ell _p(\mathcal {O},\pi )\) by \(\mathcal {E}\).
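Since the arguments that follow rely only on the action being free, transitive and abelian together with the twist identity above, they can be sanity-checked in a toy model (our illustration only, with no cryptographic hardness): ideal classes and curves are integers modulo an arbitrary N, the action is addition, and the twist negates the offset from \(E_0 = 0\). The names `act`, `inv` and `twist` are our own stand-ins.

```python
# Toy stand-in for the CSIDH group action: "ideal classes" and "curves"
# are integers mod N; a*E is addition and the quadratic twist negates
# the offset from E0 = 0. The action is free, transitive and abelian,
# and satisfies (a*E)^t = a^{-1} * E^t -- but it is NOT hard.
N = 1009  # arbitrary modulus; any N works for the algebra

def act(a, E):    # a * E
    return (a + E) % N

def inv(a):       # a^{-1}
    return (-a) % N

def twist(E):     # E^t; note E0 = 0 is its own twist
    return (-E) % N

# check the twist identity (a*E)^t == a^{-1} * E^t on a few values
for a, E in [(3, 7), (100, 999), (0, 5)]:
    assert twist(act(a, E)) == act(inv(a), twist(E))
```
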

Uniform Sampling of Curves. In CSIDH, the method provided to sample elements of the class group Cl is heuristically assumed to be statistically close to uniform [12]. Here we make the same assumption and derive the following lemma when \(p=3 \mod 4\).

Lemma 1

Given a curve \(E \in \mathcal {E}\) and a distribution D on Cl, let \(D*E\) be the distribution on \(\mathcal {E}\) of \(a*E\) for \(a\leftarrow D\), and let \((D*E)^t\) be the distribution on \(\mathcal {E}\) of \((a*E)^t\) for \(a\leftarrow D\). If D is statistically indistinguishable from the uniform distribution on Cl, then \(D*E\) and \((D*E)^t \) are statistically indistinguishable from the uniform distribution on \(\mathcal {E}\).

Proof

Let U be the uniform distribution on \(\mathcal {E}\). Since Cl acts freely and transitively on \(\mathcal {E}\), the map \(a \mapsto a*E\) is a bijection from Cl to \(\mathcal {E}\), so \(D*E\) is statistically indistinguishable from U. Since taking quadratic twists is a bijection on \(\mathcal {E}\), applying it to both distributions preserves statistical distance, and we have \(D*E =_s U = U^t =_s (D*E)^t\).

CSIDH works by sampling ideal classes as \(\prod _{i=1}^n (\ell _i, \pi -1)^{e_i}\) where the \(e_i\) are sampled from \([-B,B] \cap \mathbb {Z}\) for a suitably chosen bound B. Heuristically, increasing B brings the sampled distribution closer to the uniform distribution on Cl. Beullens et al. [6] proposed an efficient instantiation of uniform sampling in CSI-FiSh, which requires a pre-processing step to compute a lattice of relations in the class group. Constant-time implementations of CSIDH can be found in [33].
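The exponent-vector sampling is straightforward to sketch. The function name is ours; the default values n = 74 and B = 5 are the CSIDH-512 choices of [12]:

```python
import random

# Sketch of CSIDH-style ideal-class sampling: an ideal class is
# represented by its exponent vector (e_1, ..., e_n) for the classes
# of (l_i, pi - 1), with each e_i drawn uniformly from [-B, B].
# Heuristically, a larger B brings the induced distribution on Cl
# closer to uniform.
def sample_exponents(n=74, B=5):
    return [random.randint(-B, B) for _ in range(n)]  # inclusive bounds

e = sample_exponents()
```
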

Computational Assumptions. The computational assumptions relevant to this work are defined as follows.

Problem 1

(Computational CSIDH Problem) Given curves E, \(r*E\) and \(s*E\) in \(\mathcal {E}\) where \(r,s \in Cl\), find \(E' \in \mathcal {E}\) such that \(E'=rs*E\).

Problem 2

Given curves (E, \(s*E\),\(r*E\)) in \(\mathcal {E}\) where \(r,s \in Cl\), find \(E' \in \mathcal {E}\) such that \(E'=s^{-1}r*E\).

The computational CSIDH problem is the main hardness assumption of [12], and Problem 2 is equivalent to it. To see this, given an oracle O for Problem 1, one can obtain \(E'=rs^{-1}*E\) by querying \(E' \leftarrow O(s*E,E,r*E)\). Conversely, given an oracle O for Problem 2, one can obtain \(E'=rs*E\) by querying \(E' \leftarrow O(s*E,E,r*E)\).
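In a toy model of the action (addition modulo an arbitrary N standing in for Cl; our illustration only, with no hardness), perfect oracles for both problems are one-liners, and the two query patterns above can be verified directly:

```python
N = 1009
def act(a, E): return (a + E) % N

# Perfect oracle for Problem 1: on (E, X, Y) with X = r*E, Y = s*E,
# return rs*E. In the additive toy model this is X + Y - E.
def oracle1(E, X, Y): return (X + Y - E) % N

# Perfect oracle for Problem 2: on (E, X, Y) with X = s*E, Y = r*E,
# return s^{-1}r*E, i.e. Y - X + E in the toy model.
def oracle2(E, X, Y): return (Y - X + E) % N

E, r, s = 123, 45, 678
sE, rE = act(s, E), act(r, E)

# Problem 2 from oracle 1: query the shifted tuple (s*E, E, r*E).
assert oracle1(sE, E, rE) == act((r - s) % N, E)   # = r * s^{-1} * E
# Problem 1 from oracle 2: same shifted tuple.
assert oracle2(sE, E, rE) == act((r + s) % N, E)   # = r * s * E
```
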

The following two problems are the main underlying problems against semi-honest adversaries.

Problem 3

(Computational Square CSIDH Problem) Given curves E and \(s*E\) in \(\mathcal {E}\) where \(s \in Cl\), find \(E' \in \mathcal {E}\) such that \(E'=s^2*E\).

Problem 4

(Computational Inverse CSIDH Problem) Given curves E and \(s*E\) in \(\mathcal {E}\) where \(s \in Cl\), find \(E' \in \mathcal {E}\) such that \(E'=s^{-1}*E\).

The equivalence between these two assumptions, and a conditional reduction to the computational CSIDH problem, were given in [20]. The condition for the latter reduction is that the group order is given and odd. Therefore, there is a quantum reduction [23, 37] to the computational CSIDH problem when \(p \equiv 3 \pmod 4\). In fact, there is also an efficient quantum reduction in the case \(p \equiv 1 \pmod 4\); see Appendix 1 of [27]. Note that the quantum computation is only used to compute the group structure of Cl, and so can be considered a precomputation; the remainder of the reduction is classical.

As Castryck et al. pointed out [12], both problems have exceptional cases when \(E_0\) takes part, due to the symmetric structure: \((a*E_0)^t=a^{-1}*E_0\), and so Problem 4 is easy in the special case \(E = E_0\). This issue can be circumvented by having the public curve generated by a trusted third party.

Next, we will introduce the main underlying assumption for our UC-secure construction.

Problem 5

(Reciprocal CSIDH Problem) Given E in \(\mathcal {E}\): first, the adversary chooses and commits to a curve \(X \in \mathcal {E}\); it then receives the challenge \(s*E\) where \(s \in Cl\). The adversary must compute the pair \((s*X,s^{-1}*X)\) with respect to the committed X.

Intuitively, the computational reciprocal CSIDH problem is a relaxed version of the square CSIDH problem and the inverse CSIDH problem. In particular, anyone who can solve the inverse CSIDH problem can solve the reciprocal CSIDH problem by taking \(X=E\), so that \((s*X,s^{-1}*X)=(s*E,s^{-1}*E)\). Conversely, an attacker who knows the isogeny between X and either E or \(E^t\) can solve the inverse CSIDH problem: if \(X=r*E\), one obtains \(s^{-1}*E\) by computing \(r^{-1}*(s^{-1}*X)\) with the known r; if instead \(X=r*E^t\), one obtains \(s^{-1}*E\) by computing \(r*(s*X)^t\). However, the problem does not require the attacker to know the isogeny between X and E or \(E^t\).

The reciprocal CSIDH problem appears non-standard at first sight but, in fact, it is equivalent to the inverse CSIDH problem. Even though the problem gives the attacker the additional freedom of choosing X, notice that X is chosen prior to seeing the challenge \(s*E\). The following reduction shows that this freedom can be handled; we call the reduction strategy self-reconciling.

Proposition 1

The computational reciprocal CSIDH problem is equivalent to the computational inverse CSIDH problem.

Proof

Given a challenge \((E,s*E)\) for the inverse CSIDH problem, invoke the adversary for the reciprocal CSIDH problem with E. After receiving X from the adversary, sample a uniformly random \(t_1 \in Cl\) and send the challenge \(t_1s*E\) to the adversary. Receive \((t_1s*X,(t_1s)^{-1}*X)\) from the adversary, then rewind the adversary to the point where it output X. Sample a uniformly random \(t_2 \in Cl\) and send \(t_2s*X\), which can be computed as \(t_2t_1^{-1}*(t_1s*X)\) from the first response, as the challenge with respect to the committed X. Receive \((X_0,X_1)\) from the adversary and output \(t_2*X_1\).

We claim \(t_2*X_1=s^{-1}*E\). Write \(X=b*E\), which is possible by the transitivity of the action, so the second challenge is \(t_2s*X=(t_2sb)*E\). A correct answer then satisfies \(X_1=(t_2sb)^{-1}*X\), and hence \(t_2*X_1=(sb)^{-1}*X=s^{-1}*E\). More precisely, if the adversary solves the reciprocal problem based on E with committed X with probability \(\epsilon \), then it can be used to solve the inverse CSIDH problem based on E with probability \(\epsilon ^2\), since both invocations must succeed.
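The bookkeeping of this reduction can be traced in a toy additive model of the action with a perfect reciprocal-CSIDH solver (our illustration only; `act` and the adversary's internals are stand-ins with no hardness):

```python
N = 1009
def act(a, E): return (a + E) % N

E, s = 321, 77                        # inverse-CSIDH challenge (E, s*E)
sE = act(s, E)

b = 555                               # adversary's secret; committed X = b*E
X = act(b, E)

def adversary(challenge):             # perfect solver: challenge = s'*E,
    sp = (challenge - E) % N          # returns (s'*X, s'^{-1}*X)
    return act(sp, X), act(-sp, X)

t1, t2 = 10, 20                       # the reduction's fresh re-randomizers
Z0, _ = adversary(act(t1, sE))        # first run on t1*s*E; keep t1*s*X
ch2 = act(t2, act(-t1, Z0))           # second challenge t2*s*X, from Z0
_, X1 = adversary(ch2)                # rewound second run, same committed X
assert act(t2, X1) == act((-s) % N, E)   # t2 * X1 = s^{-1} * E
```
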

In the proof of Proposition 1, the reduction uses the first entry of the first solution (to form the second challenge) and the second entry of the second solution (to obtain the solution to the inverse CSIDH problem). We can therefore conclude the following corollary.

Corollary 1

In the experiment of Problem 5, after committing to the curve X, if the adversary can compute \((s*X,s'^{-1}*X)\) with respect to two distinct challenges \(s*E\) and \(s'*E\), then the adversary can be used to solve the computational inverse CSIDH problem.

We end the subsection with the following remark.

Remark. The above results can all be extended to general (free and transitive) group actions and hard homogeneous spaces [14]. We leave the details to the reader.

2.2 Functionalities

In this subsection, we define the functionalities we need as well as the related security definitions.

A symmetric encryption scheme is a pair of algorithms \((\mathsf{E},\mathsf{D})\) defined over message space \(\mathcal {M}\) and ciphertext space \(\mathcal {C}\) with key space \(\mathcal {K}\).

Definition 1

(non-committing encryption (NCE)) A symmetric encryption scheme \((\mathsf{E},\mathsf{D})\) is said to be non-committing if there exist PPT algorithms \(B_1,B_2\) such that for any PPT distinguisher \(\mathcal {D}\) and any message \(m \in \mathcal {M}\),

$$\{(e,k) \,:\, k \leftarrow \mathcal {K},\ e \leftarrow \mathsf{E}_k(m)\} {=_c} \{(e',k') \,:\, e' \leftarrow B_1,\ k' \leftarrow B_2(e',m)\},$$

where \(B_1\) produces a dummy ciphertext \(e'\) and \(B_2\), given \(e'\) and m, produces a key \(k'\) under which \(e'\) decrypts to m, i.e. \(\mathsf{D}_{k'}(e')=m\).

Informally speaking, non-committing encryption allows one to generate a dummy ciphertext indistinguishable from a real one via \(B_1\), and later explain it with the assistance of \(B_2\). The idea was introduced by Canetti et al. [10] with the one-time pad (OTP) as an instantiation, and it has been used in several oblivious transfer constructions [3, 13]. The non-committing property is used here to extract the input without rewinding in the simulation.
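A minimal sketch of the OTP instantiation of [10] (function names are ours): \(B_1\) outputs a uniformly random string as the dummy ciphertext, and \(B_2\) explains it as an encryption of any message m by setting the key to the XOR of the dummy ciphertext and m. Real and dummy (ciphertext, key) pairs are then identically distributed.

```python
import os

# One-time pad: E_k(m) = m XOR k, D_k(c) = c XOR k.
def E(k, m): return bytes(a ^ b for a, b in zip(m, k))
D = E  # XOR is its own inverse

# B1 outputs a dummy ciphertext: a uniformly random string.
def B1(length): return os.urandom(length)

# B2 "explains" the dummy ciphertext as an encryption of m by
# deriving the key k' = c' XOR m, so that D_{k'}(c') = m.
def B2(c_dummy, m): return bytes(a ^ b for a, b in zip(c_dummy, m))

m = b"attack at dawn"
c_dummy = B1(len(m))
k_fake = B2(c_dummy, m)
assert D(k_fake, c_dummy) == m   # the dummy ciphertext opens to m
```
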

figure a

The trusted setup curve functionality \(\mathcal {F}_{TSC}\) serves as a setup generating a curve for the protocol. This setup hides the relation t between the public curve and the curve \(E_0\). In practice, it can be replaced with a key exchange protocol [7]: the two parties first run a key exchange and obtain a curve whose isogeny relation to \(E_0\) remains unknown as long as the two parties do not share their ideal classes or collude.

figure b

The random oracle functionality \(\mathcal {F}_{RO}\) internally maintains an initially empty list. Upon receiving a query from the domain, it checks whether the query is a repetition; if so, it returns the value assigned before, and otherwise it randomly assigns a value from the codomain, stores the pair, and returns the value. Formally speaking, the input of a random oracle can be any binary string; for simplicity, we restrict the domain to \(\mathcal {E}\). This can easily and compatibly be extended to \(\{0,1 \}^*\), since supersingularity can be verified efficiently [12].

We briefly define the security terms. Let \(output^{\pi }(x,y)\) denote the outputs of the two parties with inputs x, y respectively after the execution of \(\pi \), and let \(view^{\pi }_i\) consist of the input, the internal random tape, and all received messages of the \(i^{th}\) participant after the execution of \(\pi \). Let \({IDEAL}_{\mathcal {F},\mathcal {S},\mathcal {Z}}\) and \({HYBRID}^{\mathcal {G}}_{\pi ,\mathcal {A},\mathcal {Z}}\) denote the ideal execution ensemble and the hybrid ensemble, respectively. A detailed explanation can be found in [27]. We refer to [9, 24] for more thorough descriptions of the security notions against semi-honest and malicious adversaries, respectively.

Definition 2

(security of OT against semi-honest adversaries). We say a protocol \(\pi \) securely (privately) computes \(\mathcal {F}_{OT}\) in the presence of static semi-honest adversaries if there exist probabilistic polynomial-time algorithms \(S_1,S_2\) such that

$$\begin{aligned} {output}^{\pi }(x,y)=\mathcal {F}_{OT}(x,y) \end{aligned}$$
$$\begin{aligned} \{S_1(x,(\mathcal {F}_{OT}(x,y))_1)\}_{x,y} {=_c} \{ view^{\pi }_1(x,y)\}_{x,y} \end{aligned}$$

and

$$\begin{aligned} \{S_2(y,(\mathcal {F}_{OT}(x,y))_2)\}_{x,y} {=_c} \{ view^{\pi }_2(x,y)\}_{x,y}. \end{aligned}$$

Definition 3

(UC-realize). A protocol \(\pi \) is said to UC-realize an ideal functionality \(\mathcal {F}\) in the presence of malicious adversaries and static corruption in the hybrid model with functionality \(\mathcal {G}\) if for any adversary \(\mathcal {A}\) there exists a simulator \(\mathcal {S}\) such that for every interactive distinguisher environment \(\mathcal {Z}\) we have

$$ {IDEAL}_{\mathcal {F},\mathcal {S},\mathcal {Z}}{=_c}{HYBRID}^{\mathcal {G}}_{\pi ,\mathcal {A},\mathcal {Z}}. $$

3 Our Proposal

This section first presents the idea behind our tweaked key exchange by recalling the core of Chou and Orlandi’s OT scheme [13]; we then derive a novel compact three-round protocol as a prototype. Next, we compress the scheme to an optimal two rounds by using the quadratic twist technique. Finally, building on the round-optimal structure, we add a “proof of ability to decrypt” mechanism, which requires an extra round, in order to achieve security against malicious adversaries.

3.1 Passively Secure Schemes

Tweaked Key Exchange. Figure 1 presents the Chou–Orlandi OT scheme [13], which is based on Diffie–Hellman key exchange. In Diffie–Hellman, the sender and the receiver first share their public “keys”, \(g^s\) and \(g^r\), with each other, after which both can secretly derive a shared secret \(g^{rs}\). To adapt this for OT, the receiver uses the second round to obfuscate his secret bit i. In the third round, the sender communicates encryptions of the two OT messages under two derived keys, one of which cancels out the obfuscation while the other does not. Because of this key derivation, the receiver can then only decrypt the message corresponding to his input bit.

Fig. 1.
figure 1

Chou and Orlandi’s OT scheme in a nutshell [13]

We can view the isogeny-based oblivious transfer constructions of previous works in the same way. In Barreto et al.’s work [2], the shared secret between the sender and the receiver is the j-invariant of the isomorphic elliptic curves \(\phi _{B'}\phi _{A}(E)\) and \(\phi _{A'}\phi _{B}(E)\). Here, the receiver hides his input bit by masking his public basis of the \(p_3^{e_3}\)-torsion subgroup with a pair of special \(p_3^{e_3}\)-torsion points \(U,V \in \phi _{B}(E)\); the sender then requires the same pair of points U, V to remove the noise. A coin-flipping mechanism is used to guarantee that both parties obtain the same points U, V.

Proposals by de Saint Guilhem et al. and Vitse rely on a similar idea to use a fixed key from the key exchange to decrypt the chosen ciphertext [17, 38]. In the first OT construction of [17], two public curves are required as a trusted setup, which serve the same role as two fixed keys from the perspective of key exchange. In [38], one more \(p_2^{e_2}\)-torsion subgroup generated by the sender is required to obtain two fixed keys.

Our Three-Round Protocol. We present our three-round protocol in Fig. 2 using the notation of the CSIDH setting. In this work we approach the change from key exchange to OT with a different strategy. The essence is that the sender and the receiver can exponentiate by both s and by \(s^{-1}\), and by both r and \(r^{-1}\) respectively.

Upon receiving \(g^s\) from the sender, the receiver computes both \(g^{r}\) and \(g^{sr}\) and sends one of them to the sender depending on his choice bit. The sender then exponentiates the received element by both s and \(s^{-1}\) to derive the encryption keys, which amounts to performing the key exchanges underlying Problems 1 and 2. One can verify that the shared secret in each case is \(g^{rs}\) and \(g^{r}\), respectively.

The other encryption keys are \(g^{rs^{-1}}\) and \(g^{rs^{2}}\), resp. They are intractable to the honest-but-curious receiver due to the hardness of the inverse and square CSIDH problems, respectively. Furthermore, the receiver’s input bit remains unknown since the sender only knows either \(g^{r}\) or \(g^{sr}\).

Note that in this isogeny-based setting, it is necessary that the relation between the shared public curve \(E \in \mathcal {E}\) and the fixed base curve \(E_0\) remain unknown. Should the receiver know t such that \(E = t *E_0\), he could always input \(i=0\) and compute the other key as \(t^2 r^2 * (rs*E)^t = t^2 r^2 * (trs*E_0)^t = t r s^{-1} *E_0 = rs^{-1}*E\).
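This attack computation can be replayed in a toy additive model of the action (our illustration only, with no hardness), confirming that a receiver who knows t recovers the second key curve \(rs^{-1}*E\):

```python
N = 1009
def act(a, E): return (a + E) % N
def twist(E): return (-E) % N   # toy quadratic twist: (a*E)^t = a^{-1}*E^t

t, r, s = 111, 250, 333
E0 = 0
E = act(t, E0)                  # public curve; t is the trapdoor
rsE = act(r, act(s, E))         # curve r*A known to a receiver with i = 0

# the receiver who knows t computes t^2 * r^2 * (rs*E)^t = r*s^{-1}*E:
leak = act(2 * t + 2 * r, twist(rsE))
assert leak == act((r - s) % N, E)   # = r * s^{-1} * E
```
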

Fig. 2.
figure 2

Our three-round OT protocol.

Our Two-Round Protocol. To address the drawbacks of our three-round protocol, we observe that the quadratic twist provides additional flexibility for the curve computations.

To first break the dependency of C on A, we let the receiver compute \(C = (r *E)^t\) in the case \(i = 1\), instead of \(r *A\). Lemma 1 guarantees that this still statistically hides i. Now that C is independent of A, the receiver can send his message first, reducing the protocol to only two rounds. Furthermore, this removes the hypothetical attack of a malicious receiver choosing C in response to A and enables a direct reduction to the computational CSIDH problem.

We then note that, in the three-round version, the sender’s second encryption curve can be computed as \((s *C^t)^t\) instead of \(s^{-1} *C\). Here again we can simplify by letting the sender compute the second curve as \(s *C^t\), without the additional twisting operation. This also simplifies the key computation: for \(i=0\), the encryption curve is \(s *(r *E) = r *A\), and for \(i=1\) it is \(s *((r *E)^t)^t = r *A\); thus we return to the idea of using a single Diffie–Hellman key by way of the twist operation. The modified two-round protocol is described in Fig. 3; we give a formal security proof in Sect. 4.1.

Fig. 3.
figure 3

The core of our two-round OT scheme. No analogue exists in the Diffie–Hellman setting due to the use of the quadratic twist.

In this simplified variant the number of isogeny computations remains the same as in the three-round variant. We note that taking quadratic twists is an efficient operation via field negation.
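The two-round flow can be traced end to end in a toy model (purely illustrative: the class group is replaced by addition modulo an arbitrary N, the twist by negation, and the symmetric encryption by a hash-derived one-time pad; the names `act`, `twist` and `H` are our own, and nothing here is cryptographically hard):

```python
import hashlib

N = 1009
def act(a, E): return (a + E) % N
def twist(E): return (-E) % N

def H(E, n):                     # toy key derivation from a "curve"
    return hashlib.sha256(str(E).encode()).digest()[:n]

def enc(k, m): return bytes(a ^ b for a, b in zip(m, k))  # one-time pad
dec = enc

E = 417                          # trusted-setup curve; relation to E0 unknown

# Receiver (input bit i): C hides i behind the twist.
i, r = 1, 87
C = act(r, E) if i == 0 else twist(act(r, E))

# Sender (messages M0, M1): one DH-style curve A, two derived keys.
M0, M1 = b"left msg", b"rightmsg"
s = 606
A = act(s, E)
k0 = H(act(s, C), len(M0))           # key for branch i = 0
k1 = H(act(s, twist(C)), len(M1))    # key for branch i = 1
c0, c1 = enc(k0, M0), enc(k1, M1)

# Receiver decrypts with the single key H(r * A).
k = H(act(r, A), len(M0))
out = dec(k, c1 if i else c0)
assert out == (M1 if i else M0)
```
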

3.2 The Full Construction Against Malicious Adversaries

The full protocol is shown in Fig. 4 below. To be secure against malicious adversaries who may deviate from the specification, both parties will do a simple verification of the received elements. In the CSIDH setting, both parties will check whether the curve is supersingular, which can be done efficiently, as shown in [12].

Protocol

(CSIDH-based OT). Let \((\mathsf{E},\mathsf{D})\) be a symmetric encryption scheme with message space \(\mathcal {M}\) and ciphertext space \(\mathcal {C}\). Let \(H: \mathcal {E} \rightarrow \mathcal {K}\) be modeled as a random oracle \(\mathcal {F}_{RO}\), serving as the key derivation function from the set \(\mathcal {E}\) to the key space \(\mathcal {K}\) of the symmetric encryption scheme.

  • Trusted Setup: Let \(E=t*E_0\) where \(t \leftarrow Cl\) is unknown to both parties.

  • Input: As input, the sender \(\mathcal {S}\) takes two messages \(M_0,M_1\) of the same length; the receiver \(\mathcal {R}\) takes a bit \(i\in \{0,1\}\).

  • Procedure:

    1. \(\mathcal {S}\) samples independent uniform ideals \(s_0,s_1 \leftarrow Cl\) and a random string \(str \leftarrow \{0,1\}^n\), and computes \(A_0=s_0*E\), \(A_1=s_1*E\).

    2. \(\mathcal {R}\) generates an ideal \(r \leftarrow Cl\) and computes \(C=r*E\); if \(i=1\), \(\mathcal {R}\) overwrites \(C=C^t\). \(\mathcal {R}\) sends C to \(\mathcal {S}\).

    3. \(\mathcal {S}\) checks whether \(C \in \mathcal {E}\). If not, \(\mathcal {S}\) aborts and outputs \(\mathbf{abort} _2\). Otherwise, \(\mathcal {S}\) computes four keys \(k_{j,0}=H(s_j*C)\) and \(k_{j,1}=H(s_j*C^t)\) for \(j\in \{0,1\}\). Then, \(\mathcal {S}\) computes four ciphertexts \(c_{0,j} \leftarrow \mathsf{E}_{k_{0,j}}(M_j)\) and \(c_{1,j} \leftarrow \mathsf{E}_{k_{1,j}}(s_1 \parallel str)\) for \(j\in \{0,1\}\), and sends \((A_0,A_1,c_{0,0},c_{0,1},c_{1,0},c_{1,1})\) to \(\mathcal {R}\).

    4. \(\mathcal {R}\) first runs the proof of ability to decrypt. \(\mathcal {R}\) checks whether \(A_1 \in \mathcal {E}\). If not, \(\mathcal {R}\) aborts and outputs \(\mathbf{abort} _1\). Otherwise, \(\mathcal {R}\) computes \(k'_{1,i}=H(r*A_1)\) and \((s'_1 \parallel str') \leftarrow \mathsf{D}_{k'_{1,i}}(c_{1,i})\), and verifies whether \(s'_1*(r*E)=r*A_1\). If not, \(\mathcal {R}\) outputs \(\mathbf{abort} _1\); otherwise, it continues.

    5. \(\mathcal {R}\) computes \(k'_{1,1-i}=H(s'_1*(r*E)^t)\) and verifies whether \(\mathsf{D}_{k'_{1,1-i}}(c_{1,1-i})=(s'_1 \parallel str')\). If not, \(\mathcal {R}\) outputs \(\mathbf{abort} _1\); otherwise, it continues.

    6. \(\mathcal {R}\) verifies \(A_0 \in \mathcal {E}\). If not, \(\mathcal {R}\) aborts and outputs \(\mathbf{abort} _1\). Otherwise, \(\mathcal {R}\) computes the decryption key \(k'_{0,i}=H(r*A_0)\), outputs \(M_i \leftarrow \mathsf{D}_{k'_{0,i}}(c_{0,i})\), and sends \(str'\) to \(\mathcal {S}\).

    7. \(\mathcal {S}\) checks whether \(str=str'\). If not, \(\mathcal {S}\) aborts and outputs \(\mathbf{abort} _2\). Otherwise, \(\mathcal {S}\) accepts and outputs \(\perp \).
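The full protocol, including the checks of steps 4, 5 and 7, can be traced in a toy model (purely illustrative: addition modulo an arbitrary N stands in for the class-group action, the twist is negation, H is a hash-derived one-time-pad key; all names are our own and nothing here is hard):

```python
import hashlib, os, struct

N = 1009
def act(a, E): return (a + E) % N
def twist(E): return (-E) % N
def H(E): return hashlib.sha256(str(E).encode()).digest()    # 32-byte key
def xor(k, m): return bytes(a ^ b for a, b in zip(m, k))     # OTP, len(m) <= 32

E = act(531, 0)                      # trusted setup; 531 plays the unknown t

# Sender input: messages and the ideals/string of step 1.
M0, M1 = b"msg-zero", b"msg-one!"
s0, s1 = 42, 271
rnd = os.urandom(8)                  # the random string `str`
A0, A1 = act(s0, E), act(s1, E)

# Step 2 -- Receiver: choice bit i, hidden behind a twist when i = 1.
i, r = 1, 700
C = act(r, E) if i == 0 else twist(act(r, E))

# Step 3 -- Sender: four keys k_{j,b}; the c_{1,*} ciphertexts carry
# s1 || str as the "proof of ability to decrypt" payload.
payload = struct.pack(">I", s1) + rnd
key = lambda sj, b: H(act(sj, C if b == 0 else twist(C)))
c0 = [xor(key(s0, b), M) for b, M in ((0, M0), (1, M1))]
c1 = [xor(key(s1, b), payload) for b in (0, 1)]

# Steps 4-6 -- Receiver: proof of ability to decrypt, then output M_i.
k1i = H(act(r, A1))
opened = xor(k1i, c1[i])
s1p, strp = struct.unpack(">I", opened[:4])[0], opened[4:]
assert act(s1p, act(r, E)) == act(r, A1)                     # step 4 check
k1o = H(act(s1p, twist(act(r, E))))
assert xor(k1o, c1[1 - i]) == struct.pack(">I", s1p) + strp  # step 5 check
Mi = xor(H(act(r, A0)), c0[i])                               # step 6 output

# Step 7 -- Sender: verify the returned string.
assert strp == rnd
assert Mi == (M1 if i else M0)
```
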

Intuitively, to simulate a sender controlled by the adversary, we must show that the distributions of the receiver’s message with input \(i=0\) and with input \(i=1\) are indistinguishable. Aside from that, the simulator needs to extract the real input message pair, since the adversary may replace the original input. Lemma 1 ensures the first requirement; the second is attained by controlling the functionality \(\mathcal {F}_{TSC}\). As a result, the simulator can decrypt both ciphertexts using the trapdoor of \(\mathcal {F}_{TSC}\) and extract the sender’s real input.

To simulate a receiver corrupted by the adversary, the simulator should extract the adversary’s input by observing the hash queries. To make this extraction possible, the receiver must demonstrate the ability to decrypt: a corrupted receiver who skips all hash queries would otherwise make the input intractable to the simulator. The additional proof-of-ability-to-decrypt mechanism forces the adversary either to abort or to prove its ability to decrypt by querying the hash function. Concretely, the sender sends another curve \(s'*E\), distinct from the curve \(s*E\) used for transferring the messages, and encrypts the ideal \(s'\) concatenated with a random string under the key pair derived from \(s'*E\). The receiver decrypts one ciphertext with X, and the other ciphertext serves as a verification that the two encrypted payloads are equal. Together with Corollary 1, this mechanism enables the simulator to extract the input by observing the random oracle queries. Furthermore, since the simulator can only obtain one real message from the trusted third party (the one corresponding to the extracted input i), it must forge the other ciphertext via the non-committing encryption scheme. The difference in the unchosen ciphertext is not noticeable unless the environment machine knows the corresponding decryption key; in that case, the environment machine holds a pair of curves which is exactly a solution to the reciprocal CSIDH problem. See Sect. 4 for more details.

Fig. 4.
figure 4

Our CSIDH-based oblivious transfer protocol. For the sake of readability, we label the steps related to the process of “proof of ability to decrypt” with \(\star \).

4 Security Analysis

In this section, we prove the security of our two schemes from Sects. 3.1 and  3.2 against semi-honest and malicious adversaries respectively.

4.1 Semi-honest Security

Eavesdropper. An eavesdropper receives all the communications of parties and does not intervene in the execution. We assume that such an adversary knows the parties’ inputs while the simulator tasked with simulating an indistinguishable transcript is given nothing. The reason for this assumption is to match the definition of UC-security [9] where the environment machine decides the inputs. In fact, security against such eavesdroppers corresponds exactly to the honest-honest case discussed in the proof below.

Semi-Honest Adversary. A static semi-honest adversary can choose to corrupt either, both, or neither of the parties, and will follow the protocol specification. We will prove that such an adversary cannot obtain any information from the transcript of our two-round protocol (Fig. 3), assuming that the computational inverse CSIDH problem is hard.

We remark that it is not meaningless to design two different protocols for different security levels. Since security against semi-honest adversaries is easier to achieve, it is preferable to use a simpler and more efficient protocol when only such guarantees are required. It is therefore not necessary to prove the semi-honest security of our second protocol, since the first already provides a simpler secure variant. We highlight the fact that some maliciously secure protocols fail to also be semi-honest secure [24], and stress that we do not claim semi-honest security for our second protocol of Sect. 3.2.

Theorem 1

The protocol \(\pi \) of Fig. 3 securely computes \(\mathcal {F}_{OT}\) in the presence of static semi-honest adversaries if the computational inverse CSIDH problem (Problem 4) is infeasible, assuming that \(H(\cdot )\) is a random oracle and the encryption scheme \((\mathsf{E},\mathsf{D})\) is IND-CPA.

Proof

(Correctness). Let \(i\in \{0,1 \}\) be the input of the receiver \(\mathcal {R}\). Say the sender \(\mathcal {S}\) generates ideal \(s \in Cl\) and \(\mathcal {R}\) generates \(r\in Cl\). If \(i=0\), then \(C=r*E\). \(\mathcal {S}\) computes the encryption key \(k_0\) as \(H(s*C)\), and sends \(A=s*E\). \(\mathcal {R}\) computes \(k'_0=H(r*A)\) as the decryption key; as we have \(r*A=r*(s*E)=s*(r*E)=s*C\), we indeed have \(k'_0 = k_0\). On the other hand, if \(i=1\), then \(C=(r*E)^t\). \(\mathcal {S}\) computes \(k_1 = H(s *C^t)\) while \(\mathcal {R}\) computes \(k'_1=H(r *A)\). We have \(s *C^t = s *((r *E)^t)^t = s \cdot r *E = r *A\) which implies \(k'_1 = k_1\) and shows the correctness of the protocol.

(Corrupt sender \(\mathcal {S}^*\)) The simulator \(S_1\) takes as input \((M_0,M_1,\perp )\) and is required to simulate the view \(view_1^{\pi } (M_0,M_1,i) = (M_0,M_1,rp,C)\) where rp is a random tape. To generate this, \(S_1\) performs these steps:

  1. 1.

    Uniformly generate a random tape rp for \(\mathcal {S}^*\).

  2. 2.

Generate \(r' \leftarrow Cl\), acting as an honest \(\mathcal {R}\) and using a private random tape.

  3. 3.

    Output \((M_0, M_1, rp, C'=r'*E)\).

In a real execution, the curve C sent by the honest receiver is either \(r *E\) if \(i = 0\), or \((r *E)^t\) if \(i = 1\). In the first case, the transcript output by \(S_1\) is identically distributed to that produced by a real execution. In the second case, Lemma 1 gives us that the distribution of \(C'\) produced by \(S_1\) is statistically close to that of C produced by the real receiver. Thus, any polynomial-time distinguisher that is given a tuple \((M_0,M_1,i)\) is not able to distinguish \(\{S_1((M_0,M_1),\perp )\}_{(M_0,M_1),i}\) from \(\{view^{\pi }_1(M_0,M_1,i) \}_{M_0,M_1,i} \).

(Corrupt receiver \(\mathcal {R}^*\)) The simulator \(S_2\) takes as input \((i,M_i)\) and is required to simulate the view \(view_2^{\pi } (M_0,M_1,i) = (i, rp, A, c_0, c_1)\) where rp is a random tape. To generate this, \(S_2\) performs these steps:

  1. 1.

Choose a uniformly generated random tape rp for \(\mathcal {R}^*\).

  2. 2.

Generate \(s' \leftarrow Cl\), acting as an honest \(\mathcal {S}\) and using a private random tape, and generate \(r' \leftarrow Cl\) using rp. Compute the curve C as \(r'*E\) or \((r'*E)^t\) depending on i.

  3. 3.

Compute the decryption keys \(k'_i, k'_{1-i}\) honestly using \(s'\) and C. Replace \(k'_{1-i}\) with a uniformly random key \(\widetilde{k'} \leftarrow \mathcal {K}\).

  4. 4.

    Compute ciphertexts \(c_i=\mathsf{E}_{k'_i}(M_i)\) and \(c_{1-i}=\mathsf{E}_{\widetilde{k'}}(\widetilde{M})\) where \(\widetilde{M}\) is a string of the same length as \(M_i\) sampled at random from the message space \(\mathcal {M}\).

  5. 5.

    Output \((i,rp,s'*E, c_0, c_1)\).

We claim that if there exists a successful PPT distinguisher between the simulated view and the real view, then reductions can be made to solve the computational problems (Problem 3 or the equivalent Problem 4) or to break the IND-CPA security of the encryption scheme.

To show this, we build a series of hybrid views. Let \(\mathcal {H}_0\) be the view of the real adversary, and \(\mathcal {H}_2\) be the view generated by \(S_2\) (i.e., \(\{view^{\pi }_2(M_0,M_1,i) \}_{(M_0,M_1),i}\) and \(\{S_2((M_0,M_1),\perp )\}_{(M_0,M_1),i}\), resp.). Let the intermediate \(\mathcal {H}_1\) be the view produced by running a real execution and replacing the encryption key \(k_{1-i}\) with a uniformly random \(\widetilde{k} \leftarrow \mathcal {K}\). The difference between \(\mathcal {H}_1\) and \(\mathcal {H}_2\) is then that the real message \(M_{1-i}\) is replaced with a random \(\widetilde{M} \leftarrow \mathcal {M}\).

Hybrid 1. We first claim \(\mathcal {H}_0 =_c \mathcal {H}_1\) if the computational inverse CSIDH problem (Problem 4) is hard. To offer an intuition: let \(E_{1 - i}\) denote the curve from which the replaced key \(k_{1-i}\) is derived. When \(i = 0\), we have \(E_{1-i} = s *C^t = s *(r *E)^t = r^{-1} *(s^{-1} *E)^t\); and when \(i = 1\), we have \(E_{1-i} = s *C = s *(r *E)^t = r^{-1} *(s^{-1} *E)^t\) as well. In both cases we see that the hard-to-compute curve contains \(s^{-1} *E\) which we use to reduce a successful distinguisher to the computational inverse CSIDH problem (Problem 4).
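The case analysis for \(E_{1-i}\) reduces to the twist identity and can be checked mechanically in a toy additive model of the action (our illustration only, with no hardness):

```python
N = 1009
def act(a, E): return (a + E) % N
def twist(E): return (-E) % N   # toy quadratic twist: (a*E)^t = a^{-1}*E^t

E, r, s = 88, 311, 645
C0 = act(r, E)                  # i = 0: receiver sent C = r*E
C1 = twist(act(r, E))           # i = 1: receiver sent C = (r*E)^t

target = act((-r) % N, twist(act((-s) % N, E)))   # r^{-1} * (s^{-1} * E)^t
assert act(s, twist(C0)) == target   # i = 0: unchosen curve is s * C^t
assert act(s, C1) == target          # i = 1: unchosen curve is s * C
```
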

Let \(\mathcal {Z}\) be an environment that can successfully distinguish between \(\mathcal {H}_0\) and \(\mathcal {H}_1\), then a solver \(\mathcal {B}\) for Problem 4 with the assistance of \(\mathcal {Z}\) runs as follows:

  1. 1.

    Receive challenge \((E', s'*E')\) from Problem 4, where \(s' \in Cl\) is unknown.

  2.

    Set \(E'\) to be the public curve used by the protocol \(\pi \) and set \(s'*E'\) as the curve A sent to the receiver.

  3.

    Generate a random tape rp for the receiver, use it to sample r, and compute C according to i.

  4.

    While running, simulate the random oracle by assigning a random value from \(\mathcal {K}\) whenever a new query is made and recording a list of past queries during the execution.

  5.

    When deriving the real encryption key \(k_i\), compute it as \(r *(s' *E')\), using the challenge curve \(s'*E'\) directly since \(s'\) itself is unknown.

  6.

    Replace the other encryption key \(k_{1-i}\) with a freshly sampled random key \(\widetilde{k} \leftarrow \mathcal {K}\) to simulate the output of \(\mathcal {H}_1\); abort if \(\widetilde{k}\) already appears on the list of answers to random oracle queries.

  7.

    Invoke the distinguisher \(\mathcal {Z}\) with the produced output of \(\mathcal {H}_1\).

  8.

    When \(\mathcal {Z}\) terminates, randomly select a curve \(\widetilde{E}\) in the list of past queries of the simulated random oracle and return \((r *\widetilde{E})^t\) as the computational inverse CSIDH solution.
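The lazy random-oracle bookkeeping of steps 4, 6, and 8 can be sketched as follows; the 128-bit keys and the string labels standing in for curves are assumptions for illustration only:

```python
import secrets

class SimulatedRO:
    """Lazy simulation of H: each fresh query gets a uniform key from K."""
    def __init__(self):
        self.table = {}                    # list of past queries

    def query(self, curve):
        if curve not in self.table:
            self.table[curve] = secrets.randbits(128)
        return self.table[curve]

H = SimulatedRO()
k_i = H.query("r * (s' * E')")             # step 5: the real key k_i

k_tilde = secrets.randbits(128)            # step 6: replacement for k_{1-i}
if k_tilde in H.table.values():            # event A: abort on a collision
    raise RuntimeError("abort")

# step 8: after Z terminates, pick a uniformly random queried curve
guess = secrets.choice(list(H.table))
```

Since the table holds at most \(q_H\) entries, the abort probability is at most \(q_H/|\mathcal {K}|\), matching the bound on \(\Pr [\mathbf {A}]\) below.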

Note that, if \(\mathcal {B}\) does not abort, the only difference between \(\mathcal {H}_0\) and \(\mathcal {H}_1\) is the key for \(M_{1-i}\), thus a distinguisher \(\mathcal {Z}\) which does not query this key must have a zero advantage.

Let \(\mathbf {A}\) denote the event that \(\mathcal {B}\) aborts when sampling the replacement key. Denoting by \(q_H\) the maximum number of queries made to H during the reduction, we have that \(\Pr [\mathbf {A}] \le \frac{q_H}{|\mathcal {K}|}\). Also let \(\mathbf {E}\) denote the event that the targeted curve \(E'_{1-i} = r^{-1} *(s'^{-1} *E')^t\) is present on the query list. We see that the reduction \(\mathcal {B}\) wins with probability \(1 / q_H\) when \(\mathbf {E}\) happens and \(\mathcal {B}\) does not abort, and we can then write:

$$\begin{aligned} \Pr [\mathcal {B} \text { succeeds}] \ge \frac{\Pr [\mathbf {E}] \cdot \Pr [\lnot \mathbf {A}]}{q_H} \ge \frac{\Pr [\mathbf {E}]}{q_H} \cdot \left( 1 - \frac{q_H}{|\mathcal {K}|} \right) \end{aligned}$$
(1)

Considering an arbitrary distinguisher \(\mathcal {Z}\), we then have

$$\begin{aligned} | \Pr [\mathcal {Z}(\mathcal {H}_0) = 1] - \Pr [\mathcal {Z}(\mathcal {H}_1) = 1] | = {}&| \Pr [\mathcal {Z}(\mathcal {H}_0) = 1 | \mathbf {E}] \cdot \Pr [\mathbf {E}] \nonumber \\&{} - \Pr [\mathcal {Z}(\mathcal {H}_1) = 1 | \mathbf {E}] \cdot \Pr [\mathbf {E}] \nonumber \\&{} + \Pr [\mathcal {Z}(\mathcal {H}_0) = 1 | \lnot \mathbf {E}] \cdot \Pr [\lnot \mathbf {E}] \nonumber \\&{} - \Pr [\mathcal {Z}(\mathcal {H}_1) = 1 | \lnot \mathbf {E}] \cdot \Pr [\lnot \mathbf {E}] | \nonumber \\ {} \le {}&\Pr [\mathbf {E}] \end{aligned}$$
(2)

since \(|\Pr [\mathcal {Z}(\mathcal {H}_0) = 1 | \lnot \mathbf {E}] - \Pr [\mathcal {Z}(\mathcal {H}_1) = 1 | \lnot \mathbf {E}] | = 0\) and \(|\Pr [\mathcal {Z}(\mathcal {H}_0) = 1 | \mathbf {E}] - \Pr [\mathcal {Z}(\mathcal {H}_1) = 1 | \mathbf {E}] | \le 1\) by definition. By combining (1) and (2) we see that if \(\mathcal {Z}\) distinguishes the two views with non-negligible advantage \(\epsilon \), then \(\mathcal {B}\) successfully solves Problem 4 with probability at least \(\epsilon \cdot (1 - \frac{q_H}{|\mathcal {K}|}) / q_H\), which is non-negligible if \(q_H\) is polynomially bounded and \(\frac{q_H}{|\mathcal {K}|}\) is negligible. This contradicts the assumption that Problem 4 is intractable and therefore implies that \(\mathcal {H}_0\) and \(\mathcal {H}_1\) are computationally indistinguishable to any PPT environment \(\mathcal {Z}\).
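Plugging toy numbers into the bound \(\epsilon \cdot (1 - q_H/|\mathcal {K}|)/q_H\) shows why the loss is benign: with polynomially many oracle queries and an exponentially large key space, the correction factor \(1 - q_H/|\mathcal {K}|\) is essentially 1. The parameter values below are assumptions for illustration:

```python
from fractions import Fraction

eps = Fraction(1, 1000)     # assumed distinguishing advantage of Z
q_H = 2**20                 # assumed number of random-oracle queries
K   = 2**128                # assumed size of the key space K

# success probability of the solver B from combining (1) and (2)
success = eps * (1 - Fraction(q_H, K)) / q_H

# the abort term barely changes the bound: still essentially eps / q_H
assert success > Fraction(99, 100) * eps / q_H
```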

Hybrid 2. We now claim \(\mathcal {H}_1 =_c \mathcal {H}_2\) for any PPT distinguisher if the encryption scheme \((\mathsf{E},\mathsf{D})\) is IND-CPA secure. The only difference is the encryption \(\mathsf{E}_{\widetilde{k}}(M_{1-i})\) in \(\mathcal {H}_1\) and the encryption \(\mathsf{E}_{\widetilde{k}}(\widetilde{M})\) in \(\mathcal {H}_2\), where \(\widetilde{k}\) is uniformly sampled from \(\mathcal {K}\). A successful distinguisher \(\mathcal {Z}\) between the two distributions can be reduced to an adversary against the IND-CPA security of \((\mathsf {E}, \mathsf {D})\) in a straightforward manner. As this reduction is common in the literature, we only include a sketch here.

The IND-CPA adversary \(\mathcal {B}\) has access to a left-right encryption oracle which uses a secret key randomly sampled from \(\mathcal {K}\) to encrypt either the left or the right input; this hidden key plays the role of \(\widetilde{k}\) in the generation of the view given to \(\mathcal {Z}\). After setting up and executing the protocol honestly, \(\mathcal {B}\) uses the left-right oracle to encrypt either \(M_{1-i}\) or a random \(\widetilde{M}\) as the ciphertext \(c_{1-i}\); depending on the hidden bit (left or right) of the oracle, the view \(view_\mathcal {B}\) generated by \(\mathcal {B}\) for \(\mathcal {Z}\) is distributed identically to either \(\mathcal {H}_1\) or \(\mathcal {H}_2\). After the distinguisher terminates, the reduction returns its output as the guess of the oracle’s hidden bit. Labelling the oracle’s hidden bit as \(\mathsf {b}\), we then have

$$\begin{aligned} \mathsf {Adv}_{\mathcal {B}, (\mathsf {E}, \mathsf {D})}^{\text {IND-CPA}}&{} = | \Pr [\mathcal {B}= 1 \mid \mathsf {b}= 0] - \Pr [\mathcal {B}= 1 \mid \mathsf {b}= 1] | \\&{} = | \Pr [\mathcal {Z}(view_\mathcal {B}) = 1 \mid \mathsf {b}= 0] - \Pr [\mathcal {Z}(view_\mathcal {B}) = 1 \mid \mathsf {b}= 1] | \\&{} = | \Pr [\mathcal {Z}(\mathcal {H}_1) = 1] - \Pr [\mathcal {Z}(\mathcal {H}_2) = 1] | \end{aligned}$$

which immediately shows that if \(\mathcal {Z}\) is successful with non-negligible advantage, then so is \(\mathcal {B}\) which contradicts the assumption that \((\mathsf {E}, \mathsf {D})\) is IND-CPA secure.
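The wiring of this reduction can be sketched as follows. The left-right oracle interface, the XOR-based toy cipher, and the 16-byte message length are assumptions for illustration; any IND-CPA scheme slots in for \((\mathsf {E}, \mathsf {D})\):

```python
import secrets

def make_lr_oracle():
    """Left-right oracle: encrypts left or right input under a hidden key."""
    key, b = secrets.token_bytes(16), secrets.randbits(1)
    def enc(m_left, m_right):
        m = m_left if b == 0 else m_right
        return bytes(x ^ y for x, y in zip(m, key))   # toy cipher E_k
    return enc, b

enc_lr, hidden_b = make_lr_oracle()       # hidden key plays the role of k~

M_other = b"the message M1-i"             # stands in for M_{1-i}
M_rand  = secrets.token_bytes(16)         # stands in for the random M~
c = enc_lr(M_other, M_rand)               # becomes c_{1-i} in Z's view

# B assembles the rest of the view honestly, hands it to Z, and returns
# Z's output bit as its guess of hidden_b; the advantage of B then
# equals |Pr[Z(H1)=1] - Pr[Z(H2)=1]| as computed above.
```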

(Honest sender and honest receiver) We now claim that there exists a PPT simulator that can generate a transcript tuple, without knowledge of the parties’ inputs, which is indistinguishable from the view of an eavesdropper \(\mathcal {Z}\) that knows the parties’ inputs (but not their random tapes). This simulator is constructed from the following sequence:

  1.

    \(\mathcal {S}_0\) knows the real inputs \((M_0, M_1)\) and i of the parties; by sampling random tapes and acting honestly, it produces a perfect simulation.

  2.

    \(\mathcal {S}_1\) always uses \(i = 0\); by Lemma 1 and the argument made in the case of a corrupt sender, the output of \(\mathcal {S}_1\) is either identically distributed or statistically indistinguishable from the output of \(\mathcal {S}_0\).

  3.

    \(\mathcal {S}_2\) replaces \(k_1\) with a randomly sampled key; as above, this is computationally indistinguishable from the output of \(\mathcal {S}_1\) assuming that Problem 4 is intractable.

  4.

    \(\mathcal {S}_3\) replaces \(M_1\) with a randomly sampled message; as above, this is computationally indistinguishable from the output of \(\mathcal {S}_2\) assuming that the encryption scheme is IND-CPA secure.

  5.

    \(\mathcal {S}_4\) always uses \(i = 1\); as above, the output of \(\mathcal {S}_4\) is statistically indistinguishable from the output of \(\mathcal {S}_3\).

  6.

    \(\mathcal {S}_5\) and \(\mathcal {S}_6\) respectively first replace \(k_0\) and then \(M_0\) with random values; as above, these changes are computationally indistinguishable assuming the hardness of Problem 4 and the IND-CPA security of the encryption scheme.

Finally, we observe that the last simulator \(\mathcal {S}_6\) does not use any of the real inputs to produce a random transcript. By the sequence above, this simulation is indistinguishable from the transcript of a real execution.

(Corrupt sender and corrupt receiver) In this case, the simulator knows the inputs of both corrupt parties; as for \(\mathcal {S}_0\) in the previous case, it can generate a perfect simulation of the views of the parties.

The four cases considered above cover all possible corruption strategies; this completes the proof that the protocol \(\pi \) securely computes \(\mathcal {F}_{OT}\).

4.2 Malicious Adversary

Malicious Adversary. A malicious adversary with static corruptions can corrupt either, both, or neither of the parties prior to the execution. The environment machine decides the initial inputs of all parties. The adversary is in charge of the corrupted party or parties and decides all messages to be sent. In particular, the adversary can replace the inputs given to the participants by the environment machine and deviate from the protocol specification. We will prove that the construction in Fig. 4 UC-realizes the functionality \(\mathcal {F}_{OT}\) in the presence of malicious adversaries with static corruptions.

Theorem 2

The protocol \(\pi \) of Fig. 4, where the encryption scheme \((\mathsf{E}, \mathsf{D})\) is non-committing, securely UC-realizes the functionality \(\mathcal {F}_{OT}\) in the hybrid model with the functionality \(\mathcal {F}_{RO}\) and a trusted setup \(\mathcal {F}_{TSC}\) in the presence of malicious adversaries and static corruption if the computational reciprocal CSIDH problem is infeasible.

Proof

(Honest Sender and Honest Receiver) We start with the honest sender and the honest receiver. The goal is to show that the execution of \(\pi \) is indistinguishable from the ideal functionality when the parties follow the specification.

By following the same process as the honest-sender-and-honest-receiver case in Theorem 1, we can construct a simulator for the first-half messages. By continuing the process of \(\mathcal {S}_1\) or \(\mathcal {S}_4\), the simulator can simulate the second-half messages \(A_1,c_{1,0}\) and \(c_{1,1}\) by generating \(s_1\) and str. Since the second half requires no inputs from either the sender or the receiver, it is simulated perfectly. Therefore, the simulator outputs a transcript indistinguishable from that of a real execution.

(Corrupted Sender and Corrupted Receiver) When both parties are corrupted, the simulator can invoke the adversary with the input \((x=(M_0,M_1),y=i,z)\) given by the environment \(\mathcal {Z}\) to run the whole execution. The simulator outputs whatever the adversary outputs for both parties to produce a perfect simulation.

(Honest Sender and Corrupted Receiver) Let \(\mathcal {A}\) be the malicious adversary controlling the receiver. In order to emulate the adversary, the simulator needs to extract the input of the adversary and send it to the trusted party in the ideal execution. Say the environment \(\mathcal {Z}\) generates input \((x=(M_0,M_1),y=i,z)\) and gives \((y, z)\) to the simulator. The simulator \(\mathcal {S}_2\) passes any query from \(\mathcal {Z}\) to \(\mathcal {A}\) and returns the output of \(\mathcal {A}\). The simulator \(\mathcal {S}_2\) with auxiliary input \((y, z)\) carries out the protocol execution with the adversary as follows:

  1.

    The simulator \(\mathcal {S}_2\) emulates a random oracle \(\mathcal {F}_{RO}\) by keeping a list L in \(\mathcal {E} \times \mathcal {K}\) that records each past query. It initializes the random oracle with an empty list L. If the simulator receives a query on \(E' \in \mathcal {E}\), the simulator checks whether \((E',k') \in L\) for some \(k' \in \mathcal {K}\). If not, it generates a uniformly random \(k' \in \mathcal {K}\) and adds the entry \((E',k')\) to the list L. Finally, \(\mathcal {S}_2\) returns \(k'\) to emulate the random oracle.

  2.

    Generate the public curve \(E=t*E_0\) by sampling a random \(t\) to simulate \(\mathcal {F}_{TSC}\). Invoke the adversary \(\mathcal {A}\) with the input \((y, z)\) and E.

  3.

    Receive a curve X, the first message, from the adversary. Check whether \(X \in \mathcal {E}\); if not, end the session by outputting \(\mathbf{abort} _2\) to the trusted party in the ideal execution. Otherwise, continue.

  4.

    Activate the algorithm \(B_1\) of the non-committing encryption scheme and generate \(c_{0,0},c_{0,1}\) with \(B_1\). Compute \(A_0,A_1\) and \(c_{1,0},c_{1,1}\) as the honest sender. Send \((A_0,A_1,c_{0,0}, c_{0,1}, c_{1,0},c_{1,1})\) to the receiver.

  5.

    After Step 4, the simulator performs an additional process for any hash query of a curve \(E' \in \mathcal {E}\). Firstly, check whether \(E'=s_j*X\) or \(E'=s_j*X^t\) for some \(j\in \{ 0,1 \}\) (any one of the four curves). If not, then skip this step and process the query in the standard way of Step 1. Else, check whether both \(s_0*X\) and \(s_0*X^t\) (i.e., the other decryption key) have been queried. If so, then abort the session by outputting \(\mathbf{abort} _2\). Else, check whether \(E'\) is listed in the past queries \((E',k')\in L\). If so, then skip this step and return \(k'\). Else, send the ideal message i to \(\mathcal {F}_{OT}\) in the ideal execution, where \(i=0\) for the case \(E'=s_j*X\) or \(i=1\) for the case \(E'=s_j*X^t\); this is the extraction procedure. After obtaining \(M_i\) from \(\mathcal {F}_{OT}\), generate the decryption key \(k' \leftarrow B_2(c_{0,i},M_i)\) and store \((s_0*X,k')\) for the case \(i=0\) or \((s_0*X^t,k')\) for the case \(i=1\) in the list; this is the response for the case \(j=0\). For the case \(j=1\), process the hash query in the standard way of Step 1.

  6.

    After receiving \(str'\), the third message, from the adversary, verify \(str=str'\). If not, end the session by outputting \(\mathbf{abort} _2\). Otherwise, continue.

  7.

    After the outputs of the adversary, if none of \(s_0*X\), \(s_0*X^t\), \(s_1*X\), and \(s_1*X^t\) are in the list L, then end the session by outputting \(\mathbf{abort} _2\). Otherwise, the simulator outputs whatever the adversary outputs.
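The query classification at the heart of Step 5 can be sketched in an additive toy model of the group action (\(a*E := a+E \bmod q\), \(E^t := -E \bmod q\)); this model is an illustrative assumption, not real isogeny arithmetic:

```python
import random

q = 2**61 - 1
act = lambda a, E: (a + E) % q     # toy model of the action a * E
twist = lambda E: (-E) % q         # toy model of the twist E^t

def classify(E_query, s0, s1, X):
    """Return (j, i) if E_query is one of the four monitored curves."""
    targets = {act(s0, X): (0, 0), act(s0, twist(X)): (0, 1),
               act(s1, X): (1, 0), act(s1, twist(X)): (1, 1)}
    return targets.get(E_query)    # i = 0 for s_j*X, i = 1 for s_j*X^t

s0, s1, X = (random.randrange(q) for _ in range(3))

# a query on s_0*X^t reveals the choice bit i = 1 (with j = 0)
assert classify(act(s0, twist(X)), s0, s1, X) == (0, 1)
```

A query that matches none of the four curves returns None and is processed in the standard lazy-sampling way of Step 1.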

We claim \(\{ {HYBRID}^{\mathcal {F}_{RO},\mathcal {F}_{TSC}}_{\pi ,\mathcal {A}(z),2}(x,y) \}_{x,y,z} =_c{} \{ {IDEAL}_{\mathcal {F}_{OT},{S}_2(z),2}(x,y) \}_{x,y,z}\). In comparison with the real execution, the abort in Step 5 implies the solution to the reciprocal CSIDH problem \((E,s*E)\) lies in the list L, which contradicts the assumption. The other abort in Step 7, together with the result of Step 6, implies the adversary decrypts the ciphertext \(c_{1,j}\) without the knowledge of the key. If this occurs with non-negligible probability, then it contradicts the non-committing assumption since the real ciphertext can be decrypted without the key, while the dummy ciphertext cannot be (because it can be generated before the plaintext by \(B_1\)).

Other differences caused by the simulator are the ciphertexts for the receiver. The ciphertext \(c_{0,i}\) in the pair \((c_{0,0},c_{0,1})\) is indistinguishable from the one in the real execution due to the non-committing encryption scheme. The only suspicious part is \(c_{0,1-i}\), which is a dummy ciphertext generated by the algorithm \(B_1\) of the encryption scheme. The counterpart in the real execution is the encrypted message \(\mathsf{E}_{k_{1-i}}(M_{1-i})\) where \(k_{1-i}\) is either \(H(s*X)\) or \(H(s^{-1}*X)\).

Similar to the previous proof, the distinguisher (the environment machine) can succeed with non-negligible advantage only with the knowledge of \(k_{1-i}\). Precisely, let \(\mathbf {E}\) denote the event that the targeted curves \(s*X,s*X^t\) are both queried, where \((s*X^t)^t=s^{-1}*X\). We have that the distinguishing advantage is not greater than \(\Pr [\mathbf {E}]\) plus the advantage conditioned on \(\lnot \mathbf {E}\).

We claim that the advantage conditioned on \(\lnot \mathbf {E}\) is negligible if the encryption scheme is non-committing. Given the non-committing challenge (c, k), a solver runs as follows:

  1.

    Randomly generate \(j\in \{ 0,1 \}\).

  2.

    Run as the simulator \(\mathcal {S}_2\) with the environment machine, except that in Step 4 the value c is assigned to the variable \(c_{0,j}\).

  3.

    Say the simulation in Step 2 extracts i from the input of the receiver. If \(i \ne j\), then abort and restart the session.

  4.

    If the environment machine judges the machine as the ideal machine, then output 1. Otherwise, output 0.

If \(\mathcal {Z}\) succeeds with non-negligible advantage p(n) without the knowledge of the key, then the reduction can win the non-committing challenge with non-negligible advantage p(n)/2, where the loss is caused by the guess of j in Step 1.

Since the advantage conditioned on \(\lnot \mathbf {E}\) is negligible, the overall distinguishing advantage is bounded by \(\Pr [\mathbf {E}]\) up to a negligible term.

Therefore, if the distinguisher can succeed with non-negligible advantage, then the solution for the reciprocal CSIDH problem (Problem 5) is in the list of the hash queries with non-negligible probability. Let the challenge of the reciprocal CSIDH problem start with E. A solver \(\mathcal {B}\) for the problem runs as follows:

  1.

    Run as the simulator \(\mathcal {S}_2\) with the environment machine, except for the changes in Steps 4 and 5 and an extraction in Step 3. The solver \(\mathcal {B}\) commits to the curve X obtained in Step 3 in the reciprocal CSIDH experiment.

  2.

    Say \(\mathcal {B}\) receives \(s*E\) from the challenge. Then, in Step 4, assign \(s*E\) to the variable \(A_0\).

  3.

    In Step 5, guess \(i\in \{0,1\}\) and obtain the decryption key \(k_i\) via \(B_2(c_{0,i},M_i)\). Randomly pick a curve \(X_1\) from the \(\mathcal {F}_{RO}\) query list, and assign \(k_i\) to it. (Due to the unknown element s, the solver needs to guess here.)

  4.

    After the simulation, if the environment machine judges the machine as the ideal machine, then randomly pick a curve \(X_2\) in the hash query list, and output \((X_1,X_2)\). Otherwise, restart the challenge.

If the environment machine can win with non-negligible advantage p(n) with q hash queries, then the solver \(\mathcal {B}\) can win the reciprocal CSIDH challenge with non-negligible advantage \(p(n)/(2q^2)\), where the loss is caused by the guesses in Steps 3 and 4. To sum up, if the encryption scheme is non-committing, and the reciprocal CSIDH problem is hard, then the simulator \(\mathcal {S}_2\) indistinguishably simulates the adversary.

We remark that the simulator \(\mathcal {S}_2\) correctly extracts the input of the adversary in Step 5. According to Corollary 1, if the simulator extracts the wrong input, then the adversary can also be used to solve the inverse CSIDH problem.

(Corrupted Sender and Honest Receiver) Let \(\mathcal {A}\) be a malicious adversary controlling the sender. In order to emulate the adversary, the simulator needs to extract the input of the adversary and send it to the trusted party in the ideal execution. The input here is the message pair which the honest receiver will read. Say the environment machine \(\mathcal {Z}\) generates input \((x=(M_0,M_1),y=i,z)\) and gives \((x, z)\) to the simulator. The simulator \(\mathcal {S}_1\) with input \((x, z)\) proceeds as follows:

  1.

    Firstly, the simulator \(\mathcal {S}_1\) emulates a random oracle \(\mathcal {F}_{RO}\) by keeping a list L in \(\mathcal {E} \times \mathcal {K}\) that records every past query. It initializes the random oracle with an empty list L. Whenever it receives a query on \(E' \in \mathcal {E}\), the simulator checks whether \((E',k') \in L\) for some \(k' \in \mathcal {K}\). If not, it generates a uniformly random \(k' \in \mathcal {K}\) and adds the entry \((E',k')\) to the list L. Finally, \(\mathcal {S}_1\) returns \(k'\) to emulate the random oracle.

  2.

    Generate the public curve \(E=t*E_0\) by sampling a random \(t\) to simulate \(\mathcal {F}_{TSC}\). Invoke the adversary \(\mathcal {A}\) with the input \((x, z)\) and E. Keep t as the trapdoor secret.

  3.

    Generate a random \(r\), and compute \(C=r*E\). Send C to the adversary, and act as an honest receiver with the input \(i=0\) throughout the remaining execution. (Note that the simulator does not know the input of the receiver here.)

  4.

    If the adversary aborts, then send \(\mathbf{abort} _1\) to \(\mathcal {F}_{OT}\) and finish the session. Otherwise, assume the execution is not aborted. Say it receives \((A_0,A_1,c_{0,0},c_{0,1},\) \(c_{1,0},c_{1,1})\) from the adversary. Compute \(k_{0}=H(r*A_0)\), \(k_{1}=H((tr*(t^{-1}*A_0)^t)^t)\), and \(m_j=\mathsf{D}_{k_j}(c_{0,j})\) for \(j \in \{0,1\}\).

  5.

    Send \((m_0,m_1)\) to the trusted third party in the ideal execution, and output whatever the adversary outputs to complete the simulation. (Note that \((M_0,M_1)\) and \((m_0,m_1)\) are not necessarily the same since the adversary can change the original input.)

We claim \(\{ {HYBRID}^{\mathcal {F}_{RO},\mathcal {F}_{TSC}}_{\pi ,\mathcal {A}(z),1}(x,y) \}_{x,y,z} =_c \{ {IDEAL}_{\mathcal {F}_{OT},\mathcal {S}(z),1}(x,y) \}_{x,y,z}\). In contrast to the real execution, there are two differences here. Firstly, the simulator possesses the trapdoor t of the public curve. The generation process is identical to that of \(\mathcal {F}_{TSC}\), and the simulator acts as an honest receiver throughout the execution. Hence, this difference is unnoticeable to the adversary.

The other difference is that the receiver played by the simulator always uses input \(i=0\). By Lemma 1, the distribution of the first message (C) in the protocol with \(i=0\) is indistinguishable from that generated with \(i=1\). Hence, it suffices to show the correctness of the extraction in Step 4.

If an honest receiver sends C to the sender with the input \(i=0\), then the decryption key is \(k_0=H(r*A_0)\). The message the receiver will obtain is \(D_{k_0}(c_{0,0})=m_0\). Besides, if an honest receiver sends C to the sender with the input \(i=1\), then the private ideal is equivalent to \(r^{-1}t^{-2}\) since \((r^{-1}t^{-2}*E)^t=(r^{-1}t^{-1}*E_0)^t=(r^{-1}*E^t)^t=(r*E)=C\). Hence, the receiver will decrypt \(c_{0,1}\) with \(H(r^{-1}t^{-2}*A_0)\). Due to \(k_1=H((tr*(t^{-1}*A_0)^t)^t)=H((tr)^{-1}t^{-1}*A_0)=H(r^{-1}t^{-2}*A_0)\), the receiver will therefore get the message \(m_1=\mathsf{D}_{k_1}(c_{0,1})\). That is, the simulator correctly extracts the input of the adversary. Hence, the real execution is indistinguishable from the ideal execution.
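The key equality \(k_1=H((tr*(t^{-1}*A_0)^t)^t)=H(r^{-1}t^{-2}*A_0)\) can be checked mechanically in the additive toy model of the action (\(a*E := a+E \bmod q\), \(E^t := -E \bmod q\)); as before, this model is an illustrative assumption, not real isogeny arithmetic:

```python
import random

q = 2**61 - 1
act = lambda a, E: (a + E) % q     # toy action a * E
twist = lambda E: (-E) % q         # toy twist E^t
inv = lambda a: (-a) % q           # toy inverse a^{-1}

t, r, A0 = (random.randrange(q) for _ in range(3))

# (tr * (t^{-1} * A_0)^t)^t, as computed by the simulator with trapdoor t
simulator = twist(act((t + r) % q, twist(act(inv(t), A0))))

# r^{-1} t^{-2} * A_0, the curve an honest receiver with i = 1 would hash
receiver = act((inv(r) + 2 * inv(t)) % q, A0)

assert simulator == receiver       # hence both derive the same key k_1
```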

Remark. In the formal description of [9], the environment machine and the adversary (simulator) start with z, and the inputs of the parties are given through further instruction messages. For readability and simplicity, we combine them into a single statement here without undermining the effectiveness of the proof.

5 Comparison

5.1 Efficiency

Table 1 compares our oblivious transfer protocols with those of [1, 2, 17, 38] in terms of efficiency: the number of curves in the domain parameters or generated by a trusted party, the number of curves in the public keys of the sender and the receiver, the total number of isogeny computations for the sender and the receiver, and the number of rounds, respectively. Among the isogeny-based OTs, our 2-round OT proposal is the most efficient with respect to every criterion against semi-honest adversaries. It takes only one additional round and two extra isogeny computations per participant to achieve UC-security against static malicious adversaries.

Table 1. Comparison between isogeny-based OTs on efficiency where n is the security parameter. We give the costs for both our 2-round protocol from Fig. 3 and the full construction from Fig. 4.

In [2], some properties of SIDH are used. The receiver randomly subtracts two selected points \(U,V\in E_B\) from the points \((\phi _B(P_A),\phi _B(Q_A))\) to produce public points \((\hat{G_A},\hat{H_A})\) with respect to the secret bit i. The sender adds the same points jU, jV to the received points for \(j\in \{0,1 \}\) to produce two decryption keys. The additional mechanism allows the receiver and the sender to generate the same points U, V. As stated in their work, randomly generated \(U,V \in E_B\) may reveal the secret bit to an honest-but-curious sender by checking the equality of the Weil pairings \(e(P_A,Q_A)^{l_A^{e_A}}\), \(e(\hat{G_A},\hat{H_A})\), and \(e(\hat{G_A}+\lambda U,\hat{H_A}+\lambda V)\) for \(\lambda \in \mathbb {Z}\). On the other hand, it is also possible that the honest-but-curious receiver obtains isomorphic curves. In order to prevent these attacks, the points U, V are generated through a delicate process.

The two frameworks of [17] include the DH, SIDH, and CSIDH settings. The first construction is a two-message oblivious transfer and requires one more curve in the trusted setup phase.

The paper [38] showed a construction based on exponentiation-only Diffie-Hellman. The construction fits the DH, SIDH, and CSIDH settings. But, as stated in their work, it is totally insecure in the CSIDH setting against a malicious receiver. Specifically, their two-inverse problem is, given curves \((E,a*E,b*E)\), to find some curve tuple \((X,{a}^{-1}*X,{b}^{-1}*X)\) where X is isogenous to E. This can be done in the CSIDH setting by taking quadratic twists of \((E,a*E,b*E)\).
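This twist attack is immediate to verify in the additive toy model of the action (\(a*E := a+E \bmod q\), \(E^t := -E \bmod q\); an illustrative assumption): since \((a*E)^t = a^{-1}*E^t\), twisting the challenge component-wise yields a valid answer tuple with \(X = E^t\):

```python
import random

q = 2**61 - 1
act = lambda a, E: (a + E) % q     # toy action a * E
twist = lambda E: (-E) % q         # toy twist E^t
inv = lambda a: (-a) % q           # toy inverse a^{-1}

E, a, b = (random.randrange(q) for _ in range(3))
challenge = (E, act(a, E), act(b, E))        # a two-inverse instance

X, Xa, Xb = (twist(c) for c in challenge)    # component-wise twists

# (X, Xa, Xb) solves the problem: Xa = a^{-1}*X and Xb = b^{-1}*X
assert Xa == act(inv(a), X) and Xb == act(inv(b), X)
```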

In [1], both constructions are based on the decisional group action problem (the decisional CSIDH problem for instance). If the number of isogeny computations in the encryption (and decryption) algorithm is \(\ell = \omega (\log (n))\), then the statistical distance between a pair of ciphertexts is \(\varDelta =n^{-\omega (1)}\). In particular, the parameter \(\ell \) here is taken to be n so that the distance is less than \(2^{-n}\).

5.2 Security

Table 2. Comparison between previous isogeny OTs and our constructions. The models include the random oracle model (ROM), the common reference string model (CRS) and trusted setup curves (TSC).

On the issue of security, a comparison is shown in Table 2. The security claim in [2] is incorrect. Firstly, the adapted definition is Definition 2.6.1 of [24], which guarantees privacy in the presence of malicious adversaries for a two-round oblivious transfer protocol, while the scheme in [2] is three-round. Besides the misuse of the definition, the view-based simulation proof is incomplete even against semi-honest adversaries, as evidenced by the further algebraic analysis appended after the proof: it shows that the protocol might still leak information even when both the sender and the receiver follow the protocol specification.

In [17], the schemes are universally composable in the semi-honest model. In [38], a security definition called the semantic security of oblivious transfer was proposed, which guarantees indistinguishability for the sender across distinct executions. The scheme relies on a weak decisional problem which, in the SIDH setting, is easier than the decisional SIDH problem.

Sections 4.2 and 4.3 of [1] present two OT constructions. By using group actions and developing new tools, the first one is derived from a dual-mode public key encryption based on the Diffie-Hellman setting of [34]. The second construction is a plain-model OT, which is statistically sender-private. This notion ensures computational indistinguishability privacy for the receiver and statistical indistinguishability privacy for the sender. The main drawback of these schemes is efficiency, since both of them transfer single bits and require a poly(n) number of isogeny computations.

Remark. One can also show that the construction of Fig. 3 is a private oblivious transfer (Definition 2.6.1 in [24]) ensuring privacy for both parties in the malicious model. Since the proof would sidetrack the goal of this work, we leave this to the reader.

6 Conclusion

In this paper, we present the first practical UC-secure isogeny-based oblivious transfer protocol in the presence of static corruptions and malicious adversaries. The construction is simple and compact, and the number of isogeny computations is constant. Moreover, the scheme shares the same hardness as the CSIDH key agreement scheme.

To achieve this outcome, we developed six techniques in this work. First, the communication bandwidth is reduced by mixing the key-exchange-type problem and an equivalent variant. Next, through a novel use of quadratic twists, we not only compress the number of rounds of the protocol but also fortify the hardness of the underlying assumption (achieving the self-reconciling property). By combining the self-reconciling proposition and a proof of ability to decrypt at the cost of one extra round, the simulator is able to extract the input of the receiver to achieve one-sided simulation. Furthermore, for the purpose of extracting the input of the sender, we set up trapdoors for the protocol via another new use of quadratic twists to obtain a fully-simulatable construction. Finally, we develop a new computational assumption, as well as its inverse and square variants, and prove equivalence to the standard CSIDH assumption with quantum reductions.

We remark that these techniques are not exclusive to isogeny-based cryptography except for the use of quadratic twists. We envisage that these techniques can serve as potential cryptographic tools in future work.