1 Introduction

Secure multi-party computation (MPC) protocols allow a group of n parties to compute some function f on the parties’ private inputs, while preserving a number of security properties such as privacy and correctness. The former property implies data confidentiality, namely, nothing leaks from the protocol execution but the computed output. The latter requirement implies that the protocol enforces the integrity of the computations made by the parties, namely, honest parties are not led to accept a wrong output. Security is proven either in the presence of a passive adversary that follows the protocol specification but tries to learn more than allowed from its view of the protocol, or an active adversary that can arbitrarily deviate from the protocol specification in order to compromise the security of the other parties in the protocol.

The past decade has seen huge progress in making MPC protocols communication efficient and practical; see [KS08, DPSZ12, DKL+13, ZRE15, LPSY15, WMK17, HSS17] for just a few examples. In the two-party setting, actively secure protocols [WRK17a] by now come within a constant overhead factor of the notable semi-honest construction by Yao [Yao86]. On the practical side, a Boolean circuit with around 30,000 gates (6,400 AND gates and the rest XOR) can be securely evaluated with active security in under 20 ms [WRK17a]. Moreover, current technology already supports protocols that securely evaluate circuits with more than a billion gates [KSS12]. On the other hand, secure multi-party computation with a larger number of parties and a dishonest majority is far more difficult due to scalability challenges regarding the number of parties. Here, the most efficient practical protocol with active security incurs a multiplicative overhead of \(O(\lambda /\log |C|)\) due to cut-and-choose [WRK17b] (where \(\lambda \) is a statistical security parameter and |C| is the size of the computed circuit). On the practical side, the same Boolean circuit of 30,000 gates can be securely evaluated at best in 500 ms for 14 parties [WRK17b] in a local network where latency is negligible, or in more than 20 s in a wide area network. The problem is that current MPC protocols do not scale well with the number of parties, where the main bottleneck is a relatively high communication complexity, while the number of applications requiring large-scale communication networks is constantly increasing, sometimes involving hundreds of parties.

An interesting example is safely measuring the Tor network [DMS04], which is among the most popular tools for digital privacy and consists of more than 6000 relays that can opt in to provide statistics about the use of the network. Nowadays, due to privacy risks, the statistics collected over Tor are generally poor: only a reduced list of functions is computed, only a minority of the relays provide data, and the data has to be obfuscated before publishing [DMS04]. Hence, the statistics provide an incomplete picture, affected by noise that scales with the number of relays.

In the context of securely computing the interdomain routing within the Border Gateway Protocol (BGP) which is performed at a large scale of thousands of nodes, a recent solution in the dishonest majority setting [ADS+17] centralizes BGP so that two parties run this computation for all Autonomous Systems. Large scale protocols would allow scaling to a large number of systems computing the interdomain routing themselves using MPC, hence further reducing the trust requirements.

Another important application that involves a massive number of parties is an auction with private bids, where the winning bid is either the first or the second price. Auctions have been widely studied by different communities, improving different aspects, and are central in the area of web electronic commerce. When considering privacy and correctness, multi-party computation offers a set of tools that allow running the auction while preserving the privacy of the bidders (i.e., passive security). MPC can also enforce independence of inputs between the corrupted and honest parties as well as correctness, in the sense that parties are not allowed to change their bid once they learn they lost. This type of security requires more complicated tools and is known as active security. Designing secure solutions for auctions played an important role in the literature of MPC. In fact, the first real-world MPC implementation was for the sugar beet auction [BCD+09], run by three parties with an honest majority, where the actual number of bidders was 1129. In a very recent work by Keller et al. [KPR18], the authors designed a new generic protocol based on semi-homomorphic encryption and lattice-based zero-knowledge proofs of knowledge, and implemented the second-price auction with 100 parties over a field of size \(2^{40}\). The running time of their offline phase for the SPDZ protocol is 98 s. The authors did not provide an analysis of their communication complexity.

Motivated by the fact that current techniques are insufficient to produce highly practical protocols for such scenarios, we investigate the design of protocols that can more efficiently handle large numbers of parties with strong security levels. In particular, we study the setting of active security with only a minority (around 10–30%) of honest participants. By relaxing the well-studied, very strong setting of all-but-one corruptions (or full-threshold), we hope to greatly improve performance. Our starting point is the recent work by Hazay et al. [HOSS18] which studied this corruption setting with passive security and presented a new technique based on “short keys” to improve the communication complexity and the running times of full-threshold MPC protocols. In this paper we extend their results to the active setting.

Technical background for [HOSS18]. Towards achieving their goal, Hazay et al. observed that instead of basing security on secret keys held by each party individually, they can base security on the concatenation of all honest parties’ keys. Namely, a secure multi-party protocol with h honest parties can be built by distributing secret key material so that each party only holds a small part of the key. Formalizing this intuition is made possible by reducing the security of their protocols to the Decisional Regular Syndrome Decoding (DRSD) problem, which, given a random binary matrix \({\mathbf {H}}\), is to distinguish between the syndrome obtained by multiplying \({\mathbf {H}}\) with an error vector \({\varvec{e}}= ({\varvec{e}}_1 \Vert \cdots \Vert {\varvec{e}}_h)\), where each \({\varvec{e}}_i \in \{0,1\}^{2^\ell }\) has Hamming weight one, and the uniform distribution. This can equivalently be described as distinguishing \(\bigoplus _{i=1}^h \mathsf {H}(i,k_i)\) from the uniform distribution, where \(\mathsf {H}\) is a random function and each \(k_i\) is a random \(\ell \)-bit key. As specified in [HOSS18], when h is large enough, the problem is unconditionally hard even for \(\ell =1\), which means that for certain parameter choices 1-bit keys can be used without introducing any additional assumptions.
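
To make this equivalent view concrete, the following sketch samples the distribution \(\bigoplus _{i=1}^h \mathsf {H}(i,k_i)\) alongside a uniform string. SHA-256 truncated to r bits stands in for the random function \(\mathsf {H}\), and the parameter values are illustrative, not those of [HOSS18].

```python
# Sketch of the equivalent view of DRSD: the XOR of h "hash" outputs, each keyed
# by a short ell-bit key, should look uniform. SHA-256 truncated to r bits stands
# in for the random function H; all parameters here are illustrative.
import hashlib
import secrets

def H(i: int, key: int, r: int) -> int:
    digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
    return int.from_bytes(digest, "big") % (1 << r)

def drsd_sample(h: int, ell: int, r: int) -> int:
    """XOR of h hash outputs under independent random ell-bit keys."""
    out = 0
    for i in range(h):
        k_i = secrets.randbelow(1 << ell)    # short key, e.g. ell = 1
        out ^= H(i, k_i, r)
    return out

# For large enough h the first sample is statistically close to the second.
print(f"{drsd_sample(h=64, ell=1, r=32):08x}")
print(f"{secrets.randbelow(1 << 32):08x}")
```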

Our contribution. In this work we develop a new theory for concretely efficient, large-scale MPC in the presence of an active adversary. More concretely, we extend the short keys technique from [HOSS18] to the active setting. Adapting these ideas to the active setting is quite challenging and requires modifying information-theoretic MACs used in previous MPC protocols [BDOZ11, DPSZ12] to be usable with short MAC keys. As our first, main contribution, we present several new methods for constructing efficient, distributed, information-theoretic MACs with short keys, for the setting of a small, honest minority out of a large set of parties. Our schemes allow for much lower costs when creating MACs in a distributed manner compared with previous works, due to the use of short MAC keys. For our second contribution, we show how to use these efficient MAC schemes to construct actively secure MPC for binary circuits, based on the ‘TinyOT’ family of protocols [NNOB12, BLN+15, FKOS15, HSS17, WRK17b]. All previous protocols in that line of work supported \(n-1\) out of n corruptions, so our protocol extends this to be more efficient for the setting of large-scale MPC with a few honest parties.

Concrete efficiency improvements. The efficiency of our protocols depends on the total number of parties, n, and the number of honest parties, h, so there is a large range of parameters to explore when comparing with other works. We discuss this in more detail in Sect. 8. Our protocol starts to concretely improve upon previous protocols when we reach \(n=30\) parties and \(t=18\) corruptions: here, our triple generation method requires less than half the communication cost of the fastest MPC protocol which is also based on TinyOT [WRK17b] (dubbed WRK) tolerating up to \(n-1\) corruptions. For a fairer comparison, we also consider modifying WRK to run in a committee of size \(t+1\), to give a protocol with the same corruption threshold as ours. In this setting, we see a small improvement of around 10% over WRK, but at larger scales the impact of our protocol becomes much greater. For example, with \(n=200\) parties and \(t=160\) corruptions we have up to an 8 times improvement over WRK with full-threshold, and a 5 times improvement when WRK is modified to the threshold-t setting.

Technical Overview

In our protocols we assume that two committees, \({\mathcal P}_{(h)}\) and \({\mathcal P}_{(1)}\), have been selected out of all the n parties providing inputs in the MPC protocol, such that \({\mathcal P}_{(h)}\) contains at least h honest parties and \({\mathcal P}_{(1)}\) contains at least 1 honest party. These can be chosen deterministically, for instance, if there are h honest parties in total we let \({\mathcal P}_{(h)}= \{P_1,\dots ,P_n\}\) and \({\mathcal P}_{(1)}= \{P_1,\dots ,P_{n-h+1}\}\). We can also choose committees at random using coin-tossing, if we start with a very large group of parties from which \(h' > h\) are honest. Since we have \(|{\mathcal P}_{(h)} | > |{\mathcal P}_{(1)} |\), to avoid unnecessary interaction we take care to ensure that committee \({\mathcal P}_{(h)}\) is only used when needed, and when possible we will do operations in committee \({\mathcal P}_{(1)}\) only.

Section 3. We first show a method for authenticated secret-sharing based on information-theoretic MACs with short keys, where given a message x, a MAC \({\varvec{m}}\) and a key \({\varvec{k}}\), verification consists of simply checking that s linear equations hold. Our construction guarantees that forging a MAC to all parties can only be done with probability \(2^{-\lambda }\), even when the key length \(\ell \) is much smaller than \(\lambda \), by relying on the fact that at least h parties are honest. We note that the reason for taking this approach is not to obtain a more efficient MAC scheme, but to design a scheme allowing more efficient creation of the MACs. Setting up the MACs typically requires oblivious transfer, with a communication cost proportional to the key length, so a smaller \(\ell \) gives us direct efficiency improvements to the preprocessing phase, which is by far the dominant cost in applications. Our basic MAC scheme requires all parties in both committees to take part, but to improve this we also present several optimizations, which can greatly reduce the storage overhead by “compressing” the MACs into a single, SPDZ-like sharing in only committee \({\mathcal P}_{(1)}\).

Sections 4 and 5. We next show how to efficiently create authenticated shares for our MAC scheme with short keys. As a building block, we need a protocol for random correlated oblivious transfer (or random \(\varDelta \)-OT) on short strings. We consider a variant of the OT extension protocol of Keller et al. [KOS15], modified to produce correlated OTs (as done in [NST17]) and with short strings. Our authentication protocol for creating distributed MACs improves upon the previous best-known approach for creating MACs (optimized to use h honest parties) by a factor of \(h(n-h)/n\) times in terms of overall communication complexity. This gives performance improvements for all \(h > 1\), with a maximum n/4-fold gain as h approaches n/2.

Section 7. Finally, we introduce our triple generation protocol, in two phases. Similarly to [WRK17b], we first show how to compute the cross terms in multiplication triples by computing so-called ‘half-authenticated’ triples. This protocol does not authenticate all terms and the result may yield an incorrect triple. Next, we run a standard cut-and-choose technique for verifying correctness and removing potential leakage. Our method for checking correctness does not follow the improved protocol from [WRK17b] due to a limitation introduced by our use of the DRSD assumption. The security of our protocol relies on a variant of the DRSD assumption that allows one bit of leakage, and for this reason the number of triples r generated by these protocols depends on the security of RSD. So, while we can produce an essentially unlimited number of random correlated OTs and random authenticated bits, if we were to produce ‘half-authenticated’ triples in a naive way, we would be bounded in the total number of triples and hence the size of the circuits we can evaluate. To fix this issue we show how to switch the MAC representation from using one key \(\varDelta \) to a representation under another independent key \(\tilde{\varDelta }\). This switch is performed every r triples.

Extension to Constant Rounds. Since Hazay et al. [HOSS18] also described a constant round protocol based on garbled circuits with passive security, it is natural to wonder if our approach with active security also extends to this setting. Unfortunately, it is not straightforward to extend our approach to multi-party garbled circuits with short keys and active security, since the adversary can flip a garbled circuit key with non-negligible probability, breaking correctness. Nevertheless, we can build an alternative, efficient solution based on the transformation from [HSS17], which shows how to turn any non-constant round, actively secure protocol for Boolean circuits into a constant round [BMR90]-based protocol. When applying [HSS17] to our protocol, we obtain a multi-party garbling protocol with full-length keys, but we still improve upon the naive (full-threshold) setting, since the preprocessing phase is more efficient due to our use of TinyOT with short keys. More details will be given in the full version.

2 Preliminaries

We denote the computational and statistical security parameters by \(\kappa \) and \(\lambda \), respectively. We say that a function \(\mu :\mathbb {N}\rightarrow [0,1]\) is negligible if for every positive polynomial \(p(\cdot )\) and all sufficiently large \(\kappa \) it holds that \(\mu (\kappa )<\frac{1}{p(\kappa )}\). The function \(\mu \) is noticeable (or non-negligible) if there exists a positive polynomial \(p(\cdot )\) such that for all sufficiently large \(\kappa \) it holds that \(\mu (\kappa ) \ge \frac{1}{p(\kappa )}\). We use the abbreviation PPT to denote probabilistic polynomial-time. We further denote by \(a \leftarrow A\) the uniform sampling of a from a set A, and by [d] the set of elements \(\{1,\ldots ,d\}\). We often view bit-strings in \(\{0,1\}^k\) as vectors in \(\mathbb {F}_2^k\), depending on the context, and denote exclusive-or by “\(\oplus \)” or “\(+\)”. If \(a, b \in \mathbb {F}_2\) then \(a \cdot b\) denotes multiplication (or AND), and if \({\varvec{c}}\in \mathbb {F}_2^\kappa \) then \(a \cdot {\varvec{c}}\in \mathbb {F}_2^\kappa \) denotes the product of a with every component of \({\varvec{c}}\).

Security and Communication Models. We use the universal composability (UC) framework [Can01] to analyse the security of our protocols. We assume all parties are connected via secure, authenticated point-to-point channels, as well as a broadcast channel which is implemented using a standard 2-round echo-broadcast. The adversary model we consider is a static, active adversary who corrupts up to t out of n parties at the beginning of the protocol. We denote by A the set of corrupt parties, and \(\bar{A}\) the set of honest parties.

Regular Syndrome Decoding Problem. We recall that the regular syndrome decoding (RSD) problem is to recover a secret error vector \({\varvec{e}}= ({\varvec{e}}_1 \Vert \cdots \Vert {\varvec{e}}_h)\), where each \({\varvec{e}}_i \in \{0,1\}^{m/h}\) has Hamming weight one, given only \(({\mathbf {H}}, {\mathbf {H}}{\varvec{e}})\), for a randomly chosen binary \(r \times m\) matrix \({\mathbf {H}}\). In [HOSS18] it was shown that the search and decisional versions of this problem are equivalent, and that the problem is even statistically hard when h is large enough compared to r. In this work we use an interactive variant of the problem, where the adversary is allowed to try to guess a few bits of information on the secret \({\varvec{e}}\) before seeing the challenge; if the guess is incorrect, the game aborts. We conjecture that this ‘leaky’ version of the problem, defined below, is no easier than the standard problem. Note that on average the leakage only allows the adversary to learn 1 bit of information on \({\varvec{e}}\), since if the game does not abort it only learns that \(\bigwedge _i P_i({\varvec{e}}) = 1\).

The ‘leaky’ part of the assumption is introduced as a result of an efficient instantiation of random correlated OTs on short strings (Sect. 4). Once the adversary has tried to guess these short strings, which act as short MAC keys in the authentication protocol (Sect. 5), a DRSD challenge is presented to it during the protocol computing the cross terms of multiplication triples (Sect. 7.1). As in [HOSS18], the appearance of the DRSD instance stems from ‘hashing’ the short MAC keys of at least h honest parties during these multiplications.

Definition 2.1

(Decisional Regular Syndrome Decoding with Leakage). Let \(r, h,\ell \in \mathbb {N}\) and \(m = h \cdot 2^\ell \). Consider the game for \(b \in \{0,1\}\), defined between a challenger and an adversary:

1. Sample \({\mathbf {H}}\leftarrow \mathbb {F}_2^{r \times m}\) and a random regular vector \({\varvec{e}}= ({\varvec{e}}_1 \Vert \cdots \Vert {\varvec{e}}_h) \in \mathbb {F}_2^m\), where each \({\varvec{e}}_i \in \{0,1\}^{2^\ell }\) has Hamming weight one.

2. Send \({\mathbf {H}}\) to the adversary and wait for the adversary to adaptively query up to h efficiently computable predicates \(P_i: \mathbb {F}_2^m \rightarrow \{0,1\}\). For each \(P_i\) queried, if \(P_i({\varvec{e}}) = 0\) then abort, otherwise wait for the next query.

3. If \(b=0\), sample \({\varvec{u}}\leftarrow \mathbb {F}_2^r\) and send \(({\mathbf {H}}, {\varvec{u}})\) to the adversary. Otherwise if \(b=1\), send \(({\mathbf {H}}, {\mathbf {H}}{\varvec{e}})\).

The DRSD problem with leakage with parameters \((r,h,\ell )\) is to distinguish between the games with \(b=0\) and \(b=1\) with noticeable advantage.
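
The following toy run of the game in Definition 2.1 may help fix the notation. The bit-packed representation of \({\mathbf {H}}\) and \({\varvec{e}}\), the parameter values and the example predicate are our own choices, for illustration only.

```python
# Toy run of the leakage game in Definition 2.1: H is r x m over F_2, e is
# regular (h blocks of length 2^ell, each of Hamming weight one), and the
# adversary may query predicates on e before the challenge.
import secrets

def sample_instance(r: int, h: int, ell: int):
    m = h * (1 << ell)
    H = [secrets.randbelow(1 << m) for _ in range(r)]       # each row packed in an int
    e = 0
    for i in range(h):                                      # one 1-bit per block
        e |= 1 << (i * (1 << ell) + secrets.randbelow(1 << ell))
    return H, e

def syndrome(H, e):
    """H*e over F_2: bit j is the parity of <row_j, e>."""
    return sum((bin(row & e).count("1") & 1) << j for j, row in enumerate(H))

r, h, ell = 16, 8, 1
H, e = sample_instance(r, h, ell)

# Leakage phase: the adversary guesses the first block of e (a 1-bit predicate).
block_mask = (1 << (1 << ell)) - 1
if e & block_mask != 1:          # predicate P(e): "the first block equals 0...01"
    print("abort")               # wrong guess, the game aborts
else:
    print(f"challenge (b=1): {syndrome(H, e):04x}")   # b=0 would send a uniform string
```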

2.1 Resharing

At several points in our protocols, we have a value \(x = \sum _{i \in X} x^i\) that is secret-shared between a subset of parties \(\{P_i\}_{i \in X}\), and wish to re-distribute this to a fresh sharing amongst a different set of parties, say \(\{P_j\}_{j \in Y}\). The naive method to do this is for every party \(P_i\) to generate a random sharing of \(x^i\), and send one share to each \(P_j\). This costs \(|X |\cdot |Y |\cdot m\) bits of communication, where m is the bit length of x. When m is large, we can optimize this using a pseudorandom generator \(G: \{0,1\}^\kappa \rightarrow \{0,1\}^m\), as follows:

1. For \(i \in X\), party \(P_i\) does as follows:

    (a) Pick an index \(j' \in Y\).

    (b) Sample random keys \(k^{i,j} \leftarrow \{0,1\}^\kappa \), for \(j \in Y \setminus j'\).

    (c) Send \(k^{i,j}\) to party \(P_j\), for \(j \in Y \setminus j'\), and send \(x^{i,j'} = \sum _{j \in Y \setminus j'} G(k^{i,j}) + x^i\) to party \(P_{j'}\).

2. For \(j \in Y\), party \(P_j\) does as follows:

    (a) Receive \(k^{i,j}\) from each \(P_i\) who sends \(P_j\) a key, and a share \(x^{i,j}\) from each \(P_i\) who sends \(P_j\) a share. For the keys, compute the expanded share \(x^{i,j} = G(k^{i,j})\).

    (b) Output \(x^j = \sum _{i \in X} x^{i,j}\).

Now each \(P_i\) only needs to send a single share of size m bits, since the rest are compressed down to \(\kappa \) bits using the PRG. This gives an overall communication complexity of \(O(|X |\cdot |Y |\cdot \kappa + |X |\cdot m)\) bits.
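As a concrete illustration, the sketch below implements this PRG-compressed resharing for a single sender. SHAKE-128 plays the role of the PRG G and \(\kappa = 128\); both choices, as well as the length-based dispatch in the receiver, are illustrative simplifications.

```python
# Sketch of the PRG-compressed resharing for one sender P_i. SHAKE-128 expands
# the kappa-bit seeds and plays the role of G. The receive() dispatch assumes
# the share length differs from the 16-byte seed length (true for the demo).
import hashlib
import secrets

def prg(seed: bytes, m_bytes: int) -> bytes:
    return hashlib.shake_128(seed).digest(m_bytes)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def reshare(x_i: bytes, receivers: list[int]) -> dict[int, bytes]:
    """Messages P_i sends: seeds to all receivers but one, who gets the long share."""
    j_star = receivers[-1]                    # designated receiver of the long share
    msgs, mask = {}, bytes(len(x_i))
    for j in receivers[:-1]:
        seed = secrets.token_bytes(16)        # kappa = 128 bits
        msgs[j] = seed
        mask = xor(mask, prg(seed, len(x_i)))
    msgs[j_star] = xor(mask, x_i)             # sum of all expansions plus x^i
    return msgs

def receive(msg: bytes, m_bytes: int) -> bytes:
    """A receiver either expands its seed or keeps the long share as its new share."""
    return prg(msg, m_bytes) if len(msg) == 16 else msg

# The new shares of this sender's contribution recombine to x^i.
x_i = secrets.token_bytes(32)
msgs = reshare(x_i, receivers=[1, 2, 3])
combined = bytes(32)
for msg in msgs.values():
    combined = xor(combined, receive(msg, 32))
assert combined == x_i
```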

3 Information-Theoretic MACs with Short Keys

We now describe our method for authenticated secret-sharing based on information-theoretic MACs with short keys. Our starting point is the standard information-theoretic MAC scheme on a secret \(x \in \{0,1\}\) given by \({\varvec{m}}= {\varvec{k}}+ x \cdot \varDelta \), for a uniformly random key \(({\varvec{k}},\varDelta )\), where \({\varvec{k}}\in \{0,1\}^\ell \) is only used once per message x, whilst \(\varDelta \in \{0,1\}^\ell \) is fixed. Given the message x, the MAC \({\varvec{m}}\) and the key \({\varvec{k}}\), verification consists of simply checking the linear equation holds. It is easy to see that, given x and \({\varvec{m}}\), forging a valid MAC for a message \(x' \ne x\) is equivalent to guessing \(\varDelta \). In a nutshell, we adapt this basic scheme for the multi-party, secret-shared setting, with the guarantee that forging a MAC to all parties can only be done with probability \(2^{-\lambda }\), even when the key length \(\ell \) is much smaller than \(\lambda \), by relying on the fact that at least h parties are honest.
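
For intuition, a minimal sketch of this basic MAC over \(\{0,1\}^\ell \) is given below, with an illustrative \(\ell = 4\); the multi-party, secret-shared adaptation follows in the rest of this section.

```python
# Minimal sketch of the basic information-theoretic MAC on a bit x:
# m = k + x*Delta over F_2^ell, with k fresh per message and Delta fixed.
# The key length ELL is purely illustrative.
import secrets

ELL = 4

def mac(x: int, delta: int) -> tuple[int, int]:
    """Return (one-time key k, MAC m) for the bit x under the fixed key delta."""
    k = secrets.randbelow(1 << ELL)
    return k, k ^ (x * delta)          # m = k + x*Delta (XOR over F_2^ell)

def verify(x: int, m: int, k: int, delta: int) -> bool:
    return m == (k ^ (x * delta))

delta = secrets.randbelow(1 << ELL)    # fixed global key Delta
k, m = mac(1, delta)
assert verify(1, m, k, delta)
# Forging a MAC on x' = 0 given (x, m) requires guessing Delta, i.e. succeeds
# with probability 2^-ELL (here, exactly when delta happens to be 0).
print("forgery accepted:", verify(0, m, k, delta))
```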

Our scheme requires choosing two (possibly overlapping) subsets of parties \({\mathcal P}_{(h)}\), \({\mathcal P}_{(1)}\) \(\subseteq {\mathcal P}\), such that \({\mathcal P}_{(h)}\) has at least h honest parties and \({\mathcal P}_{(1)}\) at least 1 honest party. To authenticate a secret value x, we first additively secret-share x between \({\mathcal P}_{(1)}\), and then give every party in \({\mathcal P}_{(1)}\) a MAC on its share under a random MAC key given to each party in \({\mathcal P}_{(h)}\), as follows:

$$\begin{aligned} x = \sum _{j \in {\mathcal P}_{(1)}} x^j, \qquad {\varvec{m}}^{j,i}[x^j] = {\varvec{k}}^{i,j}[x^j] + x^j \cdot \varDelta ^i \quad {\text {for }} j \in {\mathcal P}_{(1)},\, i \in {\mathcal P}_{(h)}, \end{aligned}$$

where \({\varvec{k}}^{i,j}[x^j]\) is a key chosen by \(P_i\) from \(\{0,1\}^\ell \) to authenticate the message \(x^j\) that is chosen by \(P_j\), whereas \({\varvec{m}}^{j,i}[x^j]\) is a MAC on the message \(x^j\) computed using the keys \(\varDelta ^i\) and \({\varvec{k}}^{i,j}[x^j]\). We denote this representation by \([x]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}_\varDelta \). Note that sometimes we use representations with a different set of global keys \(\varDelta = \{\varDelta ^i\}_{i \in {\mathcal P}_{(h)}}\), but when it is clear from context we omit \(\varDelta \) and write \([x]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\).

We remark that a special case is when \({\mathcal P}_{(h)}= {\mathcal P}_{(1)}= {\mathcal P}\), which gives the usual n-party representation of an additively shared value \(x = x^1 + \dots + x^n\), as used in [BDOZ11, BLN+15]:

$$\begin{aligned}{}[x] = \{x^i, \varDelta ^i, \{{\varvec{m}}^{i,j}, {\varvec{k}}^{i,j}\}_{j \ne i}\}_{i \in [n]}, \quad {\varvec{m}}^{i,j} = {\varvec{k}}^{j,i} + x^i \cdot \varDelta ^j, \end{aligned}$$

where each party \(P_i\) holds the \(n-1\) MACs \(\{{\varvec{m}}^{i,j}\}\) on \(x^i\), as well as the keys \({\varvec{k}}^{i,j}\) on each \(x^j\), for \(j \ne i\), and a global key \(\varDelta ^i\).

The idea behind our setup is that to cheat when opening x to all parties would require guessing at least h MAC keys of the honest parties in committee \({\mathcal P}_{(h)}\). In Figs. 1 and 2 we describe our protocols for opening values to a subset \(\bar{{\mathcal P}} \subseteq {\mathcal P}\) and to a single party, respectively, and checking MACs. First each party in \({\mathcal P}_{(1)}\) broadcasts its share \(x^j\) to \({\mathcal P}_{(h)}\), and then later, when checking MACs, \(P_j\) sends the MAC \({\varvec{m}}^{j,i}\) to \(P_i\) for verification. To improve efficiency, we make two optimizations to this basic method: firstly, instead of sending the individual MACs, when opening a large batch of values \(P_j\) only sends a single, random linear combination of all the MACs. Secondly, the verifier \(P_i\) does not check every MAC equation from each \(P_j\), but instead sums up all the MACs and performs a single check. This has the effect that we only verify the sum x was opened correctly, and not the individual shares \(x^j\).

Overall, to open x to an incorrect value \(x'\) requires guessing the \(\varDelta ^i\) keys of all honest parties in \({\mathcal P}_{(h)}\), so can only be done with probability \(\le 2^{-h\ell }\). This means we can choose \(\ell = \lambda /h\) to ensure security. Note that it is crucial when opening \([x]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) that the shares \(x^j\) are broadcast to all parties in \({\mathcal P}_{(h)}\), to ensure consistency. Without this, a corrupt \(P_j\) could open, for example, an incorrect value to a single party in \({\mathcal P}_{(h)}\) with probability \(2^{-\ell }\), and the correct share to all other parties.
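
The following sketch illustrates the batched check between a single pair \((P_j, P_i)\). For simplicity the public combination coefficients are single bits, whereas the protocol uses \(\lambda \)-bit coefficients over \(\mathbb {F}_{2^\lambda }\); the batch size and key length are illustrative.

```python
# Sketch of a batched opening and MAC check between one P_j in P_(1) and one
# P_i in P_(h), with single-bit combination coefficients for simplicity.
import secrets

ELL, BATCH = 8, 16

delta_i = secrets.randbelow(1 << ELL)                      # P_i's short global key

# MACs created during preprocessing: m_b = k_b + x_b * delta_i over F_2^ell.
xs = [secrets.randbelow(2) for _ in range(BATCH)]          # P_j's shares
ks = [secrets.randbelow(1 << ELL) for _ in range(BATCH)]   # P_i's one-time keys
ms = [k ^ (x * delta_i) for x, k in zip(xs, ks)]           # P_j's MACs

# Opening: P_j broadcasts its shares, then public coefficients are drawn.
chi = [secrets.randbelow(2) for _ in range(BATCH)]

# P_j sends one combined MAC; P_i recomputes the expected value from its keys
# and the opened shares, and performs a single check.
combined_mac = 0
for c, m in zip(chi, ms):
    combined_mac ^= c * m
expected = 0
for c, k, x in zip(chi, ks, xs):
    expected ^= (c * k) ^ (c * x * delta_i)
assert combined_mac == expected
```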

More details on the correctness and security of our open and MACCheck protocols are given in the full version of this paper.

Fig. 1. Protocols for opening and MAC-checking on \(({\mathcal P}_{(h)},{\mathcal P}_{(1)})\)-authenticated secret shares

Fig. 2. Protocol for privately opening \(({\mathcal P}_{(h)},{\mathcal P}_{(1)})\)-party authenticated secret shares to a single party \(P_{i_0}\) and MAC-checking

Efficiency Savings From Short Keys. Note that the reason for taking this approach is not to obtain a more efficient MAC scheme, but to design a scheme allowing more efficient creation of the MACs. Setting up the MACs typically requires oblivious transfer, with a communication cost proportional to the key length, so a smaller \(\ell \) gives us direct efficiency improvements to the preprocessing phase, which is by far the dominant cost in applications (see Sect. 5 for details). Regarding the scheme itself, notice that this is actually less efficient, in terms of storage and computation costs, than the distributed MAC scheme used in the SPDZ protocol [DKL+13], which only requires each party to store \(\lambda +1\) bits per authenticated Boolean value. However, it turns out that these overheads are less significant in practice compared with the communication cost of setting up the MACs, where we gain a lot.

Extension to Arithmetic Shares. The scheme presented above can easily be extended to the arithmetic setting, with shares in a larger field instead of just \(\mathbb {F}_2\). To do this with short keys, we simply choose the MAC keys \(\varDelta ^i\) to be from a small subset of the field. For example, over \(\mathbb {F}_p\) for a large prime p, each party chooses \(\varDelta ^i \in \{0, \dots , 2^{\ell }-1\}\), and will obtain MACs of the form \(m^{j,i} = k^{i,j} + x^j \cdot \varDelta ^i\) over \(\mathbb {F}_p\), where \(k^{i,j}\) is a random element of \(\mathbb {F}_p\). This allows for a reduced preprocessing cost when generating MACs with the MASCOT protocol [KOS16] based on oblivious transfer: instead of requiring k OTs on k-bit strings between all \(n(n-1)\) pairs of parties, where \(k = \lceil \log _2 p \rceil \), we can adapt our preprocessing protocol from Sect. 5 to \(\mathbb {F}_p\) so that we only need to perform \(\ell \) OTs on k-bit strings between \((n-1)(t+1)\) pairs of parties to set up each shared MAC.
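
A minimal sketch of this arithmetic variant is given below; the prime (a 61-bit Mersenne prime) and the key length are illustrative choices.

```python
# Sketch of the arithmetic variant: MACs over F_p with a short key Delta^i in
# {0, ..., 2^ell - 1}. The prime and ell are illustrative.
import secrets

P = (1 << 61) - 1
ELL = 8

delta_i = secrets.randbelow(1 << ELL)       # short key: only ell bits of entropy

def mac_fp(x_j: int) -> tuple[int, int]:
    k = secrets.randbelow(P)                # full-size one-time key in F_p
    return k, (k + x_j * delta_i) % P       # m = k + x*Delta over F_p

def verify_fp(x_j: int, m: int, k: int) -> bool:
    return m == (k + x_j * delta_i) % P

x = secrets.randbelow(P)
k, m = mac_fp(x)
assert verify_fp(x, m, k)
# Forging a MAC m' on x' != x requires m' - m = (x' - x) * delta_i mod p,
# i.e. guessing delta_i, which succeeds with probability 2^-ELL per honest key.
```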

3.1 Operations on \([\cdot ]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\)-Shared Values

Recall that \({\mathcal P}_{(h)}\cap {\mathcal P}_{(1)}\) is not necessarily the empty set.

Addition and multiplication with constant: We can define addition of \([x]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) with a public constant \(c \in \{0,1\}\) by:

1. A designated \(P_{i^*} \in {\mathcal P}_{(1)}\) replaces its share \(x^{i^*}\) with \(x^{i^*} + c\).

2. Each \(P_i\) (for \(i \in {\mathcal P}_{(h)}, i \ne i^*\)) replaces its key \({\varvec{k}}^{i,i^*}[x]\) with \({\varvec{k}}^{i,i^*}[x] + c \cdot \varDelta ^i\). (All other values are unchanged.)

We also define multiplication of \([x]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) by a public constant \(c \in \{0,1\}\) (or in \(\{0,1\}^\ell \)) by multiplying every share \(x^i\), MAC \({\varvec{m}}^{i,j}[x]\) and key \({\varvec{k}}^{i,j}[x]\) by c.

Addition of shared values: Addition (XOR) of two shared values \([x]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}},[y]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) is straightforward addition of the components. Note that it is possible to compute the sum \([x]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}} + [y]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(h)}}\) of values shared within different committees in the same way, obtaining a \([x+y]_\varDelta ^{{\mathcal P}_{(h)},{\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}}\) representation.

3.2 Converting to a More Compact Representation

We can greatly reduce the storage overhead in our scheme by “compressing” the MACs into a single, SPDZ-like sharing in only committee \({\mathcal P}_{(1)}\) with longer keys. Recall that the SPDZ protocol MAC representation [DPSZ12, DKL+13] of a secret bit x held by the parties in \({\mathcal P}_{(1)}\) is given by

$$\begin{aligned} \llbracket x \rrbracket = \{ x^j, {\varvec{m}}^j[x], \varDelta ^j\}_{j \in {\mathcal P}_{(1)}} \end{aligned}$$

where each party \(P_j\) in \({\mathcal P}_{(1)}\) holds a share \(x^j\), a MAC share \({\varvec{m}}^j[x] \in \mathbb {F}_2^\lambda \) and a global MAC key share \(\varDelta ^j \in \mathbb {F}_2^\lambda \), such that

$$\begin{aligned} x = \sum _{j \in {\mathcal P}_{(1)}} x^j, \quad \sum _{j \in {\mathcal P}_{(1)}} {\varvec{m}}^j = (\sum _{j \in {\mathcal P}_{(1)}} x^j) \cdot (\sum _{j \in {\mathcal P}_{(1)}} \varDelta ^j) \end{aligned}$$

Using this instead of the previous representation gives a much simpler and more efficient MAC scheme in the online phase of our MPC protocol, since each party only stores \(\lambda +1\) bits per value, instead of up to \(|{\mathcal P}_{(h)} |\cdot \ell +1\) bits with the scheme using short keys. Therefore, to obtain both the efficiency of generating MACs with the previous scheme and the low overhead of using SPDZ-style MACs, below we show how to convert an inefficient, pairwise sharing \([x]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) into a more compact SPDZ sharing \(\llbracket x \rrbracket \). This procedure is shown in Fig. 3.

Note that with the SPDZ representation, the parties in \({\mathcal P}_{(1)}\) can perform linear computations and openings (within \({\mathcal P}_{(1)}\)) in just the same way. For completeness, we present the opening and MAC check protocols in the full version of this paper.

Fig. 3. Protocol for transforming \([x]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\) representations to \(\llbracket x \rrbracket \) representations

To see correctness, first notice that from step 2a, we have that \(\sum _j \tilde{k}^{i,j} = r^i \cdot \sum _j k^{i,j}\). So each party in \({\mathcal P}_{(1)}\) holds a share \(x^j\) and a MAC share \(\tilde{\varvec{m}}^j \in \mathbb {F}_{2^\lambda }\), which satisfy:

$$\begin{aligned} \sum _{j \in {\mathcal P}_{(1)}} \tilde{\varvec{m}}^j = \Big (\sum _{j \in {\mathcal P}_{(1)}} x^j\Big ) \cdot \tilde{\varDelta }, \quad {\text {where }} \tilde{\varDelta } = \sum _{i \in {\mathcal P}_{(h)}} \varDelta ^i \cdot {\varvec{r}}^i. \end{aligned}$$

The security of this scheme now depends on the single, global MAC key \(\tilde{\varDelta } = \sum _i \varDelta ^i \cdot {\varvec{r}}^i\), instead of the concatenation of \(\varDelta ^i\) for \(i \in {\mathcal P}_{(h)}\). Since at least h of the short keys \(\varDelta ^i \in \mathbb {F}_{2^\ell }\) are unknown and uniformly random, from the leftover hash lemma [ILL89] it holds that \(\tilde{\varDelta }\) is within statistical distance \(2^{-\lambda }\) of the uniform distribution over \(\{0,1\}^\lambda \) as long as \(h\ell \ge 3\lambda \). This gives a slightly worse bound than the previous scheme, but allows for a much more efficient online phase of the MPC protocol since, once the SPDZ representations are produced, only parties in \({\mathcal P}_{(1)}\) need to interact, and they have much lower storage and local computation costs. Note that in our instantiation of this scheme for the overall MPC protocol, we also need to choose the parameters \(h,\ell \) such that the assumption is hard; it turns out that all of our parameter choices (see Sect. 8) for this already satisfy \(h\ell \ge 3 \lambda \), so in this case using more compact MACs does not incur any extra overheads.

Improved Analysis for 1-bit Keys. When the key length is 1, we can improve upon the previous bound from the leftover hash lemma with a more fine-grained analysis. Notice that we can write the new key \(\tilde{\varDelta }\) as \(\tilde{\varDelta } = {\mathbf {R}}\cdot \varDelta \), where \({\mathbf {R}}\in \{0,1\}^{\lambda \times n}\) is a matrix with the \({\varvec{r}}^i\) as columns. Since at least h positions of \(\varDelta \) are uniformly random, and every honestly sampled column of \({\mathbf {R}}\) is uniformly random, it follows from randomness extraction results for bit-fixing sources (as used in, e.g., [NST17, Theorem 1]) that \(\tilde{\varDelta }\) is within statistical distance \(2^{\lambda -h}\) of the uniform distribution. We therefore require \(h \ge 2\lambda \), instead of \(h \ge 3\lambda \) as previously.

Optimization with Vandermonde Matrices Over Small Fields. If we choose each of the \(\varDelta ^i\) keys to come from a small finite field \(\mathbb {F}\), with \(|\mathbb {F} | \ge n\), then we can optimize the compact MAC scheme even further, so that there is no overhead on top of the previous pairwise scheme. The idea is to use a Vandermonde matrix to extract randomness from all parties’ small MAC keys in a deterministic fashion, instead of using random vectors \({\varvec{r}}^i\) as before. This technique is inspired by previous applications of hyper-invertible matrices to MPC in the honest majority setting [BTH08].

Let \(v_1,\dots , v_n\) be distinct points in \(\mathbb {F}\), where \(\mathbb {F}\) is such that \(h\cdot \log _2 |\mathbb {F} | \ge \lambda \). Now let \({\mathbf {V}}\in \mathbb {F}^{n \times h}\) be the Vandermonde matrix given by

$$ {\mathbf {V}}= \begin{pmatrix} 1 &{} v_1 &{} \dots &{} v_1^{h-1} \\ 1 &{} v_2 &{} \dots &{} v_2^{h-1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1 &{} v_n &{} \dots &{} v_n^{h-1} \\ \end{pmatrix} $$

Party \(P_i\) defines the new MAC key share \(\tilde{\varDelta }^i = \varDelta ^i \cdot {\varvec{v}}_i\), where \({\varvec{v}}_i\) is the i-th row of \({\mathbf {V}}\). This results in a new global key given by \(\tilde{\varDelta } = (\varDelta ^1, \dots , \varDelta ^n) \cdot {\mathbf {V}}\in \mathbb {F}^h\). From the fact that at least h components of \(\varDelta \) are uniformly random, and the property of the Vandermonde matrix that any square matrix formed by taking h rows of \({\mathbf {V}}\) is invertible, it follows that \(\tilde{\varDelta }\) is a uniformly random vector in \(\mathbb {F}^h\). More formally, this means that if \(n-h\) components of \(\varDelta \) are fixed and we define \(\varDelta _H\) to be the h honest MAC key components, then the mapping \(\varDelta _H \mapsto \varDelta \cdot {\mathbf {V}}\) is a bijection, so \(\tilde{\varDelta }\) is uniformly random as long as \(\varDelta _H\) is. Therefore we can choose \(h \ge \lambda /\log _2 |\mathbb {F} |\) to obtain \(\le 2^{-\lambda }\) cheating probability in the resulting MAC scheme.
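
The sketch below illustrates the key combination \(\tilde{\varDelta } = \varDelta \cdot {\mathbf {V}}\). It works over a small prime field purely for ease of implementation; the scheme itself only needs a field with \(|\mathbb {F} | \ge n\), and the parameter values are illustrative.

```python
# Sketch of the Vandermonde-based key combination: Delta~ = (Delta^1,...,Delta^n) * V.
import secrets

P = 257                                  # illustrative small prime field
n, h = 8, 3                              # n parties, at least h honest
v_points = list(range(1, n + 1))         # distinct points v_1, ..., v_n in F

def vandermonde_row(v: int) -> list[int]:
    """The row (1, v, v^2, ..., v^{h-1}) of V."""
    return [pow(v, j, P) for j in range(h)]

def combine(deltas: list[int]) -> list[int]:
    """Delta~ = Delta * V in F^h, computed column by column."""
    rows = [vandermonde_row(v) for v in v_points]
    return [sum(d * rows[i][j] for i, d in enumerate(deltas)) % P
            for j in range(h)]

deltas = [secrets.randbelow(P) for _ in range(n)]   # each party's short key Delta^i
print("combined key:", combine(deltas))
# Any h rows of V form an invertible matrix, so fixing the n-h corrupt keys
# still leaves the combined key uniform whenever the h honest keys are.
```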

Allowing leakage on the MAC keys. In our subsequent protocol for generating MACs, to obtain an efficient protocol we need to allow some leakage on the individual MAC keys \(\varDelta ^i \in \{0,1\}^\ell \), in the form of allowing the adversary to guess a single bit of information on each \(\varDelta ^i\). For both the pairwise MAC scheme and the compact, SPDZ-style MACs, this leakage does not affect an adversary’s probability of forging MACs in our actual protocols, since the entire MAC key still needs to be guessed to break security — allowing guesses on smaller parts of the key does not help, as a single incorrect guess causes the protocol to abort. We analyse the security of this for our compact MAC representation in the full version.

4 Correlated OT on Short Strings

As a building block, we need a protocol for random correlated oblivious transfer (or random \(\varDelta \)-OT) on short strings. This is a 2-party protocol, where the receiver inputs bits \(x_1,\dots ,x_m\), the sender inputs a short string \(\varDelta \in \{0,1\}^\ell \), and the receiver obtains random strings \({\varvec{t}}_i \in \{0,1\}^\ell \), while the sender learns \({\varvec{q}}_i = {\varvec{t}}_i + x_i \cdot \varDelta \). The ideal functionality for this is shown in Fig. 4.

The protocol we use to realise this (shown in the full version of this paper) is a variant of the OT extension protocol of Keller et al. [KOS15], modified to produce correlated OTs (as done in [NST17]) and with short strings. The security of the protocol can be shown similarly to the analysis of [KOS15]. That work showed that a corrupt party may attempt to guess a few bits of information about the sender’s secret \(\varDelta \), and will succeed with probability \(2^{-c}\), where c is the number of bits. In our case, since \(\varDelta \) is small, a corrupt receiver may actually guess all of \(\varDelta \) with some noticeable probability, in which case all security for the sender is lost. This is modelled in the functionality of Fig. 4, which allows a corrupt receiver to submit such a guess. This leakage does not cause a problem in our multi-party protocols, because an adversary would have to guess the keys of all honest parties to break security, and this can only occur with negligible probability.

Communication complexity. Recall that \(\lambda \) is the statistical security parameter and \(\kappa \) the computational security parameter. The initialization phase requires \(\ell \) random OTs, which costs \(\ell \kappa \) bits of communication when implemented using OT extension. The communication complexity of the Extend phase, to create m correlated OTs, is \(\ell (m + \lambda )\) bits to create the OTs, and \(\kappa + 2\lambda \) bits for the consistency check (we assume \(P_S\) only sends a \(\kappa \)-bit seed used to generate the \(\chi _i\)’s). This gives an amortized cost of \(\ell + (\kappa + 3\lambda )/m\) bits per correlated OT, which is less than \(\ell + 4\) bits when \(m > \kappa \).
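
The following helper evaluates this amortized cost directly from the counts above; the parameter values are illustrative.

```python
# Amortized communication of the Extend phase, from the counts stated above:
# ell*(m + lambda) bits for the correlated OTs plus kappa + 2*lambda bits for
# the consistency check.
def amortized_cot_bits(ell: int, m: int, kappa: int = 128, lam: int = 64) -> float:
    total = ell * (m + lam) + kappa + 2 * lam
    return total / m

for ell in (1, 2, 4):
    print(f"ell = {ell}: {amortized_cot_bits(ell, m=1_000_000):.4f} bits per COT")
# For m > kappa this stays below ell + 4 bits per correlated OT.
```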

Fig. 4. Functionality for oblivious transfer on random, correlated strings.

5 Bit Authentication with Short Keys

In this section we describe our protocols for authenticating bits with short MAC keys. To capture the short keys used for authentication we need to define a series of different functionalities.

5.1 Authenticated Bit Functionality \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\)

We begin with the ideal functionality \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\), described in Fig. 5, which formalises the MACs we create. Each party \(P_i \in {\mathcal P}_{(h)}\) chooses a global \(\varDelta ^i \in \{0,1\}^\ell \), then \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) calls the macro described in Fig. 6, which uses these global MAC keys \(\{\varDelta ^i\}_{i \in {\mathcal P}_{(h)}}\) stored by the functionality to create pairwise MACs of the same length, as illustrated in Sect. 3.

Fig. 5. Functionality for authenticated bits

Fig. 6. Macro used by \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) to authenticate bits

5.2 Bit Authentication Protocol

We now present our bit authentication protocol \(\varPi _{\scriptstyle \mathsf {aBit}}\), described in Fig. 7, implementing the functionality \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) (Fig. 5). The protocol first runs the \(\varDelta \)-OT protocol with short keys between every pair of parties in \({\mathcal P}_{(h)}\times {\mathcal P}_{(1)}\) to authenticate the additively shared inputs, in a standard manner. We then need to adapt the consistency check from the TinyOT-style authentication protocol presented by Hazay et al. [HSS17] to our setting of MACs with short keys distributed between two committees, to ensure that all parties input consistent values in all the \({\mathsf {COT}}\) instances.

Taking a closer look at the consistency checks in Step 3f, the first check verifies the consistency of the \(\varDelta ^i\) values, whereas in the second set of checks we test the consistency of the individual shares \(x^j\). To see correctness when all parties are honest, notice that in the first check, for \(i \in {\mathcal P}_{(h)}\) we have:

For a corrupt party who misbehaves during the protocol, there are two potential deviations:

1. A corrupt \(P_i, i \in {\mathcal P}_{(h)}^\mathcal {A}\), provides an inconsistent \(\varDelta ^{i,j}\) when acting as a sender in the correlated OT instances with different honest parties, i.e. \(\varDelta ^i \ne \varDelta ^{i,j}\) for some \(j \in {\mathcal P}_{(1)}\setminus A\).

2. A corrupt \(P_j, j \in {\mathcal P}_{(1)}^\mathcal {A}\), provides an inconsistent input \(x_\iota ^{i,j}\) when acting as a receiver in the correlated OT instances with different parties, i.e. \(x_\iota ^j \ne x_\iota ^{i,j}\) for some \(i \in {\mathcal P}_{(h)}\setminus A\).

Note that in the above, the ‘correct’ inputs \(\varDelta ^i, x^j_\iota \) for a corrupt \(P_i \in {\mathcal P}_{(h)}\) or \(P_j \in {\mathcal P}_{(1)}\) are defined to be those in the correlated OT instance with some fixed, honest party \(P_{i_1} \in {\mathcal P}_{(1)}\) or \(P_{j_1} \in {\mathcal P}_{(h)}\), respectively. We now prove the following two claims.

Claim 5.1

Assuming a non-abort execution, then for every corrupted party \(P_i, i \in {\mathcal P}_{(h)}^\mathcal {A}\), all \(\varDelta ^i\) are consistent.

Proof

In order to ensure that all \(\varDelta ^i\) are consistent we use the first check. More precisely, we fix \(P_j\in {\mathcal P}_{(h)}^\mathcal {A}\) and check that \( \sum _{i \in [n]} {\varvec{z}}^{i,j} =0\), for all j. Since we require that \({\varvec{y}}\in \{0,1\}^\lambda \), the probability of passing the check while cheating is \(2^{-\lambda }\). More formally, let us assume that a corrupt \(P^*_j\) uses an inconsistent \(\varDelta ^{j,i}\) in the correlated OT instances with some \(i \not \in {\mathcal P}_{(h)}^\mathcal {A}\); then to pass the check \(P^*_j\) can send adversarial values in step 3c, i.e. when it broadcasts the values \(\bar{\varvec{y}}^j\), or in step 3d, when committing to the values \({\varvec{z}}^{j,i}\). Let \({\varvec{e}}_y \in \{0,1\}^\lambda \) denote an additive error so that \(\sum _{i \in [n]} \bar{\varvec{y}}^i={\varvec{y}}+ {\varvec{e}}_y\), and let \({\varvec{e}}_z \in \{0,1\}^\lambda \) denote an additive error so that \(\sum _{j \in {\mathcal P}_{(h)}^\mathcal {A}} \hat{\varvec{z}}^{j,i} = \sum _{j \in {\mathcal P}_{(h)}^\mathcal {A}} {\varvec{z}}^{j,i} + {\varvec{e}}_z\). Finally, let \(\delta ^{j,i}= \varDelta ^j + \varDelta ^{j,i}\). Then if the check passes, it holds that:

$$\begin{aligned} 0 =&\sum _i {\varvec{z}}^{i,j} = {\varvec{e}}_z + {\varvec{z}}^{j} + \sum _{i \ne j} {\varvec{z}}^{i,j} = {\varvec{e}}_z + ({\varvec{y}}+ {\varvec{e}}_y + {\varvec{y}}^j) \cdot \varDelta ^j + \sum _{i \ne j} {\varvec{y}}^i \cdot \varDelta ^{j,i}\\&\iff \; {\varvec{e}}_z + {\varvec{e}}_y \cdot \varDelta ^j = \sum _{i \ne j} {\varvec{y}}^i \cdot \delta ^{j,i}, \end{aligned}$$

which implies that the additive errors \({\varvec{e}}_z\) and \({\varvec{e}}_y\) that make the above equation equal to zero depend on the \({\varvec{y}}^i\) values, and that the adversary has to guess at least one of these values in order to pass the check. This event happens with probability \(2^{-\lambda }\), since the only information the adversary has about these values is that they are uniform additive shares of \({\varvec{y}}\), due to the randomization in step 3c.   \(\square \)

Claim 5.2

Assuming a non-abort execution, then for every corrupted party \(P_j, j \in {\mathcal P}_{(1)}^\mathcal {A}\), all \(x_\iota ^{i,j}\) are consistent.

Proof

We need to check that a corrupt \(P^*_j\) cannot input inconsistent \(x^{j,i}\) to different honest parties without being caught. For every ordered pair of parties \((P_i,P_j)\), we can define \(P_j\)’s MAC \({\varvec{m}}^{j,i}[{\varvec{y}}]\) and \(P_i\)’s key \({\varvec{k}}^{i,j}[{\varvec{y}}]\) respectively as

$$\begin{aligned}&\sum _{\iota =1}^m \chi _{\iota } \cdot {\varvec{m}}^{j,i}[x_\iota ] + \sum _{k=1}^\lambda X^{k -1}\cdot {\varvec{m}}^{j,i}[r_{k}] \quad {\text { and }}\\&\sum _{\iota =1}^m \chi _{\iota } \cdot {\varvec{k}}^{i,j}[x_\iota ] + \sum _{k=1}^\lambda X^{k -1}\cdot {\varvec{k}}^{i,j}[r_{k}] \;. \end{aligned}$$

A corrupt \(P_j\) can commit to incorrect MACs \(\hat{\varvec{z}}^{j,i}\), so that \(\hat{\varvec{z}}^{j,i} = {\varvec{z}}^{j,i} + {\varvec{e}}^{j,i}_{z}\) and \(\hat{\varvec{y}}^j = {\varvec{y}}^{j,i} + {\varvec{e}}^{j,i}_{y}\). For the check to pass, it must hold that:

$$ {\varvec{z}}^{j,i} + {\varvec{e}}_{z}^{j,i} = {\varvec{k}}[{\varvec{y}}]^{i,j} + ({\varvec{y}}^{j,i} + {\varvec{e}}^j_{y}) \cdot \varDelta ^i, $$

which happens if and only if:

$$\begin{aligned}&{\varvec{e}}_{z}^{j,i} + ({\varvec{y}}^{j,i} + {\varvec{e}}^j_{y}) \cdot \varDelta ^i = {\varvec{m}}^{j,i}[{\varvec{y}}] + {\varvec{k}}^{i,j}[{\varvec{y}}]\\&= \big (\sum _{\iota =1}^m \chi _{\iota } \cdot (x_\iota ^j + \delta ^{j,i}_\iota ) + \sum _{k=1}^\lambda X^{k -1}\cdot (r_{k}^j+\delta '^{j,i}_k)\big )\cdot \varDelta ^i\\&\iff {\varvec{e}}_{z}^{j,i} = \big ({\varvec{y}}^{j,i} + {\varvec{e}}^j_{y} + \sum _{\iota =1}^m \chi _{\iota } \cdot (x_\iota ^j + \delta ^{j,i}_\iota ) + \sum _{k=1}^\lambda X^{k -1}\cdot (r_{k}^j+\delta '^{j,i}_k)\big )\cdot \varDelta ^i\\&= ({\varvec{e}}^j_{y} + \sum _{\iota =1}^m \chi _{\iota } \cdot \delta ^{j,i}_\iota + \sum _{k=1}^\lambda X^{k -1}\cdot \delta '^{j,i}_k) \cdot \varDelta ^i. \end{aligned}$$

Then there are two cases for which the adversary can pass the check:

1. In case \({\varvec{e}}_z^{j,i} = ({\varvec{e}}^j_{y} + \sum _{\iota =1}^m \chi _{\iota } \cdot \delta ^{j,i}_\iota + \sum _{k=1}^\lambda X^{k -1}\cdot \delta '^{j,i}_k) \cdot \varDelta ^i \ne 0\), the adversary needs to guess \(\varDelta ^i\), which can only happen with probability \(2^{-\ell }\). Note that in order to pass this check the adversary needs to guess all honest parties’ keys. This is due to the fact that a corrupted \(P_j\) opens the same \(\hat{\varvec{y}}^j\) to all parties, so if it cheats and provides an inconsistent value then it must pass the above check with respect to all honest parties. Therefore, the overall probability of passing this check is \(2^{-\ell h} \le 2^{-\lambda }\).

2. In case \({\varvec{e}}_{z}^{j,i}=0\) and \({\varvec{e}}^j_{y} = \sum _{\iota =1}^m \chi _{\iota } \cdot \delta ^{j,i}_\iota + \sum _{k=1}^\lambda X^{k -1}\cdot \delta '^{j,i}_k\), for all \(i \not \in {\mathcal P}_{(1)}^\mathcal {A}\): assume that there is at least one \(i \not \in {\mathcal P}_{(h)}^\mathcal {A}\) such that \(\delta ^{j,i}_\iota =\delta '^{j,i}_k=0\) (recall that we view the inputs of \(P_j\) in the interaction with party \(P_{j_1}\) as the ‘correct’ inputs, so there must be at least one party for which this condition holds). This implies that \({\varvec{e}}^j_{y} = 0\) as well. Thus, for every \(i\notin {\mathcal P}_{(h)}^\mathcal {A}\cup \{j_1\}\) it must hold that

    $$ 0 = \sum _{\iota =1}^m \chi _{\iota } \cdot \delta ^{j,i}_\iota + \sum _{k=1}^\lambda X^{k -1}\cdot \delta '^{j,i}_k. $$

    Since each \(\chi _\iota \) is uniformly random in \(\mathbb {F}_{2^\lambda }\) and independent of the \(\delta _\iota ^{j,i}, \delta '^{j,i}_\iota \) values, it is easy to see that this only holds with probability \(2^{-\lambda }\) if any \(\delta _\iota ^{j,i}\) is non-zero.   \(\square \)

In the full version we prove the following theorem.

Theorem 5.1

Protocol \(\varPi _{\scriptstyle \mathsf {aBit}}^{{\mathcal P}_{(h)}, {\mathcal P}_{(1)}, m,\ell }\) securely implements the functionality \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}^{{\mathcal P}_{(h)}, {\mathcal P}_{(1)}, m,\ell }\) in the hybrid model with the random correlated OT functionality of Fig. 4.

5.3 Efficiency Analysis

We now analyse the efficiency of our protocol and compare it with the previous best known approach to secret-shared bit authentication. When there are n parties with h honest, the previous best approach would be to use the standard TinyOT-style MAC scheme (as in [WRK17b, HSS17]) inside a committee of size \(n-h+1\) parties, to guarantee at least one honest party. Here, the MACs must be of length at least \(\lambda \), and the amortized communication complexity can be around \(\lambda (n-h+1)(n-h)\) bits per authenticated bit. In contrast, in our scheme we have two committees of sizes \(n_1\) and \(n_2\), with h and 1 honest party, respectively. If we suppose the committees are deterministically chosen from a set of n parties with h honest, then we get \(n_1 = n\) and \(n_2 = n-h+1\). To ensure security of the MAC scheme we need MACs of length \(\ell \ge \lambda /h\), for statistical security \(\lambda \). This gives an amortized complexity for creating a MAC of around \(\ell n_1n_2 = \lambda n (n-h+1)/h\) bits. Compared with the TinyOT approach, this gives a reduction in communication of \(h(n-h)/n\) times in our protocol. This is maximized when \(h=n/2\), with a n/4 times reduction in communication cost over TinyOT, and for smaller h we still have savings for all \(h > 1\).
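
The estimate below compares the two amortized costs for a few \((n,h)\) pairs taken from Sect. 1; it ignores constants and lower-order terms, exactly as in the discussion above, and the parameter values are illustrative.

```python
# Rough comparison of amortized bits per authenticated bit: committee TinyOT
# (lambda-bit MACs among the n-h+1 parties of P_(1)) versus our two-committee
# scheme with ceil(lambda/h)-bit keys.
from math import ceil

LAMBDA = 64

def tinyot_committee_bits(n: int, h: int) -> int:
    c = n - h + 1                           # committee with at least 1 honest party
    return LAMBDA * c * (c - 1)

def short_key_bits(n: int, h: int) -> int:
    ell = ceil(LAMBDA / h)                  # short MAC key length
    n1, n2 = n, n - h + 1                   # |P_(h)| and |P_(1)|
    return ell * n1 * n2

for n, h in [(30, 12), (100, 30), (200, 40)]:
    ratio = tinyot_committee_bits(n, h) / short_key_bits(n, h)
    print(f"n={n}, h={h}: TinyOT {tinyot_committee_bits(n, h)}, "
          f"ours {short_key_bits(n, h)}, ratio {ratio:.2f}")
# The ratio approaches h*(n-h)/n, i.e. up to n/4 when h is close to n/2.
```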

Fig. 7. Protocol for authentication of random shared bits using committees

6 Actively Secure MPC Protocol with Short Keys

Similarly to prior constructions such as [DPSZ12, NNOB12, FKOS15, KOS16], our protocol is in the pre-processing model where the main difference is that the computation is carried out via two random committees \({\mathcal P}_{(h)}\) and \({\mathcal P}_{(1)}\). The preprocessing phase is function and input independent, and provides all the correlated randomness needed for the online phase where the function is securely evaluated.

6.1 The Online Phase

Our online protocol, shown in Fig. 8, runs mostly as that of [DPSZ12, DKL+13] within a small committee \({\mathcal P}_{(1)}\subseteq {\mathcal P}\) with at least 1 honest party. The main difference is that we need the help of the bigger \({\mathcal P}_{(h)}\subseteq {\mathcal P}\) committee with at least h honest parties to authenticate the inputs of any \(P_i \in {\mathcal P}\) using the \([\cdot ]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\)-representation before converting them to the more compact \(\llbracket \cdot \rrbracket \)-representation described in Sect. 3.2.

Fig. 8. The Boolean MPC protocol

6.2 The Preprocessing Phase

The task of \(\mathcal {F}_{\scriptstyle \mathsf {Preprocessing}}\) is to create random authenticated bits under the \([\cdot ]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\)-representation and random authenticated triples under the compact \(\llbracket \cdot \rrbracket \)-representation.

7 Triple Generation

Here we present our triple generation protocol implementing the functionality described in Fig. 9. First, protocol \(\varPi _{\scriptstyle \mathsf {HalfAuthTriple}}\) (Fig. 11) implements the functionality \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}\) (Fig. 10) to compute cross terms in triples: each party \(P_i \in {\mathcal P}_{(h)}\) inputs random shares \(y^i_k, k \in [m]\), and committees \({\mathcal P}_{(h)}, {\mathcal P}_{(1)}\) obtain random representations \([x_k]_\varDelta \) as well as shares of the cross terms defined by \(\sum _{i \in {\mathcal P}_{(h)}} \sum _{j \in {\mathcal P}_{(1)}\setminus \{P_i\}} x_k^j \cdot y_k^i\), \(k \in [m]\).

Given this intermediate functionality, protocol \(\varPi _{\scriptstyle \mathsf {Triple}}\) (Fig. 12) implements \(\mathcal {F}_{\scriptstyle \mathsf {Triple}}^{m,\ell }\) (Fig. 9), computing correct authenticated and non-leaky triples such that \((\sum _{j \in {\mathcal P}_{(1)}} x_k^j) \cdot (\sum _{j \in {\mathcal P}_{(1)}} y_k^j) = \sum _{j \in {\mathcal P}_{(1)}} z_k^j\). Checking correctness and removing leakage is achieved using classic cut-and-choose and bucketing techniques. Note that even though the final triples are under the compact \(\llbracket \cdot \rrbracket \)-representation, we produce them first using \([\cdot ]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\)-representations in order to generate MACs more efficiently and to allow an efficient implementation of \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}\).

It is crucial to note that the security of \(\varPi _{\scriptstyle \mathsf {HalfAuthTriple}}\) is based on the hardness of RSD, and for this reason the number of triples r generated by this protocol depends on the security of RSD. So while essentially an unlimited number of random correlated OTs and random authenticated bits can be produced as described in previous sections, a naive use of short keys would actually result in an upper bound on the number of triples that can be produced securely. To fix this issue, during \(\varPi _{\scriptstyle \mathsf {HalfAuthTriple}}\) we make the parties ‘switch the correlation’ on representations \([x]_{\varDelta }\), so they output a new representation under an independent correlation \([x]_{\tilde{\varDelta }}\), with \(\varDelta \ne \tilde{\varDelta }\) being the value relevant to the RSD assumption. Finally, the fact that \(\tilde{\varDelta }\) is short, combined with the adversary's possibility of querying some predicates about it, requires the reduction to use the interactive, leaky version of RSD from Definition 2.1.

Fig. 9. Functionality for triples generation.

7.1 Half Authenticated Triples

Here we show how \(\varPi _{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell }\) securely computes cross terms in triples. The main difficulty arises from modelling the leakage due to using short keys in the real world, and proving that it cannot be distinguished from uniformly random. Looking at individual parties, security relies on the fact that in step 6a of the protocol \(s_k^{i,j}\) is a fresh, random sharing of zero and hence \(y_k^{i,j}\) is perfectly masked. Nevertheless, when considering the joint leakage from all honest parties, the DRSD assumption comes into play and a more careful argument is required.

Security is shown in the following theorem, which is proved in the full version.

Fig. 10. Functionality for half authenticated triples

Fig. 11. Protocol for half authenticated triples

Theorem 7.1

Protocol \(\varPi _{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell }\) securely implements \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell }\) in the \((\mathcal {F}_{\scriptstyle \mathsf {aBit}}, \mathcal {F}_{\scriptstyle \mathsf {Zero}})\)-hybrid model, as long as the leaky DRSD problem of Definition 2.1 is hard.

7.2 Correct Non-leaky Authenticated Triples

Here we describe the protocol \(\varPi _{\scriptstyle \mathsf {Triple}}\) (Fig. 12) to create m correct random authenticated triples with compact MACs \(\llbracket x_k \rrbracket , \llbracket y_k \rrbracket , \llbracket z_k \rrbracket \), \(k \in [m]\).

First, parties in \({\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}\) call \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\), obtaining \(m'=m \cdot B^2 +c\) random authenticated bits \(\{[y_k]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\}_{k \in [m']}\), where B and c are parameters of the sub-protocol \(\varPi _{\scriptstyle \mathsf {TripleBucketing}}\) (Fig. 13). Then, each \(P_j \in {\mathcal P}_{(1)}\) reshares their values \(y_k^j\) to parties in \({\mathcal P}_{(h)}\), obtaining \(\{[\hat{y}_k]^{{\mathcal P}_{(h)},{\mathcal P}_{(h)}}\}_{k \in [m']}\) such that \(\sum _{i \in {\mathcal P}_{(h)}} \hat{y}^i_k = y_k\), \(k \in [m']\).

This allows \({\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}\) to call \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell } \,\) \(\hat{m} = m'/r\) times, on inputs \(\{\hat{y}_{(\iota - 1)\cdot r +k}\}_{k \in [r]}\), for each \(\iota \in [\hat{m}]\). The outputs of each of these calls are the sharings \(v^{\tau }_{(\iota - 1) \cdot r +k}\), \(\tau \in {\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}\) and \(k \in [r]\), of r cross-term products, i.e.

$$ \sum _{\tau \in {\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}} v^{\tau }_{(\iota - 1) \cdot r +k} = \sum _{i \in {\mathcal P}_{(h)}} \sum _{j \in {\mathcal P}_{(1)}} x^j_{(\iota -1) \cdot r +k} \cdot \hat{y}^i_{(\iota - 1)\cdot r +k}. $$

Notice that the number r of cross terms computed by \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell }\) depends on the leaky DRSD problem, and for this reason the protocol needs to call the functionality \(\hat{m}\) times to obtain all the \(m'\) outputs it needs.

After this, parties in \({\mathcal P}_{(h)}\) reshare all the \(v^i_k\), \(k \in [m']\), to \({\mathcal P}_{(1)}\), so that each \(P_j \in {\mathcal P}_{(1)}\) gets \(\hat{v}_k^j, k \in [m']\), where

$$\begin{aligned} \sum _{j \in {\mathcal P}_{(1)}} \hat{v}_k^j = \sum _{j \in {\mathcal P}_{(1)}} x_k^j \sum _{i \in {\mathcal P}_{(1)}\setminus j} y_k^i = \sum _{\tau \in {\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}} v_k^\tau , \end{aligned}$$
(1)

so that each \(P_j \in {\mathcal P}_{(1)}\) can locally add the term \(x_k^j \cdot y_k^j\) to \(\hat{v}_k^j\), obtaining \(z_k^j\), \(k \in [m']\).

Finally, \({\mathcal P}_{(h)}\cup {\mathcal P}_{(1)}\) call \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) to obtain \([z_k]^{{\mathcal P}_{(h)},{\mathcal P}_{(1)}}\), and run the \(\varPi _{\scriptstyle \mathsf {TripleBucketing}}\) subprotocol. This subprotocol is similar to the bucket-based cut-and-choose technique introduced by Larraia et al. [LOS14] and optimized by Frederiksen et al. [FKOS15], but adapted to run with two committees. It takes as input \(m'= B^2 \cdot m +c\) triples. First, in Steps I and II, it ensures that all the triples are correctly generated, sacrificing \(B\cdot m \cdot (B-1)+c\) triples in the process; then (Step III) it uses a random bucketing technique to remove potential leakage on the \(x_k\) values, obtaining m private and correct triples. All the MACs on previously opened values are eventually checked (Step IV) by calling the Batch Check command in \(\varPi _{\scriptstyle \mathsf {[Open]}}\) (Fig. 1). Finally, in that last step, the remaining triples are converted to SPDZ-style triples in \({\mathcal P}_{(1)}\) using \(\varPi _{\scriptstyle \mathsf {MAC Compact}}\).
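To make the triple accounting concrete, the following sketch (our own illustration, not part of the protocol; the function name and the example parameters are placeholders) tracks how the \(m' = B^2 m + c\) input triples are consumed across the three steps, leaving m final triples.

```python
def triple_counts(m: int, B: int = 3, c: int = 3) -> dict:
    """Track triple consumption in the bucketing cut-and-choose (illustrative only)."""
    m_prime = B * B * m + c           # triples entering the subprotocol
    after_open = m_prime - c          # Step I: open and discard c triples
    after_check = after_open // B     # Step II: sacrifice B-1 triples per bucket of B
    final = after_check // B          # Step III: combine every bucket of B into 1 triple
    assert after_open - after_check == B * m * (B - 1)   # triples sacrificed in Step II
    assert final == m
    return {"m_prime": m_prime, "after_open": after_open,
            "after_check": after_check, "final": final}

print(triple_counts(m=1000))
# {'m_prime': 9003, 'after_open': 9000, 'after_check': 3000, 'final': 1000}
```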

Fig. 12. Protocol for triples

Fig. 13. Checking correctness and removing leakage from triples with cut-and-choose

Correctness follows easily from the discussion above:

$$\begin{aligned} \sum _{j \in {\mathcal P}_{(1)}} z^j_k = \sum _{j \in {\mathcal P}_{(1)}} \bigl ( x_k^j \cdot {y}_k^j + \hat{v}_k^j \bigr ), \end{aligned}$$
(2)

where \(\hat{v}_k^j\) is the re-sharing inside \({\mathcal P}_{(1)}\) of \(\mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell }\)’s output. More precisely, using Eq. 1 we can rewrite Eq. 2 as follows:

$$\begin{aligned}&\sum _{j \in {\mathcal P}_{(1)}} z^j_k = \sum _{j \in {\mathcal P}_{(1)}} x_k^j \cdot {y}_k^j + \sum _{j \in {\mathcal P}_{(1)}} x_k^j \cdot \sum _{i \in {\mathcal P}_{(1)}\setminus j} y_k^i\\&= \sum _{j \in {\mathcal P}_{(1)}} x_k^j \cdot \bigl (y_k^j + \sum _{i \in {\mathcal P}_{(1)}\setminus j} y_k^i \bigr ) = \bigl (\sum _{j \in {\mathcal P}_{(1)}} x_k^j \bigr ) \cdot \bigl (\sum _{j \in {\mathcal P}_{(1)}} y_k^j \bigr ). \end{aligned}$$
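As a quick sanity check of this derivation, one can simulate the local computation over \(\mathbb {F}_2\) with random shares; the sketch below (our own test harness, not part of the protocol) verifies that the \(z\)-shares reconstruct to the product of the reconstructed \(x\) and \(y\).

```python
import secrets

def check_triple_correctness(n1: int = 4) -> None:
    """Check Eq. (2) over GF(2): the z-shares of P_(1) reconstruct x_k * y_k."""
    x = [secrets.randbits(1) for _ in range(n1)]   # x_k^j shares held in P_(1)
    y = [secrets.randbits(1) for _ in range(n1)]   # y_k^j shares held in P_(1)
    # v_hat^j: a valid (non-random, for testing only) sharing of the cross terms
    # sum_j x^j * sum_{i != j} y^i.
    v_hat = [(x[j] * sum(y[i] for i in range(n1) if i != j)) % 2 for j in range(n1)]
    # Local step of each P_j: z^j = x^j * y^j + v_hat^j over GF(2).
    z = [(x[j] * y[j] + v_hat[j]) % 2 for j in range(n1)]
    # Reconstructed z equals the product of the reconstructed x and y.
    assert sum(z) % 2 == (sum(x) % 2) * (sum(y) % 2)

for _ in range(1000):
    check_triple_correctness()
```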

Security is shown in the following theorem, which is proved in the full version.

Theorem 7.2

Protocol \(\varPi _{\scriptstyle \mathsf {Triple}}\) securely implements \(\mathcal {F}_{\scriptstyle \mathsf {Triple}}^{m,\ell }\) in the \((\mathcal {F}_{\scriptstyle \mathsf {Rand}}, \mathcal {F}_{\scriptstyle \mathsf {aBit}}, \mathcal {F}_{\scriptstyle \mathsf {HalfAuthTriple}}^{{\mathcal P}_{(h)},{\mathcal P}_{(1)},r,\ell })\)-hybrid model.

Parameters: Based on the analysis from previous works [FKOS15, FLNW17, WRK17a], we choose \(B = 3\) or \(B = 4\) to guarantee security except with probability \(2^{-64}\) in our estimates. The additional cut-and-choose parameter c can be as low as 3, so it is insignificant compared to the \(m' = B^2m + c\) triples initially needed to produce m final triples.

8 Complexity Analysis

We now analyse the complexity of our protocol and compare it with the state-of-the-art actively secure MPC protocols with dishonest majority. As our online phase is essentially the same as (or even slightly better than) that of SPDZ and TinyOT combined with committees, we focus on the preprocessing phase.

Furthermore, since the underlying computational primitives in our protocol are very simple, the communication cost of the triple generation algorithm is the overall bottleneck. We therefore compare the communication cost of our triple generation algorithm with that of the corresponding multi-party TinyOT protocol by Wang et al. [WRK17b].

The main cost for producing m triples in this work is \(3mB^2\) calls to \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) using keys \(\varDelta ^i \in \{0,1\}^\ell \), plus \(m B^2\) calls to \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) using new keys \(\varDelta ^i + \tilde{\varDelta }^i \in \{0,1\}^\ell \) every r triples. The latter calls under new keys are more expensive, as the setup cost they incur is roughly \(128 \cdot \ell \cdot |{\mathcal P}_{(h)} | \cdot |{\mathcal P}_{(1)} |\) bits and is amortized only across those r triples. Measuring the cost of \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\) after setup as \(|{\mathcal P}_{(h)} | \cdot |{\mathcal P}_{(1)} | \cdot \ell \) bits, we obtain an amortized communication complexity of \(B^2 \cdot |{\mathcal P}_{(h)} | \cdot |{\mathcal P}_{(1)} | \cdot \ell \cdot (3 + (r + 128)/r)\) bits per triple.
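For illustration, this amortized formula can be evaluated directly. The sketch below is our own calculation; the committee sizes, key length \(\ell \) and batch size r used in the example are placeholders, not values taken from the paper's tables.

```python
def ours_bits_per_triple(n_h: int, n_1: int, ell: int, r: int, B: int = 3) -> float:
    """Amortized bits per triple: 3*B^2 aBit calls under the fixed keys plus B^2
    calls under fresh keys, whose 128*ell*n_h*n_1-bit setup is amortized over r triples."""
    abit_cost = n_h * n_1 * ell                   # one aBit call after setup
    fresh_key_cost = abit_cost * (r + 128) / r    # aBit call plus amortized re-setup
    return B * B * (3 * abit_cost + fresh_key_cost)

# Hypothetical example: |P_(h)| = 400, |P_(1)| = 150, ell = 4, r = 64.
print(ours_bits_per_triple(n_h=400, n_1=150, ell=4, r=64) / 1000, "kbit per triple")
```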

The main cost for producing m triples in [WRK17b] is 3mB calls to their long-key (128-bit) equivalent of \(\mathcal {F}_{\scriptstyle \mathsf {aBit}}\), plus sending 2mB outputs of a hash function. On the other hand, all their communication takes place within the smaller committee \({\mathcal P}_{(1)}\). Their main (amortized) cost is then \(B \cdot |{\mathcal P}_{(1)} |^2 \cdot 128 \cdot (3+2)\) bits per triple. Define \(\alpha = |{\mathcal P}_{(h)} | / |{\mathcal P}_{(1)} |\). We can then conclude that the improvement in communication complexity of our work with respect to WRK is roughly a multiplicative factor of:

$$\begin{aligned} \frac{128 \cdot 5}{\alpha \cdot B \cdot \ell \cdot (4 + 128/r)} \end{aligned}$$
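The comparison can likewise be made concrete with a short calculation (our own sketch; as above, \(\alpha \), \(\ell \) and r in the example are placeholder values, not entries of Table 1).

```python
def wrk_bits_per_triple(n1: int, B: int = 3) -> float:
    """Amortized cost of multi-party TinyOT [WRK17b]: B * n1^2 * 128 * (3+2) bits per triple."""
    return B * n1 * n1 * 128 * (3 + 2)

def improvement_factor(alpha: float, B: int, ell: int, r: int) -> float:
    """Closed-form ratio 128*5 / (alpha * B * ell * (4 + 128/r)) from the text."""
    return 128 * 5 / (alpha * B * ell * (4 + 128 / r))

# Hypothetical example with |P_(h)| = 400, |P_(1)| = 150, ell = 4, r = 64:
print(wrk_bits_per_triple(n1=150) / 1000, "kbit per triple for WRK")
print(improvement_factor(alpha=400 / 150, B=3, ell=4, r=64), "x improvement")
```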
Table 1. Amortized communication cost (in kbit) of producing triples in our protocol and WRK.

Given the total number of parties n and honest parties h, we first consider the case of two deterministic committees \({\mathcal P}_{(h)}\) and \({\mathcal P}_{(1)}\) such that \(|{\mathcal P}_{(h)}| = n\) and \(|{\mathcal P}_{(1)}| = n-h+1\), respectively. To give a fair comparison, we have chosen the parameters in such a way that \(n-h+1\) in our protocol is equal to n in WRK. The estimated amortized costs (in kbit) of producing triples are given in Table 1. Notice that, given n and h, the key length \(\ell \) and the number of triples r are set according to the corresponding leaky-DRSD instance with \(\kappa \) bits of security. We consider \(\kappa =128\) and bucket sizes \(B=3\) and \(B=4\).

As we can see from the table, the improvement of our protocol over WRK grows as (n, h) increase (and \(\ell \) consequently decreases). The key length greatly influences the communication cost, as a smaller \(\ell \) significantly reduces the cost of computing the pairwise OTs needed both for triple generation and for authentication.

When n is larger we can use random committees \({\mathcal P}_{(h)}\) and \({\mathcal P}_{(1)}\) such that, except with negligible probability \(2^{-\lambda }\), \({\mathcal P}_{(h)}\) has at least \(h_2 \le h\) honest parties and \({\mathcal P}_{(1)}\) has at least one honest party. Letting \(|{\mathcal P}_{(h)}| =n_2\), \(|{\mathcal P}_{(1)}| = n_1\) and \(\lambda =64\), Table 2 compares the communication cost of our triple generation protocol with random committees against WRK, where we take \(n=n_1\).
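One way to estimate suitable random committee sizes is a standard hypergeometric tail bound; the sketch below (our own illustration, not the paper's exact analysis) bounds the probability that a uniformly chosen committee contains fewer honest parties than required. For \({\mathcal P}_{(h)}\) one would use min_honest = \(h_2\) instead of 1.

```python
from math import comb

def prob_too_few_honest(n: int, t: int, committee: int, min_honest: int) -> float:
    """Hypergeometric tail: probability that a uniformly random committee of the given
    size, drawn from n parties of which t are corrupt, has fewer than min_honest
    honest members."""
    total = comb(n, committee)
    bad = sum(comb(n - t, i) * comb(t, committee - i)
              for i in range(min_honest) if 0 <= committee - i)
    return bad / total

# Hypothetical example: n = 500 parties, t = 350 corrupted. Smallest committee P_(1)
# containing at least one honest party except with probability at most 2**-64:
n, t = 500, 350
n1 = next(c for c in range(1, n + 1) if prob_too_few_honest(n, t, c, 1) <= 2 ** -64)
print("|P_(1)| >=", n1)
```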

Table 2. Amortized costs in kbit for triple generation with n parties and h honest parties using two random committees of sizes \(n_1,n_2\) with 1 and \(h_2\) honest parties.
Fig. 14. Varying the larger committee size with total number of parties and corruptions \((n,t)=(500,350)\) and (1000, 850).

Varying the size of the committee \({\mathcal P}_{(h)}\), and the number \(h_2\) of honest parties within it, we obtain a tradeoff: a larger committee size \(n_2\) yields lower overall communication complexity, but on the other hand more parties are interacting, which may introduce networking bottlenecks. Figure 14 illustrates this with 500 and 1000 parties in total and 350 and 850 corruptions, respectively.