1 Introduction

System designers who strive to remove single points of failure often rely on tools provided by threshold cryptography [21, 58] and secure multi-party computation [20, 34]. In this paper, we study distributed key generation (DKG) [31, 53], a method from threshold cryptography that often plays an essential role during the setup of distributed systems, including Byzantine consensus [6, 64], time-stamping services [14, 61], public randomness beacons [29, 59], and data archive systems [45, 63]. A DKG enables a set of parties to generate a keypair such that any sufficiently large subset can perform an action that requires the secret key while any smaller subset cannot. To achieve this, a DKG essentially turns each party into a dealer for a verifiable secret sharing (VSS) scheme [19, 25, 54]. This process yields a single collective public key, generated in a distributed manner, with each party keeping a share of the secret key for themselves.

Current DKGs [27, 31, 32, 53] commonly require that all n parties broadcast \(\mathcal {O}(n)\)-sized messages that are then used by each party to verify the shares they received from their peers. This results in each party communicating \(\mathcal {O}(n^2)\) sized messages via broadcast. While some DKGs have \(\mathcal {O}(n)\) communication and verification per party, they rely on constant-sized polynomial commitment schemes that require trusted setup [42, 43]. In this work, we show how to reduce the size of the final DKG transcript to \(\mathcal {O}(n)\) by making the parties’ contributions aggregatable. This enables us to relay (partial) transcripts in an efficient and resilient manner, e.g., over gossip networks, ensuring that transcripts do not grow in size since aggregation can be done in a continuous manner. Aggregatability also enables us to refresh the transcript if and when shares get compromised.

Our DKG transcripts contain the information needed for parties to decrypt their secret shares. During aggregation it is therefore essential to ensure that only valid (partial) transcripts are aggregated. We achieve this by making our transcripts publicly-verifiable so that anybody receiving and aggregating transcripts can verify their correctness. Making the transcripts publicly-verifiable has several other advantages: It ensures that all parties can obtain their secret shares, even if they go offline momentarily, and also enables us to remove the “complaint rounds” that are used in previous DKGs to expose misbehaving parties. This improves overall latency, since fewer communication rounds are required, and reduces the protocol’s complexity from an implementation perspective.

A consequence of our approach is that the DKG secret key and its shares are group elements rather than field elements. While this prevents us from using it for many well-known cryptosystems, we demonstrate its applicability by introducing a verifiable unpredictable function (VUF) [23, 48] whose secret key is a group element, and prove its security in the random oracle model. Threshold VUFs are useful in the construction of verifiable random beacons, which themselves are invaluable in building proof-of-stake-based cryptocurrencies [33, 44]. To the best of our knowledge, our VUF is the first that takes a group element as the secret key, and its performance is also reasonable: our VUF output consists of 6 source group elements and can be verified using 10 pairings. We also provide further optimizations enabling us to reduce the VUF contributions in the threshold protocol to just 2 source group elements that can be verified with 3 pairings.

We also revisit the definitions for DKGs in the hope of reducing complexities and inefficiencies. In particular, previous definitions [31] required secrecy, in the sense that the output of the DKG must be indistinguishable from random. While this notion has the benefit of making the DKG modular (one can replace key generation with a DKG in any context), it is also difficult to realise. Indeed, Gennaro et al. [31] demonstrated that the popular Pedersen DKG does not have secrecy, and introduced an alternative and considerably less efficient protocol to achieve secrecy. Additionally, no one-round DKG can achieve secrecy because a rushing adversary (an adversary that plays last) can always influence the final distribution. Instead, we aim to prove that a DKG is robust (see Definition 6) and security-preserving (see Definition 8) in the sense that any adversary that breaks the security of a threshold version of the scheme (i.e., one using a DKG) also breaks the original security property.

Gennaro et al. [32] previously observed that the Pedersen DKG suffices to construct threshold Schnorr signatures [57]. Recently, Benhamouda et al. [10] found an attack on this approach when the adversary is concurrent. (Gennaro et al. had not considered concurrent adversaries.) Komlo and Goldberg [47] show that it is possible to avoid the attack but, in doing so, they lose robustness (e.g., if a single party goes offline a signature will not be produced). This raises the question of whether it is still safe to use the Pedersen DKG with other signature schemes such as BLS. In this paper, we provide a positive answer in the form of a security proof that holds concurrently and does not rely on rewinding the adversary. Specifically, we show that the Pedersen DKG is security-preserving with respect to any rekeyable encryption scheme, signature scheme, or VUF scheme where the sharing algorithm is the same as encryption or signing (see Definition 5).

Our contributions. In Sect. 5, we construct an aggregatable and publicly-verifiable DKG. The aggregation can be completed by any party (there are no additional secrets) and can also be done incrementally. The cost of verifying our transcripts is \(\mathcal {O}(n \log {n})\) whereas prior approaches were \(\mathcal {O}(n^2)\) [27]. If any user temporarily goes offline, they can still recover their secret shares. Dealing DKG shares takes \(\mathcal {O}(n \log {n})\) time and aggregation costs are \(\mathcal {O}(n)\).

We prove security of our DKG using a natural definition (see Sect. 3.6), which roughly states that, if it is possible to break a cryptosystem’s security game with a DKG swapped in, then it is possible to break that cryptosystem’s original security game that did not use the DKG for key generation. We further demonstrate that, counter-intuitively, it is possible to prove that a DKG realises this definition without needing a separate proof for each cryptosystem. Indeed, we show that any encryption scheme, signature scheme, or VUF that is rekeyable, and whose sharing algorithm is the same as encryption or signing, can be securely instantiated using a key-expressable DKG (see Definition 7). This includes El-Gamal encryption, BLS signatures, a new VUF we introduce in Sect. 7, and, we suspect, many others.

We further demonstrate the applicability of our techniques by showing that all three of the Pedersen DKG [53], the Fouque-Stern DKG [27], and our aggregatable DKG are key-expressable and thus can be used securely with rekeyable encryption schemes, signature schemes, and VUFs whose decryption/signing algorithms are the same as the algorithms to generate decryption/signature shares. Our proof allows for rushing adversaries and holds concurrently (i.e., with respect to an adversary that can open many sessions at the same time). We cannot cover Schnorr signatures, however, because their threshold variants do not appear to be rekeyable.

Our final contribution, in Sect. 8, is a Rust implementation of our aggregatable DKG to demonstrate its practicality by showing that its overheads are indeed linear. For example, the evaluation of our implementation shows that for 64/128/8192 nodes it takes 71 ms/137 ms/8,000 ms to share one secret and 359 ms/747 ms/42,600 ms to verify the corresponding transcript.

2 Related Work

Table 1. Complexities of prior DKG protocols with n parties, per party. In the “Broadcast” column, we count the number of broadcasts by size (either O(n) or O(1)-sized). “P2P” means the total size of the messages sent over public and private communication channels (excludes broadcast messages). “PV” means publicly-verifiable. “Verifier local” indicates the per-party time spent verifying their shares from other parties, while “global” indicates the time to verify the final DKG transcript.

We provide an asymptotic overview of the state-of-the-art for DKGs in Table 1. Here we assume that the threshold t is linear in n. Our comparisons consider the optimistic case in which there are no more than a constant number of complaints, for those protocols where complaints are relevant (recall that our protocol has no complaints).

Pedersen introduced the first efficient DKG protocol for discrete log-based cryptosystems [54], building on top of Feldman’s VSS [25]. Gennaro et al. [32] showed that Pedersen’s DKG does not generate uniformly distributed secrets, and proposed a protocol that does, but at the cost of lower efficiency. They also fix problems with the complaint phase in Pedersen’s DKG. Neji et al. [50] gave a more efficient protocol that ensures uniformity in Pedersen’s DKG. Kate [42] reduced the broadcast overhead per DKG party from \(\mathcal {O}(n)\) to \(\mathcal {O}(1)\) using their constant-sized polynomial commitment scheme [43]. However, their scheme depends on a trusted setup algorithm, the costs of which are not considered in Table 1. Trusted setup algorithms have a round complexity of t, and each of these rounds requires parties to broadcast \(\mathcal {O}(n)\)-sized messages [15, 37]. Unlike our protocol, all of these protocols rely on complaint rounds, are not publicly verifiable, and have \(\mathcal {O}(n^2)\) communication complexity.

Fouque and Stern [27] present a one-round, publicly-verifiable DKG that uses only public channels. However, their final transcript size is \(\mathcal {O}(n^2)\), whereas ours is \(\mathcal {O}(n)\) because we can aggregate. Furthermore, their security proof does not allow for rushing adversaries. While they do not measure performance, their use of Paillier encryption [52] is likely to make their DKG slow and to incur high communication costs. Nonetheless, unlike our DKG, theirs has the advantage of outputting secrets that are field, rather than group, elements.

Other works tackle the DKG problem from different angles. Canetti et al.’s DKG [16] has adaptive security, while ours is secure only against static adversaries that fix the set of corrupted parties before the protocol starts. Canny and Sorkin [17] study DKG protocols with poly-logarithmic communication and computation cost per party, but their protocol relies on a trusted dealer that permutes the parties before the protocol starts. Kate et al. [41, 42] and Kokoris-Kogias et al. [46] study DKG protocols in the asynchronous setting, unlike our work and most previous work. Schindler et al. [56] use the Ethereum blockchain to instantiate the synchronous broadcast channel that all DKG protocols mentioned so far, including ours, assume. Tomescu et al. [60] lower the computational cost of dealing in Kate et al.’s DKG [43] at the cost of a logarithmic increase in communication. Lastly, several works implement and benchmark synchronous, statically-secure DKG protocols for discrete log-based cryptosystems [22, 39, 40, 51, 56].

Abe et al. [1] observed that any fully structure-preserving signature scheme that depends solely on algebraic operations cannot be used as a VUF or VRF. This rules out a number of structure-preserving signatures [2,3,4, 62] as candidates for building VUFs; our VUF, which is not algebraic, is unaffected.

3 Definitions

3.1 Preliminaries

If x is a binary string then |x| denotes its bit length. If S is a finite set then |S| denotes its size and \(x\xleftarrow {\$}S\) denotes sampling a member uniformly from S and assigning it to x. We use \(\lambda \in \mathbb {N}\) to denote the security parameter and \(1^{\lambda }\) to denote its unary representation. Algorithms are randomized unless explicitly noted otherwise. “PPT” stands for “probabilistic polynomial time” and “DPT” for “deterministic polynomial time.” We use \(\boldsymbol{y}\leftarrow A(\boldsymbol{x};r)\) to denote running algorithm A on inputs \(\boldsymbol{x}\) and randomness r and assigning its output to \(\boldsymbol{y}\). We use \(\boldsymbol{y}\xleftarrow {\$}A(\boldsymbol{x})\) to denote \(\boldsymbol{y}\leftarrow A(\boldsymbol{x};r)\) for uniformly random r. We use \([A(\boldsymbol{x})]\) to denote the set of values that have non-zero probability of being output by A on input \(\boldsymbol{x}\). For two functions \(f,g:\mathbb {N}\rightarrow [0,1]\), we use \(f(\lambda )\approx g(\lambda )\) to denote \(|f(\lambda ) - g(\lambda )| = \lambda ^{-\omega (1)}\). We use code-based games in security definitions and proofs [9]. A game \(\mathsf {Sec}_\mathcal {A}(\lambda )\), played with respect to a security notion \(\mathsf {Sec}\) and adversary \(\mathcal {A}\), has a \(\textsc {main}\) procedure whose output is the output of the game. The notation \(\mathsf {Pr}[\mathsf {Sec}_\mathcal {A}(\lambda )]\) denotes the probability that this output is 1.

We formalize bilinear groups via a bilinear group sampler, which is an efficient deterministic algorithm \(\mathsf {GroupGen}\) that given a security parameter \(1^{\lambda }\) (represented in unary), outputs a tuple \(\mathsf {bp}= (p, \mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, e, g_1, \hat{h}_1)\) where \(\mathbb {G}_1\), \(\mathbb {G}_2\), \(\mathbb {G}_T\) are groups with order divisible by the prime \(p \in \mathbb {N}\), \(g_1\) generates \(\mathbb {G}_1\), \(\hat{h}_1\) generates \(\mathbb {G}_2\), and \(e :\mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) is a (non-degenerate) bilinear map. Galbraith et al. distinguish between three types of bilinear group samplers [28]. Type I groups have \(\mathbb {G}_1 = \mathbb {G}_2\) and are known as symmetric bilinear groups. Types II and III are asymmetric bilinear groups, where \(\mathbb {G}_1 \ne \mathbb {G}_2\). Type II groups have an efficiently computable homomorphism \(\psi : \mathbb {G}_2 \rightarrow \mathbb {G}_1\), while Type III groups do not have an efficiently computable homomorphism in either direction. Certain assumptions are provably false with respect to certain group types (e.g., \(\mathsf {SXDH}\) only holds for Type III groups), and we work only with Type III groups.
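The verification equations later in the paper use only the bilinearity and non-degeneracy of e, which in this notation read:

$$ e(g_1^{a}, \hat{h}_1^{b}) = e(g_1, \hat{h}_1)^{ab} \ \text { for all } a, b \in \mathbb {F}, \qquad \text {and} \qquad e(g_1, \hat{h}_1) \ne 1_{\mathbb {G}_T}. $$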

3.2 Communication and Threat Models

In this section we discuss our communication and threat models.

Synchrony: We assume perfect synchrony. There is a strict time bound between rounds. All messages (honest and adversarial) within a round will be seen by all parties by the end of the round.

Communication channel: We assume the existence of a broadcast channel for sending messages. If a non-faulty party broadcasts a message then it will be seen by everyone by the end of the round. It is not possible to forge messages from non-faulty parties.

Adversarial threshold: We denote by t the adversarial threshold; i.e., the number of parties that the adversary can corrupt. The total number of parties is denoted by n. We set no specific bounds on the adversarial threshold because a rational adversary might prefer to attack the secrecy of the DKG over blocking the communication channels [5, 30].

Assumptions on the adversary: Our security proofs are given with respect to static adversaries, meaning the adversary must state at the start of the security game all of the parties that it has corrupted. We allow the adversary to control the ordering of messages within a round, and in particular the adversary can wait to receive all messages within a round before it broadcasts its message (this is called a rushing adversary). The adversary can also choose not to participate at all.

Byzantine adversary: A Byzantine adversary is a malicious entity that may deviate arbitrarily from the protocol.

Crashed party: A crashed party is a party that has gone offline, e.g., due to a faulty internet connection. After a party has crashed, it will not send any more messages.

3.3 Assumptions

Our security proofs are provided in the random oracle model; i.e., there exists a simulator that can program the output of a hash function, provided that its chosen outputs are indistinguishable from random.

We rely on the \(\mathsf {SXDH}\) assumption [7, 8], which asserts that the DDH assumption holds in both source groups of a Type III bilinear group. Informally, for \(\mathbb {G}_1\), it states that given \(g_1^\alpha \) and \(g_1^{\beta }\) it is hard to distinguish \(g_1^{\alpha \beta }\) from random (and symmetrically for \(\mathbb {G}_2\)).

The \(\mathsf {BDH}\) assumption is an extension of the CDH assumption to Type III bilinear groups [12]. Informally, it states that given \(g_1^\alpha , g_1^{\beta }, \hat{h}_1^{\gamma }, \hat{h}_1^{\alpha \gamma }\) it is hard to compute \(e(g_1, \hat{h}_1)^{\alpha \beta }\).
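Spelled out in the notation above (stating only the \(\mathbb {G}_1\) half of \(\mathsf {SXDH}\); the \(\mathbb {G}_2\) half is symmetric), these informal statements require that for all PPT adversaries \(\mathcal {A}\), and for \(\alpha , \beta , \gamma , r \xleftarrow {\$}\mathbb {F}\),

$$ \left| \Pr \left[ \mathcal {A}(\mathsf {bp}, g_1^{\alpha }, g_1^{\beta }, g_1^{\alpha \beta }) = 1\right] - \Pr \left[ \mathcal {A}(\mathsf {bp}, g_1^{\alpha }, g_1^{\beta }, g_1^{r}) = 1\right] \right| \le \mathsf {negl}(\lambda ) \quad (\mathsf {SXDH}), $$

$$ \Pr \left[ \mathcal {A}(\mathsf {bp}, g_1^{\alpha }, g_1^{\beta }, \hat{h}_1^{\gamma }, \hat{h}_1^{\alpha \gamma }) = e(g_1, \hat{h}_1)^{\alpha \beta }\right] \le \mathsf {negl}(\lambda ) \quad (\mathsf {BDH}). $$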

3.4 Verifiable Unpredictable Functions (VUFs)

A VUF allows a party with a secret key to compute a deterministic (keyed) function and prove to an external verifier that the result is correct. The notion is related to signatures, with the extra requirement that the output of the signer must be unique, even against an adversary that can choose the secret key. We have made the following changes to prior definitions [23] in order to better suit our setting: (1) we include a global setup algorithm to generate a common reference string; (2) we include a derive algorithm to map the prover’s output onto the unique function output.

Definition 1 (Verifiable Unpredictable Function)

Let \(\varPi = (\mathsf {VUF.Setup} , \mathsf {VUF.Gen}, \mathsf {VUF.Eval} , \mathsf {VUF.Sign}, \mathsf {VUF.Derive}, \mathsf {VUF.Ver})\) be the following set of efficient algorithms:

  • \(\mathsf {crs}_{\mathsf {vuf}} \leftarrow \mathsf {VUF.Setup} (1^{\lambda }):\) a DPT algorithm that takes as input the security parameter and outputs a common reference string.

  • \((\mathsf {pk}, \mathsf {sk}) \xleftarrow {\$}\mathsf {VUF.Gen} (\mathsf {crs}_{\mathsf {vuf}})\): a PPT algorithm that takes as input a common reference string and returns a public key and a secret key.

  • \( \mathsf {out} \leftarrow \mathsf {VUF.Eval} (\mathsf {crs}_{\mathsf {vuf}}, \mathsf {sk}, m)\): a DPT algorithm that takes as input a common reference string, secret key, and message \(m \in \{0,1\}^{\lambda }\) and returns \(\mathsf {out} \in \{0,1\}^{\lambda }\).

  • \(\sigma \xleftarrow {\$}\mathsf {VUF.Sign} (\mathsf {crs}_{\mathsf {vuf}}, \mathsf {sk}, m)\): a PPT algorithm that takes as input a common reference string, secret key, and message, and returns a signature \(\sigma \).

  • \(\mathsf {out} \leftarrow \mathsf {VUF.Derive} (\mathsf {crs}_{{\mathsf {vuf}}}, \mathsf {pk}, m, \sigma )\): a DPT algorithm that takes as input a common reference string, public key, message and signature and returns \(\mathsf {out} \in \{0,1\}^{\lambda }\).

  • \(0/1 \leftarrow \mathsf {VUF.Ver} (\mathsf {crs}_{\mathsf {vuf}}, \mathsf {pk}, m, \sigma )\): a DPT algorithm that takes as input a common reference string, public key, message and signature and returns 1 to indicate acceptance and 0 to indicate rejection.

We say that \(\varPi \) is a verifiable unpredictable function (VUF) if it satisfies correctness, uniqueness, and unpredictability (defined below).

A VUF is correct if an honest signer always convinces an honest verifier and always outputs a signature such that the derive function outputs the correct value.

Definition 2 (Correctness)

A VUF is correct if for all \(\lambda \in \mathbb {N}\) and \(m \in \{0,1\}^{\lambda }\) we have that

$$ \Pr \left[ \begin{array}{l|l} \mathsf {crs}_\mathsf {vuf}\leftarrow \mathsf {Setup}(1^{\lambda }), & \mathsf {VUF.Derive} (\mathsf {crs}_\mathsf {vuf}, \mathsf {pk}, m, \sigma ) = \mathsf {VUF.Eval} (\mathsf {crs}_\mathsf {vuf}, \mathsf {sk}, m) \\ (\mathsf {pk}, \mathsf {sk}) \xleftarrow {\$}\mathsf {VUF.Gen} (\mathsf {crs}_\mathsf {vuf}), & \quad \wedge \ \mathsf {VUF.Ver} (\mathsf {crs}_\mathsf {vuf}, \mathsf {pk}, m, \sigma ) = 1 \\ \sigma \xleftarrow {\$}\mathsf {VUF.Sign} (\mathsf {crs}_\mathsf {vuf}, \mathsf {sk}, m) & \end{array} \right] = 1. $$

A VUF is unique if an adversary (even one that chooses the secret key) cannot output a verifying signature such that the derive function outputs the wrong value.

Definition 3 (Uniqueness)

For a VUF \(\varPi \) and an adversary \(\mathcal {A}\), let \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {unique}}(\lambda )= \Pr [\mathsf {Game}_{\mathcal {A}}^{\mathsf {unique}}(\lambda )]\), where \(\mathsf {Game}_{\mathcal {A}}^{\mathsf {unique}}(\lambda )\) is defined as follows:

(Figure a: pseudocode of \(\mathsf {Game}_{\mathcal {A}}^{\mathsf {unique}}(\lambda )\).)

We say that \(\varPi \) is unique if for all PPT adversaries \(\mathcal {A}\) we have that \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {unique}}(\lambda )\le \mathsf {negl}(\lambda )\).

Finally, a VUF is unpredictable if an adversary cannot predict the output of the function \(\mathsf {VUF.Eval} \) on a message for which it has not seen any valid signatures.

Definition 4 (Unpredictability)

For a VUF \(\varPi \) and an adversary \(\mathcal {A}\), let \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {predict}}(\lambda )= \Pr [\mathsf {Game}_{\mathcal {A}}^{\mathsf {predict}}(\lambda )]\) where \(\mathsf {Game}_{\mathcal {A}}^{\mathsf {predict}}(\lambda )\) is defined as follows:

(Figure b: pseudocode of \(\mathsf {Game}_{\mathcal {A}}^{\mathsf {predict}}(\lambda )\).)

We say that \(\varPi \) is unpredictable if for all PPT adversaries \(\mathcal {A}\) we have that \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {predict}}(\lambda )\le \mathsf {negl}(\lambda )\).

3.5 Rekeyability

To show that existing cryptographic primitives can be instantiated with our DKG, and other DKGs in the literature, we rely on a property called rekeyability. Intuitively, rekeyability says that it is possible to transform an object (e.g., a ciphertext or signature) that was formed using one cryptographic key into an object formed with a related key. As a concrete example, consider the BLS signature scheme, in which a signature on a message m under secret key \(\mathsf {sk}_1\) is of the form \(\sigma = H(m) ^ {\mathsf {sk}_1}\); it is possible to transform this into a signature under the key \(\alpha \mathsf {sk}_1 + \mathsf {sk}_2\) by computing \(\sigma ^{\alpha } \cdot H(m)^{\mathsf {sk}_2}\). This means that BLS can be efficiently rekeyed with respect to the secret key. While this notion is related to the idea of re-randomizability [26, 35, 55], we are not aware of any formalizations of rekeyability in the literature, so it may be of independent interest.
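To see why this rekeying is correct, substitute \(\sigma = H(m)^{\mathsf {sk}_1}\):

$$ \sigma ^{\alpha } \cdot H(m)^{\mathsf {sk}_2} = \left( H(m)^{\mathsf {sk}_1}\right) ^{\alpha } \cdot H(m)^{\mathsf {sk}_2} = H(m)^{\alpha \mathsf {sk}_1 + \mathsf {sk}_2}, $$

which is exactly a BLS signature on m under the secret key \(f_{\mathsf {sk}}(\alpha , \mathsf {sk}_1, \mathsf {sk}_2) = \alpha \mathsf {sk}_1 + \mathsf {sk}_2\).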

Definition 5 (Rekeyability)

For a public-key primitive \(\varPi = (\mathsf {KeyGen}, \varPi _1, \ldots , \varPi _n)\) and a function \(f_\mathsf {k}(\alpha , \mathsf {k}_1, \mathsf {k}_2)\) that outputs \(\alpha \mathsf {k}_1 \oplus \mathsf {k}_2\) for some binary operator \(\oplus \) (typically \(+\) or \(\times \)), we define rekeyability as follows for all \(\alpha \in \mathbb {N}\) and \((\mathsf {pk}_1,\mathsf {sk}_1), (\mathsf {pk}_2, \mathsf {sk}_2)\in [\mathsf {KeyGen}(1^{\lambda })]\):

  • We say that an algorithm \(\varPi _i\) is rekeyable with respect to the secret key if there exists an efficient function \(\mathsf {rekey}_i\) such that

    $$ \mathsf {rekey}_i(\alpha , \mathsf {pk}_1, \mathsf {sk}_2, x, \varPi _i(\mathsf {sk}_1, x; r)) = \varPi _i(f_{\mathsf {sk}}(\alpha , \mathsf {sk}_1, \mathsf {sk}_2), x; r) $$

    for all \(x\in \mathsf {Domain}(\varPi _i)\) and randomness r. Likewise, we say that an algorithm \(\varPi _j\) is rekeyable with respect to the public key if there exists an efficient function \(\mathsf {rekey}_j\) such that

    $$ \mathsf {rekey}_j(\alpha , \mathsf {pk}_1, \mathsf {sk}_2, \varPi _j(\mathsf {pk}_1, x; r)) = \varPi _j(f_{\mathsf {pk}}(\alpha , \mathsf {pk}_1, \mathsf {pk}_2), x; r) $$

    for all \(x\in \mathsf {Domain}(\varPi _j)\) and randomness r.

  • We say that \((\varPi _i, \varPi _j)\) is rekeyable with respect to the secret key if (1) \(\varPi _i\) is rekeyable with respect to the secret key and (2)

    $$ \varPi _j(\mathsf {pk}_1, y) = \varPi _j(f_{\mathsf {pk}}(\alpha , \mathsf {pk}_1, \mathsf {pk}_2), \mathsf {rekey}_i(\alpha , \mathsf {pk}_1, \mathsf {sk}_2, y)). $$

    Likewise we say that \((\varPi _i, \varPi _j)\) is rekeyable with respect to the public key if (1) \(\varPi _i\) is rekeyable with respect to the public key and (2)

    $$ \varPi _j(\mathsf {sk}_1, y) = \varPi _j(f_{\mathsf {sk}}(\alpha , \mathsf {sk}_1, \mathsf {sk}_2), \mathsf {rekey}_i(\alpha , \mathsf {pk}_1, \mathsf {sk}_2, y)). $$

For encryption, we would want that \((\mathsf {Encrypt}, \mathsf {Decrypt})\) is rekeyable with respect to the public key, meaning new key material can be folded into ciphertexts without affecting the ability to decrypt. For signing, we would want that \((\mathsf {Sign}, \mathsf {Verify})\) is rekeyable with respect to the secret key, meaning that if signatures verify then so do their rekeyed counterparts.
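As a concrete instance of the latter, consider BLS written with \(\mathsf {pk}= g_1^{\mathsf {sk}} \in \mathbb {G}_1\), \(\sigma = H(m)^{\mathsf {sk}} \in \mathbb {G}_2\), and verification checking \(e(\mathsf {pk}, H(m)) = e(g_1, \sigma )\) (other group placements work analogously). Then

$$ e(\mathsf {pk}_1^{\alpha }\,\mathsf {pk}_2, H(m)) = e(g_1, H(m))^{\alpha \mathsf {sk}_1 + \mathsf {sk}_2} \quad \text {and}\quad e\left( g_1, \sigma ^{\alpha } H(m)^{\mathsf {sk}_2}\right) = e(g_1, \sigma )^{\alpha }\, e(g_1, H(m))^{\mathsf {sk}_2}, $$

and for \(\alpha \ne 0\) these agree exactly when \(\sigma = H(m)^{\mathsf {sk}_1}\); i.e., \(\sigma \) verifies under \(\mathsf {pk}_1\) if and only if its rekeyed counterpart verifies under \(f_{\mathsf {pk}}(\alpha , \mathsf {pk}_1, \mathsf {pk}_2) = \mathsf {pk}_1^{\alpha }\mathsf {pk}_2\).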

3.6 Distributed Key Generation (DKG)

We define a distributed key generation (DKG) as an interactive protocol that is used to generate a keypair \((\mathsf {pk},\mathsf {sk})\). We define this as \((\mathsf {transcript}, \mathsf {pk})\xleftarrow {\$}\mathtt {DKG}(I, n)\), where n is the number of participants in the DKG, I is the indices of the adversarial participants (so \(|I| \le t\)), \(\mathsf {pk}\) is the resulting public key, and \(\mathsf {transcript}\) is some representation of the messages that have been exchanged.

We additionally consider an algorithm \(\mathsf {Reconstruct}\) that, given \(\mathsf {transcript}\) and the shares submitted by \(t+1\) honest parties, outputs the secret key \(\mathsf {sk}\) corresponding to \(\mathsf {pk}\). With this in place, we can define an omniscient interactive protocol \((\mathsf {transcript}, (\mathsf {pk}, \mathsf {sk}), \mathsf {state}_\mathcal {A}) \xleftarrow {\$}\mathtt {OmniDKG}(I, n)\) that is aware of the internal state of each participant and thus can output \(\mathsf {sk}\) (by running the \(\mathsf {Reconstruct}\) algorithm) and \(\mathsf {state}_\mathcal {A}\); i.e., the internal state of the adversary.

The \(\mathsf {Reconstruct}\) algorithm is useful not only in defining this extra interactive protocol, but also in defining a notion of robustness for DKGs (initially called correctness by Gennaro et al. [31]). We define this as follows:

Definition 6 (Robustness)

A DKG protocol is robust if the following properties hold:

  • A DKG transcript \(\mathsf {dkg}\) determines a public key \(\mathsf {pk}\) that all honest parties agree on.

  • There is an efficient algorithm \(\mathsf {Reconstruct}\)

    $$ \mathsf {sk}\leftarrow \mathsf {Reconstruct}(\mathsf {dkg}, \mathsf {sk}_1, \ldots , \mathsf {sk}_{ \ell }) \text{ for } t + 1 \le \ell \le n $$

    that takes as input a set of secret key shares where at least \(t+1\) are from honest parties and verifies them against the public transcript produced by the DKG protocol. It outputs the unique value \(\mathsf {sk}\) such that \(\mathsf {pk}\leftarrow \mathsf {KeyGen}(1^{\lambda }; \mathsf {sk})\).

Beyond robustness, we also want a DKG to preserve security of the underlying primitive for which it is run. Previous related definitions of secrecy for DKGs required there to exist a simulator that could fix the output of the DKG; i.e., given an input y, could output \((\mathsf {transcript}, y)\) that the adversary could not distinguish from a real \((\mathsf {transcript}, \mathsf {pk})\) output by the DKG run with t adversarial participants. While general, this definition is strong and required previous constructions to have more rounds or constraints than would otherwise be necessary; e.g., there seem to be significant barriers to satisfying this definition in any DKG where the adversary is allowed to go last, as they then know the entire transcript and can bias the final result.

In defining what it means for a DKG to preserve security, we first weaken this previous definition. Rather than require a simulator given \(\mathsf {pk}_1\) to have the DKG output exactly \(\mathsf {pk}_1\), we consider that it can instead fix the output public key to have a known relation with its input public key. In particular, a simulator given \(\mathsf {pk}_1\) can fix the output of the DKG to be \(\mathsf {pk}\), where the simulator knows \((\alpha , \mathsf {pk}_2, \mathsf {sk}_2)\) such that \(\mathsf {pk}= f(\alpha , \mathsf {pk}_1, \mathsf {pk}_2)\) for \(\alpha \ne 0\) and f as defined in the rekeyability definition (see Definition 5). We call this property key expressability.

Definition 7 (Key expressability)

For a simulator \(\mathsf {Sim}\), define as \((\mathsf {transcript}, \mathsf {pk},\alpha ,\mathsf {pk}_2, \mathsf {sk}_2)\xleftarrow {\$}\mathtt {SimDKG}(\mathsf {Sim},I,n)\) a run of the DKG protocol in which all honest participants are controlled by \(\mathsf {Sim}\), which takes as input a public key \(\mathsf {pk}_1\) and has private outputs \(\alpha \), \(\mathsf {pk}_2\), and \(\mathsf {sk}_2\). We say that a DKG is key-expressable if there exists a simulator \(\mathsf {Sim}\) such that (1) \((\mathsf {transcript},\mathsf {pk})\) is distributed identically to the output of \(\mathtt {DKG}(I,n)\), (2) \((\mathsf {pk}_2,\mathsf {sk}_2)\) is a valid keypair, and (3) \(\mathsf {pk}= f(\alpha , \mathsf {pk}_1, \mathsf {pk}_2) = \alpha \mathsf {pk}_1 \oplus \mathsf {pk}_2\).

To now define a security-preserving DKG, we intuitively consider a DKG being run in the context of a security game. To keep our definition as general as possible, our only requirements are that (1) the security game contains a line of the form \((\mathsf {pk},\mathsf {sk})\xleftarrow {\$}\mathsf {KeyGen}(1^{\lambda })\) (it also works if \(\mathsf {KeyGen}\) takes a common reference string as additional input), and (2) \(\mathsf {pk}\) is then later given as input to the adversary. We then say that the DKG preserves security if it is not possible for an adversary participating in the DKG to do better than it would have done in the original security game, in which it was given \(\mathsf {pk}\) directly. Formally, we have the following definition.

Definition 8 (Security-preserving)

Define \(\mathsf {Game}\) as any security game containing the line \((\mathsf {pk},\mathsf {sk})\xleftarrow {\$}\mathsf {KeyGen}(1^{\lambda })\), denoted \(\mathsf {line}_\mathsf {pk}\), and where \(\mathsf {pk}\) is later input to an adversary \(\mathcal {A}\) (in addition to other possible inputs). Define \(\mathsf {Game}'(\mathsf {line},x)\), parameterized by a starting line \(\mathsf {line}\) and some value x, as \(\mathsf {Game}\) but with \(\mathsf {line}_\mathsf {pk}\) replaced by \(\mathsf {line}\) and \(\mathcal {A}\) given x as input rather than \(\mathsf {pk}\). It is clear that \(\mathsf {Game}= \mathsf {Game}'(\mathsf {line}_\mathsf {pk}, \mathsf {pk})\).

Define \(\mathsf {line}_\mathsf {dkg}\) as the line \((\mathsf {transcript},(\mathsf {pk},\mathsf {sk}),\mathsf {state}_\mathcal {A}) \xleftarrow {\$}\mathtt {OmniDKG}(I,n)\), and define \(\mathsf {DKG}\text{- }\mathsf {Game}\leftarrow \mathsf {Game}'(\mathsf {line}_\mathsf {dkg}, \mathsf {state}_\mathcal {A})\). We say the DKG preserves security for \(\mathsf {Game}\) if

$$ \mathsf {Adv}_{\mathcal {A}}^{\mathsf {DKG}\text{- }\mathsf {Game}}(\lambda ) \le \mathsf {Adv}_{\mathcal {A}}^{\mathsf {Game}}(\lambda ) + \mathsf {negl}(\lambda ) $$

for all PPT adversaries \(\mathcal {A}\).

We do not view our requirements for the original security game as restrictive, given the number of security games that satisfy them. For signature unforgeability, for example, our definition says that an adversary that participates in the DKG, and can carry its state from that into the rest of the game (including all of the messages it saw), cannot achieve better advantage than when it is just given the public key (as in the standard EUF-CMA game).

While the relationship between key expressability and security-preserving DKGs is not obvious, we show in the full version of our paper [38] that it is typically the case that when key-expressable DKGs are used for rekeyable primitives, they preserve the security of that primitive’s underlying security game.

4 Our Enhanced Scrape PVSS

A secret sharing scheme allows a dealer to deal out n secret shares so that any subset of \(t + 1\) shares suffices to reconstruct the secret, but subsets of size \(\le t\) shares do not. A publicly verifiable secret sharing (PVSS) scheme is a secret sharing scheme in which any third party can verify that the dealer has behaved honestly. Importantly, PVSS obviates the need for a complaint round in VSS protocols, which simplifies designing PVSS-based DKG protocols [27]. Cascudo and David designed an elegant PVSS scheme called Scrape [18] with O(n) verification costs. In this section, we describe a slightly-modified variant of Scrape that supports aggregation and uses Type III pairings and an additional element \(\hat{u}_2 \in \mathbb {G}_2\) that will help our DKG security proofs later on. We rely on Type III pairings, not only for efficiency, but also because the SXDH assumption does not hold in symmetric groups. We give a formal description in Fig. 1. In Sect. 5, we use this slightly-modified variant of Scrape to construct our DKG.

Common reference string (CRS). All parties use the same CRS consisting of (1) a bilinear group description \(\mathsf {bp}\) (which fixes \(g_1\in \mathbb {G}_1\) and \(\hat{h}_1\in \mathbb {G}_2\)), (2) a group element \(\hat{u}_1 \in \mathbb {G}_2\) and (3) encryption keys \(\mathsf {ek}_i \in \mathbb {G}_2\) for every party \(P_i\) with corresponding decryption keys \(\mathsf {dk}_i \in \mathbb {F}\) known only to \(P_i\) such that \(\mathsf {ek}_i = \hat{h}_1^{\mathsf {dk}_i}\).

Dealing. Scrape resembles other Shamir-based [58] secret sharing schemes. The Scrape dealer will share a secret \(\hat{h}_1^{a_0}\in \mathbb {G}_2\), whose corresponding \(a_0\in \mathbb {F}\) the dealer knows. (This is different from other VSS schemes, which typically share a secret in \(\mathbb {F}\) rather than in \(\mathbb {G}_2\).) The dealer picks a random degree-t polynomial \(f(X)\) with coefficients \((a_0, a_1, \dots , a_t)\), where \(f(0)=a_0\), and commits to it via Feldman [25] as \(F_i = g_1^{a_i},\forall i\in [0,t]\). Party \(P_i\)’s share will be \(\hat{h}_1^{f(\omega _i)}\). The dealer then computes Feldman commitments \(A_i = g_1^{f(\omega _i)}\) to the shares and encryptions \(\hat{Y}_i = \mathsf {ek}_i^{f(\omega _i)}\) of each share. (The term “encryption” here is slightly abused since these are not IND-CPA-secure ciphertexts.) The PVSS transcript will consist of the Feldman commitments to f(X) and to the shares, plus the encryptions of the shares. Additionally, we augment the transcript with \(\hat{u}_2 = \hat{u}_1^{a_0}\), which helps our DKG security proofs.
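To make the dealing algorithm concrete, here is a minimal Python sketch over a toy group (the order-q subgroup of \(\mathbb {Z}_p^*\) stands in for both \(\mathbb {G}_1\) and \(\mathbb {G}_2\), so there are no pairings); the names, the tiny parameters, and the choice \(\omega _i = i\) are purely illustrative and are not the paper's Rust implementation.

```python
import random

q = 89                    # order of the toy group (prime)
p = 2 * q + 1             # p = 179, also prime
g1 = pow(2, 2, p)         # generator of the order-q subgroup (stands in for g_1)
h1 = pow(3, 2, p)         # an independent generator (stands in for h_1-hat)

def poly_eval(coeffs, x):
    """Evaluate f(X) = a_0 + a_1*X + ... + a_t*X^t at x, modulo q."""
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

def deal(a0, t, n, eks):
    """Share h1^{a_0}: Feldman commitments F to the coefficients, commitments A
    to the shares, and 'encryptions' Y_i = ek_i^{f(w_i)}, using toy points w_i = i."""
    f = [a0] + [random.randrange(q) for _ in range(t)]     # random degree-t poly, f(0) = a0
    F = [pow(g1, a, p) for a in f]                         # F_i = g1^{a_i}
    A = [pow(g1, poly_eval(f, i), p) for i in range(1, n + 1)]
    Y = [pow(eks[i - 1], poly_eval(f, i), p) for i in range(1, n + 1)]
    return F, A, Y

# Example: a 3-out-of-7 sharing with toy decryption/encryption keys.
n, t = 7, 2
dks = [random.randrange(1, q) for _ in range(n)]           # dk_i
eks = [pow(h1, dk, p) for dk in dks]                       # ek_i = h1^{dk_i}
F, A, Y = deal(a0=5, t=t, n=n, eks=eks)
```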

Verifying. Each party \(P_i\) can verify that the PVSS transcript is a correct sharing of \(\hat{h}_1^{a_0}\). For this, \(P_i\) checks that the Feldman commitments \(A_i\) to the shares \(f(\omega _i)\) are consistent with the Feldman commitment to f(X) via Lagrange interpolation in the exponent (see Fig. 1). Then, each \(P_i\) checks its encryption \(\hat{Y}_i\) of \(f(\omega _i)\) against \(A_i\). Altogether, this guarantees that the encrypted shares are indeed the evaluations of the committed polynomial f.
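In the notation of Fig. 1, and up to the exact batching used there, these two checks amount to

$$ \prod _{i \in [n]} A_i^{\ell _i(\beta )} = \prod _{j=0}^{t} F_j^{\beta ^{j}} \ \text { for a random } \beta \xleftarrow {\$}\mathbb {F}, \qquad \text {and}\qquad e(A_i, \mathsf {ek}_i) = e(g_1, \hat{Y}_i) \ \text { for all } i\in [n], $$

where the first equation checks, in the exponent, that the values committed in \(\boldsymbol{A}\) are evaluations of the degree-t polynomial committed in \(\boldsymbol{F}\), and the second follows from bilinearity since \(e(g_1^{f(\omega _i)}, \mathsf {ek}_i) = e(g_1, \mathsf {ek}_i^{f(\omega _i)})\).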

Fig. 1. Dealing, verification and aggregation algorithms for the Scrape PVSS. Here, \(\boldsymbol{\mathsf {ek}},\boldsymbol{F},\boldsymbol{A},\boldsymbol{\hat{Y}}\) denote vectors of \(\mathsf {ek}_i\)’s, \(F_i\)’s, \(A_i\)’s and \(\hat{Y}_i\)’s. The polynomial \(\ell _j(X)\) denotes the Lagrange polynomial equal to 1 at \(\omega _j\) and 0 at \(\omega _{i} \ne \omega _{j}\). The \(\omega _i\)’s are public predetermined values which, for efficiency purposes, should be chosen as roots of unity of degree n. For more details, see the full version of our paper [38].

Aggregating transcripts. One of our key contributions is an algorithm for aggregating two Scrape PVSS transcripts \(\mathsf {pvss}_1\) and \(\mathsf {pvss}_2\) for polynomials \(f_1\) and \(f_2\) into a single transcript for their sum \(f_1 + f_2\). This is a key ingredient of our DKG from Sect. 5. Our aggregation leverages the homomorphism of Feldman commitments and of the encryption scheme. Indeed, suppose we have Feldman commitments to \(f_b\) consisting of \(F_{b,i} = g_1^{a_{b,i}},\forall i\in [0,t]\), where the \(a_{b,i}\)’s are the coefficients of \(f_b\), for \(b\in \{1,2\}\). Then, \(F_i = F_{1,i}F_{2,i} = g_1^{a_{1,i} + a_{2,i}},\forall i\in [0,t]\) will be a Feldman commitment to \(f_1+f_2\). Similarly, we can aggregate the share commitments \(A_{b,i}=g_1^{f_b(\omega _i)}\) as \(A_i = A_{1,i} A_{2,i}= g_1^{(f_1+f_2)(\omega _i)},\forall i\in [n]\). Lastly, the encryptions \(\mathsf {ek}_i^{f_b(\omega _i)}\) can be aggregated as \(\mathsf {ek}_i^{(f_1+f_2)(\omega _i)}=\mathsf {ek}_i^{f_1(\omega _i)}\mathsf {ek}_i^{f_2(\omega _i)}\). We summarize this aggregation algorithm in Fig. 1.
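Continuing the toy Python sketch from the dealing paragraph above (same toy group, same illustrative names), aggregation is simply a componentwise product of the two transcripts' group elements:

```python
p = 179   # same toy modulus as in the dealing sketch

def aggregate_pvss(pvss1, pvss2):
    """Componentwise product of two (F, A, Y) transcripts; the result commits to
    and shares the sum f1 + f2 of the two dealt polynomials."""
    (F1, A1, Y1), (F2, A2, Y2) = pvss1, pvss2
    F = [x * y % p for x, y in zip(F1, F2)]
    A = [x * y % p for x, y in zip(A1, A2)]
    Y = [x * y % p for x, y in zip(Y1, Y2)]
    return F, A, Y

# e.g., with deal() from the earlier sketch:
#   agg = aggregate_pvss(deal(3, t, n, eks), deal(4, t, n, eks))
```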

Reconstructing the secret. At the end of the PVSS protocol, each party \(P_i\) can decrypt their share as \(\hat{A}_i = \hat{Y}_i^{\mathsf {dk}_i^{-1}} = (\mathsf {ek}_i^{f(\omega _i)})^{\mathsf {dk}_i^{-1}} = \hat{h}_1^{f(\omega _i)}\). Recall that the degree t polynomial f(X) encodes the secret \(f(0) = a_0\). Thus, any set S of \(\ge t + 1\) honest parties can reconstruct \(\mathsf {sk}=\hat{h}_1^{f(0)}\) as follows:

  1. For each share \(\hat{A}_i\) provided, check that \(e(A_i, \hat{h}_1) = e(g_1, \hat{A}_i)\), where \(A_i = g_1^{f(\omega _i)}\) is part of the PVSS transcript. If this check fails, or if \(P_i\) does not provide a share, then remove \(P_i\) from S.

  2. Return \(\mathsf {sk}= \prod _{i \in S} \hat{A}_i^{ \ell _{S, i}(0) }\), where \(\ell _{S, i}(X)\) is the Lagrange polynomial equal to 1 at \(\omega _i\) and 0 at \(\omega _{j}\) for all \(j \in S\) with \(j \ne i\). (A toy sketch of these two steps follows.)
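A toy Python sketch of reconstruction, in the same illustrative toy group as the dealing sketch above; since the toy group has no pairing, the sketch skips the check in step 1 and simply trusts the decrypted shares:

```python
p, q = 179, 89   # toy modulus and group order from the dealing sketch

def lagrange_at_zero(i, S):
    """ell_{S,i}(0) mod q for evaluation points w_j = j, j in S."""
    num, den = 1, 1
    for j in S:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

def reconstruct(S, Y, dks):
    """Given encrypted shares Y_i = ek_i^{f(i)} and the decryption keys dk_i
    for the parties i in S (|S| >= t+1), recover sk = h1^{f(0)}."""
    sk = 1
    for i in S:
        A_hat = pow(Y[i - 1], pow(dks[i - 1], -1, q), p)   # decrypt: h1^{f(i)}
        sk = sk * pow(A_hat, lagrange_at_zero(i, S), p) % p
    return sk

# e.g., with the 3-out-of-7 example from the dealing sketch (f(0) = 5):
#   assert reconstruct({1, 2, 3}, Y, dks) == pow(h1, 5, p)
```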

5 Distributed Key Generation

In this section, we describe our distributed key generation (DKG) protocol for generating a key-pair \((\mathsf {pk},\mathsf {sk})\) of the form

$$\begin{aligned} \mathsf {pk}= (g_1^{a}, \hat{u}_1^{a}) \in \mathbb {G}_1 \times \mathbb {G}_2 \quad \text {and}\quad \mathsf {sk}= \hat{h}_1^{a} \in \mathbb {G}_2 ,\ \text {where}\ a\in \mathbb {F}\end{aligned}$$

We often refer to \(a\in \mathbb {F}\) as the DKG secret. All parties \(P_i\) use the same Scrape CRS (see Sect. 4) but augmented with verification keys \(\mathsf {vk}_i\) (defined later).

At a high level, our DKG protocol resembles previous protocols based on verifiable secret sharing: each party \(P_i\) deals a secret \(\hat{h}_1^{c_i}\) to all other parties using the Scrape PVSS from Sect. 4. Additionally, each party \(P_i\) includes a proof-of-knowledge of their secret \(c_i\). At this point, each party \(P_j\) would have to verify the PVSS transcript of every other party \(P_i\), resulting in \(O(n^2)\) work. Then, the final secret would be \(\mathsf {sk}=\hat{h}_1^{a}\) with \(a=\sum _{i\in Q} c_i\), where Q is the set of all parties who dealt honestly (i.e., whose PVSS transcript verified). Note that since PVSS transcripts are publicly-verifiable, all parties \(P_i\) agree on Q and there is no need for a complaint round. We often refer to an honest party \(P_i\) as having contributed to the final secret key and to \(c_i\) as its contribution.

Gossip and aggregate. To avoid the \(O(n^2)\) verification work per party, we leverage aggregation of Scrape PVSS transcripts. We observe that a party who verified several transcripts can aggregate them into a single one and forward it to another party, who can now verify this aggregated transcript faster. By carefully aggregating and gossiping transcripts in this manner, we decrease verification time per party from \(O(n^2)\) to \(O(n\log ^2{n})\). One caveat is that, due to the randomized nature of gossiping, a party’s contribution \(c_i\) might be incorporated multiple times, say \(w_i\) times, into the final secret \(\mathsf {sk}=\hat{h}_1^{a}\). As a result, the final \(a=\sum _{i\in Q} w_i c_i\), where \(w_i\) is called the weight of each \(c_i\).

Signatures-of-knowledge of contributions. Similar to previous DKGs [32], our DKG requires each party \(P_i\) to prove knowledge of its contribution \(c_i\) to the final DKG secret. However, since our DKG transcripts must be publicly-verifiable, we also require each party to sign their contributions. We achieve both of these goals using a signature-of-knowledge (SoK). Specifically, \(P_i\) signs \(C_i=g_1^{c_i}\) using its secret key \(\mathsf {sk}_i\), with corresponding verification key \(\mathsf {vk}_i=g_1^{\mathsf {sk}_i}\):

$$ \sigma _i = (\sigma _{i,1},\sigma _{i,2}) = (\mathsf {Hash}_{\mathbb {G}_2}(C_i)^{c_i}, \mathsf {Hash}_{\mathbb {G}_2}(\mathsf {vk}_i, C_i)^{\mathsf {sk}_i}) $$

where \(\mathsf {Hash}_{\mathbb {G}_2}\) is a hash function that maps to \(\mathbb {G}_2\). Any verifier with \(\mathsf {vk}_i\) can verify the signature-of-knowledge \(\sigma _i\) of \(c_i\) as:

$$ e(C_i, \mathsf {Hash}_{\mathbb {G}_2}(C_i)) = e(g_1, \sigma _{i,1}) \ \wedge \ e(\mathsf {vk}_i, \mathsf {Hash}_{\mathbb {G}_2}(\mathsf {vk}_i, C_i)) = e(g_1, \sigma _{i,2}) $$

Our signatures of knowledge are simulation-sound and thus cannot be compressed or combined. However, since they are constant-sized, this is not problematic. We refer to the signing algorithm as \(\mathsf {SoK.Sign}(C_i, \mathsf {sk}_i, c_i)\rightarrow \sigma _i\) and the verification algorithm as \(\mathsf {SoK.Verify}(\mathsf {vk}_i, C_i, \sigma _i)\rightarrow 0/1\).
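Both verification equations follow directly from bilinearity; for the first component,

$$ e(C_i, \mathsf {Hash}_{\mathbb {G}_2}(C_i)) = e(g_1^{c_i}, \mathsf {Hash}_{\mathbb {G}_2}(C_i)) = e(g_1, \mathsf {Hash}_{\mathbb {G}_2}(C_i)^{c_i}) = e(g_1, \sigma _{i,1}), $$

and the check on \(\sigma _{i,2}\) is the analogous BLS-style verification of a signature on \((\mathsf {vk}_i, C_i)\) under \(\mathsf {vk}_i\).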

DKG transcripts. To maintain their public-verifiability, aggregated PVSS transcripts must keep track of the weights \(w_i\) of each party’s contribution \(c_i\) and of the \(\sigma _i\)’s. This gives rise to a new notion of a DKG transcript defined as:

$$\begin{aligned} \mathsf {transcript}= ((C_{1}, \ldots , C_{n}), (w_{1}, \ldots , w_{n}), (\sigma _{1}, \ldots , \sigma _{n}), \mathsf {pvss}), \end{aligned}$$
(1)

where \(C_i = g_1^{c_i}\) is a commitment to the contribution \(c_i\) of party \(P_i\), \(w_i\) is its weight, \(\sigma _i\) is the SoK of \(c_i\) and \(\mathsf {pvss}\) is an (aggregated) PVSS transcript for secret \(a=\sum _{i\in [n]} w_i c_i\).

Recall from Fig. 1 that \(\mathsf {pvss}\) stores a Feldman commitment \(\boldsymbol{F}\) to a polynomial f(X) with \(f(0)=a\) and that \(F_0=g_1^a\). In our protocol, each party \(P_i\) initializes their DKG transcript by picking \(c_i\xleftarrow {\$} \mathbb {F}\) and setting \(\mathsf {pvss}\leftarrow \mathsf {Scrape.Deal}(\mathsf {bp},\boldsymbol{\mathsf {ek}}, \hat{u}_1, c_i)\), \(C_i \leftarrow g_1^{c_i}\), \(w_i\leftarrow 1\) and \(\sigma _i \leftarrow \mathsf {SoK.Sign}(C_i, \mathsf {sk}_i, c_i)\). For \(j\ne i\), \(P_i\) sets \(C_j \leftarrow \bot \), \(w_j \leftarrow 0\) and \(\sigma _j \leftarrow \bot \). Importantly, in our protocol, each party will broadcast the \(C_i\) commitment to their contribution and gossip the rest of their DKG transcript to a subset of the other parties (we discuss this in more detail later on).

Verifying DKG transcripts. To verify the DKG \(\mathsf {transcript}\) from Eq. (1), one first checks that its inner \(\mathsf {pvss}\) transcript verifies. Second, for all non-trivial contributions (those with \(w_i \ne 0\)), one checks that the signature of knowledge \(\sigma _i\) verifies. Finally, one checks that the contributions correctly combine to \(F_{0}\), the commitment to the zero coefficient of the f(X) shared in \(\mathsf {pvss}\); i.e., that \(C_1^{w_1} \cdots C_n^{w_n} = F_{0}\). If \(\mathsf {transcript}\) passes these checks, then one can be sure that the parties \(P_i\) with \(w_i\ne 0\) in \(\mathsf {transcript}\) have contributed to its corresponding DKG secret. See Fig. 2 for a full description.
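The final check is sound because the aggregated \(\mathsf {pvss}\) shares \(a=\sum _{i\in [n]} w_i c_i\), so

$$ C_1^{w_1} \cdots C_n^{w_n} = g_1^{\sum _{i\in [n]} w_i c_i} = g_1^{a} = F_{0}. $$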

Aggregating DKG transcripts. Given two input DKG transcripts

$$\begin{aligned} (C_{b,1}, \ldots , C_{b,n}), (w_{b,1}, \ldots , w_{b,n}), (\sigma _{b,1}, \ldots , \sigma _{b,n}), \mathsf {pvss}_b,\ \text {for}\ b\in \{1,2\} \end{aligned}$$

we can easily aggregate them into a single DKG transcript

$$\begin{aligned} (C_{1}, \ldots , C_{n}), (w_{1}, \ldots , w_{n}), (\sigma _{1}, \ldots , \sigma _{n}), \mathsf {pvss}\end{aligned}$$

We first aggregate the \(\mathsf {pvss}_b\) transcripts into \(\mathsf {pvss}\) via \(\mathsf {Scrape.Aggregate}\) (see Fig. 1). Second, we aggregate the weights, which are field elements, as \(w_i = w_{1,i} + w_{2,i},\forall i\in [n]\). Third, if \(P_i\) contributed in one of the input transcripts, then \(P_i\)’s contribution should also be reflected in the aggregated transcript. In other words, for any \(C_{b,i}\ne \bot \) and valid \(\sigma _{b,i}\), we simply set \(C_i = C_{b,i}\) and \(\sigma _i = \sigma _{b,i}\). The choice of \(C_{b,i}\) does not matter when they are both \(\ne \bot \) since they were both obtained from the broadcast channel, so they must be equal. As a result, their corresponding \(\sigma _{b,i}\)’s will also be equal since our signatures of knowledge are unique.
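A minimal Python sketch of this aggregation rule, following Eq. (1); the transcript layout, the use of None for \(\bot \), and the helper agg_pvss (standing in for the PVSS aggregation from Sect. 4) are illustrative assumptions, not the paper's implementation:

```python
def aggregate_dkg(t1, t2, agg_pvss):
    """Aggregate two DKG transcripts ((C, w, sigma, pvss) tuples): weights add,
    each non-trivial (C_i, sigma_i) is carried over from whichever input has it
    (the broadcast channel guarantees both inputs agree when both have it),
    and the inner PVSS transcripts are aggregated componentwise."""
    C1, w1, s1, pvss1 = t1
    C2, w2, s2, pvss2 = t2
    n = len(w1)
    C   = [C1[i] if C1[i] is not None else C2[i] for i in range(n)]
    sig = [s1[i] if s1[i] is not None else s2[i] for i in range(n)]
    w   = [w1[i] + w2[i] for i in range(n)]
    return C, w, sig, agg_pvss(pvss1, pvss2)
```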

Fig. 2. Aggregation algorithm for the distributed key generation protocol.

Fig. 3. Our DKG with reconstruction threshold \(t+1\), run by parties \(P_1, \ldots , P_n\).

Reconstructing the secret. As explained in the beginning of this section, the final key-pair will be \(\mathsf {pk}= (g_1^{f(0)}, \hat{u}_2) = (g_1^{f(0)}, \hat{u}_1^{f(0)})\) and \(\mathsf {sk}= \hat{h}_1^{f(0)}\). Since the final DKG transcript is just an augmented Scrape PVSS transcript, reconstruction of \(\mathsf {sk}\) works as explained in Sect. 4.

5.1 A Gossip Protocol

In Step 4 of our DKG, we rely on a gossip protocol to communicate the \(\mathcal {O}(n)\)-sized DKG transcripts. By using gossip, we avoid both the need to broadcast these larger messages, which is expensive, and the need for a central aggregator. We detail our protocol in the full version of our paper [38], but provide some insight here into how it works.

We take an optimistic approach and provide robustness for up to \(t_r < n/2 - \log {n}\) crashed parties but only up to \(\log {n}\) Byzantine adversaries. We believe this approach is often reasonable in practice because, if a Byzantine adversary attacks the robustness of the DKG, the only outcome is that the computation required to produce the final DKG transcript increases. Furthermore, Byzantine attacks on robustness are detectable, so any faulty party can be manually removed from the system. This is in contrast to an attack on the security preservation of the DKG, which could have far more serious consequences. If we want a security threshold of \(t_s\), then we have to assume that at least \(t_s\) parties respond. A direct implication is that \(n-t_r\) must be at least \(t_s\), showing an inherent tradeoff between the security and robustness thresholds. In our scheme we can set \(t_s\) to be exactly equal to \(n - t_r\).

The gossip protocol has each party send its currently aggregated DKG transcript to \(\mathcal {O}(c\log {n})\) parties in expectation in each round, and terminate once the parties have agreed on a “full” transcript; i.e., a valid transcript with at least \(t_s+1\) contributions. Here c is a small success parameter such that \(c \ge 4\). However, deciding when to terminate is non-trivial, because the aggregated “full” transcripts held by different parties may all be different. We thus still rely on broadcast to agree on which transcript to use, but our goal is to minimize the total number of broadcasts. We do this by having each party with a full transcript broadcast it with probability 2/n in a given round. We argue that this makes the protocol likely to terminate within \(\mathcal {O}(c\log n)\) rounds. Parties agree to use the transcript whose public key has the binary representation with the smallest bit-count (though any other publicly-verifiable convention works too). In terms of complexity, our gossip protocol requires \(\mathcal {O}(cn^2\log {n})\) total words to be communicated in private messages and \(\mathcal {O}(c \log ^2 n)\) broadcasts.
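The following toy Python simulation illustrates the gossip layer just described, abstracting a (partial) transcript as the set of contribution indices it contains, so that aggregation becomes set union and all verification is omitted; the parameter choices and names are illustrative only:

```python
import math
import random

def gossip(n, t_s, c=4, max_rounds=1000):
    """Toy simulation: each round, every party forwards its aggregate to about
    c*log(n) random peers; a party holding a 'full' transcript (>= t_s + 1
    contributions) broadcasts it with probability 2/n, which ends the protocol."""
    views = [{i} for i in range(n)]              # party i starts with its own contribution
    fanout = min(n, max(1, round(c * math.log(n))))
    for rnd in range(1, max_rounds + 1):
        incoming = [set() for _ in range(n)]
        for i in range(n):
            for j in random.sample(range(n), fanout):
                incoming[j] |= views[i]
        for i in range(n):
            views[i] |= incoming[i]              # aggregate everything received
        for i in range(n):
            if len(views[i]) >= t_s + 1 and random.random() < 2 / n:
                return rnd, views[i]             # all parties adopt the broadcast transcript
    return None

print(gossip(n=64, t_s=32))
```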

5.2 Security Analysis

Robustness. Our DKG is robust in the sense that all honest parties agree on the final public key, and in the sense that any set S containing at least \(t + 1\) honest parties can reconstruct the secret key.

Theorem 1 (DKG is robust)

The scheme in Fig. 3 is robust for any primitive with keys of the form \(\mathsf {pk}= (g_1^{a}, \hat{u}_1^a) \in \mathbb {G}_1 \times \mathbb {G}_2\).

Proof

First, we show that all honest parties obtain the same value \(\mathsf {pk}\). By perfect synchrony, in each round all honest parties see the same set of completed broadcasts. Among the broadcast transcripts that complete and verify, one must have the public key with the sparsest binary representation. This message defines a public key \(\mathsf {pk}\) that all parties agree on.

Next, we show that reconstruction always succeeds on an input of n shares of which at least \(t+1\) come from non-faulty parties. First, observe that if the DKG transcript verifies, then for a random value \(\alpha \) we have that

$$ A_1^{\ell _1(\alpha )} \cdots A_{n}^{\ell _n(\alpha )} = F_0 F_1^{\alpha } \cdots F_{t}^{\alpha ^{t}} $$

By the Schwartz-Zippel Lemma this implies that with overwhelming probability

$$ f(X) = f_0 + f_1 X + \cdots + f_t X^t = a_1 \ell _1(X) + \cdots + a_n \ell _{n}(X) $$

where \(f_j = \log _{g_1}(F_j)\) and \(a_i = \log _{g_1}(A_i)\); in particular, \(a_i = f(\omega _i)\) for all i. Second, observe that \(e(A_i, \hat{h}_1) = e(g_1, \hat{A}_i)\) if and only if \(\hat{A}_i = \hat{h}_1^{f(\omega _i)}\). Since at least \(t + 1\) parties are honest, the reconstruction algorithm receives at least \(t+1\) verifying shares. With \(t + 1\) verifying shares, reconstruction always succeeds because f has degree t.

Security preserving. We now prove that our DKG satisfies key expressability; i.e., we construct a simulator that is able to fix the output to be a value \(\alpha \mathsf {pk}_1 + \mathsf {pk}_2\), where \(\mathsf {pk}_1\) is given as input and \(\alpha \ne 0\). This does not directly prove that the DKG preserves security, but in the full version of our paper we detail how combining a key-expressable DKG with rekeyable encryption schemes, signature schemes, and VUFs implies that the DKG also preserves security of these primitives. We cover these three due to their popularity (and our VUF construction in Sect. 7), but envisage that there are many other primitives that are rekeyable and thus similarly preserve their security when combined with key-expressable DKGs.

Theorem 2 (DKG)

The scheme in Fig. 3 is key-expressable as per Definition 7 in the random oracle model for any primitive with keys of the form \(\mathsf {pk}= (g_1^{a}, \hat{u}_1^a) \in \mathbb {G}_1 \times \mathbb {G}_2\) and \(\mathsf {sk}= \hat{h}_1^a \in \mathbb {G}_2\).

Proof

We design an adversary \(\mathcal {B}\) that takes as input \(\mathsf {pk}_1\) such that whenever the DKG outputs \(\mathsf {pk}\), \(\mathcal {B}\) outputs \(\alpha , \mathsf {pk}_2, \mathsf {sk}_2\) such that \(\mathsf {pk}= \alpha \mathsf {pk}_1 + \mathsf {pk}_2\). Suppose \(\mathcal {B}\) receives input \(\mathsf {pk}_1 =(g_2, \hat{v}_2)\).

First, \(\mathcal {B}\) runs the DKG with \(\mathcal {A}\). Let \(\mathbb {I}_B \subset [1,n]\) be the set of corrupted (i.e., “bad”) parties and \(\mathbb {I}_G \subset [1,n]\) be the set of uncorrupted (“good”) parties. For each good party \(P_k\), \(\mathcal {B}\) simulates the adversarial view of that party’s output, so that the public values \(C_k, \hat{u}_{k,2}\) sent by \(P_k\) are equal to \(( g_2^{a_k}, \hat{v}_2^{a_k})\).

In the course of this simulation, \(\mathcal {B}\) answers \(\mathcal {A}\)’s queries to the oracle \(\mathsf {Hash}_{\mathbb {G}_2}\) by selecting \(r \xleftarrow {\$}\mathbb {F}\) at random, and returning \(\hat{h}_1^r\).

In the registration round, when \(\mathcal {A}\) queries \(\mathcal {B}\) on the k-th honest value, \(\mathcal {B}\) chooses \(\mu _k, \kappa _k \xleftarrow {\$}\mathbb {F}\) randomly from the field and returns the public key \((\mathsf {ek}_k, \mathsf {vk}_k) =( \hat{u}_1^{\mu _k}, g_2^{\kappa _k})\).

In the broadcast round, \(\mathcal {B}\) chooses \(a_k \xleftarrow {\$}\mathbb {F}\) randomly for each honest party and computes \(C_k = g_2^{a_k}\). It then samples \(\chi _k, \psi _k \xleftarrow {\$}\mathbb {F}\) and programs \(\mathsf {Hash}_{\mathbb {G}_2}\) to return \(\hat{u}_1^{\chi _k}\) and \(\hat{u}_{1}^{\psi _k}\) on input \(C_k\) and \((\mathsf {vk}_k, C_k)\) respectively. Finally, it broadcasts \(C_k\). With overwhelming probability, \(\mathcal {A}\) has not yet queried \(\mathsf {Hash}_{\mathbb {G}_2}\) on the randomised value \(C_k\).

In the share creation round, when queried on \(P_k\), \(\mathcal {B}\) is required to output

$$\begin{aligned} (\boldsymbol{F}_k, \hat{u}_{k,2}, \hat{\sigma }_k, \boldsymbol{A}_k, \boldsymbol{\hat{Y}}_k) \end{aligned}$$

that are indistinguishable from a valid output. Assume without loss of generality that \(|\mathbb {I}_B | = t\). It then behaves as follows

  1. Choose random \(\bar{x}_{k,j} \xleftarrow {\$}\mathbb {F}\) for each \(j \in \mathbb {I}_B \) and interpolate in the exponent to find \((F_{k,0} , \ldots , F_{k,t})\) such that \(F_{k,i}=g_1^{c_i}\), where \(\sum _{i =0}^{t} c_i X^i\) evaluates to \(\bar{x}_{k,j}\) at \(\omega _j\) for \(j \in \mathbb {I}_B\) and to \(a_k \log _{g_1}(g_2)\) at 0. These \(c_i\) values are unknown to \(\mathcal {B}\).

  2. Set \(\hat{u}_{k,2} = \hat{v}_2^{a_k}\).

  3. Set \(\sigma _k = ( \hat{v}_2^{a_k \chi _k}, \hat{v}_2^{\kappa _k \psi _k} )\).

  4. To compute \(A_{k,1}, \ldots , A_{k,n}\), set \(A_{k,j} = \prod _{i=0}^{t} F_{k,i}^{\omega _j^i}\).

  5. To compute \(\hat{Y}_{k,j}\) for \(j \in \mathbb {I}_B\), return \(\mathsf {ek}_j^{\bar{x}_{k,j}}\). To compute \(\hat{Y}_{k,j}\) for \(j \in \mathbb {I}_G\), interpolate in the exponent to find \( \hat{u}_1^{c_0}, \ldots , \hat{u}_1^{c_{t}}\) for \(c_0, \ldots , c_{t}\) as in Step 1 (recall that \(\mathcal {B}\) knows \(\hat{u}_1^{\log _{g_1}(g_2)} = \hat{v}_2\)). Return \(\hat{Y}_{k,j}= \prod _{i=0}^{t} \hat{u}_1^{c_i \mu _j \omega _{j}^{i}}\).

This simulation is perfect. Indeed, \(c_{0} = \log _{g_1}(C_k)\) and \(c_1, \ldots , c_t\) are randomly distributed. We have that \(\hat{u}_{k,2} = \hat{v}_2^{a_k} = \hat{u}_{1}^{\log _{g_1}(C_k)}\). Also, \(\sigma _{k,1} = \mathsf {Hash}_{\mathbb {G}_2}(C_k)^{\log _{g_1}(C_k)}\) and \(\sigma _{k,2} = \mathsf {Hash}_{\mathbb {G}_2}(\mathsf {vk}_k, C_k)^{\log _{g_1}(\mathsf {vk}_k)}\). The values \(A_{k,1}, \ldots , A_{k,n}\) are computed honestly, and the \(\hat{Y}_{k,j}\) are the unique encryptions that satisfy the verifier.

Suppose that the DKG terminates with transcript \(((C_{1}', \ldots , C_{n}'), (w_{1}, \ldots , w_{n}), (\sigma _{1}', \ldots , \sigma _{n}'), \mathsf {pvss})\). The public key is given by \(C = (C_{1}')^{w_1} \cdots (C_{n}')^{w_n}\) and \(\hat{u}_2 = \hat{u}_{1,2}^{w_1} \cdots \hat{u}_{n,2}^{w_n}\). For each adversarial contribution \(C_i'\), \(\mathcal {B}\) looks up r such that \(\mathsf {Hash}_{\mathbb {G}_2}(C_{i}') = \hat{h}_1^r\). Here, i can be any index, as \(\mathcal {A}\) might have forged one of \(\mathcal {B}\)’s contributions. If the adversary has not queried \(\mathsf {Hash}_{\mathbb {G}_2}\) on \(C_i'\), then the probability of it returning a verifying signature \(\sigma _i'\) is negligible. To get the secret key share, \(\mathcal {B}\) extracts \(\hat{C}_{i} = (\sigma _{i,1}')^{\frac{1}{r}}\) such that \(\hat{C}_{i} = \hat{h}_1^{\log _{g_1} (C_i') }\).

If \(\mathcal {A}\) has included at least one of \(\mathcal {B}\)’s contributions, then \(\mathcal {B}\) computes \(\alpha = \sum _{k \in S} w_k a_k\) for S the set of honest participants whose contributions are included in the transcript. Additionally, \(\mathcal {B}\) computes \(\mathsf {pk}_2 = (\prod _{i \not \in S} (C_i')^{w_i}, \prod _{i \not \in S} \hat{u}_{i,2}^{w_i})\) and \(\mathsf {sk}_2 = \prod _{i \not \in S} \hat{C}_i^{w_i}\). Then, we have that \( \mathsf {pk}= \alpha \mathsf {pk}_1 + \mathsf {pk}_2 \) for \(\alpha \ne 0\), and \(\mathsf {sk}_2\) is a secret key for \(\mathsf {pk}_2\). Thus \(\mathcal {B}\) returns \((\alpha , \mathsf {pk}_2, \mathsf {sk}_2)\).

If \(\mathcal {A}\) has not included any of \(\mathcal {B}\)’s contributions, then \(\mathcal {A}\) has forged a signature \(\sigma _k'\) with respect to some \(\mathsf {vk}_k = g_2^{\kappa _k}\) and contribution \(C_k'\). Using the oracle queries, \(\mathcal {B}\) looks up r such that \(\mathsf {Hash}_{\mathbb {G}_2}(\mathsf {vk}_k, C_k') = \hat{h}_1^{r}\). Since \(\sigma _k' = (\sigma _{k,1}', \sigma _{k,2}')\) verifies, we have that \(\sigma _{k,2}' = \hat{h}_1^{r \kappa _k \log _{g_1}(g_2)}\). Thus, \(\mathcal {B}\) computes \(\mathsf {sk}_1 = (\sigma _{k,2}')^{\frac{1}{r \kappa _k}}\). Additionally, \(\mathcal {B}\) computes \(\mathsf {pk}_2 = (g_2^{-1} \prod _{i} (C_i')^{w_i}, \hat{v}_2^{-1} \prod _{i} \hat{u}_{i,2}^{w_i})\) and \(\mathsf {sk}_2 = \mathsf {sk}_1^{-1} \prod _{i} \hat{C}_i^{w_i}\). Then, we have that \( \mathsf {pk}= \mathsf {pk}_1 + \mathsf {pk}_2 \) and \(\mathsf {sk}_2\) is a secret key for \(\mathsf {pk}_2\), so \(\mathcal {B}\) returns \((1, \mathsf {pk}_2, \mathsf {sk}_2)\).

6 Alternative DKGs Have Provable Security

In this section we demonstrate that two popular DKGs, the Pedersen DKG and the Fouque-Stern DKG, are also key-expressable. As a direct consequence, they can be used to securely instantiate a DKG for both El-Gamal encryption and BLS signatures, as we prove in the full version of our paper [38]. Our results generalise to other rekeyable constructions that have public keys in \(\mathbb {G}\) and secret keys in \(\mathbb {F}\). In addition to justifying the applicability of our security definitions and proof techniques, we hope this also fills a gap in the literature as we are unaware of other works that provide correct proofs for these DKGs.

6.1 Pedersen DKG from Feldman’s VSS

We prove that key expressability holds for Pedersen’s DKG provided the threshold of adversarial participants is less than n/2. We believe that this bound on the number of adversarial participants could be removed if parties provided signatures of knowledge of their individual contributions. Pedersen’s DKG can be seen as n parallel instantiations of the Feldman VSS [25]. We remind the reader that key expressability does not imply secrecy (and thus does not contradict the attack of Gennaro et al. [31]), but it does allow us to prove the security preservation of certain rekeyable schemes. A proof of the following theorem is provided in the full version of our paper.

Theorem 3

The Pedersen DKG is a key-expressable DKG against static adversaries with adversarial threshold \(t < n / 2\) for any scheme whose key generation outputs values \(\mathsf {pk}= g_1^{a} \in \mathbb {G}_1\), \(\mathsf {sk}= a \in \mathbb {F}\).

6.2 The Fouque-Stern Publicly Verifiable DKG

We now show the key expressability of the publicly verifiable Fouque-Stern DKG [27]. This DKG has the benefit of outputting field elements as secret keys, but the total communication and verification costs are of order \(\mathcal {O}(n^2)\). Unlike Fouque and Stern’s original argument, we allow for the existence of rushing adversaries. Indeed, Fouque and Stern’s reduction relies on an honest party playing last. Instantiating such an assumption would require the use of a trusted third party and would therefore negate the benefits of distributing the key generation.

A proof of the following theorem is provided in the full version of our paper.

Theorem 4

The Fouque-Stern DKG is a key-expressable DKG in the random oracle model against static adversaries under the decisional composite residuosity assumption for any scheme whose key generation outputs values \(\mathsf {pk}= g_1^{a} \in \mathbb {G}_1\), \(\mathsf {sk}= a \in \mathbb {F}\).

6.3 El-Gamal and BLS

In the full version of our paper, we observe that El-Gamal encryption and BLS signatures are both rekeyable (and both have field elements as secret keys). We thus obtain the following two corollaries:

Corollary 1

The El-Gamal encryption scheme is IND-CPA-secure when instantiated with the Pedersen DKG or the Fouque-Stern DKG.

Corollary 2

The BLS signature scheme is EUF-CMA-secure when instantiated with the Pedersen DKG or the Fouque-Stern DKG.

7 A Structure-Preserving VUF

In this section, we introduce a verifiable unpredictable function (VUF), secure in the random oracle model, that has group elements as the secret key. We can thus securely instantiate our VUF using our DKG.

As one application, VUFs can be used to create randomness beacons: unlike in, e.g., BLS multi-signatures [11], any subset of signers that reaches the threshold produces the same output. By hashing the outcome of this VUF with a random oracle we can obtain a verifiable random function (VRF). Abe et al. [1] proved that it is impossible to construct an algebraic VUF whose secret key is a group element. Since we use a hash function, however, our construction is not fully algebraic and therefore sidesteps this impossibility result.

7.1 Our Construction

Our VUF scheme is given in Fig. 4. The techniques were inspired by a combination of BLS signatures [13] and Escala-Groth NIZKs [24] (which are an improvement of Groth-Sahai proofs [36]). Unlike BLS signatures our secret keys are group elements and unlike Escala-Groth NIZKs our VUFs are non-malleable.

Given an input \(m \in \mathbb {F}\) under public key \((g_1^{a}, \hat{u}_1^a)\) and secret key \(\hat{h}_1^{a}\), the unique output given by \(\mathsf {VUF.Eval} (\mathsf {sk}, m)\) is \(e(\mathsf {Hash}_{\mathbb {G}_1}(m), \hat{h}_1^{a})\). Given \(g_1^{a} \in \mathbb {G}_1\) and \(\mathsf {Hash}_{\mathbb {G}_1}(m) \in \mathbb {G}_1\), it is hard for an adversary to compute \(e(\mathsf {Hash}_{\mathbb {G}_1} (m), \hat{h}_1)^{a} \in \mathbb {G}_T\). We formally prove in Theorems 5 and 6 that our VUF satisfies uniqueness (see Definition 3) and unpredictability (see Definition 4) under the \(\mathsf {SXDH}\) and \(\mathsf {BDH}\) assumptions.

Fig. 4. Verifiable unpredictable function with group elements as the secret key.

Setup: The setup algorithm is a transparent algorithm that takes as input the bilinear group \(\mathsf {bp}= (p, \mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, e, g_1, \hat{h}_1)\) and returns four group elements in the second source group: \( (\hat{u}_1, \hat{h}_2, \hat{h}_3, \hat{h}_4) \in \mathbb {G}_2^4. \)

KeyGen: The \(\mathsf {VUF.Gen} \) algorithm takes as input the common reference string. It samples a random field element \(a \xleftarrow {\$}\mathbb {F}\). The public key \(\mathsf {pk}\in \mathbb {G}_1 \times \mathbb {G}_2\) and the secret key \(\mathsf {sk}\in \mathbb {G}_2\) are given as \( \mathsf {pk}= (g_1^{a}, \hat{u}_1^{a}) \text{ and } \mathsf {sk}= \hat{h}_1^{a}. \)

Sign: The \(\mathsf {VUF.Sign} \) algorithm first hashes the message m to obtain \(Z \in \mathbb {G}_1\) as \( Z = \mathsf {Hash}_{\mathbb {G}_1}(m). \) The signer generates a commitment to \(\mathsf {sk}\) by sampling random elements \(\alpha ,\beta \in \mathbb {F}\) and computing

$$ (\hat{\pi }_1, \hat{\pi }_2) = (\hat{h}_1^{-\alpha } \hat{h}_2^{-\beta }, \mathsf {sk}\cdot \hat{h}_3^{-\alpha } \hat{h}_4^{-\beta }). $$

If \(\hat{h}_1, \hat{h}_2, \hat{h}_3, \hat{h}_4\) are randomly distributed, this commitment is perfectly hiding. However, if \(\hat{h}_1, \hat{h}_2, \hat{h}_3, \hat{h}_4\) form an SXDH challenge, then there exists some \(\xi \) such that \(\hat{h}_3 = \hat{h}_1^\xi \) and \(\hat{h}_4 = \hat{h}_2^\xi \), meaning that the commitment forms an El-Gamal encryption of \(\mathsf {sk}\). In this case, we say that the commitment is perfectly binding.
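To make the binding case concrete (a worked consequence of the equations above, with \(\xi \) playing the role of the extraction trapdoor discussed at the end of this subsection): if \(\hat{h}_3 = \hat{h}_1^\xi \) and \(\hat{h}_4 = \hat{h}_2^\xi \), then

$$ \hat{\pi }_2 \cdot \hat{\pi }_1^{-\xi } = \mathsf {sk}\cdot \hat{h}_1^{-\xi \alpha } \hat{h}_2^{-\xi \beta } \cdot \left( \hat{h}_1^{-\alpha } \hat{h}_2^{-\beta } \right) ^{-\xi } = \mathsf {sk}, $$

so anyone holding \(\xi \) can recover \(\mathsf {sk}\) from \((\hat{\pi }_1, \hat{\pi }_2)\); in other words, the commitment is an El-Gamal-style encryption of \(\mathsf {sk}\) under the trapdoor \(\xi \).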

Having generated \((\hat{\pi }_1, \hat{\pi }_2)\), the signer now generates \((\pi _{1}, \pi _{2}, \pi _{3}, \pi _{4}) \in \mathbb {G}_1^{4}\) such that

$$ (\pi _{1}, \pi _{2}, \pi _{3}, \pi _{4}) = (g_1^{\alpha }, Z^{\alpha }, g_1^{\beta }, Z^{\beta }). $$

These signature elements have been designed such that the random blinders \(\alpha , \beta \) are canceled out in the verifier’s equations.

The signer returns the output \( \sigma = (\pi _1, \pi _2, \pi _3, \pi _4, \hat{\pi }_{1}, \hat{\pi }_{2}). \)

Derive: The \(\mathsf {VUF.Derive} \) algorithm computes \(Z = \mathsf {Hash}_{\mathbb {G}_1}(m) \) and then returns

$$ T = e(Z, \hat{\pi }_2) e(\pi _2, \hat{h}_3) e(\pi _4, \hat{h}_4 ) $$

as the unique and unpredictable component. If the signer is honest then \(T = e(Z, \mathsf {sk}) = \mathsf {VUF.Eval} (\mathsf {crs}_{\mathsf {vuf}}, \mathsf {sk}, m)\).
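Spelling out the cancellation for an honestly generated signature (substituting the definitions of \(\hat{\pi }_2, \pi _2, \pi _4\) from above):

$$ T = e(Z, \mathsf {sk}\cdot \hat{h}_3^{-\alpha } \hat{h}_4^{-\beta })\, e(Z^{\alpha }, \hat{h}_3)\, e(Z^{\beta }, \hat{h}_4) = e(Z, \mathsf {sk})\, e(Z, \hat{h}_3)^{-\alpha + \alpha }\, e(Z, \hat{h}_4)^{-\beta + \beta } = e(Z, \mathsf {sk}). $$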

Verify: The \(\mathsf {VUF.Ver} \) algorithm parses the signature to check that \((\pi _{1}, \pi _{2}, \pi _{3}, \pi _{4})\) is in \( \mathbb {G}_1^{4} \), and \((\hat{\pi }_1, \hat{\pi }_2)\) is in \(\mathbb {G}_2^{2}\). The verifier computes Z identically to the signer, i.e., \( Z = \mathsf {Hash}_{\mathbb {G}_1}(m). \) The verifier then checks that three pairing equations are satisfied in order to be convinced that there exist \(\alpha , \beta \) such that

$$ (\pi _2, \pi _4, \hat{\pi }_2) = (Z^{\alpha }, Z^{\beta }, \hat{h}_3^{-\alpha } \hat{h}_4^{-\beta } \cdot \mathsf {sk}) $$

Specifically, they check that:

$$\begin{aligned} 1&= e(g_1, \hat{\pi }_1)\, e(\pi _1, \hat{h}_1)\, e(\pi _3, \hat{h}_2) \qquad &(2)\\ 1&= e(Z, \hat{\pi }_1)\, e(\pi _2, \hat{h}_1)\, e(\pi _4, \hat{h}_2) \qquad &(3)\\ e(\mathsf {pk}, \hat{h}_1)&= e(g_1, \hat{\pi }_2)\, e( \pi _1, \hat{h}_3 )\, e( \pi _3, \hat{h}_4 ) \qquad &(4) \end{aligned}$$

They return 1 if all these checks pass and 0 otherwise.
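As a sanity check on (4), substituting an honest signature with \(\mathsf {pk}= g_1^{a}\) and \(\mathsf {sk}= \hat{h}_1^{a}\) (writing \(\mathsf {pk}\) for its \(\mathbb {G}_1\) component):

$$ e(g_1, \hat{\pi }_2)\, e(\pi _1, \hat{h}_3)\, e(\pi _3, \hat{h}_4) = e(g_1, \mathsf {sk}\cdot \hat{h}_3^{-\alpha } \hat{h}_4^{-\beta })\, e(g_1^{\alpha }, \hat{h}_3)\, e(g_1^{\beta }, \hat{h}_4) = e(g_1, \hat{h}_1^{a}) = e(g_1^{a}, \hat{h}_1), $$

which is exactly the left-hand side of (4). Equations (2) and (3) follow by the same cancellation applied to \(\hat{\pi }_1\) with \(g_1\) and Z, respectively.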

Given a signature that satisfies these equations, an extractor that knows a trapdoor SXDH relation between the CRS elements can output a valid witness \(\mathsf {sk}\). However, there also exists a simulated CRS indistinguishable from random such that we can simulate signatures without knowing \(\mathsf {sk}\).

Threshold VUF scheme. We discuss how to transform our VUF into a threshold VUF. The individual VUF shares can be made shorter using an optimisation in the full version of our paper [38]. Suppose that there are n parties \(P_1, \ldots , P_n\) and we want any \(t + 1\) of them to be able to jointly sign a message, while any t of them cannot. We use Shamir’s secret sharing scheme and choose a degree t polynomial f(X). Let \(\omega _1, \ldots , \omega _n\) denote unique evaluation points and, for a set S of \(t + 1\) indices, let \(\{\ell _{S,i}(X)\}_{i \in S}\) denote the Lagrange polynomials such that for all \(j \in S\) we have that \(\ell _{S,i}(\omega _j)\) is equal to 1 if \(i = j\) and 0 otherwise; we give the explicit formula below.
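For concreteness, the Lagrange polynomials and the interpolation coefficients used during aggregation are the standard ones:

$$ \ell _{S,i}(X) = \prod _{j \in S, j \ne i} \frac{X - \omega _j}{\omega _i - \omega _j}, \qquad \ell _{S,i}(0) = \prod _{j \in S, j \ne i} \frac{-\omega _j}{\omega _i - \omega _j} = \prod _{j \in S, j \ne i} \frac{\omega _j}{\omega _j - \omega _i}. $$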

The threshold setup algorithm runs identically to the non-threshold version to return \(\mathsf {crs}_{\mathsf {vuf}}\). The key generation outputs a public key and n secret key shares of the form

$$ \mathsf {pk}= (g_1^{f(0)}, \hat{u}_1^{f(0)}), \mathsf {sk}_1 = \hat{h}_1^{f(\omega _1)}, \ldots , \mathsf {sk}_{n} = \hat{h}_1^{f(\omega _n)}. $$

To compute their share of the threshold signature on m, party \(P_i\) outputs

$$ \sigma _i = (\pi _{i,1}, \pi _{i,2}, \pi _{i,3}, \pi _{i,4}, \hat{\pi }_{i,1}, \hat{\pi }_{i,2}) \xleftarrow {\$}\mathsf {VUF.Sign} (\mathsf {crs}_{\mathsf {vuf}}, \mathsf {sk}_i, m) $$

To aggregate \(t + 1\) signature shares on m from parties \(\{P_i\}_{i \in S}\), compute

$$ \sigma = \left( \prod _{i \in S} \pi _{i,1}^{\ell _{S,i}(0)}, \prod _{i \in S} \pi _{i,2}^{\ell _{S,i}(0)}, \prod _{i \in S} \pi _{i,3}^{\ell _{S,i}(0)}, \prod _{i \in S} \pi _{i,4}^{\ell _{S,i}(0)}, \prod _{i \in S} \hat{\pi }_{i,1}^{\ell _{S,i}(0)}, \prod _{i \in S} \hat{\pi }_{i,2}^{\ell _{S,i}(0)} \right) $$

The verification and derive algorithms run identically to their non-threshold counterparts on the input \((\mathsf {crs}_{\mathsf {vuf}}, \mathsf {pk}, m, \sigma )\).

We briefly show that \(\sigma \) is correct. Set \(Z = \mathsf {Hash}_{\mathbb {G}_1}(m)\) and see that \(\sigma = (\pi _1, \pi _2, \pi _3, \pi _4, \hat{\pi }_1, \hat{\pi }_2)\) is given by

$$ \begin{array}{lll} \pi _1 &= \prod _{i \in S} \pi _{i,1}^{\ell _{S,i}(0)} &= g_1^{\sum _{i \in S} \alpha _i \ell _{S,i}(0)} \\ \pi _2 &= \prod _{i \in S} \pi _{i,2}^{\ell _{S,i}(0)} &= Z^{\sum _{i \in S} \alpha _i \ell _{S,i}(0)} \\ \pi _3 &= \prod _{i \in S} \pi _{i,3}^{\ell _{S,i}(0)} &= g_1^{\sum _{i \in S} \beta _i \ell _{S,i}(0)} \\ \pi _4 &= \prod _{i \in S} \pi _{i,4}^{\ell _{S,i}(0)} &= Z^{\sum _{i \in S} \beta _i \ell _{S,i}(0)} \\ \hat{\pi }_1 &= \prod _{i \in S} \hat{\pi }_{i,1}^{\ell _{S,i}(0)} &= \hat{h}_1^{ - \sum _{i \in S} \alpha _i \ell _{S,i}(0)} \hat{h}_2^{ - \sum _{i \in S} \beta _i \ell _{S,i}(0)} \\ \hat{\pi }_2 &= \prod _{i \in S} \hat{\pi }_{i,2}^{\ell _{S,i}(0)} &= \hat{h}_3^{ - \sum _{i \in S} \alpha _i \ell _{S,i}(0)} \hat{h}_4^{ - \sum _{i \in S} \beta _i \ell _{S,i}(0)} \prod _{i \in S} \mathsf {sk}_i^{\ell _{S,i}(0)} \\ & &= \hat{h}_3^{ - \sum _{i \in S} \alpha _i \ell _{S,i}(0)} \hat{h}_4^{ - \sum _{i \in S} \beta _i \ell _{S,i}(0)} \hat{h}_1^{ \sum _{i \in S} f(\omega _i) \ell _{S,i}(0) } \end{array} $$

Since f has degree t and \(|S| = t + 1\), Lagrange interpolation gives \( \sum _{i \in S} f(\omega _i) \ell _{S,i}(0) = f(0)\). Denote \(\alpha = \sum _{i \in S} \alpha _i \ell _{S,i}(0)\) and \(\beta = \sum _{i \in S} \beta _i \ell _{S,i}(0)\) in the above equation to get that

$$ (\pi _1, \pi _2, \pi _3, \pi _4, \hat{\pi }_1, \hat{\pi }_2) = (g_1^{\alpha }, Z^{\alpha }, g_1^{\beta }, Z^{\beta }, \hat{h}_1^{-\alpha } \hat{h}_2^{-\beta }, \hat{h}_3^{-\alpha } \hat{h}_4^{-\beta } \hat{h}_1^{f(0)}) . $$

Thus the threshold signature is distributed identically to the non-threshold counterpart and the verifier and deriver output 1 and \(e(Z, \hat{h}_1)^{f(0)}\), respectively.
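The aggregation above only needs the scalars \(\ell _{S,i}(0)\) as exponents. The following Rust sketch is a toy illustration of how these coefficients are computed and why they recover f(0); it uses a small illustrative prime modulus rather than the BLS12-381 scalar field arithmetic of our actual implementation (see Sect. 8).

```rust
// Toy prime field arithmetic; the real implementation works over the
// BLS12-381 scalar field, this modulus is purely illustrative.
const P: u64 = 2_147_483_647; // the Mersenne prime 2^31 - 1

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b % P) % P }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}
fn inv(a: u64) -> u64 { pow(a, P - 2) } // Fermat's little theorem

/// Lagrange coefficients ell_{S,i}(0) for the evaluation points omegas[i], i in S.
fn lagrange_at_zero(omegas: &[u64]) -> Vec<u64> {
    (0..omegas.len())
        .map(|i| {
            let mut coeff = 1;
            for j in 0..omegas.len() {
                if i != j {
                    // multiply by omega_j / (omega_j - omega_i)
                    coeff = mul(coeff, mul(omegas[j], inv(sub(omegas[j], omegas[i]))));
                }
            }
            coeff
        })
        .collect()
}

fn main() {
    // Degree-t polynomial f(X) = 7 + 3X + 5X^2 (so t = 2 and f(0) = 7).
    let f = |x: u64| add(7, add(mul(3, x), mul(5, mul(x, x))));
    // Any t + 1 = 3 distinct evaluation points suffice.
    let omegas = [2u64, 5, 11];
    let ells = lagrange_at_zero(&omegas);
    // sum_i f(omega_i) * ell_{S,i}(0) equals f(0); in the threshold VUF the same
    // coefficients appear only in the exponents of the aggregated shares.
    let recovered = omegas
        .iter()
        .zip(&ells)
        .fold(0, |acc, (&w, &l)| add(acc, mul(f(w), l)));
    assert_eq!(recovered, 7);
    println!("recovered f(0) = {recovered}");
}
```

In the scheme itself, the same coefficients are applied in the exponent of each signature share, which is why the aggregate is distributed identically to a signature under \(\mathsf {sk}= \hat{h}_1^{f(0)}\).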

Aggregatable signature scheme. It is also possible to use our VUF to instantiate an aggregatable signature scheme with secret keys as group elements. For aggregating, one simply takes the product of the public key elements output by \(\mathsf {VUF.Gen} \) and the signature elements output by \(\mathsf {VUF.Sign} \). Similar to the BLS scheme, this aggregatable signature scheme would be susceptible to rogue key attacks [49]. It is thus important to provide simulation-extractable proofs of knowledge of secret keys as part of a public key infrastructure.
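As a brief illustration of the rogue-key issue (a standard argument, not specific to our scheme): an adversary who may choose its public key after seeing an honest key \(\mathsf {pk}_1 = (g_1^{a}, \hat{u}_1^{a})\) could register

$$ \mathsf {pk}_2 = \left( g_1^{b} \cdot (g_1^{a})^{-1},\; \hat{u}_1^{b} \cdot (\hat{u}_1^{a})^{-1} \right) $$

for a known b, so that the aggregate public key equals \((g_1^{b}, \hat{u}_1^{b})\) and the adversary can sign on behalf of both parties using \(\hat{h}_1^{b}\) alone; requiring a proof of knowledge of each secret key rules this out.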

7.2 Security Analysis

To prove that our VUF is secure, we need to prove that it satisfies uniqueness and unpredictability.

Theorem 5

The VUF in Fig. 4 satisfies uniqueness (Definition 3) under the \(\mathsf {SXDH}\) assumption in the random oracle model.

Theorem 6

The VUF in Fig. 4 satisfies unpredictability (Definition 4) under the \(\mathsf {SXDH}\) and the \(\mathsf {BDH}\) assumption in the random oracle model.

We provide formal proofs of these theorems in the full version of our paper. Intuitively, uniqueness relies on the fact that it would be statistically impossible to satisfy the verifier’s equations for a wrong evaluation if the CRS were made up of an \(\mathsf {SXDH}\) instance. Since \(\mathsf {VUF.Eval} \) is deterministic, there can only be one correct evaluation. Thus, if an adversary could break uniqueness in the general case, then we could use them as a subroutine to distinguish \(\mathsf {SXDH}\) instances from random.

Our unpredictability proof uses an adversary that predicts the VUF output to compute a \(\mathsf {BDH}\) solution. To do this, we embed one component of the \(\mathsf {BDH}\) challenge into the public key being targeted, and the other into the adversary’s random oracle queries. However, we also need to simulate responses to the adversary’s signature requests, and to do this (after jumping to a hybrid game with a structured CRS) we need to program the oracle such that we know a discrete log. This could cause a collision, as the adversary may have already queried that point. To counteract this, we guess at random which oracle query the adversary will use for their prediction, and abort if we guess wrong. Thus our reduction is not tight, but it does give us a polynomial chance of success whenever the adversary succeeds.

After observing that our VUF is rekeyable, we prove the following corollary in the full version of our paper.

Corollary 3

The VUF in Fig. 4 is unique and unpredictable when instantiated with the DKG in Fig. 3.

8 Implementation

We implement our DKG and VUF and summarise the performance of our schemes in Tables 2 and 3. Our implementation is written in Rust on top of the \(\mathtt {libzexe}\) library, which performs efficient finite field arithmetic, elliptic curve arithmetic, and finite field FFTs. We evaluate our DKG and VUF on a desktop machine with an i7-8700k CPU at 3.7 GHz and 32 GB of DDR4 RAM. We use the BLS12-381 curve. For hashing to groups, we use the try-and-reject method: we instantiate a ChaCha20 RNG with a Blake2s hash of the input message, sample field elements, check whether each is a valid x-coordinate, and, if so, derive the corresponding point. Our implementation is not constant-time. Upon publication, we plan to release our implementation as open-source software.
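The following Rust sketch illustrates the try-and-reject idea on a toy short-Weierstrass curve over a small prime field. The modulus, the curve \(y^2 = x^3 + 7\), and the xorshift-based candidate stream are illustrative stand-ins, not the BLS12-381 parameters or the ChaCha20/Blake2s pipeline our implementation uses.

```rust
// Toy try-and-reject hashing to a curve point. Illustrative only.
const P: u64 = 2_147_483_647; // prime with P % 4 == 3, so sqrt(a) = a^((P+1)/4) when a is a square
const B: u64 = 7;             // curve y^2 = x^3 + B over F_P

fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }
fn pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    while exp > 0 {
        if exp & 1 == 1 { acc = mul(acc, base); }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Deterministic candidate stream seeded from the message (a stand-in for a CSPRNG).
struct Xorshift(u64);
impl Xorshift {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0 % P
    }
}

/// Hash a message to a point (x, y) with y^2 = x^3 + B mod P by rejection sampling.
fn hash_to_curve(msg: &[u8]) -> (u64, u64) {
    // Naive seed derivation; the real implementation seeds its RNG with a Blake2s hash.
    let seed = msg.iter().fold(0xdead_beef_u64, |acc, &b| {
        acc.wrapping_mul(1_000_003).wrapping_add(b as u64)
    });
    let mut rng = Xorshift(seed | 1);
    loop {
        let x = rng.next();
        let rhs = (mul(mul(x, x), x) + B) % P; // x^3 + B
        let y = pow(rhs, (P + 1) / 4);         // candidate square root
        if mul(y, y) == rhs {
            return (x, y); // x is a valid x-coordinate, accept
        }
        // otherwise reject this candidate and try the next one
    }
}

fn main() {
    let (x, y) = hash_to_curve(b"hello");
    assert_eq!(mul(y, y), (mul(mul(x, x), x) + B) % P);
    println!("hashed to curve point ({x}, {y})");
}
```

Since roughly half of all field elements are valid x-coordinates, the expected number of rejection rounds per hash is small and constant.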

We utilise a few optimisation techniques throughout the implementation. First, when verifying multiple pairing equations, we instead compute a randomised check of a single pairing equation so as to amortise the cost of the final exponentiations. We then compute the pairing product efficiently using the underlying \(\mathtt {libzexe}\) implementation. In the same vein, when verifying pairing equations in which two pairings are computed with respect to the same source group element, we combine the two into a randomised check. For large multi-exponentiations we use the \(\mathtt {libzexe}\) implementation of Pippenger’s algorithm. For large polynomial evaluations we use FFTs. We additionally utilise batch normalisation of projective points.
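For instance (a standard batching identity, stated here only to illustrate the idea), two pairing-product checks \(\prod _j e(A_j, \hat{B}_j) = 1\) and \(\prod _j e(C_j, \hat{D}_j) = 1\) can be replaced by a single check with a random \(r \xleftarrow {\$}\mathbb {F}\):

$$ \prod _j e(A_j, \hat{B}_j) \cdot \prod _j e(C_j^{r}, \hat{D}_j) = 1 . $$

If either original check fails, the combined check fails except with probability at most \(1/|\mathbb {F}|\) over the choice of r; it requires only one final exponentiation, and whenever \(\hat{B}_j = \hat{D}_j\) the corresponding pairings merge further into \(e(A_j \cdot C_j^{r}, \hat{B}_j)\).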

Table 2. The performance of our DKG, averaged across 10 samples of each operation. For n parties, we use a threshold of \(t = 2n/3\).

We evaluate our DKG with respect to 64, 128, 256, and 8192 parties. We see that the times taken to compute, verify, and aggregate a transcript all increase linearly in the number of parties. Verifying a transcript with 256 parties takes a little more than a second.

Table 3. The performance of our VUF (Sect. 7), our optimised VUF, and the BLS signature scheme. These numbers were averaged across four distinct runs, with 100 samples of each operation per run.

In addition to our VUF presented in Sect. 7, we also evaluate an optimised VUF that we present in the full version of our paper [38]. We compare the performance of our VUF and our optimised VUF with BLS [13], which is the state of the art in the random oracle model. We do not give the derivation time for BLS because its derivation is the identity function. Signing and verifying with our optimised VUF is only fractionally more expensive than with BLS, whereas verifying our full VUF is approximately twice as expensive.