Estimating the Bit Security of Pairing-Friendly Curves

Introduction

The use of pairings in cryptography began in 1993, when Menezes, Okamoto and Vanstone described what is now known as the MOV attack: a sub-exponential algorithm for solving the discrete logarithm problem on supersingular elliptic curves.1 It wasn’t until the following decade that efficient pairings were used constructively to build cryptographic protocols for identity-based encryption, short signature schemes and three-participant key exchanges.

These protocols (and many more) are now being adopted and implemented “in the wild”: BLS signatures, for example, are used in Ethereum 2.0, and the zk-SNARKs used by ZCash are built on top of a pairing-friendly curve.

Pairing-based protocols present an interesting challenge. Unlike many cryptographic protocols which perform operations in a single cryptographic group, pairing-based protocols take elements from two groups and “pair them together” to create an element of a third group. This means that when considering the security of a pairing-based protocol, cryptographers must ensure that any security assumptions (e.g. the hardness of the discrete logarithm problem) apply to all three groups.

The growing adoption of pairing-based protocols has inspired a surge of research interest in pairing-friendly curves; a special class of elliptic curves suitable for pairing-based cryptography. In fact, currently all practical implementations of pairing-based protocols use these curves. As a result, understanding the security of pairing-friendly curves (and their associated groups) is at the core of the security of pairing-based protocols.

In this blogpost, we give a friendly introduction to the foundations of pairing-based protocols. We will show how elliptic curves offer a perfect space for implementing these algorithms and describe at a high level the cryptographic groups that appear during the process.

After a brief review of how we estimate the bit security of protocols and the computational complexity of algorithms, we will focus on reviewing the bit-security of pairing-friendly curves using BLS-signatures as an example. In particular, we collect recent results in the state-of-the-art algorithms designed to solve the discrete logarithm problem for finite fields and explain how this reduces the originally reported bit-security of some of the curves currently in use today.

We wrap up the blogpost by discussing the real-world ramifications of these new attacks and the current understanding of what lies ahead. For those who wish to hold true to (at least) 128 bit-security, we offer references to new pairing-friendly curves which have been carefully generated with respect to the most recent research.

This blogpost was inspired by some previous work that NCC Group carried out for ZCash, which in-part discussed the bit-security of the BLS12-381 curve. This work was published as a public report.

Maths Note: throughout this blog post, we will assume that the reader is fairly comfortable with the notion of a mathematical group, finite fields and elliptic curves. For those who want a refresher, we offer a few lines in an Appendix: mathematical notation.

Throughout this blog post, we will use p to denote a prime and q = p^m, \; m>0 to denote a prime power. It follows that \mathbb{F}_p is a prime field, and \mathbb{F}_{q} is a prime-power field. We allow m = 1 so we can use \mathbb{F}_{q} when we wish to be generic. When talking about pairing elements, we will use the notation \mathbb{F}_{p^k} such that the embedding degree k is explicit.

Acknowledgements

Many thanks to Paul Bottinelli, Elena Bakos Lang and Phillip Langlois from NCC Group, and Robin Jadoul in personal correspondence for carefully reading drafts of this blog post and offering their expertise during the editing process.

Bilinear Pairings and Elliptic Curves

Pairing groups together

At a high level, a pairing is a special function which takes elements from two groups and combines them together to return an element of a third group. In this section, we will explore this concept, its applications to cryptography and the realisation of an efficient pairing using points on elliptic curves.

Let G_1, G_2 and G_T be cyclic groups of order r. For this blog post, we will assume an additive structure for G_1 and G_2 with elements P, Q and a multiplicative structure for the target group G_T. A pairing is a map e : G_1 \times G_2 \to G_T which is:

  • Bilinear: e([a]P, [b]Q) = e(P, Q)^{ab} for all integers a and b;
  • Non-degenerate: e does not send every pair (P, Q) to the identity of G_T;
  • Efficiently computable.

In the special case when the input groups G_1 = G_2, we say that the pairing is symmetric, and asymmetric otherwise.

From a practical perspective, let’s look at an example where we can use pairings cryptographically. Given a group with an efficient pairing e(\cdot, \cdot), we can use bilinearity to easily solve the decision Diffie-Hellman problem.2 Using a symmetric pairing, we can write

e([a]P, [b]P) = e(P,P)^{ab}, \qquad e(P, [c]P)= e(P,P)^c.

Therefore, given the DDH triple ([a]P, [b]P, [c]P), we can solve DDH by checking whether

e([a]P, [b]P) \stackrel{?}{=} e(P, [c]P),

which holds if and only if c \equiv ab \pmod r.
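
As a concrete sketch, we can check this relation numerically using the open-source py_ecc library (an assumption on our part; any library exposing a pairing works the same way). The pairing on BLS12-381, a curve we will meet later, is asymmetric, so the check below uses P \in G_1 and Q \in G_2 rather than the symmetric pairing written above:

    # Numerical check of bilinearity, assuming py_ecc (pip install py_ecc).
    # py_ecc's pairing(Q, P) takes Q in G2 and P in G1.
    from py_ecc.bls12_381 import G1, G2, multiply, pairing, curve_order

    a, b = 12345, 67890
    c = (a * b) % curve_order

    # e([a]P, [b]Q) equals e(P, [c]Q) exactly when c = a*b (mod r)
    lhs = pairing(multiply(G2, b), multiply(G1, a))
    rhs = pairing(multiply(G2, c), G1)
    assert lhs == rhs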

Pairing Points

🤓 The next few sections have more maths than most of the post. If you’re more keen to learn about security estimates and are happy to just trust that we have efficient pairings which take points on elliptic curves and return elements of \mathbb{F}^*_{p^k}, you can skip ahead to What Makes Curves Friendly? 🤓

Weil Beginnings

As the notation above may have hinted, one practical example of a bilinear pairing is when the groups G_1 and G_2 are generated by points on an elliptic curve. In particular, we will consider elliptic curves defined over a finite field: E(\mathbb{F}_q). We have a handful of practical bilinear pairings that we could use, but all of them take points on curves as an input and return an element of the group \mathbb{F}^*_{p^k}.

The first bilinear pairing for elliptic curve groups was proposed in 1940 by André Weil, and it is now known as the Weil pairing. Given an elliptic curve E(\mathbb{F}_q) with a cyclic subgroup G of order r, the Weil pairing takes points in the r-torsion E(\mathbb{F}_{p^k})[r] and returns an rth root of unity in \mathbb{F}^*_{p^k} for some integer k \geq 1.3

The r-torsion of the curve is the group E(\mathbb{F}_{q})[r], which consists of all the points on the curve whose order divides r: E(\mathbb{F}_{q})[r] = \{P \in E(\mathbb{F}_q) \; | \; [r]P = \mathcal{O} \}. The integer k appearing in the target group is called the (Weil) embedding degree and is the smallest integer k such that E(\mathbb{F}_{p^k})[r] has r^2 elements.4

To compute the value of the Weil pairing, you first need to be able to compute rational functions f_P and f_Q with a prescribed divisor.5 The Weil pairing is the quotient of these two functions:

e: E(\mathbb{F}_{p^k})[r] \times E(\mathbb{F}_{p^k})[r] \to \mathbb{F}^*_{p^k}, \qquad e(P,Q) = \frac{f_P(\mathcal{D}_Q)}{f_Q(\mathcal{D}_P)},

where the rational function f_S has divisor r(S) - r(\mathcal{O}) and \mathcal{D}_S is a divisor (S) - (\mathcal{O}), for a point S \in E(\mathbb{F}_{p^k}) on the curve.

When Weil first proposed his pairing, the known algorithms to compute these special functions ran in time exponential in r, so computing the pairing for a cryptographically sized curve was infeasible.

It wasn’t until 1986, when Miller introduced his linear-time algorithm to compute functions on algebraic curves with given divisors, that we could think of pairings in a cryptographic context. Miller’s speed-up is analogous to how the “square and multiply” trick speeds up exponentiation, and takes \log(r) steps. As with square and multiply, the lower the Hamming weight of the order r, the faster Miller’s algorithm runs. For a fantastic overview of working with Miller’s algorithm in practice, we recommend Ben Lynn’s thesis.
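
To see the role of Hamming weight concretely, here is an instrumented square-and-multiply sketch (the toy values are our own). Miller’s algorithm has the same loop shape: one “doubling” step per bit of r, plus an extra “addition” step per set bit:

    # Right-to-left square-and-multiply, instrumented to count operations.
    # A low-Hamming-weight exponent triggers fewer multiplications, just as
    # a low-Hamming-weight r means a cheaper Miller loop.
    def square_and_multiply(base, exponent, modulus):
        result, squarings, multiplications = 1, 0, 0
        acc = base % modulus
        for bit in reversed(bin(exponent)[2:]):
            if bit == '1':
                result = (result * acc) % modulus
                multiplications += 1
            acc = (acc * acc) % modulus
            squarings += 1
        return result, squarings, multiplications

    # 0x10001 (weight 2) vs 0x1ffff (weight 17): same bit length,
    # very different multiplication counts.
    for e in (0x10001, 0x1ffff):
        _, sq, mul = square_and_multiply(3, e, 101)
        print(f"e={e:#x}: {sq} squarings, {mul} multiplications")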

Tate and Optimised Ate Pairings

The Weil pairing is a natural way to introduce elliptic curve pairings (both historically and conceptually), but as Miller’s algorithm must be run twice to compute both f_P and f_Q, the Weil pairing can hamper the performance of a pairing-based protocol.

In contrast, Tate’s pairing t(\cdot, \cdot) requires only a single call to Miller’s algorithm and can be computed from

t(P, Q) = f_P(\mathcal{D}_Q)^{\frac{p^k-1}{r}},

leading to more efficient pairing computations. We can relate the Weil and Tate pairings by the relation:

e(P,Q)^{\frac{p^k-1}{r}} = \frac{t(P,Q)}{t(Q,P)}.

The structure of the input and output for the Tate pairing is a little more complicated than for the Weil pairing. Roughly, the first input point P should still be in the r-torsion, but now the second point Q need not have order r. The output of the Tate pairing is an element of the quotient group \mathbb{F}^*_{p^k} / (\mathbb{F}^*_{p^k})^r.

For the Weil pairing, the embedding degree was picked to ensure we had all r^2 of our torsion points in E(\mathbb{F}_{p^k})[r]. However, for the Tate pairing, we instead only require that \mathbb{F}_{p^k} contains the rth roots of unity, which is satisfied when the order of the curve point r divides (p^k - 1).6

Due to the Balasubramanian-Koblitz theorem, for all cases other than k = 1, the Weil and Tate embedding degrees are the same. Because of this, for the rest of this blog post we will just refer to k as the embedding degree and think of it in Tate’s context; checking that one number divides another is simpler than extending a field until the full r-torsion appears!
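
Computing the embedding degree in Tate’s context is then a few lines of code: it is simply the multiplicative order of p modulo r. A minimal sketch, with toy values of our own choosing:

    # The (Tate) embedding degree is the smallest k >= 1 with r | p^k - 1,
    # i.e. the multiplicative order of p modulo r (we need gcd(p, r) = 1,
    # which always holds when r divides the order of a curve over F_p).
    def embedding_degree(p, r):
        k, acc = 1, p % r
        while acc != 1:
            acc = (acc * p) % r
            k += 1
        return k

    # Toy example: 11 = 2 (mod 3) and 11^2 = 121 = 1 (mod 3), so k = 2.
    print(embedding_degree(11, 3))   # 2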

The exponentiation present in the Tate pairing can be costly, but it can be further optimised, and for most implementations of pairing protocols a third pairing, the Ate pairing, is picked for best performance. As with the Weil pairing, the Ate pairing can be written in terms of the Tate pairing. The optimised Ate pairing has the shortest Miller loop and is the pairing of choice for modern implementations of pairing protocols.

What Makes Curves Friendly?

When picking parameters for cryptographic protocols, we’re concerned with two problems:

  • The protocol must be computationally efficient.
  • The best known attacks against the protocol must be computationally expensive.

Specifically for elliptic curve pairings, which map points on curves to elements of \mathbb{F}_{p^k}, we need to consider the security of the discrete logarithm problem for all groups, while also ensuring that computation of the chosen pairing is efficient.

For the elliptic curve groups, it is sufficient to ensure that the points used generate a prime order subgroup which is cryptographically sized. More subtle is how to pick curves with the “right” embedding degree.

  • If the embedding degree is too large, computing the pairing is computationally very expensive.
  • If the embedding degree is too small, sub-exponential algorithms can solve the discrete log problem in \mathbb{F}^*_{p^k} efficiently.

Given a prime p and picking a random elliptic curve E(\mathbb{F}_p), the embedding degree is distributed (fairly) evenly across the range 0 < k < p. As we are working with cryptographically sized primes p, the embedding degree is then also expected to be cryptographically sized, and the probability of finding a small k is astronomically small.7 Aside from the complexity of computing pairings when the degree is so large, we would also very quickly run out of memory just trying to represent an element of \mathbb{F}^*_{p^k}.

One trick to ensure we find curves with small embedding degree is to consider supersingular curves. For example, picking a prime p \equiv 3 \pmod 4 and defining the curve

E(\mathbb{F}_p): y^2 = x^3 + x,

we always find that the number of points on the curve is \# E(\mathbb{F}_p) = p + 1. Picking k = 2, we can factor (p^2 - 1) = (p+1)(p-1). As the order of the curve \#E(\mathbb{F}_p) divides (p^2 - 1), we see that the embedding degree is k = 2.
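
We can verify both facts with a naive point count over a toy prime (a brute-force sketch; counting points on cryptographically sized curves is done with Schoof-style algorithms instead):

    # Naively count points on E: y^2 = x^3 + x over F_p for a toy prime
    # p = 3 (mod 4), confirming #E = p + 1, which divides p^2 - 1 (k = 2).
    p = 103                              # 103 = 3 (mod 4)
    squares = {(y * y) % p for y in range(p)}
    count = 1                            # the point at infinity
    for x in range(p):
        rhs = (x**3 + x) % p
        if rhs == 0:
            count += 1                   # one point of the form (x, 0)
        elif rhs in squares:
            count += 2                   # two points (x, y) and (x, -y)
    print(count, p + 1)                  # 104 104
    assert (p * p - 1) % count == 0      # #E divides p^2 - 1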

The problem with this curve is that ensuring the discrete log problem in \mathbb{F}_{p^2} is cryptographically hard would require a p so large that working with the curves themselves would be too cumbersome. Similar arguments with primes of different forms allow the construction of other supersingular curves, but whatever method is used, all supersingular curves have k \leq 6, which is usually considered too small for pairing-based protocols.

If we’re not finding these curves randomly and we can’t use supersingular curves, then we have to sit down and find friendly curves carefully using mathematics. In particular, we use complex multiplication to construct our special elliptic curves. This has been an active area of research for the past twenty years, and cryptographers have been carefully constructing families of curves which:

  • Have large, prime-order subgroups.
  • Have a small (but not too small!) embedding degree, usually 12 \leq k \leq 24.

Additionally, further conditions, such as parameters with low Hamming weight to speed up the Miller loop, are considered “nice-to-have”.

Curves with these properties are perfect for pairing-based protocols and so we call them pairing-friendly curves!

An invaluable repository of pairing-friendly curves and links to corresponding references is Aurore Guillevic’s Pairing Friendly Curves. Guillevic is a leading researcher in estimating the security of pairing-friendly curves and has worked to tabulate many popular curves and their best use cases.

A familiar face: BLS12-381

In 2002, Barreto, Lynn and Scott published a paper, Constructing Elliptic Curves with Prescribed Embedding Degrees, and curves generated with this method are in the family of “BLS curves”.

The specific curve BLS12-381 was designed by Sean Bowe for ZCash after research by Menezes, Sarkar and Singh showed that the Barreto–Naehrig curve, which ZCash had previously been using, had a lower than anticipated security.

This is a popular curve used in many protocols that we see while auditing code here at NCC, so it’s a perfect choice as an example of a pairing-friendly curve.

Let’s begin by writing down the curve equations and mention some nice properties of the parameters. The two groups G_1 \subset E_1 and G_2 \subset E_2 have a 255 bit prime order r and are generated by points from the respective curves:

E_1(\mathbb{F}_p) : y^2 = x^3 + 4, \\ \\ E_2(\mathbb{F}_{p^2}) : y^2 = x^3 + 4(1 + u), \qquad \mathbb{F}_{p^2} = \mathbb{F}_p[u] / (u^2 + 1).

The embedding degree is k = 12, G_T = \mathbb{F}^*_{p^{12}}, and the characteristic of the field p has 381 bits (hopefully now the name BLS12-381 makes sense!). Both p and r are derived from the curve’s seed, the low Hamming weight integer z = -0xd201000000010000:

p = \tfrac{1}{3} (z - 1)^2(z^4 - z^2 + 1) + z, \qquad r = z^4 - z^2 + 1.

The low Hamming weight of z reduces the cost of the Miller loop when computing pairings and, to aid zkSNARK schemes, r - 1 is chosen such that it is divisible by a large power of two (precisely, 2^{32}).
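
All of these properties are quick to check directly from the seed (the value of z above is the published BLS12-381 seed):

    # Reconstruct the BLS12-381 parameters from the seed z and check the
    # advertised properties.
    z = -0xd201000000010000              # 64 bits, Hamming weight 6

    r = z**4 - z**2 + 1
    p = (z - 1)**2 * (z**4 - z**2 + 1) // 3 + z

    print(p.bit_length())                # 381
    print(r.bit_length())                # 255
    assert (r - 1) % 2**32 == 0          # 2^32 divides r - 1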

Ben Edgington has a fantastic blog post that carefully discusses BLS12-381 and is a perfect resource for anyone wanting to know more about this particular curve. For a programming-first look at the curve, NCC’s Eric Schorn has written a set of blog posts implementing pairings on BLS12-381 in Haskell, then optimising pairings on BLS12-381 by implementing Montgomery arithmetic in Rust, and further improving performance using assembly.

Something Practical: BLS Signatures

Let’s finish this section with an example, looking at how a bilinear pairing gives an incredibly simple short signature scheme proposed by Boneh, Lynn and Shacham.9 A “short” signature scheme is a signing protocol which returns small signatures (in bits) relative to their cryptographic security.

As above, we work with groups G_1, G_2, G_T of prime order r. For all practical implementations of BLS-signatures, G_1 and G_2 are prime order groups generated by points on elliptic curves and G_T = \mathbb{F}^*_{p^k} where k is the embedding degree. We could, for example, work with the above BLS12-381 curve and have k = 12.

Key generation is very simple: we pick a random integer x within the range 0 < x < r. The corresponding public key is a = [x]g_1 \in G_1, where g_1 is a generator of G_1.

To sign a message m, we represent the hash of this message H(m) as a point h \in G_2 on the curve.8 The signature is found by performing scalar multiplication of the point: \sigma = [x]h.

Pairing comes into play when we verify the signature \sigma. The verifier uses the broadcast points (a, \sigma) and the message m to check whether

e(g_1, \sigma) \stackrel{?}{=} e(a, h),

where the scheme uses the bilinearity of the pairing:

e(g_1, \sigma) = e(g_1, [x]h) = e(g_1, h)^x = e([x]g_1, h) = e(a, h).
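
For a hands-on sketch of the whole flow, we can lean on py_ecc again, which (an assumption on our part) ships an implementation of the IETF BLS signature draft matching the description above, with public keys in G_1 and signatures in G_2:

    # End-to-end BLS signature sketch, assuming py_ecc's implementation of
    # the IETF BLS draft (pip install py_ecc).
    from py_ecc.bls import G2ProofOfPossession as bls

    secret_key = 0x1337                  # toy key; use a CSPRNG in practice
    public_key = bls.SkToPk(secret_key)  # a = [x]g_1, 48-byte compressed G1

    message = b"pairings are fun"
    signature = bls.Sign(secret_key, message)   # sigma = [x]H(m), in G2

    assert bls.Verify(public_key, message, signature)
    assert not bls.Verify(public_key, b"forged message", signature)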

Picking groups

For pairing-friendly curves such as BLS or Barreto-Naehrig (BN) curves, G_1 has points with coordinates in \mathbb{F}_p while G_2 has points with coordinates in \mathbb{F}_{p^2}. This means that the compressed points of G_2 are twice as large as those from G_1 and operations with points from G_2 are slower than those from G_1.

When designing a signature system, protocol designers can choose whether public keys or signatures are the elements of G_1, which speeds up either key generation or signing respectively. For example, ZCash picks G_1 for its signatures, while Ethereum picked G_1 for its public keys in ETH2. Note that neither choice affects the benchmarks for verification; this is instead limited by the size (and Hamming weight) of the integers k, r and p.

A Bit on Bit Security

Infeasible vs. Impossible

Cryptography is built on a set of very hard, but not impossible, problems.

For authenticated encryption, such as AES-GCM or ChaCha20-Poly1305, correctly guessing the key allows you to decrypt encrypted data. For asymmetric protocols, whether it’s computing a factor of an RSA modulus, or the private key for an elliptic curve key-pair, you’re usually one integer away from breaking the protocol. Using the above BLS signature example, an attacker knowing Alice’s private key x would allow them to impersonate Alice and sign any chosen message.

Cryptography is certainly not broken though. To ensure our protocols are secure, we instead design problems which we believe are infeasible to solve given any reasonable amount of time. We often describe the complexity of breaking a cryptographic problem by estimating the number of operations we would need to perform to correctly recover the secret. If a certain problem requires performing 2^n operations, then we call this an n bit-strength problem.10

As an example, let’s consider AES-256 where the key is some random 256-bit sequence.11 Currently, the best known attack to decrypt a given AES ciphertext, encrypted with an unknown key, is to simply guess a random key and attempt to decrypt the message. This is known as a brute-force attack. Defining our operation as: “guess a key, attempt to decrypt”, we’ll need at most 2^{256} operations to stumble upon the secret key. We therefore say that AES-256 has 256-bit security. This same argument carries over exactly to AES-128 and AES-192 which have 128- and 192-bit security respectively, or the ChaCha/Salsa ciphers which have either 128 or 256-bit keys.

How big is big?

Humans can have a tough time really appreciating how large big numbers are,12 but even for the “lower” bound of 128-bit security, estimates end up comparing computation time with a galaxy of computers working for the age of the universe before terminating with the correct key! For those interested in a visual picture of quite how big this number is, there’s a fantastic video by 3blue1brown, How Secure is 256 bit security?, and if you’re skimming this blogpost and would rather do some reading, this StackExchange question has some entertaining financial costs for performing 2^{256} operations.

Asymmetric Bit-Security

For a well-designed symmetric cipher, the bit-strength is set by the cost of a brute-force search across the key-space. Similarly for cryptographic hash functions, the bit-strength for preimage resistance is the bit-length of the hash output (collision resistance is always half of this due to the Birthday Paradox).

For asymmetric protocols, estimating the precise bit-strength is more delicate. The security of these protocols relies on mathematical problems which are believed to be computationally hard. For RSA, security is assured by the assumed hardness of integer factorisation. For Diffie-Hellman key exchanges, security comes from the assumed hardness of the computational Diffie-Hellman problem (and by reduction, the discrete logarithm problem).

Although we can still think of breaking these protocols by “guessing” the secret values (a prime factor of a modulus, the private key of some elliptic curve key-pair), our best attempts are significantly better than a brute-force approach; we can take the mathematical structure which allows the construction of these protocols and build algorithms to recover secret values from shared, public ones.

To have an accurate security estimate for an asymmetric protocol, we must then have a firm grasp of the computational complexity of the state-of-the-art attacks against the specific hard problem we consider. As a result, parameters of asymmetric protocols must be fine-tuned as new algorithms are discovered to maintain assurance of a certain bit-strength.

To attempt to standardise asymmetric ciphers, cryptographers generate their parameters (e.g. picking the size of the prime factors of an RSA modulus, or the group order for a Diffie-Hellman key-exchange) such that the current best known attacks require (at least) 2^{128}, 2^{192} or 2^{256} operations.

RSA Estimates: Cat and Mouse

As a concrete example, we can look at the history of estimates for RSA. When RSA was first presented in 1977, Rivest, Shamir and Adleman conservatively estimated that a 664-bit modulus would take 4 billion years (corresponding to \sim 2^{77} operations) to factor. The largest modulus they proposed had 1600 bits, with an estimated 130-bit strength based on the state-of-the-art factoring algorithm at the time (due to Schroeppel).

The RSA algorithm gave a new motivation for the design of faster factoring algorithms, each of which pushed the necessary bit-size of the modulus higher to ensure that RSA could remain safe. The current record for integer factorisation is an 829-bit modulus, set on Feb 28th, 2020, taking “only” 2700 core-years. Modern estimates require a 3072-bit modulus to achieve at least 128-bit security for RSA implementations.

Estimating computational complexity

Let’s finish this section with a discussion of how we denote computational complexity. The aim is to give some intuition for the “L-notation” used by researchers in computational number theory when estimating the asymptotic complexity of their algorithms.

Big-O Notation

A potentially more familiar notation for our readers is “Big-\mathcal{O}” notation, which estimates the number of operations as an upper bound based on the input to the algorithm. For an input of length n, a linear time algorithm runs in \mathcal{O}(n), which means if we double the length of n, we would expect the running time to also double. A naive implementation of multiplication of numbers with n digits runs in \mathcal{O}(n^2) or quadratic time. Doubling the length of the input, we would expect a four-times increase in computation time. Other common algorithms run in logarithmic: \mathcal{O}(\log n), polylogarithmic: \mathcal{O}((\log n)^c) or polynomial time: \mathcal{O}(n^c), where c is a fixed constant. An algorithm which takes the same amount of time regardless of input length is known as constant time, denoted by \mathcal{O}(1).

In order to protect against side-channel attacks, constant time algorithms are particularly important when implementing cryptographic protocols. If a secret value is supplied to a variable-time algorithm, an attacker can carefully measure the computation time and learn about private information. By ensuring algorithms run in constant time, timing measurements cannot leak private values when they are supplied as arguments. Note that constant does not (necessarily) mean fast! Looking up values in hashmaps is \mathcal{O}(1) and fast. Computing the scalar multiplication of elliptic curve points, given a fixed elliptic curve E(\mathbb{F}_q), for cryptographic use should be \mathcal{O}(1), but this isn’t a particularly fast thing to do! In fact, converting operations to be constant time in cryptography will often slow down the protocol, but it’s a small price to pay to protect against timing attacks.

As a final example, we mention an algorithm developed by Shanks: a meet-in-the-middle algorithm for solving the discrete logarithm problem in a generic Abelian group. It is sufficient to consider prime order groups, as the Pohlig-Hellman algorithm reduces the problem in a group of composite order to its prime order subgroups, reconstructing the solution using the Chinese remainder theorem.

For a group of prime order p, a naive brute-force attack for the discrete logarithm problem would take at most \mathcal{O}(p) operations. Shanks’ Baby-Step-Giant-Step (BSGS) algorithm runs in two stages, and like all meet in the middle algorithms, the time complexity of the algorithm can be reduced with a time/space trade off.

The BSGS algorithm takes in group elements P, Q \in G and looks for an integer n such that Q = [n]P. First we perform the baby steps, where M = \lceil{\sqrt{p}}\;\rceil multiples R_i = [x_i]P are computed and stored in a hashmap. Next come the giant steps, where up to M group elements S_j = Q + [y_j]P are computed. At each giant step, a check for S_j = R_i is performed. When a match is found, the algorithm terminates and returns n = x_i - y_j. All in, BSGS takes \mathcal{O}(\sqrt{p}) operations to solve the discrete logarithm problem and requires \mathcal{O}(\sqrt{p}) space for the hashmap. Pollard’s rho algorithm achieves the same \mathcal{O}(\sqrt{p}) time complexity without the need to build the large hashmap, which can be prohibitively expensive as the group’s order grows.
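
A compact sketch of BSGS for the multiplicative group \mathbb{F}_p^* (toy parameters are our own; the identical idea applies to elliptic curve groups):

    # Baby-Step-Giant-Step: find n with g^n = h (mod p), in O(sqrt(order))
    # time and O(sqrt(order)) memory.
    from math import isqrt

    def bsgs(g, h, p, order):
        m = isqrt(order) + 1
        baby = {}                        # baby steps: g^j -> j
        gj = 1
        for j in range(m):
            baby.setdefault(gj, j)
            gj = (gj * g) % p
        factor = pow(g, -m, p)           # g^(-m) mod p (Python 3.8+)
        gamma = h % p
        for i in range(m):               # giant steps: h * g^(-m*i)
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = (gamma * factor) % p
        return None

    p = 101                              # toy prime; 2 generates F_101*
    print(bsgs(2, pow(2, 77, p), p, p - 1))   # 77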

L-notation

L-notation is a specialised notation used for many of the best-case time complexity approximations for cryptographically hard problems such as factoring, or solving the discrete logarithm for finite fields.

L-notation gives the expected time complexity for an algorithm with input of length n as we allow the length to become infinitely large:

L_n[\alpha, c] = \exp \left\{ (c + o(1))(\ln n)^\alpha (\ln \ln n)^{1-\alpha} \right\}

Despite the intimidating expression, we can get a rough picture by understanding how the sizes of \alpha and c affect the expected computational complexity.

Small reductions in c marginally improve performance, while reductions in \alpha are usually linked with a significant improvement in an algorithm’s performance. Efforts to reduce the effect of the o(1) term are known as “practical improvements” and allow for minor speed-ups in computation time, without adjusting the complexity estimate.

For cryptographers, L-notation appears mostly when considering the complexity of integer factorisation, or solving the discrete logarithm problem in a finite field. In fact, these two problems are intimately related and many sub-exponential algorithms suitable for one of these problems have been adapted to the other.13

The first sub-exponential algorithm for integer factorisation was the index-calculus attack, developed by Adleman in 1979, which runs in L_n[\frac{1}{2}, \sqrt{2}]. Two years later, Pomerance (who also popularised L-notation itself) showed that his quadratic sieve runs asymptotically faster, in L_n[\frac{1}{2}, 1] time. Lenstra’s elliptic curve factoring algorithm runs in L_p[\frac{1}{2}, \sqrt{2}], with the interesting property that the running time is bounded by the smallest prime factor p of n, rather than by the number n we wish to factor itself.

The next big step was due to Pollard, Lenstra, Lenstra and Manasse, who developed the number field sieve: an algorithm designed for integer factorisation which would later be adapted to solve the discrete logarithm problem in \mathbb{F}_q^*. Their improved algorithm reduced \alpha in the L-complexity, with an asymptotic running time of L_n[\frac{1}{3}, c].

Initially, the algorithm was designed for the specific case where the input integer is required to have a special form: n = r^e \pm s, where both r and s are small. Because of this requirement, this version is known as the special number field sieve (SNFS) and has c = \sqrt[3]{\frac{32}{9}} \simeq 1.53. This is great for numbers such as the Mersenne numbers 2^n - 1, but fails for general integers.

Efforts to generalise this algorithm resulted in the general number field sieve (GNFS), with only a small increase in the complexity, to c = \sqrt[3]{\frac{64}{9}} \simeq 1.92. Very quickly, the GNFS was adapted to solving discrete logarithm problems in \mathbb{F}^*_p, but it took another ten years before this was further generalised to \mathbb{F}^*_{q} and renamed the tower number field sieve (TNFS), which has the same complexity as the GNFS.

As a “hand-wavey” approximation, problems are cryptographically hard when the input has a few hundred bits for \mathcal{O}(\sqrt{n}) complexity, and a few thousand bits for sub-exponential complexity. For example, if the best known attack runs in \mathcal{O}(\sqrt{n}) time, we need n to have 256 bits to ensure 128-bit strength. For RSA, where a modulus can be factored in \sim L_n[\frac{1}{3}, 1.92] time, we need a modulus of 3072 bits to reach 128-bit security.
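
Setting o(1) = 0 turns L-notation into a crude bit-security calculator. This is a ballpark sketch only; serious estimates model the hidden constants and practical sieving costs far more carefully:

    # Rough bit-security from L-notation with o(1) = 0.
    from math import exp, log

    def l_complexity_bits(n_bits, alpha, c):
        ln_n = n_bits * log(2)
        ops = exp(c * ln_n**alpha * log(ln_n)**(1 - alpha))
        return log(ops, 2)

    GNFS_C = (64 / 9) ** (1 / 3)         # ~1.92 (GNFS / TNFS)
    SNFS_C = (32 / 9) ** (1 / 3)         # ~1.53 (SNFS)

    # A 3072-bit RSA modulus under the GNFS: roughly the nominal 128 bits
    print(round(l_complexity_bits(3072, 1 / 3, GNFS_C)))   # ~139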

Estimating the bit security of pairing protocols

With an overview of pairings and complexity estimation for asymmetric protocols, let’s combine these two areas together and look at how we can estimate the security of pairing-friendly curves.

As a concrete example, let’s return to the BLS signature scheme explained at the beginning of the post, and study the assumed hard problems of the protocol together with the asymptotic complexity of their best-known attacks. This offers a great way to look at the security assumptions of a pairing protocol in a practical way. We can take what we study here and apply the same reasoning to any pairing-based protocol which relies on the hardness of the computational Diffie-Hellman problem for its security proofs.

In a BLS signature protocol, the only private value is the private key x which is bounded within the interval 0 < x < r. If this value can be recovered from any public data, the protocol is broken and arbitrary data can be signed by an attacker. We can therefore estimate the bit-security of a pairing-friendly curve in this context by estimating the complexity to recover x from the protocol.

Note: for the following discussion we assume the protocol is implemented as intended and attacks can only come from the solution of problems assumed to be cryptographically hard. Implementation issues such as side-channel attacks, small subgroup attacks or bad randomness are assumed to not be present.

We assume a scenario where Alice has signed a message m with her private key x. Using Alice’s public key a = [x]g_1, Bob will attempt to verify her signature \sigma = [x]h before trusting the message. Eve, our attacker, wants to learn the value of x such that she can sign arbitrary messages while impersonating Alice.

In a standard set up, Eve has access to the following data:

  • The generator g_1 \in G_1, the message m, and hence the hashed point h = H(m) \in G_2.
  • Alice’s public key a = [x]g_1.
  • Alice’s signature \sigma = [x]h.

Additionally, using these known values, Eve can efficiently compute the pairings

s = e(g_1, h), \qquad t = e(g_1, \sigma) = e(a, h) = s^x.

To recover the private key, Eve has the option to attempt to solve the discrete log problem in any of the three groups G_1, G_2 and G_T:

x = \log_{g_1}(a), \qquad x = \log_{h}(\sigma), \qquad x = \log_{s}(t)

The bit-security for the BLS-signature is therefore the minimum number of operations needed to be performed to solve any of the three problems.

Elliptic Curve Discrete Logarithm Problem

Given a point and its scalar multiple: P, Q = [n]P \in E(\mathbb{F}_q), the elliptic curve discrete logarithm problem (ECDLP) is to recover the value of n. Since Miller (1986) and Koblitz (1987) independently suggested using this problem for cryptographic protocols, no attacks have been developed for generic curves which are any faster than those which we have for a generic abelian group.14

Therefore, to solve either of the first two problems:

x = \log_{g_1}(a), \qquad x = \log_{h}(\sigma) ,

Eve has no choice but to perform \mathcal{O}(\sqrt{r}) curve operations to recover the value of x. This makes estimating the bit security of these two problems very easy: to enjoy n-bit security, we only need to ensure that the prime order r has at least 2n bits. Remember that this is only true when the subgroup has prime order. If r were composite, the number of operations would instead be \mathcal{O}(\sqrt{\ell}), where \ell is the largest prime factor of r, reducing the security of the curve.

Note: As G_1 is usually generated by points on the curve E_1(\mathbb{F}_p), while E_2(\mathbb{F}_{p^2}) is defined over an extension field, the CPU time to solve problem one will be lower than for problem two, as an individual operation on E_1 is more efficient than one on E_2.

Finite Field Discrete Logarithm Problem

In contrast to the group of points on an elliptic curve, mathematicians have been able to use the structure of the finite fields \mathbb{F}_{q} to derive sub-exponential algorithms to solve the discrete logarithm problem.

Up until fairly recently, the best known attack for solving the discrete logarithm problem in \mathbb{F}_{q}^* was the tower number field sieve, which runs in L_q[\frac{1}{3}, \sqrt[3]{\frac{64}{9}}] time. To reach 128-bit security, we would need a 3072-bit modulus q; for pairing-friendly curves with a target group \mathbb{F}_{p^k}^*, we would then need the characteristic of our field to have \sim{3072}/{k} bits. Additionally, if we work within a subgroup of \mathbb{F}_q^*, this subgroup must have an order whose largest prime factor has at least 256 bits, to protect against generic attacks such as Pollard’s rho or BSGS.

Just as with the original SNFS, where factoring numbers of a special form had a lower asymptotic complexity, recent research in solving the discrete logarithm problem for fields \mathbb{F}^*_{p^k} shows that when either p or k has certain properties, the complexity is lowered:

  • When the embedding degree k is composite, the extended tower number field sieve (exTNFS) lowers the complexity to L_q[\frac{1}{3}, \sqrt[3]{\frac{48}{9}}].
  • When, additionally, the characteristic p has a special (sparse) form, the special extended tower number field sieve (SexTNFS) lowers it further to L_q[\frac{1}{3}, \sqrt[3]{\frac{32}{9}}], matching the SNFS constant.

In pairing-friendly curves, the characteristic of the field is chosen to be sparse for performance, and the embedding degree is often composite, e.g. k = 12 for both the BLS12 and BN curves. This means the bit-security of many pairing-friendly curves should be estimated by the complexity of the special case SexTNFS, rather than the generic TNFS.

New Estimates for pairing-friendly curves

Discovery of the SexTNFS meant that almost all pairing-friendly curves generated before it had to be re-evaluated to estimate the new bit-security of the discrete logarithm problem.

This was first addressed in Challenges with Assessing the Impact of NFS Advances on the Security of Pairing-based Cryptography (2016), where, among other estimates, the authors conducted experiments on the complexity of a 256-bit BN curve, which had a previously estimated security of 128 bits. With the advances of the SexTNFS, they estimated a new security level of between 110 and 150 bits, the range bounded by the value of the constants appearing in the o(1) term of L_n[\alpha, c].

The simplification o(1) = 0 was graphed in Updating key size estimations for pairings (2019; eprint 2017), which allows an estimate of expected key sizes for pairing-friendly curves. With the advances of the SexTNFS, the bit length of q = p^k must increase to approximately 5004 bits for 128-bit security, and 12871 bits for 192-bit security.

[Diagram: bit strength estimates for the TNFS, exTNFS and SexTNFS]

In the same paper, the authors argue that a precise analysis of the o(1) constant term is needed, especially for the newer exTNFS and SexTNFS. Using careful polynomial selection in the sieving process, they estimate that the 256-bit BN curve has only 100-bit security, and that other pairing-friendly curves are similarly affected. They conclude that for 128-bit security, we should be considering pairing-friendly curves with larger primes, such as:

  • A BLS-12 curve over a 461-bit field (previously 381-bit)
  • A BN curve over a 462-bit field (previously 256-bit)

Guillevic released A short-list of pairing-friendly curves resistant to Special TNFS at the 128-bit security level (2019), looking again at the estimated bit security of curves, and published a list of the best curves for each use case with (at least) 128-bit security. Additionally, there is an open-source repository of SageMath code useful for estimating the security of a given curve: https://gitlab.inria.fr/tnfs-alpha/alpha.

In short, Guillevic recommends

For efficient non-conservative pairings, choose BLS12-381 (or any other BLS12 curve or Fotiadis–Martindale curve of roughly 384 bits), for conservative but still efficient, choose a BLS12 or a Fotiadis–Martindale curve of 440 to 448 bits.

with the motivation that

The security between BLS12-446 and BLS12-461 is not worth paying an extra machine word, and is close to the error margin of the security estimate.

Despite the recent changes in security estimates, there is a wealth of new curves which promise 128-bit security, many of which can be dropped into place in current pairing protocols. For those who already have a working implementation using BLS12-381 in a more rigid setting, experiments with Guillevic’s Sage code suggest that applying the SexTNFS only marginally reduces the security, to roughly 120 bits, which is still a totally infeasible problem to solve.

Conclusions

TL;DR

Algorithmic improvements have recently reduced the bit-security of pairing-friendly curves used in pairing-based cryptography, but not by enough to warrant serious concern about any current implementations.

Recommendations

The reductions put forward by improvements in the asymptotic complexity of the SexTNFS do not weaken current pairing based protocols significantly enough to cause a mass panic of protocol changes. The drop from 128 bit security to ~120 bit security leaves the protocol secure from all reasonable attacks, but it does change how we talk about the bit-security of specific pairing friendly curves.

XKCD Security

Before a pairing-based protocol with “only” 120- or even 100-bit security is broken, chances are some other security vulnerability will be exposed which requires less than a universe’s worth of computation power to exploit. Perhaps a more looming threat is the arrival of quantum computers, which break pairing-based protocols just like all cryptographic protocols which depend on the hardness of the discrete logarithm problem for some Abelian group, but that’s for another blog post.

For the most part, the state-of-the-art improvements which affect pairing-based protocols should be understood to ensure our protocols behave as they are described. Popular curves such as BLS12-381 can still be described as safe, pairing-friendly curves, just not 128-bit secure pairing curves. The newly developed curves become important if you wish to advertise security at a specific NIST level, or simply state 128-bit security of a protocol implementation. In these scenarios, we must keep up to date and use the new, fine-tuned curves such as the Fotiadis-Martindale curve FM17 or the modified BLS curve BLS12-440.

Computational Records

As a final comment, it’s interesting to look at where the current computation records stand for cryptographically hard problems and compare them to standard security recommendations.

We see that of all these computationally hard problems, the biggest gap between computational records and security estimates is for solving the discrete log problem using the TNFS. Partially, this is historical: solving the discrete log in \mathbb{F}_q^* has only really gained interest because of pairing-based protocols, and this area is relatively new. However, even in the context of pairing-friendly curves, the most significant attack used the familiar Pollard’s rho algorithm to solve the discrete logarithm problem on the elliptic curve, rather than the (Sex)TNFS on the output of the pairing.

The margins we give cryptographically hard problems have a certain level of tolerance. Bit strengths of 128 and 256 are targets aimed for during protocol design, and when these advertised margins are attacked and research shows there are novel algorithms which have lower complexity, cryptographers respond by modifying the protocols. However, often after an attack, previous protocols are still robust and associated cryptographic problems are still infeasible to solve.

This is the case for the pairing-friendly curves that we use today. Modern advances have pushed cryptographers to design new 128 bit secure curves, but older curves such as the ones found in BLS12-381 are still great choices for cryptographic protocols and thanks to the slightly smaller primes used, will be that much faster than their 440-bit contemporaries.

Summary

  • Pairing-based cryptography introduces a relatively new set of interesting protocols which have been developed over the past twenty years.
  • A pairing is a map which takes elements from two groups and returns an element in a third group.
  • For all practical implementations, we use elliptic curve pairings, which take points as input and return elements of \mathbb{F}_{p^k}^*.
  • Pairing-based protocols offer a unique challenge to cryptographers, requiring the generation of special pairing-friendly elliptic curves which allow for efficient pairing, while still maintaining the hardness of the discrete logarithm problem for all three groups.
  • Asymmetric protocols are only as secure as the corresponding best known attacks are slow. If researchers improve an algorithm, parameters for the associated cryptographic problem must be increased, resulting in the protocol becoming less efficient.
  • Increased attention on pairings in a cryptographic context has inspired more research on improving algorithms which can solve the discrete logarithm problem in \mathbb{F}_{p^k}^*.
  • The newly discovered exTNFS and its special variant, the SexTNFS, reduce the complexity of solving the discrete logarithm problem in \mathbb{F}^*_{p^k}.
  • For pairing-friendly curves generated without knowledge of these algorithms (or before their discovery), the bit-strength of a curve may be lower than originally advertised.
  • Although research on TNFS and related algorithms is still ongoing, new improvements are only expected to happen in the o(1) contribution of L_n[\alpha, c], which could only cause small modifications to the bit complexity. Any substantial change in complexity would be as surprising as a change in the security of the ECDLP.
  • These modifications should only really concern implementations which promise a specific bit-security; from the point of view of real-world security, the improvement offered by the SexTNFS and related algorithms is more psychological than practical.

Appendix: Mathematical Notation

The below review will not be sufficient for a reader not familiar with these topics, but should act as a gentle reminder for a rusty reader.

Groups, Fields and Curves

Groups

For a set G to be considered a group, we require a binary operator \circ such that the following properties hold:

  • Closure: a \circ b \in G for all a, b \in G.
  • Associativity: (a \circ b) \circ c = a \circ (b \circ c) for all a, b, c \in G.
  • Identity: there exists an element e \in G such that a \circ e = e \circ a = a for all a \in G.
  • Inverses: for every a \in G there exists a^{-1} \in G such that a \circ a^{-1} = a^{-1} \circ a = e.

An Abelian group has the additional property that the group law is commutative: a \circ b = b \circ a for all a, b \in G.

Fields and Rings

A field is a set of elements which has a group structure for two operations: addition and multiplication. Additionally, the field operations are commutative and distributive. When considering the group multiplicatively, we remove the identity of the additive group from the set to ensure that every element has a multiplicative inverse. An example of a field is the set of real numbers \mathbb{R} or the set of integers modulo a prime \mathbb{Z} / p\mathbb{Z} (see below).

We denote the multiplicative group of a field k as k^*.

A ring is very similar, but we relax the requirement that every non-zero element has a multiplicative inverse, and multiplication need not be commutative. An example of a ring is the set of integers \mathbb{Z}: every integer a \in \mathbb{Z} has an additive inverse -a \in \mathbb{Z}, but for a \times b = 1 with a \in \mathbb{Z}, we do not have b \in \mathbb{Z} (unless a = b = 1 or a = b = -1).

Finite Fields

Finite fields, also known as Galois fields, are denoted in this blog post as \mathbb{F}_q for q = p^m, for some prime p and positive integer m. When m = 1, we can understand \mathbb{F}_p as the set of elements \mathbb{Z} / p\mathbb{Z} = \{0, \ldots, p-1 \}, which is closed under addition and, excluding 0, is also closed under multiplication. The multiplicative group is \mathbb{F}_p^* = \{ 1, 2, \ldots, p-1 \}.

When m > 1, the finite field \mathbb{F}_q has elements which are polynomials in \mathbb{F}_p[x] of degree d < m. The field is generated by first picking an irreducible polynomial f(x) of degree m and considering the field \mathbb{F}_q = \mathbb{F}_p[x] / (f). Here the quotient by (f) can be thought of as using f(x) as the modulus when composing elements of \mathbb{F}_q.
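
As a tiny sketch, arithmetic in \mathbb{F}_{p^2} = \mathbb{F}_p[u]/(u^2 + 1), the extension used by BLS12-381, looks just like complex-number arithmetic with coefficients reduced mod p (the toy prime is our own; u^2 + 1 is irreducible whenever p \equiv 3 \pmod 4):

    # Multiplication in F_{p^2} = F_p[u]/(u^2 + 1): elements are pairs
    # (a, b) representing a + b*u, and u^2 reduces to -1.
    p = 19                               # toy prime, 19 = 3 (mod 4)

    def fp2_mul(x, y):
        a, b = x
        c, d = y
        # (a + bu)(c + du) = (ac - bd) + (ad + bc)u, since u^2 = -1
        return ((a * c - b * d) % p, (a * d + b * c) % p)

    print(fp2_mul((3, 4), (5, 6)))       # (10, 0): 15 - 24 = -9 = 10 mod 19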

Elliptic Curves

For our purposes, we consider elliptic curves over finite fields, of the form

E(\mathbb{F}_q) : y^2 = x^3 + Ax + B,

where the coefficients A,B \in \mathbb{F}_q and a point on the curve P(x,y) is a solution to the above equation where x,y \in \mathbb{F}_q, together with an additional point \mathcal{O} which is called the point at infinity. The field \mathbb{F}_{q} where q = p^k has characteristic p and to use the form of the curve equation used above, we must additionally impose p \neq 2,3.

The set of points on an elliptic curve form a group and the point at infinity acts as the identity element: P + \mathcal{O} = \mathcal{O} + P = P and P - P = \mathcal{O}. The group law on an elliptic curve is efficient to implement. Scalar multiplication is realised by repeated addition: [3]P = P + P + P.
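
A minimal sketch of the affine addition law, using None as a stand-in for \mathcal{O} (real implementations use projective coordinates and constant-time field arithmetic):

    # Affine point addition on E: y^2 = x^3 + Ax + B over F_p.
    def ec_add(P, Q, A, p):
        if P is None: return Q           # O + Q = Q
        if Q is None: return P           # P + O = P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                  # P + (-P) = O
        if P == Q:                       # tangent slope for doubling
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
        else:                            # chord slope for addition
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    # [3]P = P + P + P on y^2 = x^3 + x over F_103; P = (1, 38) is on the
    # curve since 38^2 = 2 = 1^3 + 1 (mod 103).
    P = (1, 38)
    two_P = ec_add(P, P, 1, 103)         # (0, 0), a point of order 2
    three_P = ec_add(two_P, P, 1, 103)   # (1, 65) = -P, so P has order 4
    print(two_P, three_P)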

  1. One year later, this attack was generalised by Frey and Rück to ordinary elliptic curves and is efficient when the embedding degree of the curve is “low enough”. We will make this statement more precise later in the blogpost.
  2. As a recap, the decision Diffie-Hellman (DDH) problem is about the assumed hardness of distinguishing between the triple (g^a, g^b, g^{ab}) and (g^a, g^b, g^{c}), where a, b, c are chosen randomly. Note that if the discrete log problem is easy, so is DDH, but being able to solve DDH does not necessarily allow us to solve the discrete log problem.
  3. The roots of unity are the elements \mu_r \in \mathbb{F}_{p^k} such that (\mu_r)^r = 1. As a set, the rth roots of unity form a multiplicative cyclic group.
  4. It can be proved that the Weil embedding degree k \geq 1 always exists, and that the r-torsion group E(\mathbb{F}_{p^k})[r] \subset E(\mathbb{F}_{p^k}) decomposes into \mathbb{Z}_r \times \mathbb{Z}_r. Once k has been found, further extending to \mathbb{F}_{p^{k+n}} will not increase the size of the r-torsion.
  5. In an attempt to not go too far off topic, a divisor can be understood as a formal sum of points \mathcal{D} = n_1 (P_1) + \ldots + n_k (P_k). Given a rational function f(x,y), the divisor of the function, (f), is a measure of how the curve f(x,y) = 0 intersects the elliptic curve E and is computed as \sum_{P \in E} \text{ord}_P(f)(P). For a very good and comprehensive discussion of the mathematics (algebraic geometry) needed to appreciate the computation of pairings, we recommend Craig Costello’s Pairings for Beginners.
  6. As another way to think about this, we have that (p^k - 1) = rN for some integer N. We can rephrase this as p^k \equiv 1 \pmod r, and so the Tate embedding degree k is the order of p \pmod r.
  7. The distribution of small k was formally studied in the context of the expectation that a random curve could be attacked by the MOV algorithm. R. Balasubramanian & N. Koblitz showed in The Improbability That an Elliptic Curve Has Subexponential Discrete Log Problem under the Menezes—Okamoto—Vanstone Algorithm that the probability that we find a small embedding degree: k < (\log p)^2 is vanishingly small.
  8. Hashing bytes to a point on a curve can be efficiently performed, as described in Hashing to Elliptic Curves.
  9. This BLS triple is distinct from the BLS curves, although Ben Lynn is common to both of them.
  10. The term operations here can hide a computational complexity. If your operation is addition of elliptic curve points, this will have an additional hidden cost when compared to an operation such as bit shifting, or multiplication modulo some prime. Some authors mitigate this by using a clock cycle as the operation.
  11. Although here we think of this key as a 256-bit long stream of 1s and 0s, we’re much more familiar with seeing this 32-byte key as a hexadecimal or base64 encoded string.
  12. For fun, we include a tongue-in-cheek estimate. Equipping every one of the world’s 8 billion humans with their very own “world’s fastest supercomputer”, the burning hot computer-planet would be guessing \sim2^{80} keys per second. At this rate, it would take 17 million years to break AES-128. For AES-256, you’ll need a billion copies of these supercomputer Earths working for more than 2 billion times longer than the age of the universe. 🥵
  13. In fact, this relationship between factoring and the discrete log problem even shows up in a quantum context. Shor’s algorithm was initially developed as a method to solve the discrete log problem and only then was modified (with knowledge of this close relationship) to then produce a quantum factoring algorithm. Shor tells this story in a great video.
  14. When an elliptic curve has additional structure we can sometimes solve the ECDLP using sub-exponential algorithms by mapping points on the curve to some “easier” group. When the order of the elliptic curve is equal to its characteristic (\# E(\mathbb{F}_p) = p), Smart’s anomalous curve attack maps points on the elliptic curve to the additive group of integers mod p, which means the discrete logarithm problem is as hard as inversion mod p (which is not hard at all!). When the embedding degree k is small enough, we can use pairings to map from points on the elliptic curve to the multiplicative group \mathbb{F}^*_{p^k}. This is exactly the worry we have here, so let’s go back to the main text!
  15. In this context, medium refers to a number theorist’s scale, which we can interpret as “cryptographically sized”. Although 5000 bit numbers seem large for us, number theorists are interested in all the numbers and the set of numbers you can store in a computer are still among the smallest of the integers!
