Submitting Staker Sets to the Bridge Contract Safely and Efficiently

Hey guys, I just wanted to get some extra eyes and opinions on the best path forward regarding the issue of submitting staker sets to the Themelio bridge contract. As you may already know, the bridge contract (which will enable the movement of Themelio assets to Ethereum and back) is nearly finished, but one last hurdle remains before its full implementation: our current methods for validating staker sets (for the purpose of header validation) are not feasible at the moment, due to the gas cost of running proofs of inclusion on, or building up, Blake3 hash trees in the current EVM (which has no Blake3 precompile).

For this reason, @nullchinchilla came up with the idea of replacing the stakes_hash root of the staker hash tree**, which is stored in every Themelio block header, with simply the hash of all of the StakeDocs that previously comprised that hash tree, serialized and concatenated together in some canonical ordering.

This, of course, would save us an enormous amount of gas, since we go from O(n log n) hashes for proofs of inclusion, or O(log n) hashes for building up a tree, to O(1): hashing an array of StakeDocs only once. The only potential issue here is that, while hashing a large piece of data once is usually very inexpensive, it could present problems in the EVM due to the cost of memory expansion.
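To make the idea concrete, here is a minimal Python sketch of the proposed replacement: serialize every StakeDoc, put the results in a canonical order, concatenate, and hash once. Note the assumptions: SHA-256 stands in for Blake3 (Python's hashlib has no Blake3), the serialization format and field names are placeholders, and sorting by serialized bytes is just one possible canonical ordering, not necessarily the one a TIP would specify.

```python
import hashlib

def serialize_stake_doc(doc: dict) -> bytes:
    # Placeholder canonical serialization: pubkey, stake epochs, syms staked.
    # The real StakeDoc encoding on Themelio will differ.
    return b"|".join([
        doc["pubkey"],
        doc["epoch_start"].to_bytes(8, "big"),
        doc["epoch_end"].to_bytes(8, "big"),
        doc["syms_staked"].to_bytes(16, "big"),
    ])

def stakes_hash(stake_docs: list) -> bytes:
    # Canonical ordering (here: lexicographic by serialized bytes),
    # then ONE hash over the whole concatenation -- O(1) hash calls,
    # instead of O(log n) per inclusion proof against a tree root.
    blobs = sorted(serialize_stake_doc(d) for d in stake_docs)
    return hashlib.sha256(b"".join(blobs)).digest()

docs = [
    {"pubkey": b"\x01" * 32, "epoch_start": 10, "epoch_end": 20, "syms_staked": 1_000},
    {"pubkey": b"\x02" * 32, "epoch_start": 10, "epoch_end": 30, "syms_staked": 5_000},
]
print(stakes_hash(docs).hex())
```

Because the ordering is canonical, any party holding the same set of StakeDocs computes the same digest regardless of the order they received them in.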

We tested our current Solidity Blake3 implementation with hashing ~6.25kb of data (100 serialized StakeDocs) in order to compare its gas consumption versus hashing several StakeDocs one-at-a-time and our result was:
[PASS] testBigHashFFI() (gas: 30713610)

While 30 million gas for 100 StakeDocs is nothing to scoff at (it’s right at the current Ethereum block gas limit), it fares much better than our previous methods, which max out at around 30 hashes:
[PASS] testKeyedHashThirtyTimes() (gas: 34021295)

My opinion is that this performance increase is well worth making this change in an upcoming TIP.

Now, if we implement this change, we don’t have to worry as much about the contract spending too much gas simply identifying complete and untampered staker sets. The trouble is that we still have to verify staker signatures in order to correctly count up votes for header verification. This, unfortunately, is also very costly in the current EVM (which has no precompile for Ed25519 signature verification), at around 700,000 gas per signature verification.
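A quick back-of-envelope calculation shows why this is the real bottleneck. Using the ~700,000 gas per verification figure above and a ~30M block gas limit (both approximate):

```python
# Rough cost of naively verifying every staker signature on-chain.
GAS_PER_SIG = 700_000          # approx. Ed25519 verification cost from above
BLOCK_GAS_LIMIT = 30_000_000   # approx. current Ethereum block gas limit

for n_stakers in (10, 50, 100):
    total = n_stakers * GAS_PER_SIG
    blocks = -(-total // BLOCK_GAS_LIMIT)  # ceiling division
    print(f"{n_stakers:>3} stakers: {total:>10,} gas (~{blocks} full block(s))")
```

Even a modest 100-staker epoch costs roughly 70M gas in verifications alone, more than two entire blocks, which is why none of this fits in a single transaction today.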

We currently are considering a few different solutions to this issue, namely:

  • Signature verification of only a random sample of stakers
  • Header verification in multiple transactions
  • Aggregation of staker signatures on the Themelio side so that only one Ed25519 signature verification has to take place on-chain
  • Keeping track of a Themelio header tree where the header with the most votes at a specific height is part of the main chain (similar to PoW chain consensus)

Each of these approaches obviously comes with its own set of advantages and disadvantages, so I’ll go through them one by one.

Signature Verification of a Random Sample of StakeDocs
This route seemed good to me initially, but then I considered the difficulty of getting any kind of safety guarantee: we don’t know how the StakeDoc population and distribution may change in the future, so it’s difficult to design a sampling algorithm optimized for a particular size and shape of population. For example, suppose we set up a simple random sample in order to get the best approximation of the staker*** population. An attacker could then submit many StakeDocs with the minimum required sym amount to the Themelio network and gain outsized voting influence in the bridge contract, in what is essentially a Sybil attack. We could try solving this by implementing a stratified random sample or a probability-proportional-to-size sample instead, to get a more representative subset of stakers or stakers with higher votes on average, but I am not convinced that would be enough to thwart these kinds of “long tail” Sybil attacks.
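The Sybil concern is easy to demonstrate numerically. In this toy simulation (all numbers made up for illustration), an attacker holding about 1% of total staked syms splits that stake into many minimum-sized StakeDocs; a simple random sample over *StakeDocs* (rather than over stake weight) then hands them the overwhelming majority of the sample:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 10 honest docs holding 100,000 syms total vs. 990 minimal attacker docs
honest = [("honest", 10_000)] * 10
attacker = [("attacker", 1)] * 990
population = honest + attacker

# Simple random sample of 100 StakeDocs, ignoring stake weight
sample = random.sample(population, 100)

attacker_share_of_sample = sum(1 for owner, _ in sample if owner == "attacker") / len(sample)
attacker_share_of_stake = 990 / (100_000 + 990)

print(f"attacker stake share:  {attacker_share_of_stake:.1%}")
print(f"attacker sample share: {attacker_share_of_sample:.1%}")
```

Weighting the sample by syms staked (probability proportional to size) closes this particular gap, but as noted above it is hard to be confident that any fixed sampling design stays safe as the population shifts.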

Pros:

  • We can have a predictable amount of gas consumption even in epochs with very large staker sets (except in the initial hashing of stakers, which is currently limited to less than 100 StakeDocs)

Cons:

  • Difficult to implement VRF on-chain without using oracles
  • Increases complexity and introduces new attack vectors which may be difficult to defend against

Header Verification in Multiple Transactions
This seemed to me to be one of the more obvious routes because of its relative simplicity, although I wanted to avoid it because it feels suboptimal. Implementation would include adding two fields to Header structs in the bridge contract: totalVotes and lastStakeDocIndex. In this strategy, a user would submit a header along with a StakeDoc array and an array of staker signatures. The StakeDoc array would be hashed to verify it, and then signature validation would proceed one at a time until some gas limit is reached, after which the verified staker votes are stored in totalVotes and the last processed StakeDoc index is saved to lastStakeDocIndex. We can continue validating staker signatures in this way until we reach at least 2/3 of the total votes for the header’s epoch, at which point the header is verified.
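Here is a sketch of that resumable flow, modeled in Python rather than Solidity. The gas model, the stubbed signature check, and the field/function names besides totalVotes and lastStakeDocIndex are assumptions for illustration, not the bridge contract’s actual API:

```python
from dataclasses import dataclass

GAS_BUDGET_PER_TX = 15_000_000   # assumed per-transaction budget
GAS_PER_SIG = 700_000            # approx. Ed25519 verification cost

@dataclass
class PendingHeader:
    total_votes: int = 0         # totalVotes in the contract
    last_index: int = 0          # lastStakeDocIndex in the contract
    verified: bool = False

def verify_signature(doc, sig, header) -> bool:
    return True                  # stand-in for real Ed25519 verification

def submit_votes(state, header, stake_docs, sigs, epoch_total_votes):
    # One "transaction": verify signatures from where we left off
    # until the gas budget would be exceeded, then persist progress.
    gas_used = 0
    i = state.last_index
    while i < len(stake_docs) and gas_used + GAS_PER_SIG <= GAS_BUDGET_PER_TX:
        if verify_signature(stake_docs[i], sigs[i], header):
            state.total_votes += stake_docs[i]["syms_staked"]
        gas_used += GAS_PER_SIG
        i += 1
    state.last_index = i
    # Header is verified once >= 2/3 of the epoch's total votes are counted.
    if 3 * state.total_votes >= 2 * epoch_total_votes:
        state.verified = True

# 40 stakers of 100 votes each; epoch total is 4,000 votes
docs = [{"syms_staked": 100} for _ in range(40)]
sigs = [b""] * 40
state = PendingHeader()
tx = 0
while not state.verified:
    submit_votes(state, "header", docs, sigs, epoch_total_votes=4_000)
    tx += 1
print(f"verified after {tx} transaction(s), votes={state.total_votes}")
```

With these assumed numbers, each transaction fits 21 verifications, so the header crosses the 2/3 threshold on the second transaction.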

Pros:

  • Simple to implement and understand
  • Maintains the same level of security as the Themelio network itself

Cons:

  • Attackers can spam the bridge contract with headers which will be kept in storage, even if they don’t have many votes (seems expensive to do this, however)
  • Staker set will need to be hashed each time

Signature Aggregation on the Themelio Side
This option seems by far the best possible solution to our problem: a multi-signature scheme allows multiple private keys to construct a single signed message, and even allows all of the public keys to be aggregated into one, so that only a single signature verification is required to verify many signers of a single piece of data. This signature aggregation functionality could be implemented in a Themelio node, as suggested by @samadhi, where any client wanting to transfer funds using the bridge contract can request an aggregated public key for the staker set at a particular epoch and the aggregated signature for a particular header. A potential issue here is the complexity involved, and papers**** have also been published demonstrating security flaws in many of these multi-signature schemes (although the authors did offer up a potential alternative).

Pros:

  • Possibly the most efficient and elegant solution to our problem

Cons:

  • Could be more difficult to implement as it is a relatively new technique and there is a dearth of documentation and few projects where this signature scheme has been successfully implemented, at least with regards to smart contracts

Store Header Tree; Header with Most Votes is Part of Main Chain
I believe this approach is viable, since something similar has been implemented (in a slightly different way than we’d use it) in the old BTCRelay smart contracts, though it would likely need to be combined with the Header Verification in Multiple Transactions strategy (because there might still be too many stakers to verify in one transaction). The way this technique would work is that, instead of fully verifying every block header submitted to the contract before storing it, we would store every submitted header into a header tree by default. A stored header can then be “voted on” by verifying the signatures of the stakers who signed it. The header with the highest cumulative vote count at a given height would then be considered part of the main chain.
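A minimal sketch of that bookkeeping, with vote verification itself stubbed out and all names purely illustrative:

```python
from collections import defaultdict

class HeaderTree:
    """Store every submitted header; accumulate verified votes per header;
    the header with the most votes at each height is the main chain."""

    def __init__(self):
        # height -> {header_hash: accumulated verified votes}
        self.votes = defaultdict(dict)

    def submit_header(self, height: int, header_hash: str):
        # Headers are stored WITHOUT upfront verification.
        self.votes[height].setdefault(header_hash, 0)

    def add_votes(self, height: int, header_hash: str, verified_votes: int):
        # Called after staker signatures for this header have been
        # verified (possibly across several transactions).
        self.votes[height][header_hash] += verified_votes

    def main_chain_header(self, height: int) -> str:
        # Plurality wins -- no 2/3 threshold required at this height.
        return max(self.votes[height], key=self.votes[height].get)

tree = HeaderTree()
tree.submit_header(1000, "honest-header")
tree.submit_header(1000, "spam-header")
tree.add_votes(1000, "honest-header", 5_000)
tree.add_votes(1000, "spam-header", 10)
print(tree.main_chain_header(1000))
```

Note how a spammed header sits harmlessly in storage until votes decide the winner; the storage itself is the DoS surface mentioned in the cons below.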

Pros:

  • Don’t have to worry about fitting header verification into one transaction
  • Don’t even necessarily need 2/3 of staker votes, only enough to have the most votes at that height

Cons:

  • Adds more storage costs and a potential DoS attack vector (although the attacks may be expensive)
  • Increases bridge complexity as we need to implement time limits and locks on things like minting

Conclusions
Having considered the above options, my current opinion is that the best solution would be to adopt a multi-signature scheme on Themelio, which we could use to aggregate staker signatures and streamline the process of verification on Ethereum. This has obvious downsides: it requires the most changes on the Themelio side of all the strategies, and it is potentially the most technically challenging (although, here at Themelio, we do like a challenge). I am not currently aware of any Themelio-side implementation difficulties that would preclude this option, but I’m sure @nullchinchilla will inform me of any that exist.

If this turns out not to be a tenable solution, I think it would be good to fall back on the option of verifying headers via multiple transactions, due to the simplicity of that design (not only in its implementation but also in not exposing more attack surface).

I would love to hear back from the rest of the Themelio core development team as well as from Themelio community members on what they think about these strategies, including any new ideas, corrections, or extra information that they may have that could be useful to add to this post.

Thanks for reading!


Footnotes

** The stakes_hash field of a Header struct is the root hash of the tree of all active and future StakeDocs, which themselves represent coins staked from a start epoch to an end epoch, along with their associated public key.

*** For clarity, I will sometimes use staker and StakeDoc interchangeably, although they are not technically the same. One staker (with one public key) can submit multiple StakeDocs because they represent stashes of staked coins, and not stakers themselves.

**** On the Security of Two-Round Multi-Signatures


I’m not a professional technician so I don’t really understand, but it looks like you did a really great job!
AWESOME! :+1: :+1: :+1:
