Wisdom of the Crowd as a Consensus Mechanism

A new approach to scaling a more energy-friendly blockchain?

(I previously published this article on Medium.)

Some time ago, while I was delving deeper into bitcoin/blockchain/time-chain fundamentals for one of my side-projects, I watched an awesome interactive presentation on Wisdom of the Crowd and things started to mix inside my head: could this be applied to a blockchain consensus mechanism?

At the time, there was a lot of debate on the scalability of Bitcoin, with arguments for increasing block size, adding signature segregation (SegWit), and the launch of the Lightning Network. I couldn’t help feeling they were all a bit palliative; stop-gap solutions for a fundamental problem that would not be fixed, but rather circumvented.

With last year's increasing interest in Bitcoin (again), the rise of NFTs, and the long-overdue debate concerning blockchain energy usage, my interest in this idea was rekindled, and I decided to finish writing it down and finally publish it.

Before you read on, let me be clear: I am by no means someone like Satoshi Nakamoto or Gavin Andresen, nor do I aspire to be one. I’m just here to offer you my — perhaps naive — view on things.

Also, I will probably use both ‘blockchain’ and ‘bitcoin’ a lot, even when I’m referring to the same thing.

Last but not least: this is a long piece. Sorry about that.

Introduction

Note: feel free to skip this introduction if you’re already familiar with blockchain fundamentals and the scaling debate, and jump straight to ‘The hypothesis’, further down the article.

Bitcoin, scaling and CAP Theorem

Scalability (scale·ability) of a blockchain is its ability to meet the demands of an increasing number of transactions while maintaining the same level of availability and consistency.

CAP Theorem (Image taken from hazelcast.com)

Eric Brewer’s CAP Theorem states that a distributed network can’t simultaneously provide more than two out of the following three guarantees: consistency (C), availability (A), and partition tolerance (P).

Bitcoin has to deal with the CAP Theorem too. And since partition tolerance is inescapable for a network distributed across the open internet, the real choice is between consistency and availability; Bitcoin chooses the latter, so it clearly has the ambition to scale.

“So, wait, are you saying Bitcoin is inconsistent?”

Yes. Strictly speaking, it is. However, it is continuously working towards eventual consistency.

Bookkeeper analogy: You can think of it as a bookkeeper trying to keep up with a business operating 24/7. At any given moment the books are inconsistent and can’t offer a reliable overview of the business’s current financial state, but if you only focus on past (and thus fully processed) transactions, all records are accurate.

To eventually become consistent, Bitcoin needs to do perpetual bookkeeping, and while this is being done, its clerks have to reach a state of consensus in which they all agree on the current state of the blockchain.

But the method (or ‘Consensus Mechanism’, as it is known) that Bitcoin uses to do so is what ultimately caps (pun intended) its availability as the transaction rate increases.

Bitcoin’s Consensus Mechanism is called Proof-of-Work, or ‘mining’.

The problem with consensus through Proof-of-Work

Proof-of-Work is essentially the process that handles transaction and block verification. To stay with the bookkeeping analogy: keeping track of account balances and transactions.

Amongst other things, this process dictates the pace at which blocks are created and secured, and thus directly impacts scalability and speed.

In the case of Bitcoin, it is slow, and deliberately kept slow by the fundamentals of the system. It is perhaps hard to comprehend, but even as the mining capacity (read: processing power, and thus energy consumption) of the network increases by many orders of magnitude, the network actively works against this — by making mining more difficult — to keep its pace.

“To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they’re generated too fast, the difficulty increases.”

— From the original Bitcoin whitepaper. This ‘average number of blocks’ is ~6 per hour; roughly one every ten minutes.
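
As a minimal sketch of that retargeting rule (illustrative only; Bitcoin’s real implementation works on 2016-block windows of compact targets and clamps the adjustment factor to 4x in either direction):

// Simplified sketch of Bitcoin's difficulty retargeting: every 2016 blocks,
// difficulty is scaled by how far off-pace the network was.
const TARGET_BLOCK_TIME = 10 * 60; // seconds: one block every ten minutes
const RETARGET_INTERVAL = 2016;    // blocks: roughly two weeks

function retarget(oldDifficulty, actualSeconds) {
  const expectedSeconds = RETARGET_INTERVAL * TARGET_BLOCK_TIME;
  // Blocks generated too fast => actualSeconds < expectedSeconds => difficulty rises.
  return oldDifficulty * (expectedSeconds / actualSeconds);
}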

Bookkeeper analogy (cont.): To finish the bookkeeping analogy: it’s like when your bookkeeper has to process 100 transactions a day. As the bookkeeper gets better at his job, one day he finishes his work too quickly.
To slow him down, you tell him that you want all figures to have a leading zero, making sure his work takes just that little bit longer next time around, so that he stays in the office until the working day ends.
You keep adding zeroes every time he consistently starts finishing too early, and you remove a leading zero if he starts working late (or you get a call from the union). But in the end, the main point is that he processes exactly those 100 transactions; no more, no less.

While this seems strange at first glance, it is in fact integral to keeping Bitcoin secure and not wasting power. But the major downside is that because blocks have a fixed size, and thus a fixed transaction capacity, the transaction rate will always remain the same, regardless of increased usage.

This, in the end, leads to the conclusion that Bitcoin does not scale; or, more specifically, will not scale as long as it depends on Proof-of-Work.

Possible alternatives

Of course, there are artificial ways of increasing the transaction speed and/or capacity of Proof-of-Work, but in the end, they’re nothing more than evasive maneuvers.

So to effectively and fundamentally address scalability, you need a different consensus mechanism. While there are alternatives to Proof-of-Work, namely Proof-of-Stake (and variants thereof) and Byzantine Fault Tolerance, they all have potential drawbacks (such as unwanted centralization).

The (im)possibilities of changing consensus
At this point, it is important to realize that changing the consensus system of an already established blockchain — such as Bitcoin or Ethereum — is like performing open-heart surgery, with a plastic spoon, on an athlete who’s running a marathon. The constantly postponed implementation of Ethereum 2.0 (Serenity), which will make the switch from Proof-of-Work to Proof-of-Stake, is illustrative in this respect.

Basic ingredients of an alternative consensus mechanism

A blockchain consensus mechanism should at least meet these demands:

  • Consistency (eventually)
  • Security
  • Availability
  • Decentralization

Another important factor to keep in mind when coming up with an alternative is that the system must provide an incentive for participation: Proof-of-Work thrives on the fact that mining is rewarded — with bitcoins. In other words: participation in the network is financially encouraged. And while Proof-of-Stake consensus models also offer this incentive, they’re in fact much more like a lottery — with some participants buying more tickets than you can ever hope to afford.

With these points laid out, and the basics of the wisdom of the crowd in mind, I came up with the following:

Look, I even made a logo. Because, why not?

The hypothesis

(Or: the TL;DR part)

The way I envision it, Wisdom of the Crowd consensus (referred to as WoC from here on out) is a highly scalable and energy-efficient system for blockchain consensus that relies on the wisdom of the crowd, increasing the probability of a valid outcome by using three degrees of recursion. It is fair, engaging, and more economical (power-efficient).

Key benefits:

  • Scalability: there is no set interval for block creation, and block verification should be fast: a matter of minutes, if not seconds, under ideal circumstances.
  • No mining: The system is far more economical as it does not use any (extra) energy performing ‘work’ (hashing/mining) to reach consensus.
  • Decentralization: due to the node selection process, the system is never in danger of being controlled by a minority such as miners (PoW), validation pools (PoS/dPoS/lPoS), or delegates (dBFT).
  • Broad financial incentive: All nodes participating in consensus will share the transaction rewards. And since all nodes in the network are eligible for participating in consensus, nodes will randomly receive profit, just for taking part in the network.
  • Security: the random selection process, coupled with rules for subsequent participation, provides strong protection against collusion; a system of multiple signatures helps mitigate Sybil attacks; and the “Nodebase” seed provides additional bi-directional linking between blocks (more on that later).

Wisdom of the Nodes

The system’s basic principle in a nutshell:

Ask a random person in the street, who asks three other random people, who in turn also ask three random people, who in turn ask yet another three random people.
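
A minimal sketch of that principle in (pretend) JavaScript, assuming a hypothetical pickRandomPerson helper: three levels of asking yields 3 + 9 + 27 = 39 answers, and counting the original asker, 40 participants.

// The 'ask three people, who each ask three more' principle, three levels deep.
function askCrowd(question, depth) {
  if (depth === 0) return [];
  let answers = [];
  for (let i = 0; i < 3; i++) {
    const person = pickRandomPerson();       // hypothetical helper: selects someone at random
    answers.push(person.answer(question));   // this person's own answer
    answers = answers.concat(askCrowd(question, depth - 1)); // the three people they ask
  }
  return answers;
}

const answers = askCrowd("Is this block valid?", 3); // 39 answers in total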

To best explain how this works in practice, and how it translates to a consensus mechanism, I have visualized the concept and will take you through the steps of WoC consensus.

WoC-Block-Consensus.png

  1. An active (online) node is randomly assigned the duty of assembling the next block; we’ll refer to this node as the block originator node.
  2. This block originator node constructs (‘proposes’) a new block from waiting transactions (the memory pool). Much like in PoW mining today, the block originator node determines the order and priority of the transactions to be included, and calculates the Merkle-root and hash for the block.
  3. Enter WoC: The block originator node will now establish validation connections to three randomly selected (online) nodes; these represent the first degree of consensus.
  4. These three nodes, in turn, establish their own validation connections to another three randomly selected nodes; the second degree, consisting of 9 nodes.
  5. Another, third and final degree of 27 nodes is created by the nodes in the second degree each establishing their own validation connections to yet another three randomly selected nodes.
    A total of 27 + 9 + 3 + 1 = 40 nodes are now actively participating in this round of consensus.
  6. Now the block originator node broadcasts the constructed block to all 39 connected nodes at once, for verification.
  7. Upon receiving the block, each node checks it against its own copy of the blockchain and memory pool. If it agrees the block is valid, it signs the block and sends it back down through its validation connection. The receiving node verifies and combines all three signatures, adds its own, and passes the block on.
  8. Agreement on the validity of the block cascades all the way back to the originator, maintaining the basic rule of ≥⅔ consensus in each degree: if at least 2 out of 3 connected nodes agree on the block, it is considered valid.
  9. When the block is finally back at the originator, and there is ≥⅔ consensus on its validity, the block’s validation is considered conclusive, and it is then either added to the blockchain (if consensus is that it is valid) and broadcast to the entire network, or it is discarded (consensus is that it is invalid).
  10. The next round of consensus begins. Whether or not nodes are eligible for the next round of consensus is determined by the degree they participated in (see ‘Dealing with collusion’).
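
A rough sketch of steps 7 and 8 from a single node’s point of view; verifyAgainstChain and awaitSignedVerdict are hypothetical helpers, not part of any real protocol:

// One node's part in the ≥2/3 cascade: check the block itself, collect the verdicts
// of its three downstream validation connections, and pass the combined result on.
function cascadeVerdict(block, ownChain, downstreamNodes) {
  const selfValid = verifyAgainstChain(block, ownChain); // own chain copy + memory pool

  let approvals = 0;
  for (const node of downstreamNodes) {              // the three nodes one degree further out
    const verdict = awaitSignedVerdict(node, block); // the signed block, or a timeout
    if (verdict.valid) approvals += 1;               // timeouts/corruption count as invalidations
  }

  // At least 2 of the 3 downstream nodes must agree, and this node itself must agree.
  return selfValid && approvals >= 2;
}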

Transaction consensus

The validation of transactions before they are added to the memory pool (a separate process) operates mostly identically, as illustrated below:

WoC-Transaction-Consensus.png

Providing incentive

To provide a financial incentive for nodes to participate in consensus, they get to share the collected transaction fees, by the ratio of the degree they participated in:

  • 46% for the block originator node and the first degree (4 nodes in total), which comes down to 11.5% per node.
  • 27% for the second degree (9 nodes, 3% per node).
  • 27% for the third degree (27 nodes, 1% per node).

The higher the degree a node participated in, the sooner it will be eligible to participate in consensus again (see ‘Dealing with collusion’), so all nodes have equal chances at earning the larger rewards.
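
In numbers, a minimal sketch of the split proposed above:

// Per-node reward shares, expressed as fractions of the block's collected fees.
function rewardShares(totalFees) {
  return {
    originatorOrFirstDegree: (totalFees * 0.46) / 4, // 11.5% each (4 nodes)
    secondDegree: (totalFees * 0.27) / 9,            // 3% each (9 nodes)
    thirdDegree: (totalFees * 0.27) / 27,            // 1% each (27 nodes)
  };
}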

WoC---Summary.png

Overview of the degrees and block rewards

This variable rewarding system is still up for debate: perhaps all nodes participating in consensus should always receive 100/40 = 2.5% of the collected fees? Or should it, as currently proposed, be a random chance at a reward between 1% and 11.5%?

  • On the one hand, there’s something to be said for a reasonably steady (as far as random selection can be called ‘steady’) flow of income for participating in the network.
  • On the other hand, the possibility of receiving a large(r) part of the earnings will likely boost the incentive, because: ‘Hey, stick around, and perhaps you’ll earn even more!’

Dealing with collusion

To make collusion more impractical (also see ‘Wisdom of the Nodes’, item 7, above), WoC consensus follows two basic principles when selecting nodes for consensus:

Node selection

  1. Node selection is based on a random seed in the previous block (I call this the ”Nodebase”), which is combined with a random new seed (aptly named the ”Nodeseed”) and fed into a derivation protocol built into the core, not unlike BIP32, which outputs the addresses of the next set of nodes.
    Upon block completion, the current Nodeseed is stored in the block as its Nodebase. It is then immediately combined with a newly generated Nodeseed, kicking off a new round of consensus.
  2. While all participating nodes in the network are eligible for selection, nodes that previously participated in consensus can only re-participate a degree higher than the degree they previously took part in. After finishing a round of validation in the third degree, they are eligible for all degrees (or the originator role) again.

Some examples

  • A node that last participated in consensus in the second degree can only take part in consensus again in the third degree.
  • A node that was the originator node the last time it participated in consensus can only take part in consensus again in the first, second, or third degree.
  • A node that last participated in the third degree is eligible for any degree (or the originator role) the next time it happens to be selected, though it should be noted that the degree it is placed in remains random.
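
In code, the re-participation rule could look something like this (encoding the originator role as degree 0; this encoding is my own, not part of the proposal):

// Which degrees a node may be selected for, given the degree it last took part in.
function eligibleDegrees(lastDegree) {
  // Never participated, or just finished the third degree: fully eligible again.
  if (lastDegree === null || lastDegree === 3) return [0, 1, 2, 3];
  // Otherwise: only degrees strictly higher than the one last participated in.
  return [1, 2, 3].filter((d) => d > lastDegree);
}

eligibleDegrees(0); // was originator last time    -> [1, 2, 3]
eligibleDegrees(2); // was second degree last time -> [3]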

This way, the system financially incentivizes nodes to maximize their exposure to the network, as each participation in consensus is a chance at financial gain.

Dealing with transaction malleability

With Proof-of-Work, blocks are secure (and become even more secure with each block added on top) because any attacker that wants to modify a transaction would not only have to redo the ‘work’ for the block they’re trying to modify; they’d also have to 1) redo it for each block that came after it, 2) do it faster than the rest of the network combined, and 3) do it within the 10-minute timeframe. This is next to impossible, and it is why Proof-of-Work’s security is so solid.

Aside from blockchain’s basic security features (chaining by hash), WoC provides security in several ways:

  • Each block has a ‘pointer’ to the nodes that approved the block that comes after it (as mentioned earlier, this is the “Nodebase”); this pointer is also included in calculating the block’s hash. (See the sketch after this list.)

Overview of a WoC-based blockchain — with the process of initiating the next block addition on the right

  • This “Nodebase” links blocks bi-directionally; as it is based on the previous block’s “Nodebase” and dictates the next block’s validating nodes.
  • Each block has been signed by up to 40 nodes, using the Merkle-root + block hash + the validating node’s own private key.
  • Blocks are created and broadcast back-to-back, so the sheer speed of the network makes it increasingly harder to change finished blocks.
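
To make the bi-directional linking concrete, here is a sketch of what a WoC block could carry; the field names are my own and purely illustrative:

// Hypothetical WoC block layout; field names are illustrative, not canonical.
const block = {
  prevHash: "...",    // backward link: hash of the previous block, as in any blockchain
  merkleRoot: "...",  // root of this block's transactions
  nodebase: "...",    // the Nodeseed that, together with the previous block's nodebase,
                      // selected this block's validators; also feeds the next selection
  signatures: [],     // up to 40 signatures over the Merkle-root + block hash
};

// Forward check: the validator set derived from the two linked nodebases must match
// the signatures actually present on the block.
const expectedSigners = derivation_protocol(prevBlock.nodebase + block.nodebase, 40);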

So how does this provide protection?

Let’s say you are of evil intent, have successfully managed to alter a block, and have 39 collaborating equally evil henchmen-nodes ready to sign it for you. You’d then need to modify the preceding block, as it contains a reference (the “Nodebase”) to the nodes that signed the block you’re trying to alter; if that doesn’t match the new signatures, your deceit will be exposed.

This requires you to randomize a new Nodeseed until, when fed into the derivation protocol, it produces a list of exactly all 40 of your collaborating nodes (including you). A chance of 1 in infinity.
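
Sketched as code, the attacker’s grind looks like this (sameNodeSet is a hypothetical set-comparison helper):

// Grind random seeds until the derivation happens to output exactly the 40
// colluding nodes; with any realistically sized seed space this never terminates.
let forgedSeed;
do {
  forgedSeed = seed_generator();
} while (!sameNodeSet(derivation_protocol(prevBlock.Nodebase + forgedSeed, 40),
                      colludingNodes));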

For the sake of argument, say you manage to reverse-engineer the derivation protocol, allowing you to translate your and the henchmen-nodes’ addresses back into a correct Nodeseed. You’d then need to write this back into the previous block’s Nodebase and recalculate its… You might as well stop here. It’s all futile, because now this new Nodebase no longer matches the Nodebase two blocks back (the block before the block before the one you’re trying to alter).

The bi-directional linking of the Nodebase effectively implies that to change one block, you’d need to change the entire blockchain.

Dealing with Sybil Attacks

Susceptibility to a Sybil attack is mitigated by multisig; all validating nodes are required to sign the block. This allows the originator node to:

  • Check whether a block did in fact traverse all degrees, and;
  • ‘Look past’ individual nodes if there is doubt whether a node is being honest (see illustration).

The random selection and scattering of nodes throughout the degrees should further mitigate any risks in this respect.

Verifying Trust — with apologies for the extraneous emojis

Dealing with failure & corruption

A node expresses consensus by passing the block it was given to verify onward, provided with its signature. As blocks eventually pass by all nodes (except for the nodes in the third degree, who only receive a block and pass it back), each node checks whether the block that passes by is identical to the one it received itself. If not, the block has been corrupted somewhere along the way, which is then regarded as an invalidation. This is similar to how dBFT works.

Should a node fail the consensus process, by losing its validation connection, timing out (or going offline), or returning corrupt data, this is regarded as an invalidation by that node.

Consensus fails, due to node failure (red = corrupted, brown = timed out/offline)

If more than two-thirds of all participating nodes in a round of consensus fail by either returning corrupt data or timing out (let’s call it a “technical knock-out”), consensus fails: the block is discarded, the transactions stay in the memory pool, and consensus immediately restarts with an entirely new selection of nodes.
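
A minimal sketch of that rule, assuming per-node result records with timedOut/corruptData flags (my own encoding):

// "Technical knock-out": the round restarts with fresh nodes when more than
// two-thirds of this round's participants failed outright.
function technicalKnockOut(results) {
  const failed = results.filter((r) => r.timedOut || r.corruptData).length;
  return failed > (results.length * 2) / 3;
}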

The key: randomization

The random process by which nodes are selected is a key part of the WoC consensus model and is essential in providing security and limiting its attack surface.

As stated earlier: upon successful consensus, the block’s final transaction (the ”Nodebase”) contains the random seed (the ”Nodeseed”) that was used at the beginning of the consensus process to randomly select the nodes that took part in that block’s validation.

When a new round of consensus starts, or the current round fails, a new “Nodeseed” is generated, appointing a new block originator and an entirely new set of validating nodes.

The inclusion of the “Nodebase” seed in a finished block’s final transaction makes sure its validation can be traced back to its validating nodes (and their signatures), including the degree (or originator role) in which they participated in the consensus process.
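
Putting those pieces together, the seed chain could be sketched as follows (using the same hypothetical seed_generator and derivation_protocol as before):

// Each round combines the previous block's Nodebase with fresh randomness to pick
// the next originator and validators; on success the fresh seed becomes the new Nodebase.
function startConsensusRound(prevBlock) {
  const nodeseed = seed_generator();                                    // fresh randomness
  const nodes = derivation_protocol(prevBlock.Nodebase + nodeseed, 40); // originator + 39 validators
  return { nodeseed, nodes };
}

function finalizeBlock(block, nodeseed) {
  block.Nodebase = nodeseed; // this block's validators can now always be re-derived
}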

Possible questions

“So, why bother with all these ‘degrees’? Why not just select 40 random nodes?”

There are a couple of reasons for this:

  • Leveled/tiered rewards: they increase incentive; randomness is fair.
  • Cascading randomization: scattering nodes tightens security.
  • Simplicity/availability: apart from the originator, which maintains 3 (plus the ability to contact any participating node directly), each node has only 4 validation connections to ‘maintain’.
  • Corrupt nodes are more securely ‘contained’: instead of residing in one big pool of nodes, corrupted nodes lose influence/power the higher up in the degrees they end up, and the more they are scattered throughout. Furthermore, should corrupt nodes successfully manufacture a block, their corruptness will immediately be exposed when the block is broadcast across the network.
  • Who’d be the central ‘judge’ that enumerates the >66%?
  • Who’d be the central node that keeps track of all participating nodes’ progress?
  • Who’d construct/propose the block?

“How will the network cope with this potentially high block-creation speed?”

Bitcoin’s 10-minute block creation time is, among other things, set because the network needs time to broadcast blocks to all its nodes. The system aims to prevent nodes from wasting work on a new block while a valid one has already been found but has not yet found its way to all nodes.

WoC consensus does not require this, as block creation is a controlled process — controlled by assigning an originator node. Nodes will not, cannot, and don’t have to voluntarily start working on new blocks themselves; no energy — or work — is wasted on fruitless block hashing (beyond the fact that no ‘work’ is performed on blocks at all).

This means that block-creation is ‘one at a time’, and — when not partaking in consensus — nodes will just wait for blocks to be delivered to them through the network. Of course, there will always be nodes ‘lagging behind’, so the longest chain rule still applies.

Notes/thoughts/loose ends

  • Random node selection is based on
    derivation_protocol(previous_seed + random_seed) → node_addresses
    But how do we check whether nodes are available (online) and responsive? Perhaps the best way would be for the derivation protocol to accept a variable that dictates the number of node addresses to return, and then set this higher than the 40 required nodes, to provide a fallback. However, the list should not be indefinitely long, as that would facilitate transaction malleability (see the example at ‘Dealing with transaction malleability’).
    My idea of it is something along these lines:

(Please excuse me while I pretend Bitcoin is like JavaScript (hint: it’s not))

Nodeseed = seed_generator();
eligible_nodes = derivation_protocol(prevBlock.Nodebase + Nodeseed, 80);
selected_nodes = [];
x = 0;

while (selected_nodes.length < 40 && x < 80) {
  node = eligible_nodes[x];
  node.requestJoinConsensus();
  timer.start();

  // Wait for the node to accept, but give up after 5 seconds.
  while (!node.accepted && timer.elapsed() < 5000) {
    // (pseudocode: a real implementation would await a callback instead of polling)
  }

  if (node.accepted) {
    selected_nodes.push(node);
  }
  x += 1; // move on to the next candidate either way
}
  • While transaction fees provide a financial incentive for faster block inclusion, they do not benefit the initial transaction consensus, as there is no financial gain to be made there. Instead, transactions are validated ‘for free’. To still provide some incentive, participation could be made to build up a node’s ‘track record’ for trustworthiness, but I’m not quite sure yet how this would (or could) translate into any benefits.
  • By having each node sign the block, a finalized block can, in theory, contain up to 40 signatures. All signatures could be condensed into a single Schnorr multisignature if so desired, to free up more space in the block.
  • The time-out for consensus (and thus block creation time) has not yet been determined, but it would be logical to base it on the network’s mean — or delta — data-transmission speed.
  • In essence, blocks can be created back-to-back: whenever one finishes, consensus on a new one starts immediately (provided the memory pool contains enough transactions to fill one). This will greatly benefit the speed of the network (and strike at the heart of the issue: scalability).
  • This system will, or at least can, enable blocks faster than the current 10-minute timeframe, further increasing the capacity of the network.
  • WoC consensus does not rule out add-on systems like the Lightning Network, which could still be added on top of this system. But why would you?
  • To push security to an insane level, another degree (or layer) could be added, though the gain of ‘asking’ another whopping 81 nodes (making a total of 121 nodes involved in consensus) would most likely be disproportionate to the loss of fault tolerance, and it would decimate rewards.
  • The rewards are slightly variable, in the respect that they can increase if other nodes in a degree fail to validate (by going offline or having other issues that prevent them from partaking in consensus).
  • Corrupt nodes must be penalized, probably in the form of a temporary or permanent ban on consensus participation.

Summary illustration

WoC.png

That’s it (or should I say: congratulations on reaching the end).

Please let me know your thoughts! There are likely a million things I haven’t considered, or rushed by in my complete ignorance; please point them out!

Acknowledgments

Thanks to Michael Feith for indispensable feedback on my grammar and writing, Bart Suichies for being Bart — pointing out the length of this piece before it got way out of hand (it still did, sorry) — and Kalin Nicolov for providing valuable feedback when the idea was still in its hatchling state (2018).