
The day is nigh when blockchains function like well-oiled machines! Photo by Hoover Tung on Unsplash.

[This is part of a multi-part series on the key ingredients to a better blockchain. I recommend starting with part one, Tech and Protocol. See the full list of articles in the series.]

The landscape of blockchain platforms is becoming more and more crowded. While the technology is developing apace and lots of interesting experimentation is happening in areas such as scaling, usability, and governance, no project can yet claim to have built a usable, scalable platform that ordinary humans know or care much about. Blockchain technology, networks, and communities today are where social networks were in the heyday of Myspace and Friendster, i.e., pre-Facebook.

Of course, where we stand today depends on your criteria, and success is highly subjective. Simplistic indicators such as age, transaction volume, or market cap are insufficient: they are gameable, for one thing, and in any case don’t even begin to capture the things that make these projects so important and high-potential.

In an effort to evaluate various projects, subjectively and for my own sake as I decide how to allocate my time, I’ve found it very helpful to have a clear set of criteria by which to measure present and potential success. My loyalties lie not with any specific platform but rather with a set of values, principles, goals, and ideals. One framing of the list that follows is that it is an attempt to articulate those goals and ideals, to myself and to the community, and to survey how well Bitcoin, Ethereum, and other blockchain platforms serve those goals at present.

Another possible framing is that what follows is a set of criteria for designing, launching, and operating a successful blockchain platform. Bitcoin and Ethereum have gotten many things right, but like all platforms they also have many shortcomings. I claim that their appeal to date has been limited because they haven’t satisfied enough of the following criteria. Future platforms can and should learn from these mistakes and take heed of these criteria; they will violate them at their peril.

Three important caveats. The first is that, as with all complex technology, blockchains are rife with tradeoffs. For this reason, maximizing one criterion may come at the expense of others, a Goldilocks principle applies, and the best projects will attempt to choose a reasonable spot in the solution space. Where possible, I’ve highlighted where such tradeoffs apply.

The second caveat is that, while I’ve chosen relatively objective, uncontroversial criteria where possible, many of these criteria are subjective, and this rubric is not afraid to be normative where necessary. I also make no claims of completeness. What points do you agree or disagree with? What have I missed? What have I overemphasized? Please share your thoughts in the forum thread below.

The third concerns what I mean by “blockchain”: I’m intentionally using the term in the most generic way possible. I’m not in love with this term but it’s the best, most straightforward term we’ve got at present. I’m referring to all distributed ledger technologies (DLT) including those that aren’t strictly a chain, such as those built on a DAG and those with multiple chains. (I refuse to use the term “DLT” because acronyms suck and the expanded version is too long and obtuse.) And I’m referring not only to the underlying software and technology but also to the broader platform ecosystem and community. My thoughts apply primarily to public blockchains, especially regarding topics such as community and governance, but some of these ideas are likely relevant to permissioned blockchains as well.

I’ve divided these criteria into several categories, and will present them as a series of questions for each. This first post dives into one of the most important: Tech and protocol.

Part I: Tech and protocol

At the core of any blockchain platform is the technology. How well does the tech function, how much can you do on chain, and how well is the protocol designed?

  • Is the protocol incentive compatible? Incentive compatibility is probably always desirable and should be noncontroversial! It’s the idea that network participants can achieve what’s best for themselves by following the rules of the protocol, i.e., by doing what’s best for the entire network, rather than by trying to game the system. Even relatively well understood mechanisms, such as Bitcoin’s mining rewards, have been shown not to be strictly incentive compatible.
  • Is the protocol simple and easy to understand? There is a trend towards increasing protocol complexity as blockchain platforms attempt to innovate and differentiate along lines of protocol and technology. Other things being equal, a protocol should be as simple as possible. More complex protocols require more time and resources to design and ship. What’s more, teams designing and building more complex protocols face an uphill battle to communicate their work, convincing developers to work on their platform and users to trust it. There is a tradeoff here, since achieving many of these other criteria may require some degree of protocol complexity. The degree to which the protocol is understandable has as much to do with how well the protocol is documented and explained as with its design.
  • Does it allow permissionless participation? If you believe, as I do, that the raison d’etre of a blockchain is to enable people everywhere to join more open, fairer value creation networks, then permissionlessness is non-negotiable. This refers to both the supply and demand sides: mining or performing validation, as well as transacting (sending funds, deploying applications, using applications, etc.). In the ideal case, any user should be able to perform a small unit of objectively valuable work for the network, such as running their CPU for a few hours, and immediately be rewarded with tokens that give them full access to the network’s services. Proof of work mining is permissionless in principle, and was in practice in the early days of Bitcoin and Ethereum, but this is no longer meaningfully true now that mining rewards in these networks have been completely captured by professional miners. Proof of stake-based networks aren’t truly permissionless either, because someone must sell you tokens and give you a validator slot before you can become a validator, meaning that > 50% of the network could, in theory, be permanently captured by a single, wealthy actor or cabal, and such capture might never even be detected.
  • Does the virtual machine allow developers to do interesting things easily, cheaply, and safely? There are lots of tradeoffs here, and experts differ on what’s optimal. Bitcoin Script severely limits the sort of applications that can be built on Bitcoin by design, in exchange for safety. In contrast, the Ethereum Virtual Machine is Turing complete, which means that, in theory, it can run applications of arbitrary complexity, but in practice running even relatively basic computation such as cryptographic functions today is prohibitively expensive. A standard VM like WebAssembly may be desirable because it plugs into an existing compiler toolchain (LLVM, in the case of Wasm) and can thus make use of a more mature ecosystem of tools (compilers, analyzers, debuggers, etc.), but has the tradeoff that it introduces a much broader attack surface.
  • Does it encourage you to do too much on chain? In addition to security concerns, another downside to a powerful, expressive VM is that global consensus is slow, expensive, and inappropriate for the vast majority of use cases. The limited functionality and high cost of a VM such as Bitcoin’s may be viewed as a feature rather than a bug in this respect too, provided that good tools exist to move application logic to layer two while still allowing consensus-critical code paths, such as token transfers, to touch layer one. Platforms such as Holochain and SSB (not blockchains because they eschew the notion of global consensus) sit at one, minimal, extreme on this spectrum, whereas Turing complete VMs such as Ethereum’s sit at the opposite, maximal, extreme. Blockstack, by contrast, steers a reasonably central course.
  • Does the consensus engine produce fast confirmation and finality? To the extent that the primary function of a blockchain is to achieve consensus on a canonical set of transactions or state transitions, it’s reasonable to measure the success of the protocol partly by how quickly and reliably it achieves that consensus. Proof of work-based consensus mechanisms such as those in Bitcoin and Ethereum never provide explicit finality (the guarantee that a transaction will not be reversed), but they do provide probabilistic finality: the guarantee that, after a certain number of confirmations, the likelihood of a chain reorg invalidating a given transaction rapidly approaches zero (see the sketch of this calculation after this list). Waiting an hour for this high degree of certainty in Bitcoin is a pain. Ethereum provides a similar degree of certainty on the order of minutes, which is more palatable. Next generation, proof of stake-based systems such as Ethereum Serenity and Polkadot may go one step further and offer economic finality (in best case scenarios, e.g., low latency and no network attack) on the order of minutes or even seconds; the best case for Eth2 is one epoch, or 6.4 minutes.
  • Can a user pay more for more secure or faster consensus? Since a transaction may have a greater or lesser value to its sender, not every type of transaction needs the same priority or degree of security. In platforms such as Bitcoin and Ethereum it’s possible to pay a higher fee to ensure that a transaction gets mined more quickly, but consensus is all or nothing: either your transaction is eventually confirmed by the entire chain, or it isn’t. It would be very nice to have a form of tiered consensus where a transaction can buy a higher degree of security, such as confirmation by more validators, by paying a higher fee. This is theoretically possible using layer two technologies today (it’s effectively the difference between sending funds via a state channel, such as Lightning Network, versus sending the transaction directly on mainnet), but it may be a desirable property of future layer one protocols as well.
  • Does the platform have high throughput? Bitcoin and Ethereum both top out on the order of 10 tx/sec right now, which likely isn’t enough throughput, as evidenced by network congestion on both networks in late 2017 (a back-of-envelope version of this arithmetic appears after this list). I suspect that around 1000 tx/sec is a reasonable goal for now, as consensus should be tiered (see previous bullet point) and not every individual transaction needs to be confirmed at layer one, although additional layer one throughput may quickly be consumed if it’s cheap (see induced demand). Higher value, higher priority transactions can pay more to be confirmed at layer one, while other transactions can be confirmed at higher layers, or else confirmed at layer one more slowly, in batches.
  • Does the platform enable easy interoperation? Platforms such as Cosmos and Polkadot enable varying degrees of cross-chain interoperability out of the box. In theory, any platform with a Turing complete VM, such as Ethereum, can interoperate perfectly with any other, but in practice verifying transactions from other chains can be very expensive. Interoperability can also require makeshift bridges, which are costly to operate and maintain; as one example, Ethereum’s BTCRelay service is unmaintained and unused. Platforms with a less expressive VM, such as Bitcoin, require relatively clumsy, capital inefficient solutions built on more expressive platforms such as Ethereum. One way to make interoperability easy is to make writing a light client easy, for instance, by making the proof of work mechanism easy to verify (for PoW chains); a sketch of the per-header check a light client performs appears after this list.
  • Does the platform offer primitives other than consensus? The most essential service offered by a smart contract blockchain is in-protocol computation: consensus that a particular piece of code was executed according to the rules of the protocol with a given set of inputs, outputs, and, possibly, side effects (in other words, that a particular state transition occurred according to the protocol). However, robust, user friendly applications will rely on other services such as messaging and storage. To continue to rely on centralized service providers for these services is to throw the Web3 baby out with the bath water, yet storing data directly using Bitcoin or Ethereum transactions is prohibitively expensive for all but the simplest applications. Projects such as Whisper and Swarm from Ethereum, and Filecoin from Protocol Labs, plan to offer these missing primitives, but none are yet production ready. Whether such services are offered in- or extra-protocol, straightforward integration into applications and smart contracts is desirable.
  • Is the protocol efficient with its use of data? Size is a factor in all sorts of places: the size (in bytes) of a block header, the length of a transaction ID or block hash, the size of a transaction, the size of signatures that need to be verified, the size of the data that must be passed among nodes, etc. Other things being equal, the more efficient the protocol is with respect to its use of data, the better: less data means that nodes can communicate more cheaply and efficiently, and that the blockchain data structure will experience less bloat over time (the quick sizing arithmetic after this list shows the scale involved). Of course, there are tradeoffs here as well, since more expressive transactions and better cryptography may require more bytes. One way to reduce bloat in the size of the state, which every full node must store, is to ensure that storing state is expensive (in terms of gas).
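
To make the probabilistic finality point above concrete, here is a minimal sketch, in Python, of the attacker catch-up estimate from section 11 of the Bitcoin whitepaper: the probability that a transaction with z confirmations is later reversed by an attacker controlling a given share of hashpower. The 10% attacker share used in the example is purely illustrative.

```python
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Nakamoto's estimate (Bitcoin whitepaper, section 11) of the probability
    that an attacker controlling fraction q of the hashpower ever overtakes
    the honest chain once a transaction has z confirmations."""
    p = 1.0 - q                # honest hashpower share
    lam = z * (q / p)          # expected attacker progress while z honest blocks arrive
    prob = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With a 10% attacker, six confirmations (roughly an hour of Bitcoin blocks)
# push the reversal probability below 0.1%.
for z in (1, 3, 6):
    print(f"{z} confirmations: {attacker_success_probability(0.10, z):.6f}")
```

The Eth2 figure quoted above is simpler arithmetic: an epoch is 32 slots of 12 seconds each, or 6.4 minutes.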
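
The throughput figures above are order-of-magnitude estimates; a back-of-envelope sketch of the arithmetic, under the rough assumptions noted in the comments, looks like this:

```python
# Rough, order-of-magnitude throughput estimates; all inputs are approximate.

# Bitcoin: ~1 MB of transaction data per block, ~250-byte average transaction,
# one block every ~10 minutes (600 seconds).
btc_tps = 1_000_000 / 250 / 600            # ~6.7 tx/sec

# Ethereum: assuming an ~8M block gas limit and ~13-second blocks. A bare
# transfer costs 21,000 gas, but typical contract calls cost several times
# more, which is why observed throughput sits closer to 10-15 tx/sec.
eth_tps_ceiling = 8_000_000 / 21_000 / 13  # ~29 tx/sec theoretical ceiling

print(f"Bitcoin ~{btc_tps:.1f} tx/sec, Ethereum ceiling ~{eth_tps_ceiling:.0f} tx/sec")
```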
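
On the light client point: for a PoW chain such as Bitcoin, the essential per-block check a light client performs is a single hash comparison against the target encoded in the header. A minimal sketch, assuming a raw 80-byte Bitcoin block header as input:

```python
import hashlib
import struct

def bits_to_target(bits: int) -> int:
    """Expand Bitcoin's compact 'nBits' encoding into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x00FFFFFF
    return mantissa << (8 * (exponent - 3))

def header_meets_target(header: bytes) -> bool:
    """Verify the proof of work of a single 80-byte Bitcoin block header.
    This check (plus confirming that headers link to one another and that
    difficulty adjustments follow the rules) is essentially all an SPV-style
    light client needs per block: no transactions, no state."""
    assert len(header) == 80
    (bits,) = struct.unpack_from("<I", header, 72)  # nBits field at byte offset 72
    block_hash = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(block_hash, "little") <= bits_to_target(bits)
```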
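
Finally, to put the data efficiency point in perspective, two lines of arithmetic using Bitcoin's well-known constants (the figures are approximate):

```python
# Bitcoin produces roughly 144 blocks per day (one every ~10 minutes).
blocks_per_year = 144 * 365

# An 80-byte header per block: a client storing only headers grows by a few
# megabytes per year...
header_growth_mb = 80 * blocks_per_year / 1e6             # ~4.2 MB/year

# ...while a node storing ~1 MB blocks grows by tens of gigabytes per year.
full_block_growth_gb = 1_000_000 * blocks_per_year / 1e9  # ~53 GB/year

print(f"Headers: ~{header_growth_mb:.1f} MB/year, full blocks: ~{full_block_growth_gb:.0f} GB/year")
```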

This article is part of a multi-part series on the key ingredients to a better blockchain. Check out the other articles in the series:

Special thanks to Alexey Akhunov, Alex Beregszaszi, Greg Colvin, Casey Detrio, Aviv Eyal, and Shahar Sorek for valuable feedback on an earlier draft of this article.