We humans have learned to build big, impressive things, but we’re less good at building things that last. Even the most mature digital technologies have survived barely one generation in human terms. Blockchain has the potential for real longevity, but to achieve this, we must do a much better job of designing for the long run.
Table of Contents
- Onboarding and Enfranchisement
- Avoiding Capture
- Data Bloat
- Scope Creep
- Scalability
- Open Source Readiness
- Base Layer Complexity
- A Healthy Ecosystem
- Conclusion: Where We’re Headed
The previous article in this series discussed the critical transition that occurs as a platform and community move from the R&D phase into the production phase with the launch of a mainnet. While there are many challenges associated with this transition, things do not magically get easier once it’s over.
To use a more concrete metaphor, launching a production network is like piloting an airplane during takeoff. It’s by far the most dangerous phase in the journey. However, it’s not the case that, once airborne, all danger has passed. What’s changed is the nature of the danger and what must be done to avoid it. Staying airborne is more about finding and maintaining equilibrium, continuing to make steady improvements, and making progress along a charted course while dealing with unexpected challenges.
An airplane requires a fantastic array of systems and technologies to stay safely airborne, and a blockchain is no different. This article discusses things you can do to maximize the sustainability and longevity of a project and its community. This article is about the long run.
All aircraft need fuel to keep flying. For blockchain, that fuel comes in the form of ongoing investment of capital—financial, intellectual, and social capital—needed to fund security, infrastructure R&D, and other common goods. As with nation states, economic policies have important implications for a network’s long-term sustainability, such as the degree to which it’s able to attract outside sources of capital and maintain an adequate degree of ongoing investment in public goods. The two most important questions are where this funding comes from and how it’s distributed.
Every network needs an initial pool of capital for bootstrapping. This capital is generally provided by a small, centralized, core R&D team and the investors that back them. This centralized model works well because it’s efficient: funding flows freely, decisions can be made relatively easily, and the network can be designed, built, and launched relatively quickly.
Once an initial source of capital has been secured, the economic firepower it provides can then be extended through issuance. Investment implies a valuation for the network and its token, meaning that some service providers will be willing to accept the token in exchange for providing services to the network. Like a traditional firm compensating employees with stock and other forms of equity-based compensation, the network and community can fund investment via inflation, which can come in the form of a premine (issuance that occurs before network launch) or ongoing issuance (e.g., block rewards/subsidy).
The first thing that must be funded is network security. While Bitcoin and other blockchains plan to eventually transition to an entirely fee-based model with less and less issuance, no major blockchain has yet successfully made this transition and doing so remains complex and risky. Funding security therefore almost certainly requires a subsidy. Most blockchain communities also decide to fund R&D work and other public goods through a premine and/or by capturing a portion of such rewards in an ecosystem fund (see, for instance, the Zcash dev fund).
However, there are limits to the amount of inflation that’s acceptable and sustainable. We need look no further than countries such as Argentina and Zimbabwe to see the long-term consequences of unsustainable economic policy and inflation that’s too high relative to a country’s real output. In practice, even the most fiscally liberal blockchain communities maintain at most constant issuance (i.e., issuing a fixed number of new tokens per block), meaning that inflation falls over time in proportional terms.
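To make the arithmetic concrete, here’s a minimal sketch with made-up numbers (a hypothetical genesis supply of 10M tokens and a fixed issuance of 1M tokens per year, not taken from any real network) showing how constant absolute issuance translates into a falling inflation rate:

```python
# Hypothetical parameters: constant absolute issuance means the inflation
# *rate* falls each year in proportional terms.
GENESIS_SUPPLY = 10_000_000   # made-up genesis supply
ANNUAL_ISSUANCE = 1_000_000   # made-up fixed issuance per year

supply = GENESIS_SUPPLY
for year in range(1, 11):
    rate = ANNUAL_ISSUANCE / supply  # inflation relative to current supply
    print(f"year {year:>2}: inflation {rate:.1%}")
    supply += ANNUAL_ISSUANCE        # supply grows linearly, so the rate decays
```

Year one’s rate is 10%; by year ten, the same absolute issuance represents only about 5.3% of the (larger) supply.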
Paying for public goods such as security and R&D through inflation means redistributing intrinsic capital (i.e., capital that is already present within the network): for instance, paying a subsidy to miners redistributes value away from all other tokenholders to fund security. In order for a network to continue to grow and develop, it also needs to attract extrinsic capital, i.e., capital that’s currently deployed elsewhere. One signal this is happening is appreciation of the value of a network’s token against benchmarks like USD or BTC. In the best case scenario, the value of the network token will gradually appreciate so that less issuance is necessary to sustain a given level of real public spending.
The question of how to attract extrinsic capital to a project is thorny and beyond the scope of this article, but a good starting point is to clearly articulate what a project stands for, and to put in place inclusive economic policies and institutions. For more on this topic, see Economics and To Share or Not to Share?.
Onboarding and Enfranchisement
Openness is not a priority for every project. Some projects are intended to have niche appeal, only solicit contributions from big companies, or otherwise want to limit the number of stakeholders. Given that decentralization and permissionlessness are central to blockchain technology and the things it enables, however, openness is highly valued by many blockchain projects and communities.
Projects that do value openness and intend to have broad appeal should take steps to be as open as possible to new contributors, users, and other community members and stakeholders for as long as possible. It’s critical to understand that openness is not a default state of a project and doesn’t come for free. On the contrary, as projects mature, as more stakeholders appear, and as the stakes increase, early stakeholders and those in positions of influence naturally tend to take advantage of those positions, an economic phenomenon known as rent-seeking. In the process, projects tend to become less welcoming towards new arrivals. It takes a great deal of self-awareness, foresight, planning, and hard work to prevent this from happening and instead to “lock the door open” to newcomers.1
It should be easy for new arrivals to learn about the project and community, to get up to speed and begin contributing quickly, and to earn a stake in the network: not only an ownership or economic stake but also a voice in governance. Networks where most ownership and influence remain in the hands of a small number of elite stakeholders will struggle to appeal to a wider audience. After all, isn’t this how the world already works, and isn’t the purpose of blockchain to do better by embodying values such as decentralization and equality?
Meaningfully distributing ownership, economic stake, and influence is not easy. Blockchain appeals today to a relatively narrow segment of society, one that’s already well-off by global standards, and reaching outside this demographic is challenging. But it’s not necessary to enfranchise a billion people overnight. The most important thing is to ensure that, rather than entrenching existing power structures, we instead make room at the table for those who arrive later. A good place to start doing this is with balanced, multipolar systems of governance with division of responsibility and checks and balances. It’s also a good idea to set aside a significant chunk of ownership and influence, in the form of tokens or other digital property, for future network participants, and to make it as easy as possible for them to earn a stake through mechanisms that are open, permissionless, and credibly neutral.
Distributing decision-making authority away from a centralized party also goes a long way towards promoting the longevity of a project. One reason is to ensure that a project avoids capture (see next section) and can continue to develop indefinitely. Another reason is to make sure that most community members and project stakeholders feel that their voice is being heard and considered, so that they don’t feel disenfranchised and lose interest or faith in a project. A third reason is to avoid “key person” risk: overreliance on a single person or organization. Distributing governance makes room for a greater diversity of voices and preferences, which tends to lead to a better platform over time, and for more, and more diverse, stakeholders. A robust system for considering the needs of all project stakeholders, and for distributing funding fairly, is one key task of project governance.
While a small set of private investors and centralized management of resources including funding is probably sufficient to launch a network, and has the benefit of efficiency as discussed above, requirements will change post-genesis. Over time successful projects will experience an increase in both the number and diversity of stakeholders. As this happens, centralized provision and management of funding may become unsustainable. This is normal and happens for a variety of reasons: priorities change and some early project backers may decide to invest in other things. Different stakeholders and different backers have different preferences and will tend to form coalitions. There’s no way that any single backer can win the trust and support of everyone in a large, diverse community, and there’s no way that they can fund a project forever no matter how deep their pockets.
To prepare for this eventuality, it’s very helpful to establish other sources of funding as early as possible. This may include funding from private individuals and organizations with an interest in the network’s success, in the form of ongoing token sales, grants, bounties, or sponsored seats for developers or researchers. It may include donations from community members (see Gitcoin, a platform for managing fundraising campaigns and donations across many chains). It may include an ecosystem fund. Having diverse sources of funding helps ensure that the network and community have the means to survive regardless of what happens to any individual person or organization.
Governance is of course complex and thorny and there is no single model appropriate for every project. For much more on this topic, see Governance.
Avoiding Capture
When determining how to govern a blockchain, one of the first and most important questions to contemplate is: who is this platform for? As discussed above, some projects are intended for the use and benefit of a niche group, such as a consortium of companies. Others have more ambitious aims and regard themselves as truly public and open. Such platforms are designed for anyone, anywhere to use, and intend to host a wide variety of applications.
In spite of our best intentions, however, sometimes a system intended for the use and benefit of one group of people falls under the control of a different group of people. The term “capture” refers to state capture, a phenomenon all too common in real world politics whereby a small group of people, institutions, or special interests gain a great deal of power and influence over the governance of a country or other political entity and exploit that power and influence to benefit private interests, at everyone else’s expense. This often involves misuse of power such as ensuring that favorable laws are passed or that funding is directed towards private interests. A key characteristic of state capture is that it cannot be discovered or remedied through normal, legal means because the institutions that would ordinarily undertake investigation and prosecution (e.g., police, courts, electoral process, legislative and executive powers) are likely to have been subverted by the captors.
It may seem exaggerated to talk of state capture in the context of blockchain, since there isn’t nearly as much at stake in blockchain as in global politics. However, blockchains and states have a lot in common. Nation states and public blockchains both have millions of stakeholders. Both have governance that is, nominally, open and participatory. Blockchains do a lot of the things that states do: they have economic policy, they mint currency and collect taxes (transaction fees), they fund public goods, and they enact regulation (the protocol and other standards). At least in economic terms, there is in fact a lot at stake in blockchain: Bitcoin has a market capitalization larger than the GDP of all but 26 countries, putting it right between Nigeria and Belgium in economic terms. And they are both, unfortunately, subject to capture.
A blockchain may be captured in many ways, some more visible and obvious than others. A proof of stake chain may be captured permanently by a cartel of validators that control more than half of the chain’s overall stake, something that may be invisible if that stake is divided into many small, pseudonymous pieces. A similar thing can happen in a proof of work chain, when one party or cartel controls a high enough percentage of mining. Capture can also happen by social means, such as when one organization or cartel controls most of the funding or influence in a community. A corporation or even a state actor could capture a blockchain, openly or secretly, by accumulating a majority of hash power or validator slots or by corrupting key actors in a community.
Avoiding capture is extraordinarily difficult, and is a highly contentious topic in the blockchain community. While some feel that all explicit governance structures or processes are subject to capture, and that blockchain governance should therefore remain informal, informal systems of governance are subject to yet another especially insidious form of capture, the tyranny of structurelessness.
As discussed in the previous section, good governance is key for sustainability. In order for governance to be sustainable, it must be resistant to capture. In this, as in so many other things, we should look to best practices from the pre-blockchain, offline world: the best way to avoid capture is to design open, participatory institutions of governance and law that are transparent and accountable to the broader population, and that are subject to checks and balances. See Governance for more on this topic.
Data Bloat
There seems to be a rule that, over time, the throughput and hence the amount of bandwidth and data processing capacity required to fully validate a blockchain increases monotonically in the best case, and exponentially in the worst. It certainly never gets easier! Even with extremely limited throughput, as of this writing the Bitcoin blockchain is around 317 GB and growing rapidly, at around 25% per year. The disk space required to fully sync the Ethereum blockchain has reached nearly 600 GB, and due to more demanding resource requirements this process can take weeks, or sometimes not finish at all, even on good hardware. While it used to be possible to run an Ethereum full node on consumer-grade hardware on a home internet connection, that’s rapidly becoming untenable. Home mining, which was possible in the early days of both projects—and, indeed, was the way many early stakeholders earned their first BTC and ETH tokens—is now all but impossible. In this respect, popular blockchains such as Bitcoin and Ethereum have become victims of their own success.
While the problem of ever-rising resource requirements cannot be completely avoided, there are things that a project can do today to partially mitigate the issue. The most important thing is simply to do the math: sensitivity analysis will help you calculate best, middle, and worst case scenarios for the projected size of the database and the resources required to operate a full node down the road. If these requirements are too high to support your planned use cases, the best time to make changes is prior to genesis. Even shaving a few bytes off the size of a transaction or a signature, or allowing nodes to prune some types of data, can have a profound impact on resource requirements in the future.
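As a sketch of what “doing the math” can look like, the following back-of-the-envelope sensitivity analysis projects raw transaction data accumulated under best, middle, and worst case assumptions. The transaction sizes and throughput figures are purely illustrative, not drawn from any real network:

```python
def projected_size_gb(tx_bytes: int, tx_per_sec: float, years: int) -> float:
    """Raw transaction data accumulated over `years`, in gigabytes."""
    seconds = years * 365 * 24 * 3600
    return tx_bytes * tx_per_sec * seconds / 1e9

# Hypothetical scenarios: (average transaction size in bytes, transactions/sec).
scenarios = {
    "best":   (250, 5),    # small transactions, light usage
    "middle": (400, 50),
    "worst":  (600, 500),  # large transactions, heavy usage
}
for name, (size, tps) in scenarios.items():
    print(f"{name:>6}: {projected_size_gb(size, tps, years=5):>10,.0f} GB after 5 years")
```

Even this crude model makes the stakes obvious: the worst case here is two orders of magnitude larger than the best case, and a few bytes shaved per transaction compound into terabytes at scale.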
Firstly, as a rule of thumb, blockchain projects should strive to make data encoding as efficient as possible. Different cryptographic signature formats produce signatures of varying sizes. As one concrete example, ECDSA signatures allow the public key associated with a signature to be extracted implicitly from the signature,2 meaning that a transaction does not need to explicitly contain the public key of its sender. Different serialization algorithms also clearly have an impact on data size.
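As a rough illustration of how much encoding choices matter, the sketch below compares a verbose JSON encoding of a hypothetical transaction with a fixed-width binary layout of the same fields. The field names and sizes are invented for the example and don’t correspond to any real wire format:

```python
import json
import struct

# Hypothetical transaction: 8-byte nonce, 20-byte address, 8-byte value,
# 65-byte signature. All values are made up for illustration.
tx = {"nonce": 7, "to": b"\x01" * 20, "value": 1_000_000, "sig": b"\x02" * 65}

# Verbose encoding: JSON with hex strings for the byte fields.
verbose = json.dumps({k: v.hex() if isinstance(v, bytes) else v for k, v in tx.items()})

# Compact encoding: a fixed-width binary layout of the same fields (101 bytes).
compact = struct.pack(">Q20sQ65s", tx["nonce"], tx["to"], tx["value"], tx["sig"])

print(len(verbose.encode()), "bytes verbose vs", len(compact), "bytes compact")
```

The compact layout is less than half the size of the verbose one here, and that ratio is multiplied by every transaction the network ever stores.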
Secondly, give careful consideration to precisely which data a full node must retrieve and store, and how long it must be stored. While core data such as blocks and transactions may need to be stored indefinitely, auxiliary data such as receipts and logs can likely be pruned after some time. It should be clear which data a node must store indefinitely, which data may be pruned, and when and how to do the pruning.
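One way to make such a policy explicit is to encode it as data. The sketch below is hypothetical: the data classes and the 100,000-block retention window are illustrative, not taken from any real node implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Retention:
    prunable: bool
    keep_blocks: Optional[int]  # None means "keep forever"

# Hypothetical policy: core data is kept forever, auxiliary data may be pruned.
POLICY = {
    "blocks":       Retention(prunable=False, keep_blocks=None),
    "transactions": Retention(prunable=False, keep_blocks=None),
    "receipts":     Retention(prunable=True,  keep_blocks=100_000),
    "logs":         Retention(prunable=True,  keep_blocks=100_000),
}

def may_prune(kind: str, age_in_blocks: int) -> bool:
    """A node may discard data once it is prunable and older than its window."""
    r = POLICY[kind]
    return r.prunable and age_in_blocks > r.keep_blocks
```

An explicit table like this makes the retention rules auditable and easy to reason about, rather than leaving them implicit in scattered database code.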
A third strategy is division of labor: rather than requiring every node to fully synchronize and validate the entire blockchain, it may make sense to allow individual nodes to be responsible for syncing and validating only a portion of the chain, such as through sharding. Another example of division of labor is the archive node, which stores not only all blocks, transactions, and auxiliary data like other full nodes, but also every intermediate state. This makes certain types of data queries much faster and easier, but requires substantially more disk space than an ordinary full node (around 6 TB as of this writing for an Ethereum archive node).
A fourth strategy is better economics and incentives. In most blockchains, a user only needs to pay a single, upfront fee to write some data to the blockchain. The data must then be stored forever by all full nodes, which receive no ongoing compensation for storing the data. As a result, as many as 95% of the smart contracts stored on the Ethereum network are never used or are used only a few times,3 so from the perspective of the network this space is wasted and there is no way to reclaim it.4 A better economic model involves charging the owner of the data for this storage service per unit time, either directly through the payment of “storage rent” or “state fees” or else indirectly by requiring them to lock some tokens into a “storage bond.” This properly incentivizes use of scarce storage space and prevents such waste.
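Here’s a minimal sketch of how per-block storage rent might be computed. The rate and units are made up; no real network’s parameters are used:

```python
# Hypothetical price: rent charged per byte of state per block, in the
# smallest token unit. A real network would tune this via governance.
RENT_PER_BYTE_PER_BLOCK = 10

def rent_due(num_bytes: int, blocks_elapsed: int) -> int:
    """Total rent owed for keeping `num_bytes` in state for `blocks_elapsed` blocks."""
    return num_bytes * blocks_elapsed * RENT_PER_BYTE_PER_BLOCK

def evictable(balance: int, num_bytes: int, blocks_elapsed: int) -> bool:
    """Data whose owner can no longer cover the rent becomes eligible for pruning."""
    return balance < rent_due(num_bytes, blocks_elapsed)
```

The key property is that the cost of occupying state now scales with time as well as size, so abandoned data eventually pays for its own removal.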
Additionally, there are a number of experimental strategies that should someday reduce the data storage burden on blockchain nodes. One such strategy, known as statelessness, shifts the burden of data storage from full nodes to users by requiring users to store their own data (or, alternatively, pay someone else to store it for them). Another experimental strategy involves using recursive zero-knowledge proofs to collapse the entire blockchain state at a given point in time down to a single proof (see Mina and Halo). While I’m not aware of any projects having launched these strategies in production, this is an area of active, ongoing research and I expect that, eventually, they will be put to good use.
Scope Creep
Data and resource requirements aren’t the only aspect of blockchain that tends to grow larger over time. Software, like regulation, tends to monotonically grow more complex. While this may be a good thing for some types of software, this is arguably not the case for a system like a blockchain. As base layer infrastructure, the goal of blockchain should be to grow more stable and reliable over time, not more complex. As Antoine de Saint-Exupéry famously said,
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Blockchain software would do well to take this admonition to heart. Every blockchain should have a social contract with its users and developers that sets out the things it aims to achieve. Once those things are achieved, the only things that should change are emergency fixes and “under the hood” performance improvements. Beyond this, new features increase the risk of upgrade problems, new attack vectors, and new security vulnerabilities, and thus amount to dangerous bloat.
For a good example we can look to Bitcoin, which has arguably already achieved its most important goals: a censorship-resistant, seizure-resistant, non-sovereign form of sound money. There are many features that could be added to increase scalability or enable various use cases, but not without compromising on Bitcoin’s implicit social contract and its promises of security, trustlessness, and a hard issuance cap. This has led to a norm of “no hard forks” on the part of Bitcoin Core and in the Bitcoin community more broadly, and a roadmap with very few new features. The few features on the roadmap are mostly designed to improve L2 solutions such as the Lightning Network, where most innovation in Bitcoin is happening today.
As an even better, more mature example, we can look to standards such as Ethernet, IPv4, and ASCII. These are foundational building blocks of modern computing and the Internet that developers can safely rely on because they’ve stopped evolving. These technologies have achieved their goals and have thus evolved to a final, complete state. This is sometimes referred to as Kelvin versioning: rather than using incremental versions as most software does, we can imagine some standards cooling towards an absolute zero, final state. This is how standard infrastructure like blockchains, VMs, and ISAs should be designed and engineered.
To be clear, this does not mean that innovation should stop or slow, but rather that it should be pushed to higher layers in the technology stack. While IPv4 hasn’t changed meaningfully since 1981, new standards such as QUIC have been built on top of it. The same is true of blockchain: the base layer provides primitives such as P2P/gossip, the database, consensus, transactions, and a VM, and applications at layer two and above can make use of these primitives to do more complex, innovative things. The base layer should continue to evolve until it’s mature, reliable, scalable, and secure, at which point innovation should be pushed up to these higher layers.
Scalability
While the term scalability can be interpreted in several ways, most commonly it refers to the number of transactions that a blockchain can process per unit time. It’s one of the thorniest, most complex, and most important topics in blockchain, and is an area of active research and development. A network that doesn’t scale won’t be able to handle high transaction throughput, won’t be able to support many production applications and use cases, and will struggle to attract users and ongoing investment, negatively impacting sustainability.5
There are many approaches to scaling blockchains. However, they can all be reduced to one fundamental trilemma: security vs. transaction throughput vs. decentralization. Other things being equal, adding more nodes to a network increases decentralization but decreases transaction throughput since more nodes need to exchange more messages in order to reach consensus and stay in sync. And, other things being equal, requiring nodes to do more work to produce or validate blocks makes the network more secure but decreases transaction throughput since that work takes time.
These tradeoffs stem from fundamental limits on communication, computation, and coordination, and there is no way to engineer around them: you cannot increase throughput without reducing security, decentralization, or both. Every blockchain that wishes to increase throughput must therefore make difficult tradeoffs.
One approach to scalability is to limit the number of nodes that participate in consensus by, e.g., requiring that they be large, powerful computers with very fast and reliable internet connections and/or that potential validators apply for permission to join the network. This approach, which trades decentralization for scalability, has been adopted by projects such as Solana and Hashgraph. Another approach is to divide the blockchain into a set of smaller logical chains, and to have fewer nodes validate the transactions on each of these smaller chains. Depending on whether and how they share security, the sub-chains may be called shards or parachains. This approach, which trades security for scalability, has been adopted by projects such as Eth2 and Polkadot.
While there’s no way around these tradeoffs in the chain topology, there are other distributed ledger topologies with different tradeoffs. DAG-based protocols such as Kadena and Spacemesh are able to achieve higher throughput without sacrificing security or decentralization by adding multiple blocks at each block height. However, these protocols are more complex than blockchain protocols and are unproven in production.
There are many competing architectures for scaling and none has so far emerged as dominant. As is always the case in design and engineering, each architecture has certain strengths and weaknesses. Each project should study the options, consider its needs and preferences, and pick one. While transaction volume may be low in the beginning, and scalability may thus not feel like a priority, it tends to increase suddenly and exponentially; if scalability matters, it’s important to have a plan in place before that happens. You can always start off with a single shard or otherwise limited throughput, and increase it later as needed.
Open Source Readiness
While a small, focused, experienced team of developers may be able to build and ship a production blockchain platform, at some point they’ll probably want to open the project for contributions from outside developers. There are several reasons for this: funding may be limited, complexity increases over time and they’ll need help, having many independent client implementations is good, good ideas come from many places, and it gives the community a sense of ownership in the platform, to name a few.
However, as any developer who has tried knows all too well, soliciting outside contributions to a project, especially one as complex as a blockchain, involves a lot more than making the repository public! There are many open source projects competing for developers’ limited time and attention, and there are only so many developers with the skills necessary to contribute meaningfully to the development of full node software (skills such as systems programming, concurrency, networking, and languages like Rust and Go). There are certain steps you can take to make your project as inviting as possible to outside contributors.
These include picking the right license (and making sure that all prior contributors sign off on the license), improving documentation and tooling, and adopting good engineering hygiene such as making sure code is clean, well documented, and easy to read. It means improving software engineering workflow: clearly tagging issues (“Help Wanted” and “Good First Issue” are especially helpful here), moving as much communication as possible from internal channels into public ones such as issues and pull requests, communicating clearly, having a clear workflow for reviewing and merging code changes, and sticking to this workflow. It means responding quickly and politely to outside contributions and questions. Many teams of developers, used to working only among internal team members and communicating on private channels, may find these changes uncomfortable in the beginning.
It also requires thinking outside the box, escaping one’s comfortable, narrow frame of reference, and adopting a stance of patience and empathy: things that might “just work” for the core team—because, e.g., everyone is running the same OS, or has the same tools or libraries installed—may cause headaches for potential contributors using different tools or platforms. (Of course, this is another reason why it’s good to have outside contributors: your code should work on as many platforms and be compatible with as many development environments as possible!)
To some extent, soliciting contributions from outside developers is also a marketing challenge. It’s important to articulate clearly what your project stands for and why it’s worth contributing to. Writing blog posts and speaking at conferences, about your project specifically but also about general technical challenges you’ve encountered and how you’ve solved them, can help a lot with attracting developer interest.
Base Layer Complexity
In every application stack there is a tradeoff between, on the one hand, keeping complexity in the base layer versus, on the other hand, pushing it further up the stack. Other things being equal, complexity in the base layer is nice because it ensures a more consistent experience for all apps and all users, and better tooling. Everyone relies on the same base layer, and if everyone is using the same base layer tools more can be invested in those tools and they will get better, faster.
On the flipside, a more complex base layer presents several important challenges. It’s harder to make changes when something is more complex. A more complex base layer presents a larger attack surface and increases the likelihood of bugs and exploits, reducing security. In the case of blockchain, changing the base layer usually requires a network upgrade (a hard fork), which is costly to coordinate and cannot be done too frequently. Mistakes at the base layer are also very costly, since the network will probably be burdened with them forever. And, as with politics, it’s impossible to make everyone happy with protocol changes. For this reason, other things being equal, it’s better for the base layer to be as simple as possible. This allows more “localized” innovation and experimentation to happen further up the stack.6,7
This is a tricky balancing act and every network and protocol has to choose a point on the spectrum. Optimize very far in one direction and you get Bitcoin: a simple, stable, secure base layer and cryptocurrency with mature tooling, but one that’s arguably too simple to do many really interesting, useful things. Go very far in the other direction and you get Ethereum: a protocol that changes often in ways that sometimes seem arbitrary and unpredictable, that’s less secure as a result, and that has a plethora of immature, experimental tooling.
The “Goldilocks point” is a base layer that’s as simple as possible, but no simpler—one that’s secure, reliable, and that changes in predictable ways, and enables lots of interesting innovation at higher layers. Many different apps, use cases, and narratives should ideally be able to share a single base layer and exchange data and value. Such a blockchain would be extremely sustainable because changes to the base protocol would become vanishingly rare over time.
A Healthy Ecosystem
What ultimately makes platforms like Windows, Mac, iOS, Android, Linux, and the Web so valuable and dominant is not just their technical merit. It’s a virtuous feedback loop whereby the firms and organizations behind those platforms kickstart them by investing heavily in their success and recruiting stakeholders and the first few applications, leading to network effects, leading to a growing ecosystem of developers, users, companies, and other applications relying on the platforms, leading to yet more investment and development, and so on.
While all of the above areas, from economics to governance to scalability, are essential for sustainability, none is sufficient by itself. Simply having the best technology, documentation, governance, or even people is not enough. Platforms that want to be competitive and sustainable over the long term need to invest in an ecosystem: this, above all, is why blockchain communities and platforms such as Bitcoin and Ethereum continue to receive the lion’s share of attention and investment, even as more technologically sophisticated platforms continue to launch.
Developing and maintaining a healthy ecosystem is the hardest aspect of sustainability. There is no simple recipe since each successful project and platform ecosystem can and should be quite different, and because many different kinds of ecosystems can prosper. Compare, for instance, the Windows ecosystem and the Linux ecosystem. Both are vast, mature, and hugely successful. Both include large numbers of companies, developers, and applications. But these two ecosystems look almost nothing alike—one corporate and highly centralized, the other open source and highly decentralized—and there is very little overlap between them (or, at least, there wasn’t until recently).
While ecosystem development tends to be chaotic, organic, and difficult to control, there are a few concrete steps that a project’s founders and stakeholders can take to promote the development of a healthy ecosystem. Community culture, constitution, and good governance are absolutely essential starting points as they attract the right ecosystem participants, put off the wrong ones, and help align interests. Another starting point is to kickstart the virtuous flywheel described above by investing in early wins: good documentation, reliable infrastructure and tooling, and a few killer apps. It’s also essential to cast a wide net and build bridges with different classes of stakeholders, since an ecosystem by definition requires a diverse set of participants. In addition to developers and users, this should include designers, investors, students, and those with experience managing community and public relations. Investment in education can also help a lot: bootcamps, accelerators, and online courses are a great way to attract more ecosystem contributors.
Conclusion: Where We’re Headed
An enormous amount of funding has flowed into blockchain recently. As a result, the landscape has become crowded and the field has grown quite competitive. This is good news from a social perspective because it has validated the idea of blockchain and cemented it in the minds of many people as a meaningful, high-potential technology that’s here to stay. It also means that, among the many designs being tried, it’s likely that one or more winning designs will find product-market fit, survive, and create value for many people. However, intense competition naturally makes it harder for any single platform and community to differentiate.
As the technology behind blockchain continues to develop, disseminate, and become commoditized, the best projects and teams will differentiate themselves in a way that has less to do with technology and more to do with mission, vision, values, and culture. Fostering a healthy culture is as important for the longevity of a project as any of the other topics discussed above. It’s also one of the hardest because, unlike technology, culture is hard to see, hard to define, and hard to protect.
A project’s social fabric is its contributors, community, and the mission, values, and principles that bind them together. A community that’s united primarily by a profit motive, or even by a concrete deliverable such as a particular piece of infrastructure, is not a community that will endure hardships or make it through the long haul together. Giving thoughtful attention to culture and adopting a long-term attitude and posture from the project’s earliest phases will maximize its sustainability and longevity. This is partly because these actions and ideas really do have a direct impact on a project’s and community’s health and longevity, but equally because investing in these things sends an unambiguous signal that “this community is here for good.” It attracts the right kind of people, those interested in the long-term health of the network and community. (For more on this topic, see Community, Constitution and Crypto Has a Purpose Problem.)
To continue with the metaphor introduced in the introduction, blockchain is like a powerful airship that can take us quite far. The question is, where do we want to go? In the vector that is progress, technology gives us magnitude, but it doesn’t give us direction. What good is an airship, or any machine for that matter, if we don’t know where we want it to take us?
It’s very early days for blockchain as an idea and a technology, and I’m not sure that any team, project, or community yet has a compelling answer to this question. Even the most stable and mature projects, such as Bitcoin, continue to face very real challenges and uncertainties. I don’t know which platforms, technologies, and communities will ultimately prevail, but I know one thing for sure: the ones that do will be the ones that have a clear, compelling answer to this question. In other words, they’ll be the ones that boldly and proudly know where they’re going and why, and aren’t afraid to talk about it.
As you think about the long-term sustainability of your project, this is as good a place as any to start: where are you headed, and why? While I can’t articulate exactly what our destination is, I have a feeling that, even as we take very different paths, we’re all headed there together. I look forward to seeing you there. Godspeed.
This article is part of a multi-part series on the key ingredients to a better blockchain. Check out the other articles in the series:
- Part I: Tech and protocol
- Part II: Decentralization
- Part III: Community
- Part IV: Constitution
- Part V: Governance
- Part VI: Privacy
- Part VII: Economics
- Part VIII: Usability
- Part IX: Production Readiness
- Part X: Sustainability
This happens throughout human society. It’s the reason that people in a country built by immigrants can arbitrarily decide one day that they don’t want to allow any more immigration. And it’s easy to convince yourself why this is rational: because resources are limited, because we should prioritize those who are already here, or because there just isn’t room for more—nevermind the fact that, in countries as in blockchains, immigrants create enormous value over the long term. ↩
Source: Oliva, G.A., Hassan, A.E. & Jiang, Z.M.. An exploratory study of smart contracts in the Ethereum blockchain platform. Empir Software Eng 25, 1864–1904 (2020). https://doi.org/10.1007/s10664-019-09796-5 ↩
Bitcoin and Ethereum have proven remarkably sustainable in spite of (and certainly not because of) their inherent lack of scalability. However, it’s important to note that we don’t have a counterfactual: if these networks could handle 10x or 100x the throughput, how much more popular would they be? How many more applications and users would they boast? Other things being equal, greater scalability and more throughput are always better. ↩
As a concrete example, the Ethereum community is currently considering a proposal, EIP-2593, that introduces an in-protocol bidding parameter that allows a user to specify a transaction fee that is automatically escalated over time. While this initially sounds like a good idea, upon further reflection, it feels to me like complexity that doesn’t belong at the base layer. It has the downside of making bidding more complex for everybody, and some users may be better served by an entirely different bidding strategy which they can already deploy on their own (using, e.g., better wallet software). ↩
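To make the escalator idea concrete, here is a minimal sketch of the kind of fee-escalation logic that wallet software could already implement on its own, outside the protocol. This is purely illustrative: the function name and parameters are hypothetical, not taken from EIP-2593 itself, and it assumes a simple linear escalation between a starting bid and a maximum bid over a fixed block window.

```python
def escalated_fee(start_fee: float, max_fee: float,
                  start_block: int, end_block: int,
                  current_block: int) -> float:
    """Linearly escalate a transaction fee bid over a block window.

    Illustrative wallet-side strategy, not the EIP-2593 protocol mechanism:
    the bid starts at start_fee, rises linearly each block, and is capped
    at max_fee once end_block is reached.
    """
    if current_block <= start_block:
        return start_fee
    if current_block >= end_block:
        return max_fee
    # Fraction of the escalation window that has elapsed so far.
    progress = (current_block - start_block) / (end_block - start_block)
    return start_fee + (max_fee - start_fee) * progress
```

A wallet could call this each block and rebroadcast the transaction with the new bid, which is the author’s point: the same user-facing behavior is achievable without adding bidding complexity to the base layer.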
There are some interesting real-world parallels here as well. Companies and countries face a version of this tradeoff. When they move complexity, in the form of regulation or policy, into the “base layer” (i.e., the top, executive, or national level), they gain efficiency at the cost of localized innovation, and they make the entire structure more fragile. When they devolve this complexity, they make the opposite tradeoff: more local innovation and more resiliency at the cost of standardization and efficiency. ↩