Celer Network Tech Community




              Official translation by the Celer Network Tech Community of Vitalik Buterin's latest blog post

              Layer 1 Should Be Innovative in the Short Term but Less in the Long Term



              One of the key tradeoffs in blockchain design is whether to build more functionality into base-layer blockchains themselves (“layer 1”), or to build it into protocols that live on top of the blockchain, and can be created and modified without changing the blockchain itself (“layer 2”). The tradeoff has so far shown itself most in the scaling debates, with block size increases (and sharding) on one side and layer-2 solutions like Plasma and channels on the other, and to some extent blockchain governance, with loss and theft recovery being solvable by either the DAO fork or generalizations thereof such as EIP 867, or by layer-2 solutions such as Reversible Ether (RETH). So which approach is ultimately better? Those who know me well, or have seen me out myself as a dirty centrist, know that I will inevitably say “some of both”. However, in the longer term, I do think that as blockchains become more and more mature, layer 1 will necessarily stabilize, and layer 2 will take on more and more of the burden of ongoing innovation and change.

              [Translator's note: generalized state channels, such as those built by Celer Network, are an example of the layer-2 solutions mentioned above.]







              There are several reasons why. The first is that layer 1 solutions require ongoing protocol change to happen at the base protocol layer, base layer protocol change requires governance, and it has still not been shown that, in the long term, highly “activist” blockchain governance can continue without causing ongoing political uncertainty or collapsing into centralization.


              To take an example from another sphere, consider Moxie Marlinspike’s defense of Signal’s centralized and non-federated nature. A document by a company defending its right to maintain control over an ecosystem it depends on for its key business should of course be viewed with massive grains of salt, but one can still benefit from the arguments. Quoting:

              [Translator's note: Signal is a messaging service similar to Telegram.]



              " One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we’ve developed requires centralization; it’s entirely possible to build a federated Signal Protocol-based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all. "




              " Their retort was “that’s dumb, how far would the internet have gotten without interoperable protocols defined by 3rd parties?” I thought about it. We got to the first production version of IP, and have been trying for the past 20 years to switch to a second production version of IP with limited success. We got to HTTP version 1.1 in 1997, and have been stuck there until now. Likewise, SMTP, IRC, DNS, XMPP, are all similarly frozen in time circa the late 1990s. To answer his question, that’s how far the internet got. It got to the late 90s.

              That has taken us pretty far, but it’s undeniable that once you federate your protocol, it becomes very difficult to make changes. And right now, at the application level, things that stand still don’t fare very well in a world where the ecosystem is moving … So long as federation means stasis while centralization means movement, federated protocols are going to have trouble existing in a software climate that demands movement as it does today."



              [Celer Network translator's note: in plain terms, once a protocol layer has gained broad consensus, it becomes very hard to change; if you need rapid iteration, it has to happen in protocols that are still early and have not yet accumulated broad consensus.]

              At this point in time, and in the medium term going forward, it seems clear that decentralized application platforms, cryptocurrency payments, identity systems, reputation systems, decentralized exchange mechanisms, auctions, privacy solutions, programming languages that support privacy solutions, and most other interesting things that can be done on blockchains are spheres where there will continue to be significant and ongoing innovation. Decentralized application platforms often need continued reductions in confirmation time, payments need fast confirmations, low transaction costs, privacy, and many other built-in features, exchanges are appearing in many shapes and sizes including on-chain automated market makers, frequent batch auctions, combinatorial auctions and more. Hence, “building in” any of these into a base layer blockchain would be a bad idea, as it would create a high level of governance overhead as the platform would have to continually discuss, implement and coordinate newly discovered technical improvements. For the same reason federated messengers have a hard time getting off the ground without re-centralizing, blockchains would also need to choose between adopting activist governance, with the perils that entails, and falling behind newly appearing alternatives.






              Even Ethereum’s limited level of application-specific functionality, precompiles, has seen some of this effect. Less than a year ago, Ethereum adopted the Byzantium hard fork, including operations to facilitate elliptic curve operations needed for ring signatures, ZK-SNARKs and other applications, using the alt-bn128 curve. Now, Zcash and other blockchains are moving toward BLS-12-381, and Ethereum would need to fork again to catch up. In part to avoid having similar problems in the future, the Ethereum community is looking to upgrade the EVM to E-WASM, a virtual machine that is sufficiently more efficient that there is far less need to incorporate application-specific precompiles.







              But there is also a second argument in favor of layer 2 solutions, one that does not depend on speed of anticipated technical development: sometimes there are inevitable tradeoffs, with no single globally optimal solution. This is less easily visible in Ethereum 1.0-style blockchains, where there are certain models that are reasonably universal (eg. Ethereum’s account-based model is one). In sharded blockchains, however, one type of question that does not exist in Ethereum today crops up: how to do cross-shard transactions? That is, suppose that the blockchain state has regions A and B, where few or no nodes are processing both A and B. How does the system handle transactions that affect both A and B?


              The current answer involves asynchronous cross-shard communication, which is sufficient for transferring assets and some other applications, but insufficient for many others. Synchronous operations (eg. to solve the train and hotel problem) can be bolted on top with cross-shard yanking, but this requires multiple rounds of cross-shard interaction, leading to significant delays. We can solve these problems with a synchronous execution scheme, but this comes with several tradeoffs:

              · The system cannot process more than one transaction for the same account per block

              · Transactions must declare in advance what shards and addresses they affect

              · There is a high risk of any given transaction failing (and still being required to pay fees!) if the transaction is only accepted in some of the shards that it affects but not others
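The all-or-nothing behavior described in these bullet points can be sketched as a toy model. This is an illustrative sketch only (the names `Tx`, `Shard` and `process_block` are ours, not Ethereum's): a transaction declares its shards in advance, executes only if every declared shard includes it in the block, and pays fees on whichever shards did include it even when it fails.

```python
# Toy model of the synchronous cross-shard scheme described above.
# All names are illustrative; this is not Ethereum's actual protocol.
from dataclasses import dataclass, field

@dataclass
class Tx:
    sender: str
    declared_shards: frozenset  # shards must be declared in advance
    fee: int

@dataclass
class Shard:
    included: set = field(default_factory=set)  # tx indices accepted this block
    fees_collected: int = 0

def process_block(shards: dict, txs: list) -> list:
    """Return the txs that executed atomically across all their declared shards."""
    executed = []
    for i, tx in enumerate(txs):
        accepting = {s for s in tx.declared_shards if i in shards[s].included}
        for s in accepting:                       # fees are paid regardless of outcome
            shards[s].fees_collected += tx.fee
        if accepting == set(tx.declared_shards):  # all-or-nothing execution
            executed.append(tx)
    return executed

# "Train and hotel": book a train ticket on shard A and a hotel on shard B
# atomically. Shard B's block omits the tx, so nothing executes, but the
# fee on shard A is still lost.
shards = {"A": Shard(included={0}), "B": Shard(included=set())}
txs = [Tx("alice", frozenset({"A", "B"}), fee=10)]
print(process_block(shards, txs))  # [] -> the tx failed
print(shards["A"].fees_collected)  # 10 -> fee still paid on shard A
```

The model makes the third bullet concrete: partial inclusion is the expensive failure mode, because fees are charged shard-by-shard while execution is atomic.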



              [14] https://github.com/ethereum/wiki/wiki/Sharding-FAQs#how-can-we-facilitate-cross-shard-communication




              It seems very likely that a better scheme can be developed, but it would be more complex, and may well have limitations that this scheme does not. There are known results preventing perfection; at the very least, Amdahl’s law puts a hard limit on the ability of some applications and some types of interaction to process more transactions per second through parallelization.

              [Translator's note: Amdahl's law is a basic result in parallel computing. It says that as long as some fraction of the work in a parallel system cannot be parallelized (in a sharded blockchain, the cross-shard transactions), the marginal gain in throughput from adding more parallelism (more shards) keeps shrinking, and total speedup is bounded.]
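Amdahl's law mentioned above can be made concrete with a short calculation. This is an illustrative example (the function and the 5% figure are ours, not from the post): `serial` stands for the fraction of work, such as cross-shard coordination, that cannot be parallelized.

```python
# Amdahl's law: maximum speedup from parallelizing a workload where a
# fraction `serial` of the work cannot be parallelized.
def amdahl_speedup(serial: float, n_shards: int) -> float:
    """Speedup with n_shards parallel units, given a serial fraction."""
    return 1.0 / (serial + (1.0 - serial) / n_shards)

# Even with only 5% inherently serial work, speedup is capped at 1/0.05 = 20x:
print(round(amdahl_speedup(0.05, 64), 2))     # 15.42
print(round(amdahl_speedup(0.05, 1024), 2))   # 19.64
print(round(amdahl_speedup(0.05, 10**9), 2))  # 20.0 (the asymptotic cap)
```

Going from 64 to 1024 shards buys only a ~27% improvement here, which is the diminishing-returns effect the text refers to.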



              So how do we create an environment where better schemes can be tested and deployed? The answer is an idea that can be credited to Justin Drake: layer 2 execution engines. Users would be able to send assets into a “bridge contract”, which would calculate (using some indirect technique such as interactive verification or ZK-SNARKs) state roots using some alternative set of rules for processing the blockchain (think of this as equivalent to layer-two “meta-protocols” like Mastercoin/OMNI and Counterparty on top of Bitcoin, except because of the bridge contract these protocols would be able to handle assets whose “base ledger” is defined on the underlying protocol), and which would process withdrawals if and only if the alternative ruleset generates a withdrawal request.

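The bridge-contract idea can be sketched in a few lines. This is a minimal illustration, not a real contract: all names are ours, and the indirect verification step (interactive verification or a ZK-SNARK in practice) is modelled as a pluggable callback. The base layer only locks deposits and releases them if and only if the alternative ruleset generates a verified withdrawal request.

```python
# Minimal sketch of a layer-2 execution engine behind a "bridge contract".
# Illustrative names only; verification is abstracted as a callback.
class BridgeContract:
    def __init__(self, verify_state_root):
        # verify_state_root stands in for interactive verification / ZK-SNARKs
        self.verify_state_root = verify_state_root
        self.deposits = {}          # user -> amount locked on layer 1
        self.withdrawal_queue = []

    def deposit(self, user: str, amount: int):
        self.deposits[user] = self.deposits.get(user, 0) + amount

    def submit_state_root(self, root: bytes, withdrawals: list, proof) -> bool:
        """Accept a layer-2 state root only if the proof checks out;
        `withdrawals` are the requests the alternative ruleset generated."""
        if not self.verify_state_root(root, withdrawals, proof):
            return False
        self.withdrawal_queue.extend(withdrawals)
        return True

    def process_withdrawals(self):
        """Pay out verified withdrawal requests from locked deposits."""
        paid = []
        for user, amount in self.withdrawal_queue:
            if self.deposits.get(user, 0) >= amount:
                self.deposits[user] -= amount
                paid.append((user, amount))
        self.withdrawal_queue.clear()
        return paid

def accept_all(root, withdrawals, proof):
    return True  # stand-in for a real verification procedure

bridge = BridgeContract(accept_all)
bridge.deposit("alice", 100)
bridge.submit_state_root(b"root", [("alice", 40)], proof=None)
print(bridge.process_withdrawals())  # [('alice', 40)]
print(bridge.deposits["alice"])      # 60
```

The key property the text describes is visible in `submit_state_root`: the base layer never interprets the alternative ruleset itself, it only checks a proof about its output.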







              Note that anyone can create a layer 2 execution engine at any time, different users can use different execution engines, and one can switch from one execution engine to any other, or to the base protocol, fairly quickly. The base blockchain no longer has to worry about being an optimal smart contract processing engine; it need only be a data availability layer with execution rules that are quasi-Turing-complete so that any layer 2 bridge contract can be built on top, and that allow basic operations to carry state between shards (in fact, only ETH transfers being fungible across shards is sufficient, but it takes very little effort to also allow cross-shard calls, so we may as well support them), but does not require complexity beyond that. Note also that layer 2 execution engines can have different state management rules than layer 1, eg. not having storage rent; anything goes, as it’s the responsibility of the users of that specific execution engine to make sure that it is sustainable, and if they fail to do so the consequences are contained to within the users of that particular execution engine.


              In the long run, layer 1 would not be actively competing on all of these improvements; it would simply provide a stable platform for the layer 2 innovation to happen on top. Does this mean that, say, sharding is a bad idea, and we should keep the blockchain size and state small so that even 10 year old computers can process everyone’s transactions? Absolutely not. Even if execution engines are something that gets partially or fully moved to layer 2, consensus on data ordering and availability is still a highly generalizable and necessary function; to see how difficult layer 2 execution engines are without layer 1 scalable data availability consensus, see the difficulties in Plasma research, and its difficulty of naturally extending to fully general purpose blockchains, for an example. And if people want to throw a hundred megabytes per second of data into a system where they need consensus on availability, then we need a hundred megabytes per second of data availability consensus.






              Additionally, layer 1 can still improve on reducing latency; if layer 1 is slow, the only strategy for achieving very low latency is state channels, which often have high capital requirements and can be difficult to generalize. State channels will always beat layer 1 blockchains in latency as state channels require only a single network message, but in those cases where state channels do not work well, layer 1 blockchains can still come closer than they do today.




              Hence, the other extreme position, that blockchain base layers can be truly absolutely minimal, and not bother with either a quasi-Turing-complete execution engine or scalability to beyond the capacity of a single node, is also clearly false; there is a certain minimal level of complexity that is required for base layers to be powerful enough for applications to build on top of them, and we have not yet reached that level. Additional complexity is needed, though it should be chosen very carefully to make sure that it is maximally general purpose, and not targeted toward specific applications or technologies that will go out of fashion in two years due to loss of interest or better alternatives.


              And even in the future base layers will need to continue to make some upgrades, especially if new technologies (eg. STARKs reaching higher levels of maturity) allow them to achieve stronger properties than they could before, though developers today can take care to make base layer platforms maximally forward-compatible with such potential improvements. So it will continue to be true that a balance between layer 1 and layer 2 improvements is needed to continue improving scalability, privacy and versatility, though layer 2 will continue to take up a larger and larger share of the innovation over time.


              About the translator, Celer Network:

              Celer Network builds a layered off-chain scaling architecture. Its cChannel flexibly combines generalized state channels with sidechains, accelerating not only simple payments but also smart contracts and complex applications, without sacrificing trust or security guarantees. The Celer team has also proposed cRoute, the first optimized off-chain payment and state routing algorithm, and cOS, an easy-to-use application development framework and mobile front end serving as a new entry point for blockchain applications. Celer Network is highly general and broadly compatible with mainstream blockchains. Alongside these technical innovations, Celer Network pioneered the first cryptoeconomics and token model for off-chain scaling based on game theory and auction theory, systematically providing the core incentive and security mechanisms of an off-chain scaling platform.

              As a representative off-chain scaling solution, Celer Network hopes to use its technical strength and its understanding of the industry ecosystem to advance the maturity and growth of the off-chain blockchain ecosystem, and to truly bring blockchain into everyday use.
