Ethereum DevCon-0: Ethereum 1.x: On Blockchain Interop and Scaling
Technical presentation covering Ethereum 1.x protocol features, focusing on blockchain interoperability and scaling solutions for the growing network.
Transcript
[00:14] SPEAKER_00: So welcome to day number four of DevCon. We use zero indexing here; we're a civilized company. Yeah, the first day was day zero. So today we're going to be talking about scaling and interoperability. Well, today in general we're going to be talking about Ethereum 1.x and Ethereum 2.0 and blockchain-protocol-related stuff in general. So the first presentation we have is on scaling and interoperability. So I guess to start off: what is scalability? What do we want out of scalability? Sure.
[00:58] SPEAKER_01: So in an ordinary blockchain, every node, or at least every full node, processes every transaction and every state update. So the scalability of the system is limited by the fact that every node needs to do everything. What you want to do is relax that assumption but still keep the properties of consensus, like non-repudiation, source authentication, contract calls and so on. So scalability solutions try to make it so that not every node processes every transaction, while still keeping security in some way.
[01:36] SPEAKER_00: So right now, just for statistics: Bitcoin does about one transaction per second, and MasterCard processes about 2,000 transactions a second. If Bitcoin goes up to that level, then we'll basically have full nodes processing about 1 gigabyte every 3 seconds. So not exactly sustainable for any kind of normal computer.
[02:00] SPEAKER_01: So we'll have 10 full nodes?
[02:02] SPEAKER_00: Yeah, 10 full nodes. And they probably won't be run by Blockchain.info and Coinbase; they'll be run by Amazon and Google, who people seem to consider more evil than good local companies. So what we want is a system that works. As Dominic Williams from Pebble put it, we want a system that works by scaling out and not by scaling up. So instead of making the system continue working by having full nodes be more and more powerful, you figure out some way for it to work even if no single node processes more than a small portion of all the transactions.
[02:39] SPEAKER_01: Another way to see that is instead of just stacking blocks on top of themselves in one chain, we hope to maybe have like another architecture where not all blocks go in the same chain.
[02:49] SPEAKER_00: So right now what we have, as far as scalability goes, is the scale-up approach: we say, okay, the blockchain is going to grow really big, but we're going to make sure that clients can still be secure. That kind of works until, of course, the whole thing just gets way too big. It might not even be possible for a single node to process all the crypto transactions that people potentially might want to do. Especially once you start thinking about crypto not just as a payment system, but also for a whole bunch of these.
[03:22] SPEAKER_01: Dapps, and also not just for people, but for programs and hardware. So there are basically three broad classes of scalability approaches: building on top, scaling one chain by sharding its state space, and having multiple blockchains and interfacing them to gain some of the properties of one blockchain. We're just going to go through them and talk about the state of research on those topics, what the trade-offs are, and what the disadvantages and advantages of each approach are. So, as far as scaling on top.
[04:03] SPEAKER_00: So the idea with building on top is basically: keep the blockchain exactly as it is, but try to figure out ways for as much as possible to happen off the chain while still being secured by the chain in the long term in some fashion. The first practical example is micropayment channels, which is actually one you don't even need Ethereum for; you can do it on plain old Bitcoin. The way that works is there's a special two-party protocol you can use to create what's called a micropayment channel going from A to B, together with a two-party protocol for updating the channel. The channel starts off containing some quantity of Bitcoin, let's say one BTC, and A initiates the channel by putting the Bitcoin in. The channel starts off giving the entire Bitcoin back to A, and then there's an off-chain update protocol where the channel updates to, let's say, 0.99 to A and 0.01 to B, then 0.98 to A and 0.02 to B, and so forth. The idea is you have a network with a bunch of these channels, and if you want to make a payment you just find a pathway through the channels and update all the channels along the way, using the blockchain only for eventual settlement when a channel fills up. Then there are probabilistic payments.
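The off-chain update flow described above can be sketched as follows; the `Channel` class, the unit scaling, and the method names are illustrative, not any real payment-channel implementation, and signatures plus on-chain settlement logic are omitted.

```python
# Hypothetical sketch of a unidirectional micropayment channel, assuming A
# funds the channel and each off-chain update moves a bit more to B.
# Signing and on-chain settlement are omitted for brevity.
class Channel:
    def __init__(self, funder, counterparty, deposit):
        self.funder, self.counterparty = funder, counterparty
        self.balances = {funder: deposit, counterparty: 0}
        self.nonce = 0  # each signed update carries a higher nonce;
                        # only the highest-nonce state is settled on chain

    def pay(self, amount):
        assert 0 < amount <= self.balances[self.funder]
        self.balances[self.funder] -= amount
        self.balances[self.counterparty] += amount
        self.nonce += 1

# A opens a channel with 1 BTC (expressed here as 100 hundredths) and pays
# B twice off chain.
ch = Channel("A", "B", 100)
ch.pay(1)   # state: 0.99 to A, 0.01 to B
ch.pay(1)   # state: 0.98 to A, 0.02 to B
# Only this final split ever needs to hit the blockchain, at settlement.
```

The point of the nonce is that either party can always settle with the latest mutually signed state, so arbitrarily many updates cost only one on-chain transaction.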
[05:22] SPEAKER_01: Yeah, so probabilistic payments basically work, right?
[05:24] SPEAKER_00: Okay.
[05:25] SPEAKER_01: Say, for example, in a file-storing application: instead of making a payment on the blockchain for every file transfer that you do, every upload or every download, what you do is provably make a payment with some probability, so that people still have the same expected return from storing those files, but you don't put as many things in the blockchain. The issue with that is that it increases the volatility of the return, but over time it should average out to the same thing, and there's much less stuff on the blockchain.
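The expected-return argument can be checked with a small simulation; the function name and unit sizes are illustrative, and "provably" here would require a verifiable randomness source that this sketch glosses over.

```python
import random

# Sketch of a probabilistic micropayment: each transfer worth `owed` is
# settled as a full on-chain `unit` with probability owed / unit, so the
# expected payment is unchanged but almost no transactions actually reach
# the blockchain.
def probabilistic_payment(owed, unit, rng):
    return unit if rng.random() < owed / unit else 0

rng = random.Random(0)
payments = [probabilistic_payment(0.01, 1.0, rng) for _ in range(100_000)]
on_chain_txs = sum(1 for p in payments if p)   # roughly 1% of 100,000
mean_paid = sum(payments) / len(payments)      # close to the 0.01 owed
```

Per-payment variance is high, as the speaker notes, but the recipient's long-run income converges on what direct payments would have delivered, with about a hundredth of the on-chain traffic.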
[05:51] SPEAKER_00: So those two approaches work for payments. There's also a third category, which is this idea of off-chain auditable challenge-response computation. In some cases there are going to be a lot of computations that need the security of the blockchain but which might be too expensive to do on the blockchain itself. The property here is that the data involved in the computation is not necessarily too large, but the computation is very large. Zero-knowledge proofs are one example. Something that takes 6 milliseconds in C or C++, like verifying scrypt, might end up taking, you know, 60 milliseconds inside the Ethereum JIT VM. Truthcoin is actually another really good example, because Truthcoin is this decentralized oracle that does Schelling-coin-like stuff, but for many bets, many decisions at the same time. The way it works is that it uses these matrix algorithms to try to figure out which voters are more globally compliant with the entire consensus, so it can reward them more. And doing that matrix math takes O(n³) operations, which for very large matrices is pretty expensive.
[07:15] SPEAKER_01: Yeah, so basically the idea there is that the blockchain acts as something for punishing these oracles: if they don't come up with the right result, you can check it on the blockchain and remove a security deposit from them. And if they do, then you don't need to do anything, and the computation happened off chain. On chain, everyone does everything, so you only really need it for when things go wrong. That's kind of the paradigm there.
[07:39] SPEAKER_00: Yeah. So by default you trust, but you have some period during which anyone can challenge. If someone audits and finds something wrong, they can pull the computation down onto the blockchain, and if it ends up being wrong, the security deposit is lost. Cool.
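The trust-then-challenge pattern can be sketched as below; all names are hypothetical, and a real version would run the disputed step on chain rather than trusting the challenger's local recomputation.

```python
# Minimal sketch of off-chain auditable challenge-response computation.
# A worker posts a claimed result with a security deposit; during the
# challenge window anyone may re-run the computation off chain and, on a
# mismatch, pull it onto the chain to slash the deposit.
def audit(f, x, claimed, deposit):
    actual = f(x)  # the challenger redoes the expensive work off chain
    if actual != claimed:
        return {"slashed": deposit, "final_result": actual}
    return {"slashed": 0, "final_result": claimed}

square = lambda n: n * n
honest = audit(square, 7, claimed=49, deposit=10)  # no challenge succeeds
cheat = audit(square, 7, claimed=50, deposit=10)   # deposit is lost
```

The economics only work if expected slashing rewards cover auditors' costs, which is exactly the incentive gap discussed later in the transcript.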
[07:54] SPEAKER_01: So we're gonna move to sharding.
[07:57] SPEAKER_00: Sure. So, sharding.
[08:00] SPEAKER_01: So one idea for blockchain scaling is to take kind of one blockchain and take this state space and split it up into subspaces. Still have the same digital asset on all the subspaces and have a protocol for moving funds and doing contract calls between the subspaces, as well as a protocol for making sure that all subspaces have enough nodes in order to not introduce any faulty state transitions.
[08:23] SPEAKER_00: So the general intuition is that you have a bunch of substates. You can think of them as vertices, and they're arranged in some kind of dense graph structure; we talked a bit about graphs yesterday. And then you have a sort of header chain in the middle. The idea is that if you are a miner on this kind of system, what you're actually doing is mining an edge. Mining an edge means you can process transactions that happen inside each of the two substates it connects, and you're also processing the movement of messages going between them. So let's say you have a multi-chain dapp that has some state in this sector and some state in that sector. It sends off a message, and the message gets stored in the outbox here. Eventually someone mines this edge; when they do, the message gets kicked out of this outbox and moved to the next one. When someone mines the next edge, it gets moved from that outbox to the one after it, and eventually it makes its way over to its destination, and then to the header chain. The idea is that all of these mined edges make their way into the header chain, and the header chain maintains the global order of the entire thing just by keeping track of all the headers.
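The hop-by-hop outbox relay can be illustrated with standard hypercube routing, where substates are bit-string vertices and each mined edge flips one differing bit; this is a toy model of the idea being drawn on the board, not the actual protocol.

```python
# Toy model of message relay between hypercube substates. Each substate is
# a vertex labelled by a bit string; mining an edge moves waiting outbox
# messages one hop, flipping a single differing coordinate per hop.
def route(src, dst, dims):
    path = [src]
    cur = src
    for bit in range(dims):
        if (cur ^ dst) & (1 << bit):   # this coordinate still differs
            cur ^= 1 << bit            # mining this edge relays the message
            path.append(cur)
    return path

# A message travels from substate 0000 to 1011 in a 4-dimensional hypercube:
hops = route(0b0000, 0b1011, 4)
# at most `dims` edges are ever needed between any two substates
```

In the 12-dimension, 4,096-substate configuration mentioned later, any message needs at most 12 edge-minings to cross the whole graph.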
[09:50] SPEAKER_01: And the reason the header chain is really important for this type of architecture is that everyone uses the same digital asset. If you were to have, say, one of these substates not really coordinated with the other ones, one substate with much lower security than the others, then by introducing a faulty state transition in that substate you would impact all the others, because they're all using the same digital asset. So the fact that everyone shares some piece of consensus is important, because that's what you use to make sure that you don't have insecure substates, right?
[10:28] SPEAKER_00: Yes.
[10:29] SPEAKER_01: So we call this the fragility problem. If something goes wrong in one substate, and not every node is processing everything, then from any given node's perspective you'll never know 100% that something didn't go wrong in one of the substates that you didn't process. What you can do is these kinds of challenge-response protocols, right?
[10:44] SPEAKER_00: So that's the first approach. The idea behind challenge-response protocols is that if you think about a block, you have a header, then you have some set of transactions, and then you have a state tree over here. There's going to be some subset of state nodes that the block ended up modifying while processing each one of these transactions. So what you realize is that if a block is invalid, first of all it could just be that the block is badly formatted, which is very obvious to detect. But it could also mean that one of these transactions at some point has a state transition that's invalid. So if an attacker makes a block on one chain and that block is actually invalid, then what some good guys can do is provide a Merkle tree proof of the exact set of changes that the transaction was supposed to make, and people can see that the changes the attacker made are not the same as the changes the attacker was supposed to make. Now the other problem, of course, is: what if the attacker publishes a block but does not publish all of the data? That's going to be very obvious to people on that chain, but if the data is not there, then there's no way to come up with a direct proof that the block is invalid, because there's just no data. Like, theoretically, if Bill Gates turned out to be generous and gave the attacker $100 million, there could be something legitimate that happens to give the attacker $100 million. But if the attacker doesn't supply the data, then there's no way to know. So that's where challenge response comes in.
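The fraud-proof idea can be sketched like this, under the simplifying assumption that a block header commits to a Merkle root over the post-state; the binary tree layout and the leaf format are illustrative, not Ethereum's actual Patricia trie.

```python
import hashlib

# Sketch of a fraud proof: an auditor re-executes one transaction and
# shows the honestly computed state root differs from the committed one.
def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])      # duplicate last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

honest_post_state = [b"A:99", b"B:1"]    # what the tx should have produced
attacker_state = [b"A:99", b"B:2"]       # attacker credits B an extra coin
fraud_proven = merkle_root(attacker_state) != merkle_root(honest_post_state)
```

Note this only works when the attacker publishes the data: the root mismatch is checkable by anyone, which is exactly why withheld data forces the challenge-response fallback described next.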
[12:28] SPEAKER_01: And so the idea would be that if you issue a challenge of some particular state transition and the response is never provided, you would assume that that state transition is not valid. But the issue with that is that you basically require people to be issuing these challenges and looking for where it went wrong, and if there's no cheating, then there's no incentive to do that. So the natural equilibrium ends up being that there's a non-zero amount of invalid state transitions.
[12:59] SPEAKER_00: That's one problem. The other problem is this sort of fragility issue which is that if an attacker DDoSes the network, then the default state is for blocks to get processed and therefore blocks might end up surviving whatever the entire challenge period is without getting challenged, even if there's actually something wrong.
[13:18] SPEAKER_01: And if you have an invalid transition that's only discovered much later, then the effects of that transition could have propagated to many, many things, and we might have to roll back like 100,000 blocks to fix it.
[13:32] SPEAKER_00: Yeah. So the somewhat more stable algorithm for dealing with this problem is the jury selection approach. The idea behind jury selection is: let's say I mine this edge. Then what the protocol says is, okay, from all of these little substates together, I'm going to randomly select, you know, 200 nodes, weighted by stake. So this is a proof-of-stake mechanism; there's actually no way to translate it into a proof-of-work paradigm, which is, by the way, another reason why proof of stake is superior. So you randomly choose some 200 nodes, weighted by stake, and a majority of those 200 nodes have to sign for the block's validity. And how do those 200 nodes know if the block is valid, given that there might be a node from over here that isn't keeping track of this state at all? The way they do that is that the block is provided together with the state root and the transactions. Right now in Ethereum, the thing that gets sent over the network is block header plus transactions, but here the thing that gets sent is block header plus transactions plus whatever subset of the state ends up being manipulated during that block. And then this whole chunk by itself can be validated even by someone who has no prior information at all.
[14:54] SPEAKER_01: And kind of theoretically that has no cheating in the natural equilibrium because it's basically forcing a challenge response on every block.
[15:02] SPEAKER_00: Kind of, yeah. The idea is that in order for an attacker to successfully cheat on this thing, just purely by statistics, the attacker has to have something like 30 or 40% of the entire active stake in the network. So you get the benefit of only 200 nodes validating each block, but without the cost of there only being 200 nodes protecting the system, because every node is statistically protecting the system, even though it's not actually protecting it each and every time.
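The statistical claim can be checked numerically with a simplified jury model (assumptions: jurors sampled independently, attacker controls a fixed fraction `p` of stake, and an attack needs a malicious majority of the jury):

```python
import math

# Probability that a randomly sampled jury of n validators contains a
# malicious majority, when the attacker holds stake fraction p.
def attack_probability(n, p):
    need = n // 2 + 1
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(need, n + 1))

p_small_jury = attack_probability(50, 0.3)   # noticeable but small
p_big_jury = attack_probability(200, 0.3)    # vanishingly small
p_forty = attack_probability(200, 0.4)       # 40% stake: still a long shot
```

This is the sense in which the failure probability is "negatively exponential" in jury size, as noted below: growing the jury drives the malicious-majority probability down exponentially for any attacker stake under 50%.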
[15:31] SPEAKER_01: So we have a probabilistic rather than deterministic guarantee. But it's still quite good, because you can increase the number of nodes, and that will increase the amount of stake required to attack it.
[15:42] SPEAKER_00: Basically, the probability of things going wrong is negatively exponential in the number of nodes involved. So the last thing with the hypercube approach is that you need a protocol for growing and shrinking the thing. You could just fix it to 12 dimensions and say, okay, there are 4,096 substates. But the problem is that initially 4,096 might be way too sparse, and then eventually it might end up not being enough. So one thing that needs to be figured out is some kind of mechanism for growing the graph. One option is, if it gets big enough, you eventually add a dimension. And if you just keep on adding dimensions, the choice you have is: when you add a new dimension, do you make the new vertices empty by default and just use incentive mechanisms to subsidize new people being on those vertices instead of on other vertices?
[16:44] SPEAKER_01: Or you could wait for each substate to have twice as many nodes as necessary and split each one into two and basically, add another dimension to the hypercube?
[16:53] SPEAKER_00: Yeah, yeah. So the choice is: do you start the new vertices empty, or do you actually try to split them in half? Splitting them in half feels more elegant and automatic in some respects. The problem with it is that you might have dapps that were very tightly connected, but now they're suddenly much less connected, and so their gas costs suddenly go way, way up. Which is annoying, because the contracts are kind of stupid autonomous agents and they have no way of figuring out how to deal with the problem. Cool.
[17:24] SPEAKER_01: So we're going to move to multi chains.
[17:26] SPEAKER_00: Yep.
[17:27] SPEAKER_01: So the idea with multi-chain scaling solutions is that instead of taking one consensus group with one digital asset and trying to split it without losing its properties, we take many consensus groups with many different assets and try to interoperate them in order to gain some of the properties of scalability. So there are different types of interoperability. You can have what's called atomic interoperability, which is based on this Tier Nolan primitive, a really cool idea where you make a contract on one of the chains that says: if an X such that the hash of X equals some Y stored in the contract is provided, then do one thing; otherwise, if some time passes, do something else. So say Alice and Bob want to exchange tokens between two blockchains. They kind of know each other and they know they want to do this, but they don't trust each other enough that Alice would just send Bob tokens on her chain and then trust that Bob would send her back tokens on the other chain. This primitive forces either both transfers to go through or neither. The way it works is that Alice makes a contract that says: if the preimage of some hash is provided, send the money to Bob; otherwise, if some amount of time passes, send the money back to Alice. She picks the hash, so she knows the preimage. Then Bob sees this contract and makes another contract on his chain, chain B, using the same hash; he doesn't know the preimage. That contract sends money to Alice if the preimage is provided, and otherwise sends the money back to Bob.
So what will happen is that if Alice wants the trade to go through, she'll provide the preimage of the hash to take her funds on Bob's chain, and then Bob will therefore have the preimage of the hash and be able to take his funds on Alice's chain. So without either of the consensus groups doing any SPV proofs of the other, we're able to get functionality that will only happen on one chain if it happens on the other, by using this common secret that, when provided, unlocks some functionality in the contract. This type of interoperability is called atomic interoperability.
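The Tier Nolan trade described above can be sketched as two mirrored hashlock contracts; the class and field names are illustrative, and a real version needs signed transactions and carefully chosen timelocks on each chain (Bob's deadline must be earlier, so Alice has to reveal first).

```python
import hashlib

# Sketch of the hashlock primitive: funds go to `recipient` if the hash
# preimage arrives before `deadline`, else back to `refund_to`.
def sha(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class HashlockContract:
    def __init__(self, hash_image, recipient, refund_to, deadline):
        self.hash_image, self.recipient = hash_image, recipient
        self.refund_to, self.deadline = refund_to, deadline
        self.paid_to = None

    def claim(self, preimage, now):
        if self.paid_to is None and now < self.deadline \
                and sha(preimage) == self.hash_image:
            self.paid_to = self.recipient

    def refund(self, now):
        if self.paid_to is None and now >= self.deadline:
            self.paid_to = self.refund_to

secret = b"alice's secret"
# Alice locks funds for Bob on her chain; Bob mirrors it with the same
# hash on his chain, with a shorter deadline.
chain_a = HashlockContract(sha(secret), "Bob", "Alice", deadline=100)
chain_b = HashlockContract(sha(secret), "Alice", "Bob", deadline=50)
chain_b.claim(secret, now=10)  # Alice takes her funds, revealing the secret
chain_a.claim(secret, now=20)  # Bob reuses the revealed secret
```

If Alice never reveals the secret, both `refund` paths fire after the deadlines and neither side loses anything, which is the atomicity property.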
[20:14] SPEAKER_00: So the other kind of interoperability is protocol-level interoperability. The difference is that with atomic interoperability you're trying to mediate cross-chain interactions between users; decentralized exchange is one example, or cross-chain operations where something happens on this chain only if something happens on that chain. Protocol-level interop is where you have some dapp or protocol on one chain that actually needs to get services from the other chain, as a protocol on the whole. One example: let's say on Ethereum you have a contract that is basically a stablecoin, and it uses some sort of contract-for-difference or interest-rate-targeting mechanism to try to keep a dollar value. One problem is that it has no idea what the exchange rate of a dollar to its volatile coin is. So first of all it would maintain an internal decentralized exchange to figure out the exchange rate of its own volatile coin to Ether, and then it needs the exchange rate of Ether to the US dollar. Now, say that over here you have Truthcoin, and Truthcoin just happens to be a decentralized oracle network, and it has a much larger set of users to vote, so it has more security. So this dapp over here might want to ask Truthcoin: what is the price of Ether in US dollars? The idea is that this chain needs to have a way of directly asking that chain a question.
The way you generally do that is that Ethereum would maintain, internally inside the chain, a light client of Truthcoin. And in Truthcoin you would have some active set of voters that are just continuously voting on the question of, you know, what's the value of Ether in dollars. Then, if the Truthcoin blockchain records some particular result, the light client protocol can determine what that result is. The result would be stored in the state tree, and the light client protocol would ask some node to provide a Merkle tree proof of the value of a dollar, with this Merkle tree proof inside of transaction data. The light client protocol would reward whoever provides the proof, and that's how it would know the value of Ether relative to a dollar. The other thing you can do is peek into the Truthcoin blockchain, determine who contributed to this particular vote, and then give them Ethereum assets inside of Ethereum. So you're buying services from Truthcoin users inside of Ethereum, you're getting back this truthful feed of what the result is, and then, given the result, the dapp would be able to do whatever it needs to do with the price of the dollar.
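The core light-client check described above can be sketched as follows: given a trusted state root, verify a Merkle branch for one value, such as the posted price. The binary-tree layout and key names are illustrative, not Truthcoin's or Ethereum's actual state format.

```python
import hashlib

# Sketch of embedded light-client verification: the contract holds only a
# trusted state root; a full node serves one leaf plus sibling hashes.
def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_branch(root, leaf, branch, index):
    node = h(leaf)
    for sibling in branch:
        # `index` says whether the current node is a left or right child
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == root

leaves = [b"eth_usd:3.14", b"other_key:42"]
hashes = [h(l) for l in leaves]
trusted_root = h(hashes[0] + hashes[1])   # what the on-chain light client holds
# the proof for leaf 0 is just its single sibling hash:
ok = verify_branch(trusted_root, b"eth_usd:3.14", [hashes[1]], 0)
bad = verify_branch(trusted_root, b"eth_usd:9.99", [hashes[1]], 0)
```

The proof size grows logarithmically with the foreign chain's state, which is what makes it cheap enough to run inside a contract.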
[23:40] SPEAKER_01: So that's an example of sub-protocol-level interoperability, where you build into an Ethereum contract an SPV client of some other chain, so that people on Ethereum can know some facts about people on another chain. But if you wanted to have chains actually share security, you would need to build that in at the protocol level, where not just the nodes who know what these contracts are doing would know about it, but the actual consensus-level protocol would know. So the idea with protocol-level security sharing is that you would build the consensus of another chain into your own chain's consensus, so that you could, for example, buy checkpoints or buy timestamps; you could use another chain's services.
[24:31] SPEAKER_00: So the general idea with sharing security, I think, is more that, let's say you have 10 chains, C1 through C10; each one of those chains has 500 users and $500 million of capital. The problem is, if you just do this multi-chain approach by default, then if the whole thing has $5 billion of capital and you get more applications, going from 10 chains to 100, each of them is down to $50 million of capital, a tenth of the security, and so forth. So the problem is: how do you have chains that are only processed by relatively few nodes, without this problem that each chain only has a small amount of security? Right.
[25:16] SPEAKER_01: But also, one thing that we haven't mentioned yet, one of the benefits of multi-chain scaling, is that if a chain is compromised, it won't necessarily affect other chains as much. And you can provide services on chains at a given level of security, which means at lower cost for users who don't need as much security on that chain. So if you want to provide lower cost and still have high security, what you can do is basically buy security from other chains. To do that you need to interoperate your chain with the other chain at the protocol level, and basically have light clients and full clients agree that they will use some checkpoint on the other chain in order to authenticate a point on this chain.
[26:03] SPEAKER_00: So, challenge response and jury selection: we already discussed those in the context of the hypercube; now the context of multi-chain. Actually, challenge response in my opinion is not nearly as good as jury selection, so I'll talk more about how jury selection actually works in multi-chain. The idea is you would have a big chain A, and on that chain you would have this kind of Schelling coin consensus contract, and you have a bunch of users that participate in it; by participating we basically just mean having a security deposit in it. Then you have some chain X, and that chain X wants to leverage the Schelling contract for its security, because X isn't big enough by itself. This consensus contract is kind of like Truthcoin: it's this decentralized, massively multi-Schelling-coin-type oracle, except the thing it's voting on is blocks. What it's provided with is a block, and the block has a state root, it's got transactions, and it's also got a timestamp; over the state and the transactions it's got a Merkle tree each. The question this thing votes on is the AND of two things. Number one: does the timestamp on the block equal the actual time? And number two: is the data available? By "is data available" we mean: if you descend from the Merkle root, is there data floating around somewhere in the network corresponding to all of the leaves, going all the way down to the bottom level? So the idea is that this big contract votes on the statement: did the block come at the time it said it came at, and does the block have its data available?
And so then what you have is a trustworthy source of blocks that have their data available and that came at a particular time. Now, if you assume that that mechanism is trustworthy, then it turns out consensus becomes trivial, because your consensus algorithm basically is: a block at a particular height is only valid if it is the first valid block at that height. So if one is over here and then some new one comes along, the new one automatically gets rejected, even if its chain eventually grows longer. The way you make this work is: first, you determine the time by checking the oracle to see what the timestamp is. You determine validity by saying, first of all, if the data is not available, then it's invalid. Now, it could be that the data is available but the block is still invalid; but then, because the data is available, anyone can audit the block, and anyone can come up with a Merkle tree proof that it's invalid, if it is invalid.
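The "first valid block at each height" fork-choice rule described above can be sketched as a toy function; the tuple format and the assumption that the oracle's timestamp and availability attestations are already baked into each record are illustrative simplifications.

```python
# Toy fork choice: given blocks attested by the oracle for timestamp and
# data availability, keep, per height, the earliest valid block. Later
# competitors are rejected even if their branch grows longer.
def choose_chain(blocks):
    best = {}
    for height, arrival_time, data_available in sorted(blocks,
                                                       key=lambda b: b[1]):
        if data_available and height not in best:
            best[height] = (height, arrival_time, data_available)
    return [best[k] for k in sorted(best)]

blocks = [
    (1, 10, True),
    (1, 12, True),    # later rival at height 1: ignored
    (2, 20, False),   # data withheld: never becomes valid
    (2, 25, True),
]
canonical = choose_chain(blocks)
```

The point is that once timestamps and availability are trusted, "longest chain" arguments disappear: validity plus arrival order fully determine the canonical chain.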
[29:17] SPEAKER_01: So this is kind of like full-client security; this covers all of the security needs of a blockchain. One thing to note is that full clients and light clients have different amounts of information they can use to authenticate different things. So if you have security for full clients but not for light clients, you can do something much lighter than this in order to help the light clients find the authentic chain. Basically, one thing that's important for multi-chain solutions in my view, and this is kind of my vision for Ethereum 2.0, is that we would have one light client per protocol, at least, for all of these chains, so that a single light client can access services from all these chains. So, for example, suppose we have some chain that's secure as far as all the full clients are concerned, but the light clients can't tell whether some fork came later or not, because there's no cryptographic proof of that. What you could do is have the people on this chain, whenever there's a fork, or every so often, buy a checkpoint from a Schelling coin game on another chain, which, instead of going to a block and checking the timestamp, would just ask: what is the consensus block 1,000 blocks ago? Because, you know, that's longer than the fork length. And then light clients who somehow managed to get onto the wrong fork can find a checkpoint on the other chain, which they can then use to authenticate the current state here. So in this kind of model we can make a tree of checkpoints, so that a light client who only has one or a handful of checkpoints can authenticate the current state of other chains, even if those chains are only secure for full clients by themselves.
[31:00] SPEAKER_00: So yes, the point here is that there's a bit of a trade-off between to what extent you support light clients and what level of standardization you want. The nice thing about this kind of data availability protocol is that you actually need to standardize almost nothing for all these chains to share security. The only thing that you need to standardize is the Merkle tree protocol, because you need a way of voting on data availability. Everything else is potentially completely open: you're not standardizing whether you're using proof of work or proof of stake, you're not standardizing whether you're using GHOST or some other mechanism, you're not standardizing any kind of state transition rule. But if you want to go into supporting more light client functionality, then you might need to end up standardizing more things, like checkpoints.
[31:46] SPEAKER_01: Yeah, so for example, if you wanted the ultimate light client functionality, what you would do is have all of the chains be EVM chains and have the consensus algorithm be embedded in EVM code, so that all the light clients can just use the EVM to authenticate the state transitions and to authenticate anything else they need. And basically that requirement makes their code base smaller than if they had to interoperate between chains that have different protocols.
[32:09] SPEAKER_00: Right.
[32:09] SPEAKER_01: If you had a light client interoperating between Bitcoin and some other chain, buying a service from both, it would need more code than if Bitcoin were an EVM chain, because then it could just use the same code base to authenticate transitions on both chains. So the more you standardize, the more light-client-friendly you can be.
[32:30] SPEAKER_00: Yeah, yeah. So I guess the last point here is this idea of checkpointing, and the question is: in this multi-chain context, can we come up with some kind of light client protocol that allows clients of proof-of-stake chains to securely determine the current state of some particular chain, ideally with a very low amount of information? Yeah.
[32:57] SPEAKER_01: So something else that we haven't talked about yet, one cool idea, is that you might be able to use not a checkpoint between the chains at the protocol level to satisfy the light client protocol, but actually the interface chain that sits on top. Actually, have we talked about interface chains?
[33:24] SPEAKER_00: No.
[33:24] SPEAKER_01: So basically, if we want to do atomic interoperability between two chains, meaning those two chains have the Tier Nolan primitive, what you can do is make a third consensus group that is basically a side chain for both of those consensus groups, and that can be used to pair up different people who want to do interoperability using this Tier Nolan primitive. And I'm basically thinking and hoping that we can use the exchange there to help light clients figure out where to go. Because light clients, in a multi-chain context, will probably not keep their tokens on all of these chains. You keep your tokens on a small number of chains which are more liquid and more stable, and then you sell them for the tokens that you need in order to purchase services from some chain. And you have to go through an exchange to do that anyway. So you need to authenticate that exchange, and then you might be able to use that exchange, which isn't even on the protocol level, to authenticate which fork on these chains to choose as a light client. So basically, if we build the interoperability on top using atomic interoperability, it's less fragile than if you were to extend one of these consensuses to include the other one. Because if this chain died in that case, it wouldn't affect this chain. This chain doesn't know about that chain; it just knows about some Tier Nolan contracts that it has on itself, and it knows what chain it's interoperating with. But if this chain's consensus were actually extended to that chain's consensus and that chain died, this chain would now have to switch from buying services from that chain to buying services from another chain, which is a whole other problem.
One way to handle that, which I've thought of, is this: if you don't have contract creation on an EVM chain, and all the contracts are in the genesis block, then two chains with exactly the same contracts in the genesis block are going to be providing the same service. And so you would be able to switch from asking one service blockchain to another chain that has the same objects. Another paradigm with this multi-chain stuff is that instead of having Ethereum-like general purpose chains, you'd have Ethereum-like application-specific chains, where basically only some contracts would be there, and you wouldn't be able to create contracts. And the idea with the application-specific chains is then basically, you know, if you know that your application doesn't need any more interoperability with anything else, then you could have consensus groups just around those contracts, and you don't need to have the same consensus group handle all these other contracts. Yeah, yeah.
[35:45] SPEAKER_00: I think that's basically all we wanted to discuss. So. Questions? Yeah. Yes. So could we get the camera real quick? Sorry. Hold that thought. Thank you for your understanding.
[36:16] SPEAKER_02: Okay, so in the very important special case where the state is just a bunch of account balances, there's a common and very old idea for scalability: instead of atomic transactions which take some value from here and credit some other account, you split these transactions in two, into a debit and a credit transaction. And whatever service needs providing in exchange for this transfer, you make it conditional only on the debit transaction. And then you settle the credit transactions later, when you have time. This way you can do interoperability between different chains, because if you compartmentalize by accounts, then on one chain you can generate debit transactions as fast as you need to, and you can settle the credits later. And one of the genius ideas of Bitcoin, which I think didn't receive as much attention, is that they put together the credit and the debit, but the other way around. So it's not atomic in the sense that something is taken from here and deposited there; rather, they noticed that you don't need the credit transaction before you actually want to spend that amount. So what a Bitcoin transaction actually does is deposit the amount and immediately take it away. So they don't even have an explicit balance state; in the Bitcoin network's state you don't have explicit balances. And I think that in Ethereum, since the balances are part of the state, maybe we should consider splitting up transactions that modify more than one variable (in particular transfers, which modify two balances) into two transactions, and make one transaction conditional on the balance and the other conditional on the success of the first transaction.
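The debit/credit split just described can be sketched as a toy model. Under the assumptions above, the debit burns funds locally and emits a portable receipt, and the receiving chain redeems each receipt at most once, so double-credit prevention stays local to the target chain. A real system would also need a proof (e.g. a Merkle proof) that the debit was actually included on the source chain; that check is omitted here, and all the names are illustrative:

```python
import hashlib
import json

def receipt_id(chain: str, account: str, nonce: int, amount: int) -> str:
    """Deterministic identifier for a debit receipt (illustrative format)."""
    blob = json.dumps([chain, account, nonce, amount]).encode()
    return hashlib.sha256(blob).hexdigest()

class Chain:
    """Toy per-chain ledger supporting split debit/credit transfers."""

    def __init__(self, name: str):
        self.name = name
        self.balances: dict[str, int] = {}
        self.nonces: dict[str, int] = {}
        self.spent_receipts: set[str] = set()  # credits already redeemed here

    def debit(self, account: str, amount: int) -> dict:
        """Step 1: burn the funds locally and emit a portable receipt."""
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[account] -= amount
        nonce = self.nonces.get(account, 0)
        self.nonces[account] = nonce + 1
        return {"id": receipt_id(self.name, account, nonce, amount),
                "amount": amount}

    def credit(self, account: str, receipt: dict) -> None:
        """Step 2: redeem the receipt here, at most once (local double-credit check)."""
        if receipt["id"] in self.spent_receipts:
            raise ValueError("receipt already redeemed")
        self.spent_receipts.add(receipt["id"])
        self.balances[account] = self.balances.get(account, 0) + receipt["amount"]
```

As the discussion below notes, this moves the double-spend problem into two local checks: the source chain prevents double withdrawals, and the target chain prevents double deposits.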
[38:36] SPEAKER_01: So that is kind of what the atomic protocol does. It basically makes sure that one transaction can only go through if the other one can go through. But it does it without requiring any information from one consensus group in the other consensus group, except for the hash of the preimage, which is more lightweight than having this chain check whether some transaction went through on the other chain. Because all you need to know is whether or not something that hashes to this value has been provided. Oh, but that's interactive.
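The hash-preimage condition just mentioned is the core of the Tier Nolan atomic swap. Here is a toy sketch: two escrow contracts on different chains share only a hash, so revealing the secret to claim one escrow necessarily reveals it for the other. Timeout and refund logic, which the real protocol needs, is omitted, and the contract class is purely illustrative:

```python
import hashlib

def sha256(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class HashlockContract:
    """Toy hashlocked escrow: funds unlock only on the correct preimage."""

    def __init__(self, hashlock: bytes, amount: int, recipient: str):
        self.hashlock = hashlock
        self.amount = amount
        self.recipient = recipient
        self.claimed = False

    def claim(self, preimage: bytes) -> int:
        if self.claimed or sha256(preimage) != self.hashlock:
            raise ValueError("wrong preimage or already claimed")
        self.claimed = True
        return self.amount  # paid out to self.recipient

# Alice picks a secret; the contracts on both chains share only its hash.
secret = b"alice's secret"
lock = sha256(secret)
on_chain_a = HashlockContract(lock, amount=10, recipient="bob")
on_chain_b = HashlockContract(lock, amount=500, recipient="alice")

# Alice claims on chain B, revealing the preimage on-chain...
alice_payout = on_chain_b.claim(secret)
# ...which Bob observes and reuses to claim on chain A.
bob_payout = on_chain_a.claim(secret)
```

Note that neither chain's consensus ever inspects the other chain; each contract only checks a hash locally, which is the lightweight property being described above.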
[39:02] SPEAKER_02: But here you can carry the proof with you. So basically, if you have a successful debit on one chain, once it gets confirmed, it's a finite package of information that you can carry away and present on the other chain, and it gets deposited without further interaction.
[39:22] SPEAKER_01: So that's either a sub-protocol-level or a protocol-level thing.
[39:26] SPEAKER_00: That's more like the sort of intra-chain sharding type approach.
[39:30] SPEAKER_01: Yeah, sure, but you can do that also by extending consensus in between chains.
[39:34] SPEAKER_00: Yeah, you could, but. Yeah, so it fits more in the sort of single-chain paradigm. So yeah, it's a sharding approach; it has, I guess, similar properties to other sharding approaches. The main issue of course is: what if you have a debit somewhere and then you try to use that as a proof to generate a credit twice? Or double spending and so forth.
[40:02] SPEAKER_02: But that's a local thing. So if you want to credit twice using the same debit transaction, well, the credits to that target account are all on that chain. So there you prevent double deposits, and here you prevent double withdrawals, instead of preventing double spends globally.
[40:26] SPEAKER_01: Yeah.
[40:30] SPEAKER_02: Instead of double spends, you prevent double withdrawals and double deposits.
[40:34] SPEAKER_00: So what if on some local chain I make a withdrawal and then I immediately double spend that withdrawal locally?
[40:40] SPEAKER_01: Well, we have to talk about forks in this context. Forks basically always mean you have to wait before using interoperability functionality, because anything can be rolled back in a fork. So that's why having non-forking chains, if they're possible, would really help interoperability solutions and blockchain scaling solutions. One thing that I haven't mentioned yet, which I'd really like to mention, is an idea I find really helpful for thinking about blockchain scaling: the client is serving the user, and the client uses chains to serve the user; it's not that the client is serving the chain. And so if a user has a consensus group here, I think they would prefer not to have SPV proofs from some other consensus group injected into their consensus; they're not interested in that. And that's kind of why I prefer this idea of atomic interoperability on top, rather than interoperability in protocol, because it lets the client keep a more specific, narrow consensus. It is more restricted; there's less you can do if you don't have this kind of proven information about what happened on other chains. But the insight is basically that clients can get SPV proofs from both chains without having to get them through one chain.
[42:06] SPEAKER_00: Right.
[42:07] SPEAKER_01: So any client that's looking at both chains will know whether something went through on the other chain. But that doesn't mean that you necessarily need to force every client that's on one chain to know about what happened on the other chain. The idea of these interoperability interface chains that know about both chains is that the clients that are watching both chains can form their own consensus group in order to interoperate those two chains, without actually injecting information from one consensus into the other, which I kind of regard as inconsiderate unless it's necessary. I haven't found many people who agree with me on that though, in particular in light of the Blockstream sidechain two-way pegging stuff, which we haven't talked about yet.
[42:47] SPEAKER_00: Shall we do the sidechains discussion separately?
[42:53] SPEAKER_01: Oh, okay.
[42:58] SPEAKER_00: How do you see the upgrade or migration path from Ethereum 1.0 to 2.0, specifically in regards to the original blockchain? So for 1.0 to 1.1: first of all, 1.1 is just going to introduce some moderate things like proof of stake and event trees. For that, the approach is to have a mechanism of voting on protocol upgrades. It's not going to be a miner-based mechanism, because it's going to be in miners' interest to keep proof of work forever, but something stakeholder-driven, where if you have your coins in a contract whose storage at some particular key corresponds to some particular value, then you're basically voting for the protocol to get upgraded. And once there's some majority in favor of an upgrade, then I think the idea is that the original chain would sort of suicide after some particular amount of time. So all blocks after block number, say, 1.3 million would just be invalid, and clients would have time to download a new client, and the new client would support the new chain in some fashion.
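The coin-weighted upgrade vote described here could be tallied roughly as follows. The storage key, the "yes" value, and the contract representation are all made-up placeholders; the point is just that holding coins in a contract whose storage matches a known key/value pair counts as a weighted vote:

```python
# Illustrative placeholders, not real Ethereum storage conventions.
UPGRADE_KEY = 42   # the agreed-upon storage key to inspect
UPGRADE_YES = 1    # the agreed-upon "vote yes" value

def upgrade_approved(contracts, threshold=0.5):
    """Coin-weighted tally: a contract votes yes if its storage at
    UPGRADE_KEY holds UPGRADE_YES; votes are weighted by balance."""
    total = sum(c["balance"] for c in contracts)
    yes = sum(c["balance"] for c in contracts
              if c["storage"].get(UPGRADE_KEY) == UPGRADE_YES)
    return total > 0 and yes / total > threshold

contracts = [
    {"balance": 60, "storage": {UPGRADE_KEY: UPGRADE_YES}},
    {"balance": 30, "storage": {}},                        # abstains
    {"balance": 10, "storage": {UPGRADE_KEY: UPGRADE_YES}},
]
# 70 of 100 coins vote yes, so the old chain would stop accepting blocks
# after some cutoff height (e.g. block 1,300,000 in the discussion above).
approved = upgrade_approved(contracts)
```

The stakeholder weighting is what makes this different from miner signaling: the tally reads balances and contract storage, not block headers.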
[44:03] SPEAKER_01: And so that would work for Ethereum going from 1.0 to 1.1, and to 2.0 if we're using sharding.
[44:08] SPEAKER_00: Yes.
[44:08] SPEAKER_01: But if we're using multi-chain interop?
[44:10] SPEAKER_00: Well if we're using multi chain interop then Ethereum 1.1 will be Ethereum 2.0. The only difference between 1.1 and 2.0 will be just a set of tools for.
[44:17] SPEAKER_01: Yeah, and also a light client protocol, or full client protocol, that helps you serve all these chains, or provide services from all these chains. So if that's the direction we're going, then Ethereum 2.0 is kind of an interface layer between all these blockchains. Imagine, you know, if a Mist browser let you buy services from a whole host of chains, where most of those chains are actually application-specific, as in specific to some dapp, rather than being this kind of general purpose chain where you put everything in.
[44:50] SPEAKER_00: Yes.
[44:52] SPEAKER_01: Are chain sharding and the atomic multi-chain approach, are those two...
[45:05] SPEAKER_00: They're sort of competing approaches, I would say. Could you have both?
[45:11] SPEAKER_01: Yeah, you could. Could you have the sharded chain and then use this for...
[45:18] SPEAKER_00: Yeah, as Rob is saying, yes, you could.
[45:22] SPEAKER_01: Okay, yeah. And I think that that might well be a good idea because if clients are going to want to keep their digital assets in some stable token, then we're probably going to want them to be able to do a lot of transactions on the chain where they hold their tokens.
[45:42] SPEAKER_00: Right. If you want to have a stable currency across the entire system. The other approach is that we don't necessarily need the units to actually be transferable. We just need a whole bunch of stable coins and we need to make the stable coins all adhere to the same standard and then it'll just be an exchange. It'll be a floating exchange rate, but the floating exchange rate will just always happen to be one.
[46:03] SPEAKER_01: But then some chains will kind of be more stable than others.
[46:06] SPEAKER_00: Yeah.
[46:08] SPEAKER_01: And so people will tend to keep their money on those.
[46:10] SPEAKER_00: Right, true.
[46:11] SPEAKER_01: Because most clients don't want to be exposed to speculating on the success of some dapp; they just want to be using it. And so they can. Another nice thing about atomic transactions that we haven't mentioned is that you could pay for a contract call or some functionality on another chain using fees on one chain. So basically someone would see that you want to call some contract, and they'd be like, yeah, okay, I want tokens on the chain that you're paying me on, and then they'd run the contract for you in exchange for those tokens. So you might not actually need an exchange to buy tokens on some dapp's chain to use the dapp; you could just pay for it on your chain and have the contract call happen on that chain, which is kind of neat. Can people running on the other chain provide that service for you?
[46:55] SPEAKER_00: Basically they would have to interact on your chain as well.
[46:59] SPEAKER_01: They would just have to... the clients would have to watch it; the consensus wouldn't necessarily. So only the clients that are providing that service would have to watch it, basically, because they have to get the hash value from the provider.
[47:13] SPEAKER_00: But they have to want your chain's currency. That's right.
[47:21] SPEAKER_02: Or you can even do something more primitive. For example, you form a special contract where, as soon as it gets deposited the right amount, a new chain is started, where, you know, there's no mining; you're just using that deposit. And you can only unlock that contract on the original chain if all the tokens are destroyed on the small chain.
[47:50] SPEAKER_01: And how would that other chain know that?
[47:53] SPEAKER_02: Well there's a proof that you can.
[47:55] SPEAKER_00: Right.
[47:58] SPEAKER_01: And then I don't have to wait for confirmations which is okay. So I'm not sure that's necessarily more primitive. It seems a little bit more sophisticated.
[48:16] SPEAKER_02: Well it doesn't require an exchange.
[48:18] SPEAKER_00: This kind of approach is nice because it's also a way of doing cross-chain exchange without ever actually having an exchange of different tokens across different chains at the same time. You can expect on-chain exchange to be very efficient, but inter-chain stuff will probably have higher fees. But here it's the same currency on both sides, so because the rate will always be one, the spread will probably be very low.
[48:50] SPEAKER_01: So you said the same currency.
[48:51] SPEAKER_00: Well, no. So I'm saying you have A here and you have B here, right? And you have A here and you have B here. Exchange between these two is just on-chain, so it's very efficient. Here the exchange rate is always one, so there's an opportunity for very low-risk arbitrage. Same here.
[49:08] SPEAKER_01: Another thing you can do, though, is just have security deposits and a mapping here, and basically require people to be online when they do the exchange. So you can do atomic actions here rather than pegged transactions here.
[49:18] SPEAKER_00: You can do security deposits for everything, can't you?
[49:20] SPEAKER_01: Yeah, you can. Security deposits are the best, because they let you make a Nash equilibrium really strong: you can basically say that if you deviate from it in a detectable way, your security deposit goes away. Which is neat. And we'll probably talk about that during the proof of stake panel. Yep, definitely.
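A minimal sketch of the security-deposit idea: participants bond a deposit, and provable deviation burns it, which is what makes the honest strategy a strong equilibrium. The registry and the evidence check are illustrative stand-ins, not a real proof-of-stake protocol:

```python
class DepositRegistry:
    """Toy bonded-participant registry: provable misbehavior burns the deposit."""

    def __init__(self):
        self.deposits: dict[str, int] = {}

    def bond(self, participant: str, amount: int) -> None:
        """Lock up `amount` as a security deposit for `participant`."""
        self.deposits[participant] = self.deposits.get(participant, 0) + amount

    def slash(self, participant: str, evidence_valid: bool) -> int:
        """Burn the whole deposit if the deviation is provable; return amount burned.

        In a real protocol, `evidence_valid` would be a cryptographic check,
        e.g. two conflicting signed messages from the same participant.
        """
        if not evidence_valid or participant not in self.deposits:
            return 0
        return self.deposits.pop(participant)

reg = DepositRegistry()
reg.bond("validator1", 1000)
# Deviating in a detectable way costs the entire deposit:
burned = reg.slash("validator1", evidence_valid=True)
```

Because the expected cost of any detectable deviation is the full deposit, following the protocol dominates as long as deposits exceed what can be gained by cheating.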
[49:41] SPEAKER_02: Okay.
[49:42] SPEAKER_01: Anything else?
[49:44] SPEAKER_00: That's it. Cool.