Ethereum is gradually moving towards a rollup-centric roadmap. One of the EIPs that will help scale Ethereum is EIP-4844, popularly known as “Proto-Danksharding.” In this blog, we will dive into one prerequisite of this EIP: the KZG Ceremony.

The Ethereum network, like most blockchain networks, currently faces limitations in terms of scalability. As more and more users join the network and the amount of data on the blockchain grows, the network's ability to process and validate transactions in a timely manner becomes increasingly constrained. This can lead to delays in confirmation times, high transaction fees, and a lack of overall network efficiency. To address these scalability concerns, the concept of sharding came around.

Sharding is a method of splitting data so that it becomes easier to handle. Over time, the overall size of the blockchain increases because blocks keep getting added to the existing chain. So, in Ethereum, the idea was to split the processed data across multiple concurrent chains in order to increase the total throughput of the blockchain.

Initially, these shards would not just hold data but were also going to perform computation on it. However, that approach introduced other complexities, such as how cross-shard communication would work and how consensus would be reached when data is siloed across shards. So the scaling plan was altered, and now we have Proto-Danksharding, in which instead of multiple chains there is one chain, and a node operator is not responsible for the whole chain but only for part of its data.

Furthermore, the computation won’t be taking place on these shards. Instead, rollups will take care of that, as per the rollup-centric roadmap. 

Danksharding

Danksharding is the full proposal, and it is too complex to cover in detail in this blog. So, let’s look at the main innovation that came with Danksharding, i.e., the merged fee market. Under this design, “one” proposer node is responsible for choosing all transactions and the data to be added to the shards. This increases the hardware requirements for the node operator, hence another concept, proposer/builder separation, was introduced.

This requires more time to implement because of its technical complexity, so for now we have a simplified version, Proto-Danksharding, which will be upgraded to Danksharding in the future. Even in the short term, this solution will drastically help with the data constraints we currently have.

Proto-Danksharding

Proto-Danksharding is a forward-compatible step that smooths the path to full Danksharding. The use of rollups and the merged fee market can improve scalability by offloading computation and data storage to other entities.

With Proto-Danksharding, we will have a new type of transaction, the “Shard Blob Transaction”, which is like a normal transaction except that it has space for blobs. The EVM cannot access the blob data itself; it can only access a commitment to the blob. A blob is a collection of 4096 units of information, each consisting of 32 bytes. With Proto-Danksharding, a maximum of 16 blobs can be stored in each block; with Danksharding, this number increases to 256 blobs. This caps the amount of data transactions a block can carry and works out to a target blob-data size of approximately 1 MB per block and a maximum of 2 MB.
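
To make these figures concrete, here is a quick back-of-the-envelope calculation using the numbers quoted above (the blob counts follow the draft parameters this post cites, so treat them as illustrative):

```python
# Sizes implied by the figures quoted above.
FIELD_ELEMENTS_PER_BLOB = 4096   # "units of information" per blob
BYTES_PER_FIELD_ELEMENT = 32

blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_bytes // 1024, "KiB per blob")   # 128 KiB

MAX_BLOBS_PROTO = 16    # maximum blobs per block under Proto-Danksharding (draft figure)
MAX_BLOBS_DANK = 256    # maximum blobs per block under full Danksharding (draft figure)

print(MAX_BLOBS_PROTO * blob_bytes / (1024 * 1024), "MiB of blob data at the Proto-Danksharding maximum")  # 2.0
print(MAX_BLOBS_DANK * blob_bytes / (1024 * 1024), "MiB of blob data at the Danksharding maximum")         # 32.0
```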

Currently, in Ethereum, users pay a one-time fee for data to be stored, and the blockchain then keeps that data forever. One of the ideas behind Danksharding is to hand this responsibility to someone else so that validator nodes don’t have to store this huge amount of data in perpetuity. What will happen is that a validator will be assigned data that it must download and make available for everyone. L2 solutions can access this data and download the portions they need, and after a certain period, validators will discard it.

Coming back to the validators, the important point is that individual validators won’t have to worry about keeping this data forever. As more scalability is added to the blockchain, the data validators would otherwise need to store would only grow. So the entities that actually need the data become responsible for it, which is something rollups and protocols can take care of.

What about normal users? Normal users obviously have the option to trust the sequencers if their transactions are on L2s. But in cases where they don’t trust the sequencer, they can store the data locally. There are more options to explore here; on a side note, we will probably see various solutions emerge that download and keep this data in their own archive nodes, and users will then be able to query data from those archive nodes.

KZG Ceremony

The KZG ceremony is a prerequisite for Proto-Danksharding: it provides the cryptographic foundation via a trusted setup. The ceremony is taking place to create a structured reference string for Proto-Danksharding. It is being held by the Ethereum community and will provide the necessary security for the network.

It uses multi-party computation: each contributor picks a secret value, mixed with some randomness, and runs a computation that mixes it into the previous contributions. The output is made public and passed to the next contributor. The ceremony will be live for two months, and anyone can contribute in that time frame.
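
A toy sketch of how these sequential contributions compound is shown below; it uses integers modulo a stand-in prime purely for illustration, whereas the real ceremony works over BLS12-381 curve points and a richer transcript format:

```python
# Toy model: each contributor folds their secret into the running "powers of tau",
# so the final secret is the product of every individual secret. Leaking it would
# require every single contributor to keep and reveal their share.
import secrets

PRIME = 2**255 - 19      # stand-in modulus, for illustration only
NUM_POWERS = 8           # real setups use thousands of powers

def fresh_transcript():
    return [1] * NUM_POWERS          # starts from tau = 1

def contribute(transcript, my_secret):
    # Multiply the i-th power by my_secret**i, re-randomizing the accumulated
    # powers of the (unknown) combined secret.
    return [(p * pow(my_secret, i, PRIME)) % PRIME for i, p in enumerate(transcript)]

transcript = fresh_transcript()
for _ in range(3):                   # three contributors, one after another
    transcript = contribute(transcript, secrets.randbelow(PRIME - 1) + 1)
    # each participant then deletes their secret ("toxic waste") locally
```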

Why do we need a commitment in Proto-Danksharding? We already know by now that the blob transaction type carries the data blob, which acts as a kind of sidecar to the actual block. To refer to this “sidecar,” Proto-Danksharding uses KZG commitments.

KZG commitments also work well with erasure coding, which allows the full data to be reconstructed even if a percentage of it is lost.

The ceremony has an entity called the sequencer whose job, as the name suggests, is literally to sequence who goes next. With many people trying to contribute, how do we decide the order?

Since the computation is sequential and cannot be parallelized, the sequencer checks whether there is a free slot. It then gives you the transcript file, and you add your secret value to the sequence, combining your randomness with the randomness of the people who came before you. You send the file back to the sequencer, which checks that you didn't tamper with anyone else's contribution or do anything else invalid. The file is then sent to the next person.

The sequencer can see what the ceremony transcript looks like after each contribution, but so can everyone else, so it's not as if the sequencer has extra data that could be used to break the setup. All the sequencer has is more control over who goes next.

A sequencer can prevent someone from participating. Even that can be countered: users can do their computation locally and send it back to the sequencer. The sequencer could still reject the file by claiming it was not correctly computed, but it has to send the file back with its signature on it. Users can then take that signed file to other sequencers and prove that the first sequencer is censoring them.

To reduce this risk, the Ethereum Foundation put the sequencer through audits.

Participating In The Ceremony

The Ethereum community has several ways to participate in creating the Common Reference String (CRS) that will back the KZG commitments used by EIP-4844. The methods range in technical difficulty, from simple browser interfaces to writing your own implementation.

Browser interfaces

One of the easiest ways to participate is through a browser interface, such as ceremony.ethereum.org. To prevent spam contributions, participants need to provide either an Ethereum address (one that has sent at least 4 transactions as of 2023/01/13) or a GitHub account.

Command line implementations

If you're comfortable using a command line, you can check out some of the CLI implementations to contribute from your local machine.

Generating Entropy

You can generate some randomness using a unique and creative method and use one of the above methods to add it to the ceremony.

Writing Your Own Implementation

For those who want to convince themselves that their secret never leaks, it is possible to write your own implementation of the contribution logic over BLS12-381. There are resources available to make this process as simple as possible.
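
As a rough illustration of what such an implementation involves, here is a minimal sketch using the py_ecc library; the transcript layout and validity checks of the real ceremony are richer than this, so treat it as a simplified model rather than the official contribution logic:

```python
# pip install py_ecc
import secrets
from py_ecc.bls12_381 import G1, multiply, curve_order

def contribute(g1_powers):
    """Re-randomize the received powers-of-tau with a locally generated secret."""
    secret = secrets.randbelow(curve_order - 1) + 1        # never zero
    updated = [multiply(point, pow(secret, i, curve_order))
               for i, point in enumerate(g1_powers)]
    del secret                                             # discard the "toxic waste"
    return updated

# A fresh transcript is just [G1, G1, ...] (tau = 1) before anyone has contributed.
transcript = [G1] * 8
transcript = contribute(transcript)
```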

Note that the sequencer will reject wrong calculations.

If you participate through the browser method, there is no queue. There is a general lobby, and accounts are picked at random from it. This makes it nicer in that you just show up to participate, but the trade-off is that you don't know exactly when you will be included. The lobby could have thousands of participants, so you may have to wait for a while.

While you are in the lobby, your computer periodically asks the sequencer for the file, and the request is accepted or denied based on the file’s availability. The file being unavailable just means that someone else is contributing at that moment.

The participation process is quite easy. The trust assumption of the ceremony is "1-of-N": the security of the entire ceremony relies on at least one participant being honest and not giving out their secret. To compromise the ceremony, every single participant would have to collude to extract and combine their secrets, or there would have to be a flaw in every single implementation.

In summary, Proto-Danksharding, as outlined in EIP-4844, is a promising solution for addressing scalability limitations on the Ethereum network. Its implementation will greatly improve the scalability and efficiency of the network by introducing blob-carrying transactions and leaning on rollups for execution. The KZG ceremony opened on 13 January 2023 and will accept contributions for roughly two months.

In this article we will explore the concept behind Layer 2 Solutions and the problems they are solving in blockchain.

Introduction

According to the CAP theorem (also known as Brewer's theorem), first proposed by Eric Brewer in 1998 and formally proved by Seth Gilbert and Nancy Lynch in 2002, a distributed system cannot attain consistency, availability, and partition tolerance simultaneously. A similar view holds sway among blockchain experts for blockchain protocols. The belief, often referred to as the blockchain trilemma, suggests that a blockchain cannot achieve all three of its core properties (security, scalability, and decentralization) simultaneously.

By implication, the blockchain trilemma says a protocol can achieve decentralization and security while sacrificing scalability, and vice versa. The trilemma offers an answer to why centralized networks can boast thousands of transactions per second while blockchain networks like Bitcoin and Ethereum can only manage a few tens of transactions per second: centralized trading systems sacrifice decentralization to achieve a high-throughput, secure, and scalable network. To scale up blockchain protocols, developers began looking for ways around this constraint.

So far, several approaches have been taken to work around the trilemma. The proposed scalability solutions fall into two groups: Layer 1 and Layer 2 solutions.

Layer-1 and Layer-2 Solutions

Although this article focuses on Layer 2 solutions, it will be necessary to lay a background that includes Layer 1 solutions. It will highlight several Layer 1 and Layer 2 solutions as well as references to top Layer 2 implementations you should know about.

Layer-1 Solutions

Often referred to as on-chain solutions, Layer-1 solutions are scalability solutions that require redesigning the base protocol itself. Think of a Layer-1 solution as, say, redesigning the Ethereum or Bitcoin protocol to increase throughput and reduce fees. For instance, Visa, MasterCard, and other payment processors handle an average of about 5,000 transactions per second, while Bitcoin and Ethereum process roughly 4 and 15, respectively. Given the current design of these networks, as the number of users grows, congestion increases and transactions become slower and more expensive, hence the need for a redesign. A Layer-1 solution entails redesigning the underlying protocol to allow for higher throughput, better energy efficiency, and cheaper transaction fees.

Several methodologies are employed to redesign base protocols. Although some of them are still at an experimental stage, they include:

Consensus-Based Protocol Redesign

This consists of redesigning the consensus protocol of the base chain to improve transaction throughput and efficiency. Leading blockchain networks like Bitcoin and Ethereum have relied on PoW consensus, in which miners solve cryptographic puzzles to validate and verify blocks, making the process energy-demanding and tedious. PoW systems are secure but are often characterized by high transaction fees and low throughput when the network is congested. To mitigate this and achieve a scalable network, PoS consensus becomes a good choice: instead of miners solving cryptographic puzzles with enormous amounts of energy, users stake coins on the blockchain.
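
To illustrate the core idea, here is a minimal, purely illustrative sketch of stake-weighted proposer selection; real protocols, including Ethereum's, use verifiable randomness and committee structures rather than this naive draw:

```python
import random

stakes = {"validator_a": 32, "validator_b": 64, "validator_c": 160}  # staked coins

def pick_proposer(stakes, rng=random):
    # Choose a validator with probability proportional to its stake.
    total = sum(stakes.values())
    r = rng.uniform(0, total)
    running = 0
    for validator, stake in stakes.items():
        running += stake
        if r <= running:
            return validator
    return validator  # fallback for floating-point edge cases

print(pick_proposer(stakes))
```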

PoS consensus is set to cut down the high transaction costs and improve the low throughput of PoW networks. It is still relatively young, but some protocols are already building on it; among the top projects are Solana, Avalanche, and Ethereum. Ethereum termed its proposed PoS version Ethereum 2.0. From the Frontier phase, Ethereum is set to go full Serenity next year by launching a Proof-of-Stake (PoS) consensus algorithm. Unlike the high transaction costs and low TPS of Ethereum 1.0, Ethereum 2.0 is expected to dramatically and fundamentally increase the capacity of the Ethereum network while increasing decentralization and preserving network security.

Sharding

Also at an experimental stage, sharding is adapted from distributed databases as one of the Layer-1 scaling solutions. Employing sharding as a Layer-1 scaling solution means breaking the state of the base protocol into distinct datasets called "shards". Tasks are distributed across shards and processed in parallel, and the shards collectively maintain the entire network.

To allow scalability, each node maintains only its shard instead of a copy of the entire main chain. Each shard provides proofs to the main chain and interacts with the others to share addresses, balances, and general state using cross-shard communication protocols. Although still experimental and awaiting its launch in 2022, Ethereum 2.0 has been exploring implementations of shards.
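
A toy way to picture the partitioning step is to hash an account address and take it modulo the shard count; this is purely illustrative and far simpler than any real sharding design:

```python
import hashlib

NUM_SHARDS = 64

def shard_for(address: str) -> int:
    # Deterministically map an address to one of NUM_SHARDS shards.
    digest = hashlib.sha256(address.lower().encode()).digest()
    return int.from_bytes(digest, "big") % NUM_SHARDS

print(shard_for("0x00000000219ab540356cBB839Cbe05303d7705Fa"))
```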

Layer-2 Solutions

Instead of changing the base protocol itself, Layer-2 solutions take scalability to a whole new level. Layer-2 solutions are scalability solutions that add a layer on top of the base protocol to increase throughput. They take transactions off the main chain, hence the name off-chain solutions.

The off-chain approach doesn't require structural changes to the base protocol, since the second layer is added as an extra layer on top. For that reason, Layer-2 scaling solutions have the potential to achieve high throughput without sacrificing network security.

Layer-2 solutions consist of smart contracts built on top of the main blockchain. These secondary layers are used for scaling payments and off-chain computation. Layer-2 solutions can be achieved in various ways, for example:

Rollups

Rollups are one of the Layer-2 scaling solutions built on the Ethereum blockchain. Unlike Layer-1 solutions, they are secondary layers that allow users to perform transactions off the main Ethereum chain (Layer-1). They post transaction data back on Layer-1, thereby inheriting the security of the base protocol. Rollups possess the following properties (see the sketch after this list):

  1. Execute transactions outside Layer-1.
  2. Post transaction data or proofs on Layer-1, thereby letting Layer-2 inherit its security.
  3. A rollup smart contract on Layer-1 enforces correct transaction execution using the transaction data posted there.
  4. Operators stake a bond in the rollup smart contract, which incentivizes them to verify and execute transactions correctly.
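
Here is a minimal sketch of the batching idea, with made-up account names and a plain hash standing in for a real Merkle state root; it only illustrates the "execute off-chain, post the data and a state root to Layer-1" pattern:

```python
import hashlib
import json

def state_root(state: dict) -> str:
    # Stand-in for a Merkle (or similar) state commitment.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def build_batch(transactions, state):
    for tx in transactions:                       # executed off-chain by the operator
        state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
        state[tx["from"]] -= tx["value"]
    return {
        "transactions": transactions,             # posted to Layer-1 as calldata
        "new_state_root": state_root(state),      # what the Layer-1 contract records
    }

state = {"alice": 100, "bob": 0}
batch = build_batch([{"from": "alice", "to": "bob", "value": 25}], state)
print(batch["new_state_root"])
```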

Rollups are either zero-knowledge or optimistic; the two differ in their security model:

Optimistic Rollups

Optimistic rollups are a Layer 2 solution designed to enable autonomous smart contracts using the Optimistic Virtual Machine. By default, the main chain does not re-execute the rollup's computation, which is why they can offer roughly 10-100x improvements in scalability depending on the transaction. They sit parallel to the main Ethereum chain on Layer-2, and their transactions are written to the main Ethereum chain as calldata, further reducing gas costs.

As stated at the outset, Optimistic rollups compute transactions outside the main layer in batches and submit only the resulting root hash of the block to the main chain. Hence the need for a mechanism, fraud proofs, to ensure transactions are legitimate: when someone notices a fraudulent transaction, the rollup runs a fraud proof by re-executing the transaction using the available state data. By implication, Optimistic rollups take significantly longer to reach final confirmation than zero-knowledge rollups.

There are currently multiple implementations of Optimistic rollups that you can integrate into your dApps. They include Optimism, Offchain Labs' Arbitrum Rollup, Fuel Network, Cartesi, and OMGX.

Zero-Knowledge Rollups

This is a type of rollup on the Ethereum blockchain. It bundles hundreds of transactions off-chain and generates a cryptographic proof known as a Succinct Non-Interactive Argument of Knowledge (SNARK), often called a validity proof.

The ZK-rollup smart contract maintains and updates the state of all transfers on Layer 2 with a validity proof. Instead of the entire transaction data, a ZK Rollup needs only the validity proof, which simplifies transactions on the network. Validating a block is quicker and cheaper in ZK Rollups because less data is included.

There are multiple implementations of ZK-rollups that you can integrate into your dApps. They include Loopring, Starkware, Matter Labs' zkSync, zkTube, Aztec 2.0, and so on.

Channels

A State Channel is a Layer-2 scaling solution that facilitates two-way communication between participants, allowing them to perform transactions off the main blockchain. For recurring payments, a state channel does not require repeated validation by the nodes of the Layer-1 network, which improves overall transaction capacity and speed. A portion of the underlying blockchain's state is sealed off via a set of smart contracts or a multi-signature arrangement. Leveraging a smart contract pre-agreed by the participants, they can interact with each other directly without involving miners. Upon completion of a transaction or batch of transactions on a state channel, the final "state" of the "channel" and all its inherent transitions are recorded to the underlying blockchain. Projects including Liquid Network, Celer, Bitcoin's Lightning Network, and Ethereum's Raiden Network are currently deploying state channel scaling solutions.
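
The core mechanic can be sketched as follows, with hypothetical names: the parties co-sign successive balance updates off-chain, and only the last agreed state is settled on-chain (real channels add on-chain contracts, signatures, and dispute windows):

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    nonce: int        # higher nonce supersedes older states
    balance_a: int
    balance_b: int

def pay(state: ChannelState, amount: int, from_a: bool) -> ChannelState:
    # Each off-chain payment simply produces a new co-signed state with a higher nonce.
    if from_a:
        return ChannelState(state.nonce + 1, state.balance_a - amount, state.balance_b + amount)
    return ChannelState(state.nonce + 1, state.balance_a + amount, state.balance_b - amount)

state = ChannelState(nonce=0, balance_a=100, balance_b=0)   # channel opened with a 100-unit deposit
for _ in range(3):
    state = pay(state, 10, from_a=True)                     # three instant off-chain payments
# Only the final, highest-nonce state is submitted on-chain to close the channel.
print(state)
```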

Sidechain

A Sidechain is a secondary blockchain linked to the main blockchain via a two-way peg. It uses an independent consensus mechanism and contracts to optimize throughput, while the main chain takes up security roles such as confirming batched transaction records and resolving disputes.

Sidechains are somewhat similar to channels, but they differ in how they process transactions and in their security impact. Transactions are recorded publicly on the sidechain's ledger, unlike the private records of channels. Sidechains enable tokens and other digital assets to move back and forth freely between the main chain and the sidechain. When the sidechain completes a transaction, a confirmation is relayed across the chains, followed by a waiting period for added security. Because assets can move freely onto the new network, a user who wants to send the coins or assets back to the main chain can do so by simply reversing the process.

Plasma

Plasma is a secondary-chain design for the Ethereum blockchain, proposed by Joseph Poon and Vitalik Buterin in their paper Plasma: Scalable Autonomous Smart Contracts. It uses Merkle trees and smart contracts to create an unlimited number of smaller versions of the main chain (Ethereum), called child chains. These child chains enable fast and cheap transactions off the main Ethereum blockchain.

Users can deposit funds to and withdraw funds from a plasma chain, with fraud proofs making this possible. For such a transaction to work, the child chains must communicate with the root chain, secured by fraud proofs. A user deposits by sending assets to the smart contract managed by the plasma chain. The plasma chain then assigns a unique ID to the deposited assets, while the operator periodically generates batches of plasma transactions received off-chain. On withdrawal, the contract starts a challenge period during which anyone can use Merkle branches to invalidate illegitimate withdrawals.
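
The Merkle-branch check behind those challenges can be sketched like this; the hash layout and leaf encoding are simplified assumptions, and real Plasma constructions pin these details down precisely:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(leaf: bytes, branch, root: bytes) -> bool:
    # Walk from the leaf to the root using the supplied sibling hashes.
    node = h(leaf)
    for sibling, sibling_is_right in branch:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

# Build a two-leaf tree and verify leaf_a against its published root.
leaf_a, leaf_b = b"alice exits 5 ETH", b"bob exits 2 ETH"
root = h(h(leaf_a) + h(leaf_b))
print(verify_branch(leaf_a, [(h(leaf_b), True)], root))   # True
```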

Conclusion

Like the CAP theorem in distributed systems, the blockchain trilemma suggests that a blockchain cannot achieve scalability, security, and decentralization simultaneously. However, Layer-2 scaling solutions have come to challenge that thinking: they let the main chain take care of security while scalable networks run in the additional layers.

Also Read Arbitrum: Scaling without Compromise

Arbitrum is a relatively new blockchain technology that serves and acts as an optimistic rollup. The system allows Ethereum holders, users, protocols, and participants to transact while settling everything on the Ethereum mainnet, serving as a link back to the main Ethereum network.

Arbitrum, therefore, serves as a Layer 2 cryptocurrency platform. By implication, the security and protection of the Arbitrum interface and network come from Ethereum itself. 

Generally, this makes transactions more scalable, faster, and interoperable, keeping Ethereum-based applications compatible with the broader Ethereum ecosystem.

What is Arbitrum

Among Layer 2 solutions, Arbitrum has established itself as an efficient system for running and managing Ethereum applications. It achieves its goals through a combination of virtual machine architecture, networking design, and incentives.

It has four significant benefits:

Scalability 

During Arbitrum's regular operation, decentralized apps (dApps) using it only need to touch the main chain when assets or messages move into or out of Arbitrum. This allows capacity to expand and upgrade with general demand, unlike on many other blockchains.

Generally, this makes it easier to pass information between users and the network, and it keeps transactions flowing without connection or 'interface error' issues, which might otherwise arise when traffic on the Ethereum mainnet spikes.

Privacy

Only validated participants are granted entry to and exit from a dApp, and only those participants need to know what is in the dApp's code and storage.

This design keeps the network secure for users: all that is published on-chain is a commitment to the dApp's state, while messaging, records, and currencies have end-to-end encryption between the network and the user's interface.

The dApp's creator is also free to let users see internal state. This happens strictly on user demand, and disclosure of information for the Arbitrum network to verify is purely optional.

Trust Guarantee

Arbitrum is unlike many other approaches to trading, storing, and transacting coins, such as state channels, sidechains, or private chain solutions. It guarantees exact, precise, and accurate execution as long as a validator of the dApp, which can be the user themselves, acts honestly.

No system upgrade can liquidate users' funds or throttle the rate or volume of transactions made using Arbitrum.

Interoperability With Ethereum 

Arbitrum is an interoperable system that allows the open-source Arbitrum compiler to generate Arbitrum-ready code from existing contracts. You can also transfer Ether or any other Ethereum-based token back and forth between the Ethereum and Arbitrum networks.

Because Arbitrum runs as an integral part of the wider Ethereum system, interoperation improves scaling and gives the Ethereum mainnet an overall boost. Consequently, it reduces the operating costs and gas fees that come with congestion on the Ethereum mainnet.

Arbitrum Deployment on Ethereum 

The Arbitrum platform is designed to be more reliable and accessible than deploying directly on a congested proof-of-work chain, with lower costs for users. It supports standard EVM contract deployment, allowing standard Solidity smart contracts to be deployed on Arbitrum chains using existing developer tools. An entire network of tokens and applications can be deployed this way, though the deployment targets the Arbitrum rollup rather than Ethereum itself.

Arbitrum uses rollups to record batches of transactions on the Ethereum mainnet. The execution of these transactions happens on a scalable side chain, while the Ethereum network is relied on for security and final results.

The major reason for deploying on Arbitrum is to achieve better throughput and make transactions on the Ethereum blockchain cost-efficient.

This is part of the community's broader effort to make Ethereum more scalable by deploying on scaling-solution channels like Arbitrum.

Arbitrum Enhancement In The Crypto Market

In recent times, cryptocurrencies have gained popularity in the world's exchange markets. Unlike the stock exchange, the crypto market is almost entirely online, and coins, tokens, and artifacts are traded by merchants worldwide. Arbitrum has solved significant problems in this respect: insecurity and a lack of fast infrastructure have hampered the trading of cryptocurrency for years, and these are among the many problems Arbitrum helps address while boosting the system overall.

Arbitrum is quickly gaining popularity, as it now clears around 80% of the hurdles posed by using the Ethereum mainnet directly. This has not only boosted Ethereum's use as a cryptocurrency but has also helped increase the general value of the Ether coin.

Infrastructure 

Much has been discussed about the approach the Arbitrum network takes to help scale Ethereum when demand on the mainnet spikes, leading to higher gas fees and a slower network. The infrastructure that makes transactions on the Arbitrum scaling solution possible is built around Arbitrum Virtual Machines (VMs).

Arbitrum Virtual Machines are first-class actors logged into the Ethereum network. They form a send-receive network that can send and receive funds and messages, as well as perform calculations and store data off-chain according to the code running on the network. In general, this mechanism helps reduce gas fees during spikes in traffic on Ethereum or outages of the network.

This infrastructure has made the Arbitrum VMs more scalable and private than the conventional way of implementing smart contracts on other scaling solutions such as Polygon and Optimism. 

Arbitrum manages the VMs off-chain, with minimal on-chain activity, while still ensuring correct execution. When someone creates an Arbitrum VM, they select a set of parties responsible for executing it, called Managers. Arbitrum guarantees correct and exact execution as long as any one manager is honest, even if the other selected managers are corrupt; and because so little work happens on-chain, the Arbitrum VM is also more private.

Comparison between Arbitrum and Other Layer 2 Solutions 

Arbitrum brings many advantages to cryptocurrencies, several of which have been discussed above.

To Wrap It Up

Over time, Arbitrum has gained recognition as an operating layer for Ethereum, leveraging several design choices that place it above many other Layer 2 solutions and serving as an alternative route when the Ethereum mainnet is congested.

Arbitrum is not just a solution to the problems posed by the Ethereum mainnet. It is a scaling option that has redirected and enhanced how Ethereum is used and navigated, boosting the system substantially and bringing many more users onto Ethereum.

Cryptocurrency traders and merchants are advised to engage with the Arbitrum network not merely as an alternative, but as a new phase of Ethereum.

Also read Casper: The Future-Proof Blockchain

Introduction

Before the Ethereum Blockchain could reach its potential, it needed several transformations. Such transformations include migrating to Ethereum 2.0, also known as Serenity. ETH 2.0 is the much-awaited Ethereum upgrade that allows for a more scalable, cheaper, more decentralized, and more secure network.

Ethereum has since charted the course of migrating from PoW to PoS consensus for cheaper transaction costs and better decentralization. Accompanying the upgrades are various hard forks delivering various Ethereum Improvement Proposals (EIPs).

By the way, what is a fork, and what exactly is a hard fork? A fork is the process of copying and improving on an existing protocol, similar to a traditional software upgrade.

A fork can be soft or hard, accidental or intentional. It is a hard fork when it changes the rules of the blockchain protocol such that the old blockchain and the resulting blockchain are incompatible.

A hard fork is a radical upgrade that can make previous transactions and blocks either valid or invalid and requires all validators in a network to upgrade to a newer version. It’s not backward-compatible. A soft fork is a backward-compatible software upgrade: validators running an older version of the chain still see the new version as valid.

When two or more miners find a block at the same time, an accidental fork occurs; a fork is intentional when the rules of the network are deliberately modified.

Ethereum Hard Fork And Others

Similar to other blockchain networks with active communities, the Ethereum Blockchain has undergone soft and hard forks over time. For context, we will briefly reference some non-Ethereum hard forks before explaining Ethereum's hard forks in detail.

So far, Bitcoin, the first implementation of blockchain, launched by the pseudonymous Satoshi Nakamoto, has also undergone several hard forks. The most prominent Bitcoin hard forks are Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, Segregated Witness (SegWit), Bitcoin Cash, and many others. One common thing about the various hard forks, both Ethereum and Bitcoin, is that they are geared towards protocol upgrades carried out by network consensus.

Why Fork Ethereum Blockchain?

As with other network and software upgrades, concerns about Ethereum birthed the various hard forks. These range from security, centralization, fees, and scalability to other Eth 1.0 limitations. For instance, despite a good run in Q1 and Q2 2021, Ethereum saw its highest fees ever, which scared developers away: a simple swap on Uniswap, for example, could cost as much as $100, while other transactions ran $16-20.

While we are set to discuss Ethereum hard forks fully, it is important to note the Ethereum journey so far and link the hard forks to it accordingly. The journey referred to here is the Ethereum development roadmap.

Ethereum Developmental Upgrade And Associated Hard Forks

Ethereum has a four-stage development roadmap: Frontier, Homestead, Metropolis, and Serenity. Recall that, unlike most PoW networks, Ethereum goes way beyond currency and has to accommodate varying use cases and features, hence the need for a roadmap.

Below is the journey so far:

Frontier

Frontier was the first stage of the Ethereum development roadmap. It went live on July 30, 2015. Although it launched as a beta, it performed better than expected, and developers began writing smart contracts and decentralized applications to deploy on the Ethereum Blockchain. Shortly after its launch, it experienced a hard fork called Ice Age.

Ice Age, also known as “Frontier Thawing”, was the first (unplanned) fork of the Ethereum blockchain aimed at providing security and speed updates to the network.

Homestead

While the Frontier phase laid the groundwork for experimenting on Ethereum, Homestead stepped it up to its first production release. Homestead, the second major version of Ethereum, came with several protocol changes and a networking change that provided the ability to do further network upgrades.

The Homestead changes were activated at Block >= 1,150,000 on Mainnet, Block >= 494,000 on Morden, and Block >= 0 on future testnets. The Homestead hard fork includes the following EIPs:

EIP-2: Main homestead hard fork changes

EIP-7: Hard fork EVM update. DELEGATECALL

EIP-8: devp2p forward compatibility. 

Ethereum Classic Hard Fork

The Ethereum Classic hard fork was a child of necessity after the Homestead hard fork. In 2016, hackers exploited The DAO, one of the most notable Ethereum projects, and in response, developers initiated the hard fork that produced Ethereum Classic to mitigate the DAO loss.

The DAO, short for Decentralized Autonomous Organization, raised $150m in Ether in a public crowd sale.

The DAO, in principle, was to operate as a form of decentralized venture capital fund: investors would send Ether to the DAO to receive voting rights, and those who had invested (and voted) would then democratically decide which projects the DAO should fund.

Contrary to the arrangement, the DAO was unable to complete its vision when millions of Ether vanished.

In response, the Ethereum community voted to change Ethereum's baseline code to recover the lost funds and reimburse investors. Because the majority voted in favor of this proposal, a hard fork took place and two separate blockchains were created.

EtherZero Hard Fork

EtherZero is the second intentional Ethereum hard fork, and it took place in 2018. The fork went live at block 4,936,270 on 29 January 2018. Unlike the Ethereum Classic hard fork, it was started by a group of tech enthusiasts looking to provide a better platform for creating decentralized applications (dApps) and deploying smart contracts. Unlike other forks, it had no particular interest in speeding up transaction rates; rather, the development team was determined to make transactions completely free.

Metropolis

This is the third phase of the Ethereum upgrade and one of the notable hard forks. It is the forerunner to Serenity in the sense that it lays the groundwork for early proof of stake (PoS). The Metropolis upgrade includes Byzantium, Constantinople, and early Serenity work. Byzantium is a backward-compatible upgrade aimed at integrating zero-knowledge protocols and delaying the network's difficulty bomb. Constantinople, on the other hand, is a non-backward-compatible upgrade.

It represents a hard fork deployed to address a security weakness that allowed hackers to access users' funds, and it integrates smart contract functionality that enhances the verification process and reduces gas prices. Lastly, as a forerunner to Serenity, it made the first attempts at implementing PoS and account abstraction.

Serenity

Also known as Ethereum 2.0, Serenity is the latest Ethereum upgrade. It builds and improves on the successes of the previous upgrades and hard forks. The major improvement of the Serenity upgrade is porting fully from PoW to PoS.

By implication, Serenity increases transaction capacity, changes how gas fees work, and achieves scalability while making coin issuance and validation far more eco-friendly.

The launch of the Beacon Chain is Serenity's first step in revolutionizing the Ethereum network. From the Beacon Chain, it pushes through to the Berlin hard fork, the London hard fork, and the upcoming Shanghai hard fork.

Berlin Hard Fork

The Berlin hard fork is a forerunner to the London hard fork. It is named after the host city of the inaugural Ethereum Devcon convention. The Berlin hard fork incorporates several EIPs that address gas prices and introduce new transaction types.

The Berlin hard fork went live at block 12,244,000 on April 15, 2021, and includes several EIPs: EIP-2565, EIP-2718, EIP-2929, and EIP-2930.

Before the Berlin hard fork went live, it was delayed several times over possible vulnerabilities and centralization concerns. Some believed the Berlin hard fork would be less impactful in the short term but would further pave the way for the upcoming London hard fork.

London Hard Fork

After the Berlin hard fork came the London hard fork, originally scheduled for July before being delayed to August 4. In preparation for the ETH 2.0 launch in 2022, the London hard fork makes significant changes to Ethereum's transaction fee system, which has long been a contentious subject.

Ethereum's London hard fork introduces two new known Ethereum Improvement Proposals (EIP), namely: EIP-1559 and EIP-3238. EIP-1559 is a proposed change to the way users pay gas fees on the Ethereum network. It also proposes a new transaction pricing mechanism that will create a base fee for each block. Usually, users enter a bid to pay their gas fees, but the EIP-1559 allows miners to prioritize transactions based on the fee added and use the fee as a reward for adding it to a block. Now, each block will have a fixed, associated fee instead. The EIP design allows the blockchain to burn the fee, reducing the overall supply of Ether (ETH). Thereby creating deflationary pressure on the cryptocurrency.

Ethereum 1.0 has a mechanism called the difficulty bomb. As the bomb kicks in, it takes progressively longer to mine a new block, so rewards fall and transactions slow down. To encourage users to move to Ethereum 2.0 upon launch, EIP-3238 delays the bomb so that the network can incentivize validators to move to Ethereum 2.0's Proof of Stake consensus at the right time. It is suspected that if there is no consensus to move to Ethereum 2.0, a scenario like Ethereum Classic could repeat itself. Delaying the bomb is expected to lead to an "ice age" of roughly 30-second block times around Q2 2022, thereby enabling "The Merge" of Ethereum 1.0 with Ethereum 2.0.

Shanghai Hard Fork

The upgrades didn't stop at the Berlin and London hard forks. Next comes the Shanghai hard fork, scheduled for October 2021. The Shanghai hard fork is expected to include EIPs such as the following:

The new opcode BEGINDATA indicates that the remaining bytes of the contract should be regarded as data rather than contract code and cannot be executed.

Conclusion

The Ethereum Blockchain has so far been a work in progress. It started from a four-stage development roadmap, namely Frontier, Homestead, Metropolis, and Serenity, to achieve what we now call ETH 2.0, expected in 2022. Every Ethereum upgrade has been accompanied by an associated hard fork. The major Ethereum hard forks are the Ethereum Classic, Shanghai, London, Berlin, Homestead, Constantinople, and Ice Age hard forks. It is expected that the Ethereum network will attain scalability with eco-friendly operation, lower network fees, and better decentralization. ETH 2.0 promises to provide a sustainable blockchain network that doesn't compromise any of the above features.

Decentralized exchanges have been gaining more and more traction; a total of $1.48 billion in trading volume has been observed on Uniswap V3 alone. Protocols such as Uniswap, Curve, and SushiSwap provide most of the exchange services within the Ethereum ecosystem; however, these exchanges do not support swaps between different blockchain networks. So the question is: is there any way to swap native assets across blockchains, for example between the Binance and Avalanche blockchains?

This is where THORChain comes in as the solution.

Introduction To THORChain

THORChain is a decentralized cross-chain liquidity protocol that allows its users to trade digital assets from one blockchain to another in a frictionless, secure, and decentralized manner. There are no custodians and no wrapped assets. Users are paid to stake their assets in the liquidity pools, earning a fee on every swap.

Before we take a deep dive into the mechanics of THORChain, let's have a quick look at its history.

History Of THORChain

THORChain was founded in 2018 at a Binance Chain hackathon by a pseudonymous team. The team has never revealed its real identity, but it continued the research after the hackathon. Later, advances in Tendermint, the Cosmos SDK, and a working implementation of a threshold signature scheme (TSS) helped them develop a fully cross-chain decentralized exchange. TSS is a cryptographic primitive for distributed key generation and signing.

THORChain started by allowing trades of Ether (ETH), Bitcoin (BTC), Litecoin (LTC), and Binance Chain (BNB), but more chains are coming shortly; a limited mainnet called "Multichain ChaosNet" was released in April 2021.

Also read about Tendermint and Cosmos SDK.

One important thing to mention here is that the native token of THORChain is RUNE. Every asset pooled on THORChain is paired in its pool with an equal value of RUNE.

RUNE

RUNE is the native coin of THORChain, powering its economic ecosystem and providing incentives to the network. It serves as the settlement asset and liquidity in the network, providing security and governance to the THORChain network. The total supply of RUNE is 484 million tokens, and 50% of the original supply has been burnt.

Roles

The network has specific roles. The following are the roles in the THORChain network.

Liquidity Providers

Liquidity providers are the users who add their assets to the pool to gain incentives and rewards.

Swappers

Swappers use the pools' liquidity to swap their assets, paying a fee to the pool.

Node Operators

Node operators are those who validate transactions, reach consensus, and add those transactions to the blockchain.

Traders

Traders help maintain the pools by arbitraging their prices, paying fees with the intent of earning a profit.

THORChain Technology

Churning

Churning is a mechanism in which only the active set of nodes can sign transactions while others wait on standby. Every 50,000 blocks, the churning mechanism kicks in and replaces some of the older nodes with nodes from the standby set. It makes sure that each node that fulfills the criteria gets a turn verifying transactions. Even though a large amount of RUNE is required to run a fully functioning node, nodes with less RUNE can still verify transactions without signing them.
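
A toy sketch of the rotation idea follows; the selection rules here (oldest nodes out, largest bonds in) are simplifications, and the real THORChain logic also weighs node behaviour:

```python
CHURN_INTERVAL = 50_000   # blocks between churns, per the figure above
NODES_TO_ROTATE = 2

def churn(active, standby):
    # Rotate out the nodes that have been active the longest...
    retiring = sorted(active, key=lambda n: n["active_since"])[:NODES_TO_ROTATE]
    # ...and rotate in the standby nodes with the largest bonds.
    joining = sorted(standby, key=lambda n: n["bond"], reverse=True)[:NODES_TO_ROTATE]
    new_active = [n for n in active if n not in retiring] + joining
    new_standby = [n for n in standby if n not in joining] + retiring
    return new_active, new_standby

def maybe_churn(block_height, active, standby):
    if block_height % CHURN_INTERVAL == 0:
        return churn(active, standby)
    return active, standby

active = [{"id": f"node{i}", "bond": 300_000 + i, "active_since": i} for i in range(4)]
standby = [{"id": f"standby{i}", "bond": 400_000 + i, "active_since": 0} for i in range(3)]
active, standby = maybe_churn(50_000, active, standby)
print([n["id"] for n in active])
```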

THORNodes

A THORNode services the THORChain network. THORChain is designed so that anybody with sufficient funds can join the secured network, and it goes a step further with an aggressive churn schedule that continuously rotates nodes in and out. Each THORNode comprises multiple independent services and runs a full node for each connected chain.

Bifröst Protocol

Bifröst is the protocol that connects each chain. Once all the nodes are in sync, observers start monitoring the vault addresses. Whenever they see an inbound transaction, they validate it and convert it into a witness transaction. THORChain collects the witness transactions from each node signer and confirms that they are identical. As the nodes reach consensus, the transaction moves from a pending state to a finalized state.

Source: https://docs.thorchain.org/technology

THORChain State Machine

The state machine is responsible for executing the finalized transactions' logic and delegating them to the outbound vault.

Source: https://docs.thorchain.org/technology

Bifröst Signer

Once a transaction reaches a finalized state, the signer marks the valid outbound transaction for its destination chain. The transaction is then sent to the TSS module, which performs the key signing and broadcasts it to the respective chain.

Source: https://docs.thorchain.org/technology

How Does THORChain Work?

The THORChain protocol is a network built with Tendermint and the Cosmos SDK, in which the application layer is decoupled from the consensus and networking layers.

The consensus mechanism in THORChain is significant as the nodes inside the protocol must work together to record the transactions coming from different blockchains. To understand how it works, let’s take a simple example over here.

Let's assume Alice wants to trade ETH on the Ethereum network for BNB on Binance Chain. Alice sends a transaction to the network's ETH vault, which is monitored by the THORChain network. The Bifröst protocol acts as the connecting layer between THORChain and the other blockchain networks; its function is to keep track of the vault addresses and inbound transactions. First, the ETH is swapped to RUNE in the ETH pool, and once the nodes reach consensus, the RUNE is swapped into BNB.

One essential thing to mention: if a person wants to trade ETH for BNB, the user is responsible for paying the ETH gas fee, while the BNB network fee is deducted from the outbound amount.
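
The double swap through RUNE can be sketched with THORChain's slip-based pool formula, output = (x * X * Y) / (x + X)^2; the pool depths below are made-up numbers purely for illustration:

```python
def swap_output(x, X, Y):
    """x = input amount, X = input-side pool depth, Y = output-side pool depth."""
    return (x * X * Y) / ((x + X) ** 2)

eth_pool = {"ETH": 10_000, "RUNE": 2_000_000}    # hypothetical depths
bnb_pool = {"BNB": 50_000, "RUNE": 1_500_000}

eth_in = 10
rune_out = swap_output(eth_in, eth_pool["ETH"], eth_pool["RUNE"])    # ETH -> RUNE in the ETH pool
bnb_out = swap_output(rune_out, bnb_pool["RUNE"], bnb_pool["BNB"])   # RUNE -> BNB in the BNB pool
print(round(bnb_out, 4), "BNB (before the outbound network fee)")
```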

Source: https://docs.thorchain.org/technology

Conclusion

THORChain envisions empowering its economic ecosystem, and its continuous growth is proving out that vision. More and more nodes are joining, increasing the trading volume and the total value locked in the network.

Also read ConsenSys Quorum Blockchain

Ethereum is looking to carry out computation beyond the blockchain itself, hence the development of Ethereum Plasma technology. The reason for moving calculations off the blockchain is to let the chain scale to billions of calculations per second. Another goal is to achieve this with the least possible number of on-chain updates.

With Ethereum Plasma technology, nodes are not required to confirm all of the data each time a smart contract concludes. According to the Ethereum blockchain developers, Plasma technology allows transactions between trusted nodes. You can use Plasma without referring to the main chain for every step, and you can apply it in a variety of projects. Cryptocurrency exchanges, blockchain systems, and decentralized social networks can use Plasma technology to improve speed and protection.

Ethereum Plasma technology is a second-layer scaling solution for growth. It is likely to become the second fully deployed scaling solution on the Ethereum mainnet, only behind state channels. Plasma is a framework that gives developers the opportunity to create child blockchains that use the Ethereum mainnet as a layer of trust. Plasma child chains can be designed to meet specific use cases, especially those not currently feasible on the Ethereum blockchain. Decentralized applications that would otherwise require users to pay huge transaction fees are best suited for Ethereum Plasma.

What are the key features of Ethereum Plasma?

The Ethereum Plasma is made up of the following elements:

How Does Plasma Technology Work?

Ethereum plasma aims to establish a framework of secondary chains that will rarely communicate and interact with the main chain. The main chain is the Ethereum blockchain system. This Ethereum plasma framework is built to function as a blockchain tree. The blockchain tree is arranged hierarchically to ensure smaller chains are created on top of the main one. 

These smaller chains are also called Plasma chains or child chains. It is vital to note that Plasma chains are similar to sidechains, but they are not the same thing. The Plasma structure is built using smart contracts and Merkle trees, enabling the creation of an unlimited number of child chains. These child chains are smaller copies of the parent Ethereum blockchain, and more chains can be created on top of each child chain, giving rise to a tree-like structure.

Generally, every child chain is a customizable smart contract designed to work singularly, serving different needs. The implication is that the chains can coexist and still be operating independently. In the end, Plasma will enable companies and businesses to implement scalable solutions in various ways. 

Therefore, the successful development and deployment of Ethereum Plasma technology will ensure the main chain will be less likely to get congested. Each child chain is designed to work in a distinct way to achieve a specific goal. These goals are not necessarily related to the goals of the main chain. In essence, child chains would reduce the overall work of the main chain. 

How does the Stellar architecture work? Read here.

Ethereum Plasma Architecture  

Ethereum Plasma arranges blockchains in a tree-like structure, each managed as a separate blockchain. Enforced blockchain history and MapReducible computation are committed into Merkle proofs. By framing activity into a child blockchain supported by the parent chain, users can achieve broad-scale applications with reduced trust requirements, presuming the root blockchain remains available and accurate.

Ethereum Plasma Architecture

All blockchain computation is framed into a set of MapReduce functions, and Plasma additionally includes a way to do Proof-of-Stake token bonding on top of existing blockchains, relying on Nakamoto-consensus incentives to discourage block withholding. This is guaranteed by implementing a smart contract on the root blockchain that uses fraud proofs.

In the Ethereum Plasma architecture, correctness typically depends on participants checking the chain. Participants must thoroughly check each block for accuracy before it can be considered final. A bond is posted so that any claimed data is subject to a challenge period, within which participants may verify that the data conforms to the state.

Plasma also provides a framework that allows participants to enforce consequences, but only if an incorrect state is claimed. The proof model enables interested participants to assert ground truths to non-interested participants on the parent blockchain. This architecture is used for both payments and computation, making the blockchain the decision-maker for contracts.

How Secure is Plasma Technology 

Fraud proofs secure all communication between the child chains and the main chain. The root chain is therefore responsible for maintaining the security of the network and punishing malicious actors. Each child chain has its own mechanism for validating blocks, and fraud proofs ensure that users can report dishonest nodes in case of malicious activity. Users can also protect their funds and exit whenever there is malicious activity. Fraud proofs are the mechanism that allows a Plasma child chain to file a complaint with its parent chain or the root chain.

Plasma employs the Ethereum blockchain as an arbitration layer, and users can still return to the root chain as a trusted source. The Ethereum main chain is linked to child chains through root contracts. These root contracts are smart contracts on the Ethereum blockchain. 

Conclusion 

Ethereum Plasma has what it takes to enhance the scalability of blockchain systems. At the moment, the Plasma protocol is still under test; however, experts who were part of the tests noted a high throughput of up to 5,000 transactions per second. The implication is that an increase in the number of projects on the Ethereum platform will not be correlated with network transaction delays.

Also learn about Optimistic Rollups in our article.

Overview

One of the biggest setbacks of the Ethereum protocol is its lack of throughput. Optimistic Rollups (ORUs) are a Layer 2 technology that helps scale Ethereum smart contracts and dApps. Optimistic Rollups can scale the Ethereum protocol to somewhere between 100 and 2000 transactions per second (TPS). The major advantage they have over other scaling solutions is that they enable Turing-complete smart contracts on Layer 2 using the Optimistic Virtual Machine (OVM), thereby reducing user transaction costs.

All the scaling solutions we have seen are Layer 2 solutions, meaning they deploy alongside the main Ethereum chain. The drawback is that such a Layer 2 solution cannot bring a fundamental change to the chain itself. Even so, Optimistic Rollups seem to have what it takes to scale the Ethereum protocol.

There are two types of rollup: Optimistic and ZK Rollups. Rollups work like Plasma: both scale Ethereum by moving transactions off-chain onto a Layer 2 sidechain, which is secured by the mainnet (Layer 1). Plasma and rollups deploy smart contracts to the mainnet, and those contracts take custody of all funds deposited into the sidechain.

How Do Optimistic Rollups Work?

Optimistic Rollups (ORUs) introduce key actors called aggregators. These bonded aggregators bundle transactions submitted by users into rollup blocks on the sidechain; users pay the aggregators a fee to have their transactions included. The aggregators then post the new sidechain state root to the mainnet. In practice, aggregators accumulate a large number of transactions and publish them to the smart contract on the mainnet: they are trusted to deploy contracts, process all user transactions, and finally add them to a "rollup block".

Any user can become an aggregator and start processing rollup blocks. They do this by putting down a bond in the mainnet contract. Additionally, any user can also download the rollup blocks and earn a reward. The reward is earned when a user proves that a state transition is invalid. Once a user successfully invalidates a block, they slash the aggregator’s bond and the bond of all aggregators who built on top of the invalidated block. Consequently, the challenger earns a portion of the slashed bonds. 

This process allows consensus to be reached on a batch of transactions instead of the network having to reach consensus on each transaction. A new rollup block can be challenged if the aggregator fails to include a transaction or posts an invalid transaction. The Optimistic Rollup contract consists of three basic parts:

Transactions go straight to the Canonical Transaction Chain when users move their ETH from the Ethereum main chain. Once a transaction executes, the results are posted to the State Commitment Chain. Optimistic Rollups are assumed to be correct until someone proves them wrong. The Fraud Verification Contract recognizes when there is cheating and deletes the state if it is wrong; it will not touch the State Commitment Chain if it recognizes there is no cheating.
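
A minimal sketch of this "publish now, challenge later" flow, with hypothetical names and a plain hash standing in for the state root, might look like the following; real systems use interactive dispute games rather than simple re-execution:

```python
import hashlib
import json

CHALLENGE_WINDOW_BLOCKS = 45_000   # illustrative, roughly a one-week window

def execute_batch(transactions, state):
    # Re-execute the batch from an agreed pre-state and return a stand-in state root.
    for tx in transactions:
        state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
        state[tx["from"]] -= tx["value"]
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

batch = [{"from": "alice", "to": "bob", "value": 25}]

# A dishonest aggregator posts a bogus root along with a 1 ETH bond.
claim = {"batch": batch, "claimed_root": "0xdeadbeef", "bond": 10**18}

# Any watcher re-executes the published batch; a mismatch means the claim is fraudulent.
if execute_batch(claim["batch"], {"alice": 100, "bob": 0}) != claim["claimed_root"]:
    challenger_reward = claim["bond"] // 2   # challenger receives a share of the slashed bond
    print("fraud proven, bond slashed, challenger reward:", challenger_reward)
```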

Optimistic Rollups rely heavily on trust and game theory. In general terms, there is an assumption that every aggregator will act honestly, because their bond will be slashed if they act maliciously. In like manner, aggregators determine how fees are set, and the more developers they can convince to trust them with their contracts, the more they stand to gain.

The Merged Consensus Of Optimistic Rollups

Merged consensus is one of the major features that distinguish ORUs from other scaling solutions. It is a consensus protocol that can be verified entirely on-chain, except for actual block validation, which takes place implicitly through fraud proofs. The Optimistic Rollup sidechain is fork-free by design (there is no split in the network), so a non-trivial fork choice rule is irrelevant. A blockchain fork is basically a split in the blockchain network or a change in protocol.

Here, block validation is executed off-chain, and blocks can be proven incorrect on-chain through fraud proofs. This leaves us with leader selection and Sybil resistance. The leader selection algorithm decides who is permitted to attempt to progress the chain by adding a new block. A Sybil attack happens when an attacker subverts the reputation system of a network service by creating a large number of pseudonymous identities and using them to gain a disproportionately large influence.

For the ORUs, the main chain provides Sybil resistance, while leader selection is implicit and post facto. Therefore, ORUs do not need a complex leader selection algorithm or an expensive Sybil resistance mechanism to provide security. The consensus protocol of Optimistic Rollups runs completely on-chain in a smart contract. Therefore, it does not affect or require any support from the mainnet’s consensus rules. Thus ORUs are permissionless through merged consensus. 

Transaction Latency 

Contrary to certain claims, Optimistic Rollups do not reduce transaction latency. Every sidechain block on an ORU needs to be committed to the main chain, so users do not get block times lower than the main chain’s. Aside from the use of fully-collateralized channels, there is no known secure and trustless way of reducing this latency.

However, users don’t have to wait for the sidechain’s confirmation before accepting its transactions. Since Optimistic Rollups are fork-free, valid blocks accepted on-chain are guaranteed to finalize eventually, and all the data is available because every block is posted on-chain. Users can therefore carry out client-side validation and accept transactions immediately.

Difference Between Optimistic Rollup and ZK Rollup

Optimistic Rollups sacrifice some scalability to accommodate smart contracts on layer 2, and there is a delay window during which users can challenge invalid blocks from bonded aggregators. ZK Rollups, on the other hand, submit a ZK-SNARK proof to the main chain rollup contract, and the main chain smart contract verifies and accepts all valid proofs. This process happens almost instantly, and it scales immensely.

Example of ORU 

A good example of an Optimistic Rollup in action is UniPig, a demo for Uniswap built using ORU technology in collaboration with Plasma Group. The demo can process 250 transactions per second (TPS), and the developers claim it has the potential to reach 2,000 TPS.

Conclusion 

Optimistic Rollups will undoubtedly increase the throughput of Ethereum, driving further innovation on the scalable and sustainable data availability front. As Ethereum prepares to launch Eth2 (Serenity), ORUs will play a vital role in helping bridge the gaps. Currently, it is not clear whether the ecosystem will adopt ORUs at large, since many layer 2 projects are banking on their own solution(s) to drive the success of their products. However, time will tell how widely the mainstream adopts ORUs.

Also read Chainlink: An In-Depth Explanation

Xord is a Blockchain development company providing Blockchain solutions to your business processes. Connect with us for your projects and free Blockchain consultation at https://xord.solutions/contact/

The astonishing bull run of Bitcoin in 2017 made Blockchain enthusiasts start looking into Blockchain scalability. Due to Bitcoin’s popularity that year, the Bitcoin Blockchain reached its limit in transaction throughput: there were more people trying to use the network than the network could handle. This led to an increase in the transaction fees required for one user to transfer BTC to another.

The bottleneck brought up debates on the issue of scaling the Blockchain network. Although scalability in general terms means the capability of a system to handle an increasing amount of work, it has a broader meaning in the Blockchain domain. You may have come across the chart comparing the transaction speeds of cryptocurrencies to PayPal and Visa. 

Visa can process about 24,000 transactions per second (tps), whereas Bitcoin can only handle seven transactions per second and Ethereum around 20. Before Blockchain can reach mass adoption, it is imperative to address the issue of Blockchain scalability.

The Concept Of Throughput, Finality, and Confirmation Time

To help you understand these concepts, consider this example. Imagine you are waiting at the train station to catch a train home. It takes the train 15 minutes to reach the station, and 45 minutes to arrive at your destination. There is one more twist: it is a popular route, and there is always a long queue.

After 15 minutes, the train arrives at your station, but there are several people ahead of you in line. Therefore, you will have to wait another 15 minutes before you can catch a train home.

Item: Time taken
Waiting time to get onto a train: 15 minutes
From the train station to home: 45 minutes
Total travel time: 60 minutes
Capacity of the train: 10 persons per minute

Now let’s explain throughput, finality, and confirmation using the concepts above.

You should note that measuring throughput (tps) alone is not enough; we also need to consider confirmation time. It is not enough for a protocol to process up to 100,000 tps if it takes two days to confirm a transaction. In the analogy, when there is congestion in the network, throughput remains the same (the train can still carry 10 people per minute).

However, the confirmation time increases because of the long wait for the first block. Finality is fixed, like the “6 block confirmation” waiting time we need to ensure a block is not reversible. Depending on the situation, the average block waiting time varies.
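
To make the distinction concrete, here is a small Python sketch using the train numbers above; the figures are purely illustrative, not real network parameters.

capacity_per_min = 10      # people (transactions) the train carries per minute
train_interval_min = 15    # a train (block) leaves every 15 minutes
finality_trains = 6        # wait 6 trains (blocks) before treating a trip as final

def confirmation_time(people_ahead):
    """Minutes until you board, given how many people are queued ahead of you."""
    per_train = capacity_per_min * train_interval_min   # 150 people per train
    trains_to_skip = people_ahead // per_train
    return (trains_to_skip + 1) * train_interval_min

print(confirmation_time(0))    # 15 minutes: you catch the first train
print(confirmation_time(400))  # 45 minutes: two full trains leave before yours
print(finality_trains * train_interval_min)  # 90 minutes until "finality"
# Throughput is still 10 people per minute in both cases; only the wait grows.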

The Limitations To Blockchain Scalability 

Cost

More nodes will be needed to process the increased traffic caused by more users and a growing number of transactions. The running cost of all these nodes is enormous, and miners prefer transactions with higher fees. This means that getting a transaction verified in time during peak periods can go from costing a fraction of a cent to a few dollars, and there is no way to tell how far these fees could skyrocket at scale.

The Response Time

Every transaction requires peer-to-peer verification, and depending on the number of blocks involved, this can become time-consuming. Currently, Bitcoin creates one block roughly every ten minutes. Therefore, the more transactions standing in the queue, the longer their processing takes. With more people on the Blockchain, the waiting time per transaction will only grow.
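
A rough back-of-the-envelope calculation shows where Bitcoin’s low throughput comes from; the block size and average transaction size below are assumptions for illustration only.

block_size_bytes = 1_000_000   # ~1 MB block
avg_tx_size_bytes = 250        # assumed average transaction size
block_interval_sec = 600       # one block roughly every ten minutes

txs_per_block = block_size_bytes // avg_tx_size_bytes   # 4,000 transactions
tps = txs_per_block / block_interval_sec                # about 6.7 tps
print(round(tps, 1))  # in the same ballpark as the ~7 tps cited earlier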

Blockchain Scalability Solutions

Scalability has been one of the major hurdles to the mainstream adoption of Blockchain technology, so Blockchain enthusiasts have been working round the clock to develop solutions that improve it. Over the years, several solutions have emerged that seek to improve the scalability of Blockchain technology.

Ethereum 2.0 (Eth2)

Dubbed by many as “the promised savior,” Ethereum has gone ahead to establish itself as the world’s first decentralized supercomputer. Eth2 is an upgrade to the Ethereum network, incorporating the scalability and security it needs to serve us all. The first stage, phase 0, is planned to launch this year (2020). Ethereum 2.0 will move away from Proof-of-Work, the consensus mechanism it shares with Bitcoin.

If successful, Ethereum 2.0 will outperform the current limit of 10-15 tps, making the network more efficient and allowing dApps to operate with reduced latency and lower network fees. However, Ethereum faces the scalability trilemma: there is a trade-off between scalability, security, and decentralization, and only two of the three can be fully satisfied at once.

Even so, Eth2 will try to make the network more scalable while improving its security and decentralization. Unlike Proof-of-Work (PoW), which Ethereum 2.0 is moving away from, Proof-of-Stake (PoS) requires validators to propose and vote on blocks, with staked collateral as the only opportunity cost. By forgoing the energy-intensive mining process of PoW, PoS is more efficient.

At the moment, the Ethereum network can only process between 10-15 transactions every second, which poses a great problem when network traffic is high. The reason for the low throughput is that every node must process every transaction. Although this makes the network very secure, it comes at the cost of speed. To improve scalability, Eth2 employs a process called sharding.

Sharding and Shard Chains 

Sharding is the scaling solution being implemented to achieve long-term scalability. It works by eliminating the need for every node to verify every transaction, and shard chains are central to this. Think of them as parallel Blockchains sitting comfortably within the Ethereum network and handling some of the network’s processing work. Shard chains have the capacity to transform Ethereum into a superhighway of interconnected Blockchain networks.
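
The core idea is easy to sketch: split accounts (and their transactions) across shards so that each node only verifies its own shard’s work. The hashing rule and shard count below are assumptions for illustration, not Eth2’s actual design.

import hashlib

NUM_SHARDS = 64

def shard_of(account):
    """Deterministically map an account to one of the shards."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def transactions_for_my_shard(transactions, my_shard):
    """A node only processes the transactions that land in its assigned shard."""
    return [tx for tx in transactions if shard_of(tx["sender"]) == my_shard]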

The Beacon Chains

While the shard chains process transactions in parallel, the beacon chain ensures that they all stay in sync. The beacon chain is a new Blockchain at the core of the Ethereum network that provides consensus to all the shard chains. Validators create blocks of transactions on each shard chain and report back to the beacon chain. Together, they improve the scalability of the network.

Polkadot

Polkadot is an experimental project initiated by the Web3 Foundation to build a decentralized web. The project tackles the challenges of the centralized web, which is why it is associated with Web 3.0.

Polkadot offers transactional Blockchain scalability by spreading transactions across multiple parallel Blockchains. Polkadot’s multi-parachain architecture provides a horizontal scaling solution, allowing a high number of transactions to be processed in parallel.

At the heart of the Polkadot architecture is the Relay Chain, which connects the other chains (parachains) together by coordinating cross-chain transactions and providing a shared consensus mechanism across the platform. Polkadot also permits parachains to have state machines customized for specific tasks, which promotes efficiency and speed.
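
Conceptually, the pattern looks like the sketch below: parachains build their own blocks in parallel, and the relay chain records a compact summary (here, a hash standing in for a state root) from each. The names and data structures are illustrative, not Polkadot’s actual API.

import hashlib

def state_root(parachain_block):
    """Stand-in for a parachain's state root: a hash over its transactions."""
    return hashlib.sha256("".join(parachain_block).encode()).hexdigest()

def relay_chain_block(parachain_blocks):
    """The relay chain records one root per parachain, coordinating them under
    shared consensus without re-executing their transactions."""
    return {para_id: state_root(block) for para_id, block in parachain_blocks.items()}

print(relay_chain_block({0: ["tx_a", "tx_b"], 1: ["tx_c"]}))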

That way, the network achieves scalability in a decentralized manner. 

Conclusion 

Most experts believe that Blockchain cannot attain decentralization, security, and scalability simultaneously. However, this may be a hasty generalization, since solving it could simply be a matter of time. Ethereum, Polkadot, and others are currently working on Blockchain scalability.

Also read our latest article on Blockchain Architecture here.

Xord is here for your Blockchain projects and free Blockchain consultation. Connect with our Blockchain experts at https://xord.solutions/contact/

What is supply chain management?

Supply chain management is the process of managing a chain or network of activities, individuals, procedures, organizations, and resources involved in moving materials from the supplier to the customer in the most economical way. That is to say, the supply chain is a step-by-step process involving physical and information flows. The physical flows cover the movement of goods and resources from the supplier to the customer; they are the visible part of the supply chain. The information flows cover all the communication and coordination that takes place between the parties in the supply chain. The four-step process below shows how exactly a supply chain works:

1) Supply of raw material:
This is the first step: the supplier delivers the raw material of the product to the manufacturer, who processes it further.

2) Manufacturing of the goods:
The manufacturer makes the goods from the raw materials and then sends them to the distributor.

3) Distribution of the goods:
After this, the distributor delivers the goods to different retailers.

4) Selling the product:
As soon as they receive the finished goods, retailers sell them to the customers.

Issues in a supply chain:

Traceability issues:

The ability to track a product at every stage of the supply chain is now crucially important. Customers demand more than before: they want data about where products come from and what stages the raw materials went through before reaching them. Sharing authentic information about every step builds consumer trust in a brand, but it is not easy to do, and a lack of traceability causes consumers to lose that trust. If, for instance, there is an outbreak of a food-borne disease, it can take days or sometimes weeks to trace its actual cause.

Cost issues:

The cost of transferring goods from one place to another rises very quickly. It includes transport fuel, logistics, manpower, and the investment in software and management staff.

Fraudulent data issues:

The parties involved in the transfer of goods may sometimes be corrupt or disloyal to each other and may alter the data about the goods.

Communication issues:

At other times, the parties may be honest, but the data they share is fragmented or incomplete because they know little about each other’s plans of action. For instance, information about the quality of the raw material for a certain good may be misinterpreted by someone and forwarded to the next party. Poor communication results in poor records, inefficiency, waste, recalls, and customer dissatisfaction.

Safety issues:

The safety of products matters a lot to customers these days, especially when it’s a food product; they value their health over everything else. Hence, they demand much more from manufacturers and retailers, and manufacturers are under great pressure to produce the best quality products they can. Unpleasant weather conditions, poor storage spaces, and transportation delays all affect the safety of a product.

Time issues:

It takes days, and sometimes weeks, to sign an agreement between a manufacturer and a supplier, or between a customer and a vendor, just to initiate a supply chain. The contracts take even more time because they require lawyers and banks.

A real-world example of a supply chain:

The Coca-Cola Company, headquartered in Atlanta, has one of the biggest supply chain systems in the world. It sells different kinds of beverages, but its specialty is Coca-Cola itself. The company manufactures its concentrated syrup and then sells it to one of its partners, such as Coca-Cola Enterprises. It has a franchised distribution system for delivering this syrup to local bottlers all around the world. These enterprises combine the concentrate with other ingredients to manufacture and package the drink, then market the product to retailers and, finally, to customers.
The supply chain of the Coca-Cola Company spans more than 200 countries, and managing it has always been a challenge for the company. Because of its complexity, it has been inefficient, costly, and short on visibility, and cross-company transactions in particular have been inefficient. The company wanted to improve cash flow in the supply chain and bring about efficiency.

How is Blockchain revamping the supply chain industry?

Blockchain is more than just a way to transfer digital currency directly between parties. Its main properties (transparency, immutability, and security) help supply chain management in tons of ways.


Transparency:

Transparency in supply chain management gives companies a clear view of all the information and data held by manufacturers, vendors, retailers, and customers. It helps in tracking the goods: where they are at the moment, their delays, which path they are taking, where they are stored, and when they will be delivered to customers. This way, the chances of misplacing or wasting a product become very small, and any mishap can be reported directly without extra effort from the parties or additional transportation costs.

Immutability:

The immutability of Blockchain technology eliminates the corruption aspect of the supply chain, so there is no fragmented data or miscommunication. No one can change or edit a transaction record, payment record, or any other kind of information once it is sent through a Blockchain-based system. Replacing paperwork written and checked by workers with efficient and trustworthy Blockchain-based records also helps to identify issues with products faster, and negotiations between parties become quicker, improving overall communication.
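
The tamper-evidence behind this immutability is easy to illustrate: each record commits to the hash of the previous one, so editing any earlier record breaks the chain. The sketch below is a conceptual toy in Python, not a production ledger.

import hashlib, json

def add_record(chain, data):
    """Append a record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"data": data, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def is_intact(chain):
    """Recompute every hash; any edit to an earlier record is detected."""
    for i, record in enumerate(chain):
        body = {"data": record["data"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != expected:
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_record(ledger, "supplier shipped 100 crates of mangoes")
add_record(ledger, "distributor received 100 crates")
ledger[0]["data"] = "supplier shipped 90 crates"  # attempted tampering
print(is_intact(ledger))                           # False: the edit is detected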

Security:

Blockchain adds security to the system by not letting any third party access the information entered by the workers of an organization. Moreover, the ability of Blockchain to connect different nodes or ledgers with each other while keeping their integrity intact is a property that corporations strive to implement in their systems to build brand trust with customers. Permissioned Blockchain platforms, such as Hyperledger Fabric, are now helping companies like Walmart not only to trace their food products but also to maintain the integrity of data among the different parties involved in the supply chain process.

A real-world example of a supply chain with Blockchain:

Traceability of products with Hyperledger Fabric:

Walmart is a multinational retail company, headquartered in America, that operates a huge chain of supermarkets, department stores, clubs, and grocery stores. It is the world’s largest company by revenue according to the 2019 Fortune Global 500 list, and the largest employer in the world, with 2.2 million employees working across 11,438 stores and 27 clubs in 27 different countries. Walmart has a larger and more complex food supply chain network than many other corporations that operate hypermarkets, and it is hard to keep track of food supplies as they move between the different points of that network. Walmart had struggled to implement systems that could help with its supply chain, so the company decided to explore Blockchain technology.
The food safety and technology team at Walmart partnered with IBM and planned two proof-of-concept (POC) projects to test how the distributed ledger technology, Blockchain, could help their food supply chain network. The POCs traced two items in two different countries: mangoes in US stores and pork in China stores.
The Hyperledger Fabric blockchain-based food tracking and traceability system finally worked. Tracing the provenance of mangoes used to take 7 days before the system was implemented; now it takes 2.2 seconds. For pork tracing, the system allowed a certificate of authenticity to be uploaded to the blockchain, which brought more trust into the system. According to a case study written on this POC experiment, Walmart had previously been building centralized systems that were inefficient for such a supply chain ecosystem, and that was the mistake it had been making. When the IT department put forward the idea of a distributed ledger system for the ecosystem, the Walmart team started imagining the possibilities.
After testing, Walmart deployed the new system to trace the origin of 25 different products from 5 suppliers. Currently, it traces products like poultry, fresh produce, dairy, and multi-ingredient products with the latest Hyperledger Fabric blockchain-based supply chain system, and it plans to extend the system to further products in the near future.
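
To see why trace time can collapse from days to seconds, here is a conceptual Python sketch (not Hyperledger Fabric code): every handoff is recorded against a product ID in a shared ledger, so provenance becomes a single indexed lookup instead of a paper trail across many parties. All the names below are hypothetical.

from collections import defaultdict

provenance = defaultdict(list)   # product_id -> ordered list of handoff events

def record_handoff(product_id, party, location, certificate=None):
    """Each party appends its handoff as the goods move through the chain."""
    provenance[product_id].append(
        {"party": party, "location": location, "certificate": certificate}
    )

def trace(product_id):
    """Return the full farm-to-store history of a product in one lookup."""
    return provenance[product_id]

record_handoff("mango-lot-42", "farm", "Mexico")
record_handoff("mango-lot-42", "packer", "Texas", certificate="quality-check-7")
record_handoff("mango-lot-42", "store", "Arkansas")
print(trace("mango-lot-42"))
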
Take a look at the actual case study published by Walmart here.

Want to know about more use cases of Blockchain? Talk to a Blockchain expert from Xord here and get FREE consultation.

