Why is Resource Pricing a Big Deal in Blockchain?

Rabia Fatima

Sep 09, 2022

TL;DR: In this article, we will discuss what resource pricing in blockchain means and how we can limit and price the submission of transactions that get included in the chain. We will also discuss the trade-offs between different approaches that can help us understand and improve the status quo of resource pricing. Excited? Let's quickly jump into this.

In the blockchain ecosystem, the scalability trilemma is one of the most talked-about problems. However, when we talk about scalability, we tend to confuse it with the throughput of the blockchain. The common misconception is that the higher the throughput, the more scalable the blockchain. That is not true: throughput is transactions per second, whereas scalability is transactions per second divided by the cost to validate all of those transactions.
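To make the distinction concrete, here is a minimal sketch in Python; the chains and numbers are entirely made up for illustration.

```python
# Toy comparison of throughput vs. scalability, using the definition above:
# scalability = transactions per second / cost to validate all transactions.
# All numbers below are hypothetical, purely for illustration.

def scalability(tps: float, validation_cost_usd_per_day: float) -> float:
    """Scalability = throughput divided by the cost of validating it."""
    return tps / validation_cost_usd_per_day

# Chain A: high throughput, but validators need expensive machines.
chain_a = scalability(tps=10_000, validation_cost_usd_per_day=5_000)

# Chain B: lower throughput, but a cheap laptop can validate it.
chain_b = scalability(tps=1_000, validation_cost_usd_per_day=10)

print(f"Chain A: {chain_a:.2f} tx/s per dollar of validation cost")
print(f"Chain B: {chain_b:.2f} tx/s per dollar of validation cost")
# Chain B comes out ~50x more "scalable" despite 10x lower throughput.
```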

This introduces the resource factor into our scalability problem. Why should resources cost less? Because if the validator machine is so expensive that an ordinary person cannot set it up and run it, then the system is not scalable: not everyone can spin up a costly machine.

In a blockchain we need to measure these resources in units, or in some sort of weighting scheme. A block that takes a million years to validate because of slow resources cannot simply be abandoned, so resource usage has to be limited up front to prevent denial-of-service attacks. Once we have metered these resources, we put a bound on resource consumption for each block.

Understanding Social Cost from First Principles

How do you define a blockchain from first principles?

Well, a blockchain is a replicated state machine where a bunch of nodes perform independent transaction execution to arrive at some state of a virtual machine. You can imagine the state machine as a mathematical abstraction that reaches a new state after applying a state transition function. Now remember, all of these activities consume resources.

We already know that every transaction confers some personal benefit on its sender, but transactions also impose a social cost on the network, since every node has to process them independently. This was the model presented by Satoshi. In such cases, general economic theory dictates that the incentive for performing these state-transition activities has to exceed the social cost that each transaction imposes on the network.

The social cost of running a state machine can be broken down into five fundamental types of resource expenditure (a small sketch follows the list below).

  1. Network Bandwidth: The cost of downloading the transactional data in order to reach a new state.
  2. History Storage: The cost of storing each transaction's data.
  3. Computation: The cost for a node to execute and verify a transaction.
  4. Memory Bandwidth and Capacity: The cost of reading and writing data in memory. Let's understand this with an example. Suppose you want to copy a few kilobytes of data from one location to another in a VM. That copy is not free: it consumes memory reads and writes, and since memory has finite capacity and bandwidth, there is a social cost associated with it.
  5. Random-access Disk Bandwidth and Capacity: The cost of random-access reads and writes to storage (the state). This is different from history, where we only ever append data.
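As a rough mental model of those five buckets (not any client's actual accounting; the field names, numbers, and weights are illustrative assumptions), the per-block social cost can be thought of as a weighted sum:

```python
from dataclasses import dataclass

@dataclass
class BlockResourceUsage:
    bandwidth_bytes: int   # bytes every node must download
    history_bytes: int     # bytes appended to history forever
    compute_units: int     # abstract cost of executing/verifying transactions
    memory_bytes: int      # bytes read/written in memory
    state_io_ops: int      # random-access reads/writes to state on disk

def social_cost(u: BlockResourceUsage, weights: dict[str, float]) -> float:
    """Total social cost = weighted sum over the five resources."""
    return (weights["bandwidth"] * u.bandwidth_bytes
            + weights["history"] * u.history_bytes
            + weights["compute"] * u.compute_units
            + weights["memory"] * u.memory_bytes
            + weights["state_io"] * u.state_io_ops)

# Example: one block's (made-up) usage, with equal unit weights.
usage = BlockResourceUsage(2_000_000, 2_000_000, 10_000_000, 5_000_000, 20_000)
print(social_cost(usage, {"bandwidth": 1, "history": 1, "compute": 1,
                          "memory": 1, "state_io": 1}))
```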

Before diving into these five fundamentals, here's a subtle reminder: in the blockchain world, we have two transaction models, one is the UTXO model, used in Bitcoin, and the other is the account-based model, used in Ethereum.

In Bitcoin, a node has to download approximately 1–4 MB of block data every 10 minutes, while in Ethereum a node downloads smaller blocks more frequently, about 5–10 KB of data every 12–15 seconds.
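Taking the figures above at face value, a quick back-of-the-envelope calculation shows how small the average bandwidth demand is in both cases (the block sizes and intervals are the ones quoted above, used here only for illustration):

```python
# Average bandwidth demand, using the figures quoted in the text.
btc_low, btc_high = 1e6, 4e6      # 1-4 MB per Bitcoin block
btc_interval = 10 * 60            # seconds between Bitcoin blocks

eth_low, eth_high = 5e3, 10e3     # 5-10 KB per Ethereum block (as quoted above)
eth_interval = 13                 # roughly 12-15 s between Ethereum blocks

print(f"Bitcoin:  {btc_low/btc_interval/1e3:.1f} - {btc_high/btc_interval/1e3:.1f} KB/s")
print(f"Ethereum: {eth_low/eth_interval/1e3:.2f} - {eth_high/eth_interval/1e3:.2f} KB/s")
# Average bandwidth alone is tiny; the other four resources matter just as much.
```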

Resource Metering in Bitcoin:

Pre-SegWit

Before SegWit, Bitcoin simply metered transactions by their size in bytes; in today's terms, each such byte corresponds to one "vByte", which equals 4 weight units.

1 gas = 1 vByte = 4 weight units

Transactions prior to SegWit are called legacy transactions. The block was about 1 MB, i.e. 4 million weight units. For more about SegWit, read this.

Post-SegWit

The main purpose of SegWit is to improve transaction throughput on a blockchain network.

A typical transaction spends a UTXO by referencing its ID and providing a witness (a signature) proving that we are authorized to spend that UTXO. The witness does not affect the effects of the transaction, i.e. which UTXOs are created or destroyed; it simply authorizes it. So SegWit removes this signature from the transaction tree but keeps it in the block.

With SegWit, transactions are not smaller; they are exactly the same size as they were before. WHY? Because SegWit hasn't deleted the data, it has just moved it from one place to another.

4 gas (weight units) per non-witness byte

1 gas (weight unit) per witness byte

So a block has a limit of 4 million weight units, which means it can hold at most 1 MB of non-witness data or, at the other extreme, up to 4 MB of witness data.
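Here is a minimal sketch of this weight accounting. The 4-to-1 weighting and the 4M weight-unit limit are the consensus rules just described; the helper functions themselves are only illustrative.

```python
BLOCK_WEIGHT_LIMIT = 4_000_000  # weight units (WU)

def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    """SegWit weight: 4 WU per non-witness byte, 1 WU per witness byte."""
    return 4 * non_witness_bytes + 1 * witness_bytes

def fits_in_block(total_weight: int) -> bool:
    return total_weight <= BLOCK_WEIGHT_LIMIT

# Extremes: a block of pure non-witness data maxes out at 1 MB,
# while a block of pure witness data maxes out at 4 MB.
print(BLOCK_WEIGHT_LIMIT // 4)  # 1,000,000 non-witness bytes
print(BLOCK_WEIGHT_LIMIT // 1)  # 4,000,000 witness bytes
```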

Now there is a catch here. With the increased maximum block size, we have quadrupled the worst-case network bandwidth and history storage for full nodes.

However, are we really getting a quadrupled throughput by this? Let’s figure it out.

Consider a block whose data is split equally between witness and non-witness bytes. Each pair of bytes costs 4 + 1 = 5 weight units, so the block holds 800,000 bytes (0.8 MB) of each.

Non-witness data: 0.8 MB × 4 = 3,200,000 weight units.

Witness data: 0.8 MB × 1 = 800,000 weight units.

Together that is exactly 4,000,000 weight units, i.e. a 1.6 MB block.
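The same split, checked in code (this simply re-derives the numbers above):

```python
BLOCK_WEIGHT_LIMIT = 4_000_000

# Equal bytes of witness and non-witness data: each byte pair costs 4 + 1 = 5 WU,
# so the block holds BLOCK_WEIGHT_LIMIT / 5 bytes of each kind.
bytes_each = BLOCK_WEIGHT_LIMIT // 5          # 800,000 bytes = 0.8 MB

non_witness_weight = 4 * bytes_each           # 3,200,000 WU
witness_weight = 1 * bytes_each               #   800,000 WU
block_size_bytes = 2 * bytes_each             # 1,600,000 bytes = 1.6 MB

print(non_witness_weight, witness_weight, block_size_bytes)
# Compared with the 1 MB pre-SegWit block, this is a modest gain in
# transaction data per block, nowhere near the 4x worst case.
```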

With the above calculation, we can conclude that moving the witness data out of the transaction tree gives only a modest increase in throughput (from a 1 MB block to about 1.6 MB in this example), nowhere near the 4× increase in worst-case resource cost. This means that throughput was not the real bottleneck for Bitcoin.

“So what exactly led to SegWit?”

Is it network bandwidth? No, as we are now downloading more data, up to 4 MB per block.

Is it disk capacity? No, as history storage can now grow up to four times as fast.

Is it computation? Definitely not, as we have quadrupled the number of signatures a node may have to verify.

Then is it memory bandwidth and capacity? No: since blocks are bigger, nodes now move more data through memory, so SegWit did not provide any advantage for memory bandwidth either.

The exact reason Bitcoin opted for SegWit is random-access disk bandwidth and capacity! SegWit helps to bound the rate at which the state grows by taking witness data out of the transaction tree and pricing it more cheaply.

The conclusion: if we increase the load on the bottlenecked resources, we hurt decentralization by raising the cost of running a full node.

Additional Cost

Bitcoin uses a proof-of-work consensus mechanism, which inherently carries an electricity cost. Though this cost is not the subject matter here, I would like to mention that there is an additional marginal cost associated with it, on top of the expensive ASICs or mining rigs that every miner must endure.

Resource Metering in Ethereum:

Now let’s discuss how Ethereum meters its resources.

Ethereum uses an accounts-plus-storage data model, and for execution it uses a Turing-complete machine. Unlike Bitcoin's predicate-based scripts, Ethereum's execution model allows arbitrary programs, but it rules out infinite loops by limiting gas consumption.

Ethereum Gas Cost Structure Before EIP-1559

  1. A fixed base cost (21,000 gas per transaction)
  2. 16 gas per byte of transaction data (calldata); zero bytes cost only 4 gas
  3. A variable amount of gas per opcode or instruction executed

Why Do We Need Base Cost?

Ethereum has restricted its block size to the range of 0 to 30M gas. So even if we removed the base cost, you would still not be able to create an infinite-sized block with infinite transactions; there would still be a cap on block size. So why is the base cost important? It helps at the mempool level to avoid denial-of-service attacks. Imagine an account sending 0 ETH to some random address: a completely valid transaction. If it cost nothing, bots could flood the mempool with such transactions, creating overhead. So the base cost is essential for protecting the mempool. (Separately, the base fee per gas is adjusted based on network activity after every new block.)
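To make the argument concrete, here is a rough sketch of the intrinsic gas a transaction pays before any opcode runs, using the simplified numbers from the list above (21,000 base, 16 gas per calldata byte); the mempool-fee helper is purely illustrative.

```python
BASE_COST = 21_000          # fixed intrinsic gas per transaction
CALLDATA_GAS_PER_BYTE = 16  # simplified: 16 gas per calldata byte

def intrinsic_gas(calldata: bytes) -> int:
    """Minimum gas a transaction pays before any opcode executes."""
    return BASE_COST + CALLDATA_GAS_PER_BYTE * len(calldata)

def min_fee_wei(calldata: bytes, gas_price_wei: int) -> int:
    """Smallest fee a mempool can demand for accepting the transaction."""
    return intrinsic_gas(calldata) * gas_price_wei

# A 0 ETH transfer with empty calldata still costs 21,000 gas.
# Without the base cost it would cost 0 gas, and a bot could flood the
# mempool with millions of "free" but perfectly valid transactions.
print(intrinsic_gas(b""))                           # 21000
print(min_fee_wei(b"", gas_price_wei=20 * 10**9))   # 21000 * 20 gwei
```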

Again the same question arises. Why the 30M gas limit?

There are two possible reasons we have found for Ethereum selecting the 30M gas limit:

  1. Validation → Ethereum nodes can validate 50–100M gas per second on a laptop, so a full sync can catch up within a reasonable time. However, if we increase the block gas limit or decrease the block time, we hurt this margin, meaning a full sync will take longer and longer (see the back-of-the-envelope estimate after this list).
  2. State Growth → Ethereum gas is decent for metering operations, except for state growth. Ethereum wants to bound the rate of state growth; the state is about 90 GB at the time of writing, and the larger the state, the longer a sync takes.
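As a back-of-the-envelope illustration of the first point, one can estimate how long a full sync's execution would take; the cumulative-gas figure below is an assumed, made-up number used only to show the shape of the calculation.

```python
# Rough full-sync execution-time estimate using the validation rates quoted above.
# The cumulative-gas figure is an assumption purely for illustration.
validation_rate_low = 50_000_000     # gas per second on a laptop
validation_rate_high = 100_000_000

total_historical_gas = 4_000_000_000_000   # assumed cumulative gas to replay

days_low = total_historical_gas / validation_rate_high / 86_400
days_high = total_historical_gas / validation_rate_low / 86_400
print(f"Full sync: roughly {days_low:.1f} - {days_high:.1f} days of pure execution")
# Raise the gas limit (or shrink the block time) and this window keeps growing.
```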

Note that the maximum block size here has already been increased by roughly a factor of 2 (the 30M gas limit is twice the 15M gas target), which means the worst-case resource load also doubles. However, our main point of discussion was to reduce resource usage and decrease the burden on validators.

So what are the solutions brought up by the Ethereum community to tackle the resource pricing problem? There are plenty, but let's discuss some of them.
State Expiry → The older history of Ethereum will be removed from the chain. If any smart contract wants to re-execute older transactions, it can query them from protocols like The Graph and The Protocol Network.

Statelessness → By using Verkle trees and Kate commitments, nodes would be able to validate a block without syncing the full blockchain data.

Make Storage Operations More Expensive → Ethereum could simply increase the gas cost of the storage opcodes.

Make Data Cheaper → Some technology or alternative solution, such as a dedicated data availability layer like Celestia, could take over the data that currently gets dumped onto Ethereum.

In the end, I would like to present my opinion on why I feel DA layers are the better choice for Ethereum compared with the other options. DA layers will not only help monolithic blockchains such as Ethereum lower the price of their storage, but they will also help the sustainability of rollups. In rollups we have a supernode which can, in theory, process unlimited transactions; however, even that supernode is restricted to limit the transactions inside a block, because Go-ethereum hasn't changed and will still validate 50–100M gas/sec on a laptop. With DA layers we don't have to download every byte to validate a transaction; we can simply perform data availability sampling, which has a sublinear cost. You might argue that we can achieve the same with Ethereum's data shards. Maybe, but in my opinion running a validator node on data shards is not equal to running a DA layer node, as you also have to run a heavy EVM, which requires some hefty changes in gas costs.
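To illustrate the sublinear-cost point, here is a toy model of data availability sampling: with erasure coding, an attacker has to withhold a large fraction of the extended data to make a block unrecoverable, so the chance of fooling a light node shrinks exponentially with the number of samples, independent of block size. The withholding fraction and sample counts below are illustrative assumptions.

```python
# Toy model of data availability sampling (DAS).
# With erasure coding, a block is unrecoverable only if a large fraction f of
# the extended data is withheld, so each random sample independently hits a
# missing share with probability >= f. f = 0.5 below is an illustrative value.
f = 0.5           # fraction of shares an attacker must withhold
for k in (10, 20, 30):
    p_fooled = (1 - f) ** k   # chance that all k samples miss the withheld shares
    print(f"{k} samples -> fooled with probability {p_fooled:.2e}")
# The number of samples needed for a target confidence does not grow with the
# block size, which is why the verification cost is sublinear in the data.
```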

This article is greatly inspired by John Adler's talk at EthCC 4.

That's it for now, but in the next post we will discuss how these resources can be capped efficiently. Till then, au revoir!
You can also read about Flashbots here.

Written by

Modularism maxi, exploring scalability problems in blockchain tech.
