ERC-4337 Gas Estimation for L2s and Signature Aggregators

Author: Dan Coombs

Reviewed by Brady Werkheiser

Published on July 11, 2023 · 8 min read

This article was co-authored by David Philipson.

In ERC-4337 Gas Estimation we discussed how gas works in ERC-4337 and our method for gas estimation. In part 2, Dummy Signatures and Gas Token Transfers, we found out that estimating gas is not always straightforward, and we need to account for edge cases. This post will dive into a few more of the edge cases we encountered.

The L2 Problem

Part of the definition of an Ethereum Layer 2 rollup is that it “lets layer 1 handle security, data availability, and decentralization, while layer 2 handles scaling.”

To achieve this, L2s “roll up” many transactions into a single batch and post it to the layer 1 blockchain. This isn’t free: L2s must pay the calldata costs incurred when posting a large batch of data to the L1 chain.

L2s need a way to charge their users for these incurred L1 calldata costs. Rollup frameworks achieve this in different ways.

This article focuses on the two largest EVM rollups: Arbitrum and Optimism.

How does Arbitrum calculate the cost to cover L1 gas fees?

On Arbitrum, the L2 gas charged to cover the L1 gas cost is calculated using the following formula, where the size of the data is its size in bytes after Brotli compression:

💡 L1 Cost (L1C) = L1 price per byte of data (L1P) * Size of data to be posted in bytes (L1S)

💡 Gas (G) = L1 Cost (L1C) / L2 Gas Price (P)

This gas is charged before a transaction begins execution and counts towards the transaction’s gasLimit. Thus, it must be accounted for during transaction gas estimation.
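
The two formulas above can be combined into a single conversion. This is only a sketch with made-up values; it is not Arbitrum's actual pricing code, which also compresses the data with Brotli before sizing it:

```python
def arbitrum_l1_gas(l1_price_per_byte: int, data_size_bytes: int,
                    l2_gas_price: int) -> int:
    """Convert a transaction's L1 data cost into L2 gas units."""
    l1_cost = l1_price_per_byte * data_size_bytes  # L1C = L1P * L1S
    return l1_cost // l2_gas_price                 # G = L1C / P

# Example: 500 post-compression bytes at 20 gwei per byte,
# with an L2 gas price of 0.1 gwei.
gas = arbitrum_l1_gas(20_000_000_000, 500, 100_000_000)  # 100_000 L2 gas
```

Note that the conversion depends on the current L2 gas price, which is why this component must be re-evaluated at estimation time.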

How does Optimism calculate the cost to cover L1 gas fees?

On Optimism, the L1 gas cost is calculated in a similar way: the size in bytes after compression is multiplied by an L1 fee. Instead of translating this value to L2 gas like Arbitrum, Optimism deducts the required ETH directly from the sender’s account.

Senders do not need to take this value into account during gas estimation, but they also have no way to set a limit on this spending. Optimism takes care to ensure this fee won’t spike.

How can a bundler on L2s charge for L1 fees?

In both cases a bundler submitting a bundle transaction on an L2 is charged for L1 fees. The bundler needs a way to charge the bundled user operations for this fee by increasing the L2 gas.

The effective impact on L2 gas can be determined by:

L2_gas = L1_gas * L1_fee / L2_fee

verificationGasLimit and callGasLimit are metered by the entry point and thus can't be used by bundlers to charge for this extra gas. Bundlers need to rely on other methods.
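
A worked example of the conversion above, using made-up fee values:

```python
l1_gas = 2_000            # L1 gas the bundle's calldata costs (assumed)
l1_fee = 30_000_000_000   # 30 gwei L1 base fee (assumed)
l2_fee = 100_000_000      # 0.1 gwei L2 gas price (assumed)

# L2_gas = L1_gas * L1_fee / L2_fee
l2_gas = l1_gas * l1_fee // l2_fee  # 600_000 L2 gas
```

The ratio L1_fee / L2_fee is what makes this value volatile: both fees move independently, so the L2 gas equivalent of a fixed amount of L1 gas changes block to block.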

Attempt 1: Set a Higher Priority Fee

Requiring a higher maxPriorityFeePerGas could allow the bundler to recoup these lost fees.

The calculation would look like:

  1. Estimate L1 fee and convert into L2 gas
    a. Assume a bundle of size 1 user operation and estimate gas using network-provided methods, typically exposed as special contract calls.
    b. This requires assuming both an L1 base fee and an L2 base fee to convert the fee into L2 gas values.

  2. Estimate verificationGasLimit
    a. Since this gas limit is under strict simulation rules, it's highly likely that the estimated value will be very close to the actual value, unlike callGasLimit.

  3. Set maxPriorityFeePerGasBuffer = L1_fee / verificationGasLimit

  4. Add that buffer to any priority fee required
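
A minimal sketch of steps 3 and 4, with assumed fee and gas values (the function name is illustrative, not a real bundler API):

```python
def priority_fee_buffer(l1_fee_wei: int, verification_gas_limit: int) -> int:
    # Step 3: spread the entire L1 fee across verification gas only,
    # since the bundler must assume call gas used may be 0.
    return l1_fee_wei // verification_gas_limit

l1_fee_wei = 50_000_000_000_000     # estimated L1 fee for a 1-op bundle (assumed)
verification_gas = 100_000          # estimated verificationGasLimit (assumed)
network_priority_fee = 100_000_000  # 0.1 gwei (assumed)

# Step 4: the bundler's required priority fee includes the buffer.
required_fee = network_priority_fee + priority_fee_buffer(l1_fee_wei,
                                                          verification_gas)
```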

This could work, but it has terrible UX.

To protect itself, the bundler must assume that the amount of call gas used will be 0 and charge the user as if verification gas is the only component. The user will then overpay by the buffer priority fee multiplied by any call gas actually used. This isn’t great for the user.

Attempt 2: Manipulate preVerificationGas

That leaves preVerificationGas as the only reasonable field to manipulate. This fits the definition from part one: preVerificationGas is “the gas field used to capture any gas usage that the entry point cannot measure.” Since this is gas that the entry point doesn’t meter, we would expect to be able to use this field to charge the user.

The calculation would look like:

  1. Estimate L1 fee and convert into L2 gas, set as preVerificationGas.
    a. Assume a bundle of size 1 user operation and estimate gas using network-provided methods, typically exposed as special contract calls.
    b. This requires us to assume both an L1 base fee and an L2 base fee to convert the fee into L2 gas values.

  2. Calculate the L2 unmetered gas and add this to the value calculated above.
    a. This is the same method as described in our PreVerificationGas calculation section for a normal preVerificationGas calculation.

  3. During eth_sendUserOperation also run (1) and (2), and reject any operations that don’t have a high enough preVerificationGas as part of the “pre-check” stage.

  4. During bundling, run (1) and (2) again right before submission.
    a. Reject any operations that don’t have a high enough preVerificationGas.
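
The flow above can be sketched as follows. The function names and the unmetered-overhead value are illustrative, not Rundler's actual implementation:

```python
def required_pvg(l1_fee_wei: int, l2_base_fee: int,
                 unmetered_overhead: int) -> int:
    # Steps 1 + 2: L1 fee converted to L2 gas, plus the normal
    # unmetered-gas component of preVerificationGas.
    return l1_fee_wei // l2_base_fee + unmetered_overhead

def accept_op(op_pvg: int, l1_fee_wei: int, l2_base_fee: int,
              unmetered_overhead: int) -> bool:
    # Steps 3 + 4: re-run the calculation at submission and bundling
    # time, rejecting any op whose preVerificationGas is too low.
    return op_pvg >= required_pvg(l1_fee_wei, l2_base_fee, unmetered_overhead)
```

The key point is that `required_pvg` is recomputed with fresh base fees at each stage, so an op that passed estimation can still fail the bundling check if the fee ratio has moved.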

While this mechanism works, it has a very significant UX issue.

During (2) the bundler must assume L1 and L2 base fee values to perform the gas calculation. Because base fees are dynamic, if the ratio L1_fee / L2_fee increases between the gas estimation step and the submission/bundling step, a higher preVerificationGas will be required, and user operations whose preVerificationGas was calculated with a lower ratio will be rejected.

The best a user can do is assume that the ratio will increase between estimation and bundling and add an overhead to their preVerificationGas to improve their chances.

Since preVerificationGas is always charged in full (i.e. it's not a limit field), the user pays for this overhead regardless of what happens with price. The user is stuck choosing between potentially overpaying and having their operations rejected.

💡 Rundler implements the preVerificationGas calculation above. We recommend that users of these L2s add a 25% buffer on the preVerificationGas returned by eth_estimateUserOperationGas to improve the chance that the operation is not rejected.
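
Applying that 25% buffer client-side might look like this (the estimated value is a made-up example):

```python
# Value returned by eth_estimateUserOperationGas (example figure).
estimated_pvg = 1_200_000

# Add a 25% buffer before signing and submitting the user operation.
buffered_pvg = estimated_pvg * 125 // 100  # 1_500_000
```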

Optimism Edge Case

Optimism’s base fees are incredibly low, often well below 100 wei (yes, wei). The preVerificationGas required to charge for the L1 gas fee is inversely proportional to the L2 gas fee, so preVerificationGas must be extremely high (in the millions).

Optimism’s priority fee is often orders of magnitude higher than its base fee. A user must therefore be very careful not to submit a priority fee matching the network’s, because the entry point requires payment for preVerificationGas (very high) * priority fee (normal), causing massive overpayment. For this reason Rundler requires the priority fee to be a static percentage of the base fee to incentivize bundling on Optimism.
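
The overpayment trap can be illustrated with made-up numbers; the 10% figure below is an assumed percentage for illustration, not Rundler's actual setting:

```python
pvg = 2_000_000                    # very high, to cover the L1 data cost
base_fee = 100                     # wei; Optimism base fees can be this low
network_priority_fee = 1_000_000   # wei; orders of magnitude above base fee

# Paying the network priority fee on a huge preVerificationGas:
naive_cost = pvg * (base_fee + network_priority_fee)

# Pinning the priority fee to a small percentage of the base fee instead:
capped_priority_fee = base_fee // 10
better_cost = pvg * (base_fee + capped_priority_fee)
```

With these numbers the naive approach pays several thousand times more for the same operation, which is exactly why tying the priority fee to the base fee matters here.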

The Signature Aggregator Problem

Signature aggregation is a much talked about feature of ERC-4337 for its ability to:

  1. Reduce calldata costs on L2s via signature compression, leading to significant savings.

  2. Amortize the gas cost of an aggregated validation check across a bundle of operations.
    a. This validation check could be as simple as a BLS signature, or as complicated as an aggregated ZK proof.

In the current version of the entry point, the call to the signature aggregator's validateSignatures function is unmetered. This means that the bundler must find a way to charge aggregated user operations for this gas, similar to the L2 problem above.

Like the L2 problem, the only reasonable way to do this is to increase preVerificationGas.

One way to do this is:

  1. The bundler assumes a target bundle size

  2. The bundler calls validateUserOp on every operation it receives prior to estimation to extract the aggregator address, if one is used.

  3. When an aggregated user operation is received, the bundler needs to estimate the amount of gas used by the signature aggregator, per operation, at that target bundle size.
    a. One way would be to replicate the received user operation target number of times into a bundle and then use eth_estimateGas on aggregator.validateSignatures
    b. Another way would be to maintain a signature aggregator whitelist with gas measurements pre-populated (with a static value and a per operation dynamic value).

  4. Add the estimated gas to the preVerificationGas calculation above and return this value.
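
The whitelist variant (3b) could be sketched as follows; the address and gas values are hypothetical:

```python
# Per-aggregator gas measurements: a one-time static cost plus a
# per-operation dynamic cost (hypothetical values).
AGGREGATOR_GAS = {
    "0xAggregatorAddressHere": {"static": 400_000, "per_op": 15_000},
}

def aggregator_pvg_overhead(aggregator: str, target_bundle_size: int) -> int:
    """Extra preVerificationGas to charge one op for aggregation (step 4)."""
    gas = AGGREGATOR_GAS[aggregator]
    # Amortize the static cost over the assumed bundle size, then add
    # the per-op dynamic cost.
    return gas["static"] // target_bundle_size + gas["per_op"]
```

Note how the result depends directly on `target_bundle_size`, which is the assumption that causes the problems discussed next.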

There are a few issues with this approach:

  1. The bundler is taking a risk by assuming a target bundle size. Either:
    a. The bundler waits until it can actually bundle target operations, hurting UX via latency
    b. The bundler bundles less than target, and eats the cost

  2. Users can’t “bid” more or less depending on how fast they want to be included.
    a. In a limit-based approach, if a user wants to ensure quick inclusion, they can over-estimate the cost; if the cost ends up being lower, they aren’t overcharged. In this approach, due to the static preVerificationGas, users must always pay their entire bid.

  3. In the P2P network, bundlers may have different target bundle sizes.
    a. This means that the bundler used for estimation may over/under estimate the preVerificationGas required by another bundler in the mempool.
    b. These off-chain assumptions hurt the interoperability of the mempool.

Rundler currently doesn’t have support for signature aggregators due to these complications. It is likely that we will add support for the method above with some small assumed bundle size (starting at 1), which will make using signature aggregators very expensive.

Potential Entry Point Changes

The issues above both stem from the entry point contract not metering significant gas usage by the user operation. Relying on preVerificationGas to charge for this gas has a significant UX problem: users must pay more than they actually use in order to increase the chance of their UO landing onchain quickly.

A solution to these issues could be to modify the entry point contract to meter this extra gas usage and attribute it to limit-based gas fields.

L2-L1 Calldata Gas Metering

💡 This is a very rough outline of a solution to a tough problem. We would love to hear ideas from the community!

The L1 calldata gas cost can be metered onchain so that user operations are charged only for the exact cost they incur, not for any overhead. A limit-based field should be used.

A potential solution is to introduce a new field, daCallDataGasLimit (da for Data Availability, working title). This field would be native to an L2+ version of the entry point and used on chains where transaction calldata is posted onto a different system and thus must be charged for in a separate manner.

The entry point logic would be the following:

  1. At deployment the entry point is associated with a helper contract daGasMeter that has a single function measureUserOperationDaGas(UserOperation userOp)
    a. This function takes a user operation and measures exactly how much DA gas it uses.
    b. For example, on Arbitrum the meter can make a call to gasEstimateL1Component with an in-memory user operation to determine almost exactly how much L1 gas it used (it can’t account for extra compression due to a bundle with multiple ops).

  2. The entry point will call this function for each user operation prior to validation and can attribute this gas to the daCallDataGasLimit.

  3. To avoid reverts, if the DA gas used is greater than daCallDataGasLimit, the entry point will cap the charge at the limit and the bundler is stuck paying for any gas over it. Bundlers can account for this off-chain by ensuring that any bundled operation has a sufficiently high buffer in its limit field before bundling.
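
The capping behavior in step 3 can be sketched with a hypothetical helper (this is not proposed contract code):

```python
def charge_da_gas(da_gas_used: int, da_call_data_gas_limit: int):
    """Cap the op's DA charge at its limit; the bundler absorbs any excess."""
    charged_to_op = min(da_gas_used, da_call_data_gas_limit)
    bundler_absorbs = da_gas_used - charged_to_op
    return charged_to_op, bundler_absorbs
```

Because the op never reverts for exceeding the limit, the bundler's off-chain pre-check is what keeps `bundler_absorbs` near zero in practice.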

There are a few, pretty significant, downsides to this approach:

  1. The entry point contracts on different chains will have different addresses.

  2. Gas utilization during measurement.
    a. This is less of a problem since on L2+ chains, execution gas cost is usually much lower than data gas cost.

Signature Aggregator Gas Metering

The entry point contract can be modified to measure the amount of gas used during its call to aggregator.validateSignatures, divide that by the number of user operations aggregated, and attribute the gas usage evenly by deducting from verificationGasLimit.

Each whitelisted signature aggregator should be associated with a static value, the base cost to validate the aggregated signature, and a dynamic value, the per aggregated operation cost increase. For example, a BLS signature aggregator could have its static value as the one-time signature verification cost and the dynamic value as the per op hashing operations cost. Bundlers can increase verificationGasLimit during eth_estimateUserOperationGas by the static value divided by a target bundle size plus the dynamic value.
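
Both sides of this scheme can be sketched with assumed BLS-style values (the gas constants are illustrative, not measurements of a real aggregator):

```python
STATIC_GAS = 300_000   # one-time aggregated-signature verification cost (assumed)
DYNAMIC_GAS = 10_000   # per-op hashing cost (assumed)

def per_op_deduction(measured_validate_gas: int, num_ops: int) -> int:
    """Entry point side: split the measured validateSignatures gas evenly."""
    return measured_validate_gas // num_ops

def verification_gas_bump(target_bundle_size: int) -> int:
    """Bundler side: extra verificationGasLimit to add during estimation."""
    return STATIC_GAS // target_bundle_size + DYNAMIC_GAS
```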

How do we set the target bundle size?

A potential solution here is to supplement the arguments to eth_estimateUserOperationGas with a minimumBundleSize corresponding to the smallest bundle size (thus highest gas) that the user wants to be included in. Users who are willing to pay more can decrease their minimum bundle size and improve their time to inclusion.

Bundlers then need logic to ensure that they only include a UO in a bundle that is at least as large as that UO’s minimum bundle size. This minimum bundle size can be calculated during simulation by tracking the amount of gas used by the account validation step, subtracting that from verificationGasLimit, and then calculating the minimum size from what's left. Bundlers can store this value alongside the UO in their mempool and use it as a hint for bundle building.
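
Deriving the minimum bundle size during simulation might look like this; the helper and its gas values are hypothetical:

```python
def minimum_bundle_size(verification_gas_limit: int,
                        account_validation_gas: int,
                        static_gas: int, per_op_gas: int) -> int:
    """Smallest bundle size whose amortized aggregation gas fits the limit."""
    budget = verification_gas_limit - account_validation_gas - per_op_gas
    if budget <= 0:
        raise ValueError("verificationGasLimit too low at any bundle size")
    # Smallest n with static_gas / n <= budget, i.e. ceil(static_gas / budget).
    return -(-static_gas // budget)
```

A higher verificationGasLimit yields a smaller minimum bundle size, which is the mechanism that lets users who pay more be included faster.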

💡 One downside to this approach is that it assumes gas usage by a signature aggregator is uniform per operation. If this isn’t the case, the metering could be pushed onto the aggregator contract and returned by its verification function. Bundlers would need a way to calculate these non-uniform values off-chain as well.

What does this mean for signature aggregator developers?

Signature aggregators will likely need to be whitelisted directly by bundlers, providing them with methods to compute off-chain signatures and with their static/dynamic gas cost components. The cold-start problem will be difficult, especially given that a user has no way to “bid” to speed up their operation.

What does this mean for account client developers?

It's important to understand the L2 distinctions described above, especially when it comes time to estimate gas fees. On Optimism, if you use the network-provided priority fee, you can massively overpay for a user operation since this priority fee now also applies to L1 costs. If using Alchemy’s bundler endpoint, refer to our documentation for tips on estimating fees.


Continue Reading

The next article in this deep dive on ERC-4337 gas estimation provides a walkthrough of the user operation fee estimation process. If you missed part one or two, learn how ERC-4337 gas estimation, dummy values, and the token transfer problem work.
