By Tarun Bansal, Gustavo Grieco, and Josselin Feist

We recently introduced our new offering, invariant development as a service. A recurring question that we are asked is, “Why fuzzing instead of formal verification?” And the answer is, “It’s complicated.”

We use fuzzing for most of our audits but have used formal verification methods in the past. In particular, we found symbolic execution useful in audits such as Sai, Computable, and Balancer. However, we realized through experience that fuzzing tools produce similar results but require significantly less skill and time.

In this blog post, we will examine why the two principal assertions in favor of formal verification often fall short: proving the absence of bugs is typically unattainable, and fuzzing can identify the same bugs that formal verification uncovers.

Proving the absence of bugs

One of the key selling points of formal verification over fuzzing is its ability to prove the absence of bugs. To do that, formal verification tools use mathematical representations to check whether a given invariant holds for all input values and states of the system.
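
As a concrete illustration, consider a minimal token (an entirely hypothetical example) whose invariant is that the sum of all balances equals totalSupply. A prover must show that this holds for every input and every reachable state; a fuzzer instead executes random transaction sequences in search of a counterexample.

pragma solidity ^0.8.0;

// A hypothetical token. Invariant: the sum of all balances always
// equals totalSupply, for every input and reachable state.
contract SimpleToken {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    constructor(uint256 supply) {
        totalSupply = supply;
        balanceOf[msg.sender] = supply;
    }

    function transfer(address to, uint256 amount) external {
        balanceOf[msg.sender] -= amount; // reverts on underflow in Solidity >= 0.8
        balanceOf[to] += amount;         // reverts on overflow
    }
}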

While such a guarantee may be attainable for a simple codebase, it's not always achievable in practice, especially with complex codebases, for the following reasons:

  • The code may need to be rewritten to be amenable to formal verification. This leads to the verification of a pseudo-copy of the target instead of the target itself. For example, the Runtime Verification team verified the pseudocode of the deposit contract for the ETH2.0 upgrade, as mentioned in this excerpt from their blog post:

    Specifically, we first rigorously formalized the incremental Merkle tree algorithm. Then, we extracted a pseudocode implementation of the algorithm employed in the deposit contract, and formally proved the correctness of the pseudocode implementation.

  • Complex code may require a custom summary of some functionality to be analyzed. In these situations, the verification relies on the custom summary to be correct, which shifts the responsibility of correctness to that summary. To build such a summary, users might need to use an additional custom language, such as CVL, which increases the complexity.
  • Loops and recursion may require adding manual constraints (e.g., unrolling the loop for only a given number of iterations) to help the prover. For example, the Certora prover might unroll some loops for a fixed number of iterations and report any additional iteration as a violation, forcing further involvement from the user (see the sketch after this list).
  • The solver can time out. If the tool relies on a solver for equations, finding a solution in a reasonable time may not be possible. In particular, proving code with a high number of nonlinear arithmetic operations or updates to storage or memory is challenging. If the solver times out, no guarantee can be provided.
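
To make the loop issue concrete, here is a minimal, hypothetical contract whose iteration count depends on calldata. A prover that unrolls loops to a fixed bound cannot cover all of its executions without extra guidance, whereas a fuzzer simply generates arrays of varying lengths.

pragma solidity ^0.8.0;

// A hypothetical batch operation: the loop bound comes from user input.
contract BatchPayout {
    mapping(address => uint256) public balances;

    function payout(address[] calldata recipients, uint256 amount) external {
        // A prover unrolling this loop to, say, three iterations either
        // misses longer arrays or reports them as spurious violations.
        for (uint256 i = 0; i < recipients.length; ++i) {
            balances[recipients[i]] += amount;
        }
    }
}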

So while proving the absence of bugs is a benefit of formal verification methods in theory, it may not be the case in practice.

Finding bugs

When formally verifying the code is not possible, formal verification tools can still be used as bug finding tools. However, the question remains: can formal verification find real bugs that a fuzzer cannot? If not, wouldn't it be easier to just use a fuzzer?

To answer this question, we looked at two bugs found using formal verification in MakerDAO and Compound and then attempted to find these same bugs with only a fuzzer. Spoiler alert: we succeeded.

We selected these two bugs because they were widely advertised as having been discovered through formal verification, and they affected two popular protocols. To our surprise, it was difficult to find public issues discovered solely through formal verification, in contrast with the many bugs found by fuzzing (see our security reviews).

Our fuzzer found both bugs in a matter of minutes, running on a typical development laptop. The bugs we evaluated, as well as the formal verification and fuzz testing harnesses we used to discover them, are available on our GitHub page about fuzzing formally verified contracts to reproduce popular security issues.

Fundamental invariant of DAI

MakerDAO found a bug in its live code after four years. You can read more about the bug in When Invariants Aren't: DAI's Certora Surprise. Using the Certora prover, MakerDAO found that the fundamental invariant of DAI, which is that the sum of all collateral-backed debt and unbacked debt should equal the sum of all DAI balances, could be violated in a specific case. The core issue is that calling the init function on a vault whose rate state variable is zero but whose Art state variable is nonzero changes the vault's total debt without a matching update to the global accounting, violating the invariant. The MakerDAO team concluded that calling the init function after calling the fold function is a path to break the invariant.
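
To see why this path breaks the invariant, below is an abridged sketch of the relevant Vat logic. This is a simplification for illustration only: auth modifiers, liveness checks, and overflow-safe math are elided, so consult MakerDAO's actual Vat contract for the full code.

pragma solidity ^0.8.13;

// Abridged, simplified sketch of MakerDAO's Vat (not the full contract).
contract VatSketch {
    struct Ilk { uint256 Art; uint256 rate; }
    mapping(bytes32 => Ilk) public ilks;
    mapping(address => uint256) public dai;
    uint256 public debt; // total DAI issued
    uint256 public vice; // total unbacked DAI

    function init(bytes32 ilk) external {
        require(ilks[ilk].rate == 0, "Vat/ilk-already-init");
        ilks[ilk].rate = 10 ** 27; // resets rate to RAY even when Art != 0
    }

    function fold(bytes32 i, address u, int256 rate) external {
        Ilk storage ilk = ilks[i];
        ilk.rate = uint256(int256(ilk.rate) + rate); // a negative rate can zero ilk.rate
        int256 rad = int256(ilk.Art) * rate;
        dai[u] = uint256(int256(dai[u]) + rad);
        debt = uint256(int256(debt) + rad); // debt tracks the rate change here...
    }
    // ...but a later call to init changes Art * rate again with no matching
    // update to debt, breaking debt == vice + sum(Art * rate).
}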

function sumOfDebt() public view returns (uint256) {
    uint256 length = ilkIds.length;
    uint256 sum = 0;
    for (uint256 i = 0; i < length; ++i) {
        sum = sum + ilks[ilkIds[i]].Art * ilks[ilkIds[i]].rate;
    }
    return sum;
}

function echidna_fund_eq() public view returns (bool) {
    return debt == vice + sumOfDebt();
}

Figure 1: Fundamental equation of DAI invariant in Solidity

We implemented the same invariant in Solidity, as shown in figure 1, and checked it with Echidna. To our surprise, Echidna violated the invariant and found a unique path to trigger the violation. Our implementation is available in the Testvat.sol file of the repository. Implementing the invariant was easy because the source code under test was small and required only logic to compute the sum of all debts. Echidna took less than a minute to violate the invariant on a Linux machine with an i5 processor and 12 GB of RAM.

Liquidation of collateralized account in Compound V3 Comet

The Certora team used their Certora Prover to identify an interesting issue in the Compound V3 Comet smart contracts that allowed a fully collateralized account to be liquidated. The root cause was the use of an 8-bit mask on a 16-bit vector: the mask is always zero for the vector's higher bits, so the corresponding assets are skipped when total collateral is calculated, allowing a collateralized account to be liquidated. More on this issue can be found in the Formal Verification Report of Compound V3 (Comet).
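
The pattern, in simplified form (this is a sketch of the bug class, not Comet's exact code), looks like the following: building the mask in a uint8 means the shift truncates to zero for offsets 8 through 15, so those assets are never seen as set.

pragma solidity ^0.8.0;

// Simplified sketch of the bug pattern; see the Certora report for the
// actual Comet code. assetsIn is a 16-bit vector of per-asset flags.
library AssetMask {
    // Buggy: the mask is computed in a uint8, so it is zero for any
    // offset of 8 or more, and those assets are silently skipped.
    function isInAssetBuggy(uint16 assetsIn, uint8 assetOffset) internal pure returns (bool) {
        return (assetsIn & (uint8(1) << assetOffset)) != 0;
    }

    // Fixed: a 16-bit mask covers every offset.
    function isInAssetFixed(uint16 assetsIn, uint8 assetOffset) internal pure returns (bool) {
        return (assetsIn & (uint16(1) << assetOffset)) != 0;
    }
}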

function echidna_used_collateral() public view returns (bool) {
    for (uint8 i = 0; i < assets.length; ++i) {
        address asset = assets[i].asset;
        uint256 userColl = sumUserCollateral(asset, true);
        uint256 totalColl = comet.getTotalCollateral(asset);
        if (userColl != totalColl) {
            return false;
        }
    }
    return true;
}

function echidna_total_collateral_per_asset() public view returns (bool) {
    for (uint8 i = 0; i < assets.length; ++i) {
        address asset = assets[i].asset;
        uint256 userColl = sumUserCollateral(asset, false);
        uint256 totalColl = comet.getTotalCollateral(asset);
        if (userColl != totalColl) {
            return false;
        }
    }
    return true;
}

Figure 2: Compound V3 Comet invariant in Solidity

Echidna discovered the issue with the implementation of the invariants in Solidity, as shown in figure 2. This implementation is available in the TestComet.sol file in the repository. Implementing the invariants was easy; it required only limiting the number of users interacting with the test contract and adding a method to compute the sum of all user collateral. Echidna broke the invariant within minutes by generating random transaction sequences that deposit collateral and then checking the invariants.
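
For reference, the sum helper assumed by figure 2 might look roughly like the following. This is a hypothetical sketch, not the actual harness (which is in TestComet.sol): we omit the boolean flag that distinguishes the two invariants and assume a small fixed set of test accounts in users.

pragma solidity ^0.8.0;

// Hypothetical sketch of the collateral-summing helper; the real harness
// lives in TestComet.sol. Comet's collateralBalanceOf getter reports one
// account's balance of one collateral asset.
interface IComet {
    function collateralBalanceOf(address account, address asset) external view returns (uint128);
}

contract CollateralSumSketch {
    IComet internal comet;    // the Comet instance under test
    address[] internal users; // the small fixed set of fuzzed accounts

    function sumUserCollateral(address asset) internal view returns (uint256 sum) {
        for (uint256 i = 0; i < users.length; ++i) {
            sum += comet.collateralBalanceOf(users[i], asset);
        }
    }
}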

Is formal verification doomed?

Formal verification tools require substantial domain-specific knowledge to be used effectively, as well as significant engineering effort to apply. Grigore Rosu, Runtime Verification's CEO, summarized it as follows:

Figure 3: A tweet from the founder of Runtime Verification Inc.

While formal verification tools are constantly improving, which reduces the engineering effort, none of the existing tools reach the ease of use of existing fuzzers. For example, the Certora Prover makes formal verification more accessible than ever, but it is still far less user-friendly than a fuzzer for complex codebases. With the rapid development of these tools, we hope for a future where formal verification tools become as accessible as other dynamic analysis tools.

So does that mean we should never use formal verification? Absolutely not. In some cases, formally verifying a contract can provide additional confidence, but these situations are rare and context-specific.

Consider formal verification for your code only if the following are true:

  • You are following an invariant-driven development approach.
  • You have already tested many invariants with fuzzing.
  • You have a good understanding of which remaining invariants and components would benefit from formal methods.
  • You have solved all the other issues that would decrease your code maturity.

Writing good invariants is the key

Over the years, we have observed that the quality of invariants is paramount. Writing good invariants is 80% of the work; the tool used to check/verify them is important but secondary. Therefore, we recommend starting with the easiest and most effective technique—fuzzing—and relying on formal verification methods only when appropriate.

If you’re eager to refine your approach to invariants and integrate them into your development process, contact us to leverage our expertise.