Smart Contracts Aren’t Trustless, Nor Should They Be
Criticizing smart contracts for not being completely “trustless” instruments misses the point.
Recently, Bitcoin supporter Jimmy Song wrote a popular article critiquing smart contracts and later reiterated his arguments on Laura Shin’s podcast Unchained. While strong critiques are useful, it’s important to examine whether Song is characterizing smart contracts accurately, and whether his arguments hold up under close scrutiny. This essay will show that Song has a very different definition of smart contracts than is traditionally understood, and that his extreme definition leads him to misunderstand the usefulness of smart contracts.
Jimmy Song describes the purpose of smart contracts as eliminating the need for trust. “A smart contract that trusts a third party,” he says, “removes the killer feature of trustlessness.” However, absolute “trustlessness” has never been a requirement of smart contracts. Smart contracts attempt to minimize the amount of trust necessary, not eliminate it; eliminating trust entirely is in most cases a patently impossible task.
A quick look at smart contract definitions throughout the decades shows that trusted third parties are commonplace. Nick Szabo, in one of his now famous papers, said, “Smart contracts often involve trusted third parties, exemplified by an intermediary, who is involved in the performance, and an adjudicator, who is invoked to resolve disputes arising out of performance (or lack thereof).” Not only does Szabo, the very inventor of smart contracts, never mention “trustlessness,” he directly contradicts the idea by mentioning the importance of trusted third parties.
Moreover, other historical references to smart contracts often included some sort of human interaction. For instance, the split contracts proposed by Mark S. Miller of Agoric and Marc Stiegler in their 2003 paper “The Digital Path” are smart contracts that are made of two or more parts. Some parts are automatically enforced in code, and others, such as the natural language instructions for human arbiters, are not.
If smart contracts aren’t “trustless”, what are they?
As Song correctly observes, smart contracts do not have anything to do with artificial intelligence. What makes them “smart” is merely that they can be embedded in hardware and software. (To be clear, there is no requirement that they be entirely embedded in hardware and software.) Furthermore, there is no requirement that they be run on a blockchain. Originally, they were to be run on trusted secure servers.
For Szabo, smart contracts were important because they could radically reduce “mental and computational transaction costs.” In other words, traditional contracting makes certain kinds of agreements expensive, sometimes prohibitively so. By lowering transaction costs, smart contracts could make entire categories of human interactions much easier to accomplish, and enable others for the very first time. It is by this metric that smart contracts should be judged, not the impossible feat of achieving true “trustlessness.”
Song’s insistence on “trustlessness” is most likely rooted in a misunderstanding of how blockchains work within smart contracts. A blockchain can replace the trusted server, thereby creating a censorship-resistant and disinterested substrate on which to run programs. In other words, we no longer need to trust a single server to run the program correctly, so to that extent, using a blockchain can reduce the amount of trust needed. However, using a blockchain is extremely costly and slow, and not all smart contracts need that kind of censorship resistance. Factors like transactions per second may be more important.
Because mechanisms can be costly, the mechanism chosen to reduce trust should be tailored to the specific threat model and what, exactly, is at risk. A vending machine is completely adequate for selling sodas, for example. A vending machine for selling diamond necklaces would need to be designed very differently. Jimmy Song and others are in effect arguing that any “secure” vending machine should also work for vending diamonds, and this is simply not realistic or wise.
There are also many different kinds of trust that may be necessary. For instance, the purpose of legal contracts in the first place is to reduce the need to trust that someone will carry out their promise. A blockchain reduces a different kind of need for trust—trusting that a single server can execute the code correctly. It’s a mistake to think that blockchains eliminate the need for trust entirely.
Human judgment and ambiguity
Song argues that smart contracts by definition preclude human involvement and therefore don’t “have any room for ambiguity.” However, the purpose of contracts, legal and smart alike, is to reduce ambiguity and discretion so that a predictable set of outcomes are produced, and these outcomes can be depended upon.
Without a contract, the performance of a promise is entirely left up to the discretion of the parties. This gives us full “room for ambiguity,” but we lose the ability to depend on the outcome. Thus, we don’t really want ambiguity in contracts—we want accuracy in the enforcement of the agreement.
Now, it’s true that code alone is insufficient to enforce many agreements. For instance, it would be difficult for code alone to discern whether an apartment meets an implied warranty of habitability: not too cold, not too hot, not infested with vermin, and so on. We could imagine code that attempts to do this, that somehow measures the number of mice caught per day (hopefully zero), records the temperature, and records the air and water quality. However, such an effort would be expensive and ultimately futile, for surely there would be some important measurement that would be missed.
This kind of judgment about events, such as whether or not a dwelling is habitable, is currently best done by people. Machines are notoriously bad at recognition tasks, as exemplified by the “dog or muffin” meme (thanks to Meng Wong and Oliver Goodenough for this example).
So how do we reconcile the need for flexibility in determining whether events happened (such as whether an apartment is habitable) with the need to reduce discretion and ambiguity as much as possible? Law Professor Oliver Goodenough offers a distinction between two types of ambiguity that can occur in contracts: rule ambiguity and event ambiguity. Event ambiguity has to do with the recognition task—in other words, has this event happened? Is the apartment habitable?
Rule ambiguity, on the other hand, is uncertainty about which rules to follow. Most lawyers would agree that rule ambiguity in contracts is a bad thing. The entire purpose of contracts is to project dependable order into a set of future interactions.
As Goodenough points out, oftentimes a word like “habitability” is just a shorthand to avoid having to do the work of deciding exactly what that means. In other words, we don’t actually want ambiguity, we want human judgment in order to make things easier. Fortunately, smart contracts as historically understood have always included the possibility of including human judgment as an input.
The Oracle Problem
According to Song, any reliance on an oracle (a third party entity that provides information about the outside world to blockchain code) makes a smart contract useless. Again, Song’s definition here is extreme and fails to match the historical definition of smart contracts, including the papers that introduced the idea.
“Since an oracle determines what a smart contract sees (i.e. it controls the inputs to the smart contract) it also holds the power to control what the smart contract does in response to these inputs.”
This conflates event recognition with the enforcement of rules, and assumes that because an oracle has authority in one area, it has full discretion over the entire contract. This is simply false. We can imagine a simple contract for an online marketplace, in which the buyer and the seller agree at the time of contracting to third-party arbitration in the case of a dispute. In most cases, the arbiter has no control over the contract whatsoever. It is only in the case of a dispute between the buyer and the seller that their input is solicited at all, and they are limited to choosing among specific outcomes, such as sending the funds back to the buyer or releasing them to the seller. In all cases, it is impossible for them to give the money to themselves, or to anyone other than the buyer and seller. Thus, they clearly do not have complete control over the contract.
This concept of limiting authority to specific functions is known as the Principle of Least Authority (POLA) in computer science. By limiting the authority to the minimum amount necessary, the “surface area” for attacks is greatly reduced, and the risks are better understood.
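The arbitration design described above can be sketched in a few lines of code. This is a minimal toy model, not a real contract implementation; the class and method names are illustrative assumptions. The point it demonstrates is POLA: the arbiter’s authority is confined to choosing between two pre-agreed outcomes, so no dispute resolution can ever send funds anywhere else.

```python
class Escrow:
    """Toy escrow sketch (illustrative, not a production contract).

    The arbiter's authority is deliberately minimal (POLA): in a dispute,
    they may only direct the funds to the buyer or to the seller, never
    to themselves or any other party.
    """

    def __init__(self, buyer, seller, arbiter, amount):
        self.buyer = buyer
        self.seller = seller
        self.arbiter = arbiter
        self.amount = amount
        self.resolved = False

    def release(self, caller):
        # Happy path: the buyer confirms delivery; the arbiter is never involved.
        if caller != self.buyer or self.resolved:
            raise PermissionError("only the buyer may release funds once")
        self.resolved = True
        return (self.seller, self.amount)

    def arbitrate(self, caller, winner):
        # Dispute path: only the arbiter may resolve, and only between
        # the two outcomes fixed at contracting time.
        if caller != self.arbiter or self.resolved:
            raise PermissionError("only the arbiter may resolve a dispute once")
        if winner not in (self.buyer, self.seller):
            raise ValueError("funds may only go to the buyer or the seller")
        self.resolved = True
        return (winner, self.amount)
```

Note that the restriction is structural, not a matter of trusting the arbiter’s good behavior: `arbitrate("carol", "carol")` simply raises an error, because the code never grants the arbiter the authority to name an arbitrary recipient.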
Furthermore, there are mechanisms to avoid relying on a single, pre-identified party. For instance, if the contract is a bet on the outcome of an NBA game, the oracle providing the score of the game from nba.com could be chosen at random from a pool of thousands, thus reducing the possibility of bribery or denial-of-service attacks. Or, a number of parties could be chosen at random, and the answer used only if their reports agree. Economics can provide a great deal of needed insight into how to incentivize and structure oracle mechanisms, but nothing about this is intractable.
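The random-sampling idea above can be sketched as follows. This is a simplified illustration under stated assumptions: each oracle is modeled as a function returning a reported value, the sample size and the unanimity rule are arbitrary choices, and a real system would add incentives, commit-reveal schemes, and tolerance for partial disagreement.

```python
import random

def query_oracles(oracle_pool, sample_size, rng=random):
    """Sample oracles at random and accept an answer only on unanimity.

    oracle_pool: list of zero-argument callables, each returning a reported
    value (e.g. a game score scraped from nba.com). Random selection makes
    it hard to bribe or jam the right oracles in advance; requiring
    agreement means a single dishonest report invalidates the round
    rather than corrupting the contract.
    """
    chosen = rng.sample(oracle_pool, sample_size)
    answers = [oracle() for oracle in chosen]
    if all(answer == answers[0] for answer in answers):
        return answers[0]   # consensus reached; safe to feed to the contract
    return None             # disagreement: discard and re-query
```

A returned `None` would trigger a re-query or a fallback procedure rather than any transfer of funds, so a disagreeing (or bribed) minority can at worst delay settlement, not redirect it.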
Property title using a blockchain is still an open problem, but nothing about it has been proven intractable. Using a token on a blockchain allows for the secure transfer of property title in a smart contract, which is a major improvement in terms of transaction costs over traditional contracting and escrow.
Again, the metric that we should be using for smart contracts is whether they are (or in the future, can be) an improvement over the current system. We should not be using the unrealistic and ahistorical conception that smart contracts must be absolutely “trustless” to be of any use.