The Legal Risks of Automated Transaction Copying in Crypto Markets

Co-authored with Alex Sarch and Natasha Vasan. Originally published by The FinReg Blog, Duke University (6 June 2023).

Markets built on public, permissionless blockchains like Ethereum are radically transparent. While pending transactions in traditional finance are considered private information, viewable only by brokers or corporate insiders, transactions submitted to Ethereum’s public mempool—where they wait to be included on the blockchain—are publicly known. This creates profit opportunities and trading strategies that are rarely, if ever, seen in traditional markets. Among the most interesting, but understudied, is the use of algorithms (“bots”) programmed to automatically copy and front-run, or otherwise exploit, other users’ trades as they wait in the mempool to be executed. “Generalized Profit-Seeker” (GPS) bots can access publicly available pending orders, simulate them to determine if they will be profitable, and copy (or otherwise piggyback on) those transactions deemed profitable according to the parameters of the bot. Sometimes, this benefits the original sender of the copied order—the copier effectively facilitates, even subsidizes, its execution. Other times, the sender of the copied order is blocked from a profit opportunity because the copier manages to get there first, thus making it entirely unavailable or at least less profitable for anyone who comes second. To further complicate matters, such automated strategy copying may involve replicating or facilitating criminal or otherwise illicit transactions, such as attempted hacks of blockchain applications or platforms. 
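The simulate-and-copy loop described above can be sketched in simplified form. Everything here is an illustrative stand-in, not any real client's API: `PendingTx` and `simulate_profit` are hypothetical names, and the "profit" rule is a toy placeholder for what a real searcher would do, namely replay the pending transaction against live chain state.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PendingTx:
    """Toy model of a transaction waiting in the public mempool."""
    beneficiary: str   # address that captures any proceeds
    fee: int           # fee offered to the validator, in wei
    payload: bytes     # calldata the transaction would execute

def simulate_profit(tx: PendingTx) -> int:
    """Stand-in for execution simulation: estimate the profit (in wei)
    accruing to whoever is set as `beneficiary`. A real searcher would
    replay the transaction against current chain state; here a toy rule
    treats payloads starting with 0xEE as profitable exploits."""
    return 1_000_000 if tx.payload[:1] == b"\xee" else 0

def find_opportunities(mempool: list[PendingTx]) -> list[PendingTx]:
    """Return the pending transactions whose simulated profit exceeds
    the cost of outbidding their current fee."""
    return [tx for tx in mempool if simulate_profit(tx) > 2 * tx.fee]
```

The key point the sketch captures is that the bot evaluates transactions purely by their simulated economic effect; nothing in the loop inspects whether the copied transaction is lawful.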

What liability do automated piggybacking strategies generate, given that they involve copying or exploiting the trades of others whenever the bot deems it profitable to do so? In our new article, we argue that such copying or exploitation of others’ publicly viewable transactions is unlikely to generate liability under existing U.S. securities law in a range of common scenarios, although we identify some important exceptions where liability is likely. These exceptions arise, for example, where illegal transactions are at least recklessly copied, or where express representations of private transaction flow have been violated.

Case Study: Piggybacking on the Inverse Finance Hack 

An eye-opening example of such liability risk may be found in the attack on Inverse Finance, a decentralized finance application on the Ethereum blockchain, which deprived the application of over $1 million. There, GPS bots battled to profit by piggybacking on a hacker’s illegal transactions, which attempted to exploit a vulnerability in the Inverse Finance platform. Unbeknownst to the hacker, the second of their transactions involved in the exploit was spotted while it was pending in the mempool by two automated bots (“MEV searchers”) watching the pool of pending transactions for profit opportunities. Both bots learned—through simulation—what the economic effect of the hacker’s transaction would be and realized that the transaction (1) would make a great deal of money for whoever was set as its beneficiary and (2) would create a price imbalance across blockchain exchanges, creating an opportunity for riskless arbitrage (buy low on one exchange, sell high on another).

Because the exploiter’s pending transactions were public, one of the searchers—“Bot1”—copied and submitted its own version of these two transactions, identical to the originals except that Bot1 set its human operator as the beneficiary to capture the profits from the exploit. Importantly, the operator behind Bot1 was likely unaware that any of this was happening because her bot operated autonomously: at most, she was aware of the likelihood that something like this might happen. To ensure that the copycat transaction would execute ahead of the copied transaction, Bot1 attached a very high transaction fee totaling over 6 ETH (over $6,000) for the validator. That fee was much higher than the transaction fee set by the original exploiter, so Bot1’s transactions had a good chance of being executed ahead of the exploiter’s version. Had Bot1 succeeded, its operator would have “deprived” the exploiter of their illicit gains, realizing them herself. However, Bot1 failed.
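Bot1's maneuver reduces to two mechanical changes to the observed transaction. The sketch below is a hedged illustration under the same toy `PendingTx` model as before (hypothetical names, not a real client API): the copy is byte-identical in its payload, but proceeds are redirected and the fee is raised to outbid the original.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PendingTx:
    """Toy model of a pending transaction (illustrative, not a real API)."""
    beneficiary: str   # address that captures any proceeds
    fee: int           # fee offered to the validator, in wei
    payload: bytes     # calldata the transaction would execute

def front_run_copy(victim: PendingTx, my_address: str,
                   fee_multiplier: int = 10) -> PendingTx:
    """Produce a copy of the observed transaction with an identical payload,
    except that (1) proceeds are redirected to the copier and (2) the fee is
    raised so a fee-sorting validator will likely place the copy first."""
    return replace(victim, beneficiary=my_address,
                   fee=victim.fee * fee_multiplier)
```

Note that the copier never needs to understand what the payload does; profitability, established by simulation, is the only filter.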

The second searcher, Bot2, offered an even higher transaction fee to the validator than Bot1—almost $100,000—to ensure that the exploiter’s transaction was executed. Thus, Bot2 was instrumental in the exploiter’s successful attack on Inverse Finance. Why would a stranger pay so much to ensure the execution of someone else’s trade? Bot2 did so to realize the other profit opportunity created by the exploit transaction: the arbitrage. In fact, Bot2 profited over $111,000, despite the high fee. The mechanism through which Bot2 was able to pay a fee to advantage someone else’s transaction is known as “bundling” and is also a form of copying transactions. In bundling, however, the copier makes no changes to the copied transaction; it simply places it in a “bundle,” which includes both the original (copied) transaction and the copier’s own transactions, and pays for privileged execution of the whole bundle. In this case, Bot2’s reason for bundling the exploiter’s transaction was that the imbalance across markets created by the exploit was about to create a very competitive and lucrative arbitrage opportunity. To profit from such an arbitrage, it is essential to have one’s trades executed as closely as possible after the transactions that create the imbalance across markets. Hence, it was worthwhile for Bot2 to facilitate the exploit by bundling it with Bot2’s own “back-running” arbitrage: this way, Bot2 made sure that (1) the exploit created a market imbalance and (2) it was Bot2’s operator who profited from that imbalance through arbitrage.
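The structural difference between Bot1 and Bot2 can be made concrete. In a bundle, the target transaction passes through untouched; the bundler's own back-run is simply pinned immediately after it, and the tip pays the validator for executing the pair in that order. The sketch below uses hypothetical types (`Tx`, `Bundle`) as a simplified stand-in for real bundle-submission interfaces.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    """Toy model of a transaction (illustrative only)."""
    sender: str
    fee: int
    payload: bytes

@dataclass(frozen=True)
class Bundle:
    """An ordered list of transactions plus a payment to the validator
    for executing the whole list in this exact order."""
    txs: tuple[Tx, ...]
    tip: int  # extra payment to the validator, in wei

def bundle_with_backrun(target: Tx, backrun: Tx, tip: int) -> Bundle:
    """Bundle an observed transaction, unchanged, with the bundler's own
    back-run placed immediately after it. Unlike a front-run copy, the
    target's sender and payload are left intact."""
    return Bundle(txs=(target, backrun), tip=tip)
```

This is why bundling both facilitates the target transaction and guarantees the bundler the first position behind it: the two properties come from the same atomic ordering commitment.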

The tactics of Bot1 and Bot2 are widespread in Ethereum’s decentralized finance (“DeFi”) ecosystem. Bot1 and Bot2 were attempting to extract Maximal Extractable Value (MEV). This involves taking advantage of the power wielded by actors called validators (formerly “miners”) to determine the order of transactions within blocks at their discretion – typically based on user-specified transaction fees, set via an auction, which the validators pocket as rewards for assisting in the process of transaction validation under Ethereum’s proof-of-stake consensus mechanism. As a result, both Bot1 and Bot2 were effectively competing—based on transaction fees—to have their transactions executed in strategic positions relative to the exploit transaction. Ultimately, it was Bot2 who won, while effectively assisting the Inverse Finance exploiter.
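The fee auction just described can be modeled in a few lines. This is a deliberate oversimplification (real block building involves builders, base fees, and priority fees, which we abstract away), but it captures why Bot2's higher bid prevailed over both the exploiter's and Bot1's offers:

```python
def order_by_fee(pending: list[dict]) -> list[dict]:
    """Toy model of fee-priority ordering: a validator sorts competing
    transactions by offered fee, highest first, and pockets the fees
    as its reward for inclusion."""
    return sorted(pending, key=lambda tx: tx["fee"], reverse=True)
```

Feeding in the three competing offers from the case (the exploiter's original fee, Bot1's roughly $6,000 bid, and Bot2's roughly $100,000 bid, expressed here as arbitrary illustrative numbers) puts Bot2's bundle first.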

What liability do the operators of Bot1 and Bot2 face here, assuming that the underlying transactions carrying out the hack of Inverse Finance were illegal (perhaps a violation of the Computer Fraud and Abuse Act or other fraud)? Consider two cases: first, suppose that Bot1, not Bot2, had won the battle to exploit the hacker’s transactions; then consider the actual case, in which Bot2 won.

Case 1: Bot1 Succeeds 

The searcher operating Bot1 detected (by simulating the execution of transactions in the public mempool) that the hacker’s transaction would be profitable and attempted to front-run it by paying a high transaction fee for Bot1’s copy of the hacker’s transaction to be executed before (and likely instead of) the hacker’s version. Imagine (contrary to fact) that Bot1 had succeeded in this endeavor rather than being blocked by Bot2.

Let’s assume the operator of Bot1 is not herself a block proposer or validator exploiting a privileged position of control over the contents of a block. That scenario (as we discuss in Part IV.A of our article) could create distinctive legal risks in virtue of the validator’s special position of centrality and trust in the Ethereum ecosystem. Instead, we assume Bot1’s operator is just a searcher scanning the public mempool, as anyone in principle can.

To determine the liability of Bot1’s operator, it is key to identify the nature of the violation that the copied transaction would amount to – and, in particular, what mens rea that violation requires. If the hack amounted to a strict liability violation—perhaps a breach of sanctions law by trading with a prohibited wallet address—then the operator of Bot1 would simply inherit the liability the hacker would face, by virtue of her bot being the first to execute the hacker’s transaction.

Matters are more difficult where the hacker’s transaction constitutes a violation with a higher mens rea, such as recklessness or knowledge. Suppose the hacker’s transaction, had it succeeded, would have amounted to theft by unauthorized access of a computer system in violation of 18 U.S.C. § 1030(a)(4), which requires knowledge of one’s lack of authorization to access the relevant computer system plus intent to defraud.  

The trouble is that the person who operated Bot1 is unlikely to know the details of the hacker’s transaction, which Bot1 automatically copies and executes ahead of the hacker. Plausibly, Bot1’s operator does not and cannot closely monitor the details of every transaction her bot copies, since it is meant to run autonomously and at scale. Had Bot1’s operator known of the hacker’s transaction specifically, all she would have seen is that it was so profitable that, in such a competitive environment, it could realistically only be the result of illegal activity. But the operator of Bot1 is unlikely to have particularized knowledge of the hacker’s precise transaction, because the bot simulates and copies many transactions independently and the operator cannot review them all (manually reviewing every transaction would defeat the purpose of running the bot in the first place). So it is doubtful that Bot1’s operator would have the mens rea of knowledge required to violate § 1030(a)(4). One might object that Bot1’s operator was willfully blind to the illegal transactions her bot will surely end up copying over time, but that would require a more particular showing of both (1) suspicions about the particular illegality involved in the hacker’s transaction and (2) deliberate efforts to remain in ignorance about it.

By contrast, where the hacker’s transaction would amount to a recklessness offense, it is more plausible that Bot1’s operator would face liability by virtue of copying that transaction. Suppose the hacker’s transaction would constitute a fraud, manipulative device, or deception in violation of CFTC Rule 180.1 or SEC Rule 10b-5 (depending on whether the crypto assets are commodities or securities), where the requisite mens rea is usually at least recklessness. It is not implausible that the operator of Bot1, a sophisticated actor to be sure, would be aware of at least a substantial risk that some of the trades her bot copies are illicit. (Assuming Bot1’s operator is not a “white hat” MEV operator, this substantial risk of copying illegal transactions would also likely be unjustified, as typically required for recklessness. See Model Penal Code § 2.02(2)(c).)

Thus, where recklessness suffices for the violation that the underlying hacker’s transaction constitutes, Bot1’s operator may plausibly find herself on the hook by virtue of copying that transaction. In applying Rule 180.1, for example, the CFTC defines recklessness as “an act or omission that ‘departs so far from the standards of ordinary care that it is very difficult to believe the actor was not aware of what he or she was doing.’” An actor like Bot1’s operator, who is sophisticated enough to run a GPS bot, will quite plausibly be aware of the relevant risks posed by her own transactions, given the available evidence about crypto markets and the prevalence of hacks and fraud therein. That risk would be objectively clear and obvious to anyone as sophisticated as this operator. So a plausible case for recklessness under Rule 180.1 and Rule 10b-5 would exist in such cases. This means that Bot1’s operator, in virtue of her recklessness as to the manipulative or fraudulent nature of the transactions her bot carries out, stands a reasonable chance of violating CFTC Rule 180.1 or SEC Rule 10b-5 (depending on whether the crypto assets are commodities or securities).

Case 2: Bot2 Succeeds 

Now return to the actual Inverse Finance case, in which Bot2 (not Bot1) won the transaction fee bidding war. Bot2 detected that if the hacker’s transaction succeeded, it would create a price imbalance across exchanges, which would be profitable to exploit via a traditional arbitrage trade (trading behind, or “back-running,” the hacker’s trade). However, because such arbitrage opportunities are so competitive, Bot2 could only be sure of realizing the arbitrage profits if it could guarantee that its arbitrage trade would execute immediately after the hacker’s trade, which created the price imbalance. So Bot2 bundled the hacker’s trade with Bot2’s own back-run arbitrage trades and paid the extremely high transaction fee (almost $100,000) to ensure that the hacker’s trade would go through with Bot2’s back-run immediately afterward. Bot2 thus piggybacked on the hacker’s trade to ensure that it could realize the arbitrage profits through its back-run. In this way, Bot2 blocked Bot1’s effort to copy the hacker’s trade and pushed that trade into an earlier position in the block, so that Bot2’s operator could profit from the immediate back-run that was her primary aim.

What is interesting about Bot2 is that it does not actually copy the hacker’s trade but facilitates it – giving it a boost to ensure its execution (so Bot2 can safely back-run it). In this case, we argue that Bot2’s operator would be unlikely to meet the requirements for aiding and abetting liability, at least under the criminal standard, due to her lack of the requisite mens rea. Still, Bot2’s operator isn’t totally off the hook, as market manipulation liability is more likely insofar as her conduct manipulatively or deceitfully creates the arbitrage opportunity that she goes on to exploit. Here is why. 

First, supposing the hacker’s transaction was a crime, would Bot2’s operator count as aiding and abetting it (in violation of 18 U.S.C. § 2(a)) by having boosted it as Bot2 did? To be an aider and abettor, one must not only perform an action in aid of the conduct of the principal but also do so with some mens rea towards the principal’s underlying crime. Some courts (e.g. Rosemond at 1248-49) have held that for some offenses, it is enough to aid the principal’s conduct while merely having knowledge that the crime will be committed. For other offenses, courts have adopted the “derivative approach” in which the mens rea for aiding and abetting tracks that for the underlying offense.  

Accordingly, the mens rea analysis we undertook for Bot1’s operator would likely carry over to the analysis of the mens rea of Bot2’s operator for purposes of assessing her aiding and abetting liability. If knowledge of the underlying crime proves to be the mens rea for aiding and abetting the relevant offense, it is doubtful that Bot2’s operator would rise to that level, for the same reasons we saw with Bot1’s operator. Absent special circumstances, she will most likely be aware of a substantial chance that Bot2 will facilitate a crime, without having full knowledge (practical certainty) thereof. By contrast, in the rarer cases, if any, where recklessness might suffice for aiding and abetting, Bot2’s operator would face greater legal jeopardy.

Even where liability cannot be imposed on Bot2’s operator for aiding and abetting the hacker’s offense, the operator of Bot2 may still face liability under CFTC Rule 180.1 or SEC Rule 10b-5 for illicitly creating the arbitrage opportunity that Bot2 goes on to exploit through its back-run of the hacker’s trade. Both Rules 10b-5 and 180.1 prohibit the execution of any manipulative device, scheme, or artifice to defraud. By boosting the hacker’s illicit transaction, Bot2 did not just exploit an existing arbitrage opportunity that arose independently; it actually helped to create that opportunity by facilitating the illegal transaction constituting the hack of Inverse Finance. Thus, this was not a run-of-the-mill DEX arbitrage or other standard arbitrage opportunity, but a disruption in prices caused by the hacker’s illicit transaction, which Bot2 facilitated. This matters because not every transaction fee bidding war makes its winner a market manipulator. Claims for fraud-based manipulation require misconduct such as the use of a “manipulative device” or an act that “would operate as a fraud or deceit.” That exists, we submit, when one recklessly facilitates a crime that creates an artificial price imbalance providing the opportunity for profitable arbitrage. The price imbalance Bot2 exploited is not one that arose through the “natural interplay of supply and demand,” but one deceitfully created by the operator of Bot2 via her reckless facilitation of the hacker’s criminal transaction.
Because Bot2’s operator would likely be at least reckless as to her reliance on a criminal transaction to create this arbitrage opportunity, as required for a violation of CFTC Rule 180.1 or SEC Rule 10b-5, we submit that courts would have a plausible basis for concluding that Bot2’s back-run of the hacker’s transaction is a prohibited form of market manipulation.

In the end, while each case must be treated on its own facts, it seems generally true that operators of GPS bots in crypto markets should proceed with caution in the face of indications that their bots may end up copying or otherwise facilitating illegal transactions, like the one involved in the Inverse Finance hack. It is worth exploring further, whether for assessing liability or for considering reforms to regulatory standards, whether a GPS bot was configured to automatically alert its operator – or perhaps even block the relevant transaction – when certain red flags are present, including profitability margins that usually can be reached only if a hack or illicit transaction is involved. Alternatively, systems permitting retroactive review of unusual or particularly risky transactions may also be worth exploring. Regardless, the factors that generally bear on whether a bot operator was involved in dishonest, deceitful, or otherwise manipulative schemes or artifices merit careful attention.