This year’s Nobel Memorial Prize in Economic Sciences was awarded today to Paul Milgrom and Robert Wilson for their work on auction theory and improvements to auction design. (See Marginal Revolution’s excellent coverage here, here, and here.) Milgrom and Wilson not only made major contributions to auction theory but also designed auctions that the Federal Communications Commission used to allocate spectrum. The study of auctions today is advanced by a marriage of economics and computer science known as algorithmic game theory. (See here for a book-length introduction.)
Another strand of algorithmic game theory involves identifying equilibria in games, including imperfect-information games. Players’ strategies in a game form a Nash equilibrium if each player, knowing the other players’ strategies, would retain her own strategy rather than switch to another. A strategy is a function that maps every potential game situation into a probability distribution over the actions available in that situation. Much of the recent progress in algorithmic game theory has used poker as a test bed. Poker is a game of asymmetric information: one cannot simply reason from the current state of the game to its end, as in chess. Rather, a poker player must assess the different possible current states of the game, an assessment that depends on analysis of an opposing player’s moves. Because those moves were themselves made with both retrospective and prospective reasoning, players’ strategies in any game state can influence the optimal strategy even at entirely different game states.
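To make the equilibrium definition concrete, here is a minimal sketch using matching pennies, the classic two-player zero-sum game (the game and numbers are standard; they do not come from any particular paper discussed here). The row player wins 1 if the coins match and loses 1 otherwise, and the 50/50 mix for each player is the unique Nash equilibrium:

```python
# Matching pennies: the row player's payoffs; the column player receives the negation.
row_payoff = [[1, -1],
              [-1, 1]]

def expected_payoff(row_strategy, col_strategy):
    """The row player's expected payoff under the given mixed strategies."""
    return sum(row_strategy[i] * col_strategy[j] * row_payoff[i][j]
               for i in range(2) for j in range(2))

# Candidate equilibrium: each player mixes 50/50.
sigma = [0.5, 0.5]
value = expected_payoff(sigma, sigma)

# No deviation to a pure strategy improves either player's payoff, and any mixed
# deviation is a weighted average of pure ones, so (sigma, sigma) is a Nash
# equilibrium with value 0.
assert all(expected_payoff(pure, sigma) <= value + 1e-9 for pure in ([1, 0], [0, 1]))
assert all(-expected_payoff(sigma, pure) <= -value + 1e-9 for pure in ([1, 0], [0, 1]))
```

The deviation check is exactly the definition above: holding the opponent's strategy fixed, no player can gain by switching.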
An important paper, published in 2007 by Martin Zinkevich et al., introduces an algorithm known as counterfactual regret minimization to address this challenge. The literature has offered many variations on and improvements to this algorithm. Marc Lanctot’s dissertation is an excellent introduction to counterfactual regret minimization. Recent advances have involved incorporation of deep neural network learning (see Noam Brown et al.’s contribution here and Eric Steinberger’s here), which allows for machine learning of strategies in games where there are too many ways the game can unfold for a computer to traverse efficiently every possible permutation of moves in the game tree.
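The core building block of counterfactual regret minimization is regret matching, which the full CFR algorithm applies at every information set of the game tree. A minimal self-play sketch on rock-paper-scissors (a standard toy example, not drawn from the papers cited above) shows the key idea: each player accumulates regret for the actions it did not take, mixes in proportion to positive regret, and the time-averaged strategies converge toward equilibrium.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff to the player choosing a against b: +1 win, 0 tie, -1 loss."""
    return [0, 1, -1][(a - b) % 3]

def strategy_from(regrets):
    """Mix in proportion to positive cumulative regret (uniform if none)."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [x / total for x in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

def sample(strategy):
    r, cum = random.random(), 0.0
    for a, p in enumerate(strategy):
        cum += p
        if r < cum:
            return a
    return ACTIONS - 1

random.seed(0)
regrets = [[0.0] * ACTIONS for _ in range(2)]
strategy_sums = [[0.0] * ACTIONS for _ in range(2)]
for _ in range(20000):
    strats = [strategy_from(regrets[p]) for p in range(2)]
    acts = [sample(strats[p]) for p in range(2)]
    for p in range(2):
        realized = payoff(acts[p], acts[1 - p])
        for a in range(ACTIONS):
            # Regret: how much better action a would have done than the realized play.
            regrets[p][a] += payoff(a, acts[1 - p]) - realized
            strategy_sums[p][a] += strats[p][a]

# The *average* strategy, not the final one, is what converges toward equilibrium.
avg = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print(avg)  # each entry approaches 1/3
```

The averaging step is the subtle point: the instantaneous strategies can cycle, but cumulative regret grows sublinearly, which drives the average strategy toward the uniform equilibrium.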
Like poker, litigation is a game of asymmetric information. In a simple lawsuit between a plaintiff and a defendant, each side knows its own assessment of the strength of the case but not the other side’s. Differences in evaluation may result from differential access to information or from independent analyses of the same information. For example, the plaintiff’s and defendant’s attorneys may reach different conclusions about the probability that the plaintiff will prevail before a jury, and these separate analyses can themselves be seen as a form of asymmetric information. Just as parties may bluff in poker, pretending to be more confident in their cases than they really are, so too may parties bluff in litigation. This bluffing may make cases harder to settle.
Today, in assessing a settlement, a lawyer will use experience and intuition to assess the probability of prevailing on each issue in the lawsuit, the distribution of potential jury verdicts, and the cost of various phases of litigation. The lawyer will likely also try to gauge how confident the other side is about its case; the more one believes that one’s opponent is genuinely confident, the more generous one ought to be in settlement. But lawyers do not embed these assessments into formal Bayesian models that divide the litigation into various phases and consider optimal settlements to offer or accept given assumptions about probabilities of possible outcomes and the distribution of one’s opponent’s possible views about those outcomes.
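The kind of assessment described above can be made explicit with a back-of-the-envelope expected-value calculation. All numbers here are invented for illustration; each side’s reservation value reflects its own probability estimate and its remaining litigation costs:

```python
# Hypothetical numbers, for illustration only.
p_plaintiff = 0.6        # plaintiff's estimate of prevailing at trial
p_defendant = 0.4        # defendant's more optimistic estimate
damages = 1_000_000      # expected judgment if the plaintiff prevails
trial_cost = 150_000     # each side's remaining cost of going to trial

# Each party's reservation value for settling now rather than trying the case.
plaintiff_min = p_plaintiff * damages - trial_cost  # least the plaintiff should accept
defendant_max = p_defendant * damages + trial_cost  # most the defendant should pay

# Despite divergent beliefs, litigation costs open a settlement range here.
settlement_range = defendant_max - plaintiff_min
print(plaintiff_min, defendant_max, settlement_range)
```

A formal Bayesian model would go further, updating these probability estimates phase by phase as offers and discovery reveal information, but even this simple calculation shows how costs and beliefs jointly determine whether a bargaining range exists.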
It is not surprising that settlement bargaining today does not rely at all on the formal models of settlement bargaining in the legal literature. The literature’s assumptions are far too simplistic to be of practical use. Many models are non-Bayesian, so litigants are assumed not to adjust their views of the strength of a case based on new information, such as an opponent’s settlement offer. Most of the Bayesian models are one-sided asymmetric information models, meaning that one party knows exactly how strong the litigation is while the other knows nothing about the quality of the individual case, only the distribution of cases overall. The literature offers important observations about settlement bargaining dynamics and about the possible effects of legal interventions, such as fee shifting, but these models do not purport to be practical tools for lawyers.
For a model to be useful as a Moneyball guide to decisionmaking, it would need to incorporate at a minimum the following features:

First, two-sided asymmetric information, i.e., each party has some information that the other lacks. Daniel Friedman and Donald Wittman wrote an important paper on this topic, but it rests on the unrealistic assumption that the judgment always equals the average of the two parties’ estimates of the litigation outcome. They also do not model the possibility of asymmetric information about both damages and liability.

Second, player options not to proceed with litigation, both at the outset (when the plaintiff might not file or the defendant might not answer) and at a late stage (when a party might lack a credible threat to go to trial). Peter Huang has shown the importance of such options in shaping litigants’ strategic behavior.

Third, a discovery process in which parties spend money over time, revise their opinions, and make and consider offers from their opponents.

Fourth, risk aversion, including the possibility that one party is more risk averse than the other.
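The second feature, options not to proceed, can be sketched in a deliberately toy form: a perfect-information version with invented numbers, solved by working backward from trial to the filing decision. The point is how the lack of a credible trial threat unravels the suit entirely:

```python
# Toy, perfect-information sketch with invented numbers.
p_win = 0.3                   # plaintiff's probability of prevailing at trial
damages = 100_000
plaintiff_trial_cost = 40_000
filing_cost = 5_000

# Backward induction, last decision first: is going to trial worthwhile?
trial_value = p_win * damages - plaintiff_trial_cost  # plaintiff's EV of trial
credible_threat = trial_value > 0                     # otherwise the plaintiff drops late

# If the trial threat is not credible, a rational defendant offers nothing in
# settlement, so the plaintiff anticipates recovering zero and never files.
files_suit = credible_threat and trial_value > filing_cost
print(credible_threat, files_suit)  # False False
```

A realistic model would add asymmetric information and settlement offers at each stage, but even this sketch shows why exit options change equilibrium behavior: decisions early in the game are shaped by anticipated decisions at its end.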
No model of litigation incorporates all of these relatively foundational features of the litigation process. The math becomes complicated very quickly. Consider, for example, this paper by Daniel Klerman et al., which meets just the first of the criteria above (though without considering the possibility of uncertainty about both damages and liability). It is a sophisticated piece of work that includes many double integrals. Indeed, it is sufficiently complex that one wonders about the feasibility of relaxing some of its assumptions, such as the assumption that each party’s information is equally good. It will be especially challenging for such a model to incorporate parties’ learning and bargaining over time. My own view is that the returns from further mathematical development of settlement bargaining are rapidly diminishing.
Can algorithmic game theory provide an alternative avenue to modeling litigation more realistically? In a work in progress, I argue that the answer is yes. I consider several papers modeling two-sided asymmetric information, and I show how computational models can help to simplify many of the assumptions in the mathematical models. One virtue of computational models is that one can easily change assumptions about how players’ payouts are determined. This makes it possible, for example, to assess the implications of fee shifting or asymmetric risk aversion without changing more than a couple of lines of code.
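As an illustration of how localized such a change can be (a sketch under my own simplified assumptions, not code from any particular model), a payoff function can take the fee-shifting rule as a parameter, so switching from the American rule (each side bears its own fees) to the English rule (loser pays both sides’ fees) touches only a couple of lines:

```python
def plaintiff_payoff(plaintiff_wins, damages, own_fees, opponent_fees, rule="american"):
    """Plaintiff's net payoff under a hypothetical fee-shifting rule."""
    if plaintiff_wins:
        recovery = damages
        fees = 0 if rule == "english" else own_fees   # winner's fees shifted to the loser
    else:
        recovery = 0
        fees = own_fees + (opponent_fees if rule == "english" else 0)
    return recovery - fees

# Invented numbers for illustration.
print(plaintiff_payoff(True, 100_000, 20_000, 25_000, rule="american"))  # 80000
print(plaintiff_payoff(True, 100_000, 20_000, 25_000, rule="english"))   # 100000
print(plaintiff_payoff(False, 100_000, 20_000, 25_000, rule="english"))  # -45000
```

Everything else in a computational model, the game tree, the equilibrium solver, the information structure, can remain untouched while one re-runs the analysis under the alternative rule.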
That is not to say that algorithmic game theory modeling is easy. Indeed, although a litigation game incorporating all of the features above could have a much smaller game tree than poker, litigation may still be more difficult to model, because it is a general-sum game rather than a zero-sum game. Most of the convergence guarantees in the literature on approximating Nash equilibria apply only to zero-sum games. There are algorithms for finding exact equilibria (for example, using a sequence form representation), but in my experimentation, these are infeasible for all but the simplest versions of a litigation game. Still, in my paper, I show that one can often obtain exact equilibria even without a guarantee that one will do so.
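To see why very small games remain tractable even in the general-sum case, consider a standard 2x2 example, battle of the sexes (a textbook game, not one of the litigation models discussed here). An exact mixed equilibrium follows in closed form from the indifference conditions; nothing comparably simple is available for general-sum games of realistic size:

```python
A = [[2, 0], [0, 1]]  # row player's payoffs
B = [[1, 0], [0, 2]]  # column player's payoffs (general-sum: not the negation of A)

# The row player mixes p on its first action so the column player is indifferent;
# the column player mixes q on its first action so the row player is indifferent.
p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
q = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

# Verify that each player is indeed indifferent between her two actions.
row_ev = [q * A[i][0] + (1 - q) * A[i][1] for i in range(2)]
col_ev = [p * B[0][j] + (1 - p) * B[1][j] for j in range(2)]
assert abs(row_ev[0] - row_ev[1]) < 1e-9 and abs(col_ev[0] - col_ev[1]) < 1e-9
print(p, q)  # p = 2/3, q = 1/3
```

In a 2x2 game the supports are small enough to enumerate by hand; as the number of actions and information sets grows, exact methods of this kind blow up, which is why approximation algorithms, with their zero-sum-only guarantees, dominate the literature.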
Admittedly, academic work on settlement bargaining will not lend itself to practical application as easily as the auction design literature. For one thing, one might argue that it makes sense to use a sophisticated formal model only if one thinks one’s opponent is using the same model. There is thus a chicken-and-egg problem to overcome before use of these models becomes widespread. But I believe that computational identification of equilibrium strategies will improve in the years to come. Quantum computers may be especially useful here, but my own experimentation suggests that progress can be made even with classical computers and variations on algorithms that are now well known in the literature. My hope is that these developments will allow the legal literature both to offer more sophisticated assessments of fee shifting and other legal system design issues and to point the way toward practical application. There is no Nobel at the end of the rainbow, but more sophisticated bargaining and modeling have the potential to improve the legal system.