The word “complexity” is being used a lot lately in discussions of the financial system, particularly when it comes to regulation. The severity of the financial crisis seems to have opened a space for new ideas about the relationship between the complexity of the financial system and the complexity of regulation, with some of the most interesting proposals calling for greater simplicity in one or both of these areas.
The counterintuitive idea of simplicity as a tonic for complex problems is very attractive. It strikes me as common sense that attempting to control complex institutions with increasingly complex regulation often only makes things worse – it creates incentives for cheating, drives destructive “innovation” designed to exploit gaps, and arms regulators to fight the last battle rather than preventing problems.
But I also think it’s worth taking a step back to think more clearly about the problem of regulating complexity from a number of perspectives before zeroing in on specific proposals in detail. Working from the purely conceptual to the purely applied, the remainder of this post looks at the problem from the perspectives of biology, social systems and banking.
Ashby and the law of requisite variety
Ross Ashby was one of the godfathers of systems theory and cybernetics, and developed a famous concept called the law of requisite variety. I haven’t read his book yet (free pdf here) but he followed it with a fairly accessible paper in 1958 that laid out the central idea.
In the paper, Ashby approaches the question of how to regulate a system from the perspective of biology and engineering, where a “regulator” can be an enzyme, an overflow valve, or anything that acts to counteract disturbances to the system. Using set theory, he defines successful regulation as a set of regulatory responses (the plural is important) that reduce the space of possible outcomes from a set of disturbances to an acceptable subset.
The key takeaway is that the success of regulation in a complex system is a function of the relationship between the set of regulators and the set of disturbances. More precisely, because no single regulatory response will be effective across all possible disturbances, the success of a set of regulations is in part a function of the match between the variety of disturbances and of responses (hence the law’s name).
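Ashby’s matching condition can be made concrete with a toy sketch (my own illustration, not an example from his paper). Here a “regulator” chooses one response per disturbance to push the outcome toward an acceptable target, and the achievable variety of outcomes is bounded below by the ratio of disturbance variety to response variety – so a regulator with too few responses simply cannot hold the system in the acceptable set:

```python
def best_outcomes(disturbances, responses, table):
    """For each disturbance, the regulator picks the response that
    drives the outcome as close as possible to the target (0)."""
    return {d: min(table(d, r) for r in responses) for d in disturbances}

def outcome_variety(disturbances, responses, table):
    """Number of distinct outcomes the regulator cannot avoid."""
    return len(set(best_outcomes(disturbances, responses, table).values()))

# Hypothetical outcome table: the result depends jointly on the
# disturbance d and the regulatory response r.
table = lambda d, r: (d + r) % 6
D = range(6)  # six possible disturbances

# A regulator whose response variety matches the disturbance variety
# can always cancel the disturbance (a single outcome survives) ...
matched = outcome_variety(D, range(6), table)
# ... but one with only two responses leaves at least 6/2 = 3 distinct
# outcomes in play, per Ashby's bound.
impoverished = outcome_variety(D, [0, 3], table)
print(matched, impoverished)
```

Running this prints `1 3`: only variety in the set of responses can absorb variety in the set of disturbances, which is the law in miniature.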
Working with Roger Conant, Ashby followed this paper with another in 1970 that reached an even more dramatic conclusion, namely that (to quote the title) “every good regulator of a system must be a model of that system.” The conclusion of the paper, which must please economists to no end, is that the only way to regulate a system effectively is to build a comprehensive model of its internal dynamics.
The abstract appeal of these concepts runs aground when they confront the messy reality of real-world social systems, with two important implications for regulating finance. First, if no single regulation can be uniformly effective across a dynamic range of problems, then attempting to regulate using law becomes much more problematic. I’d imagine that this (along with lobbying for loopholes) is one of the main reasons regulation has become complicated in tandem with the growth of finance – if one rule can’t work, the natural implication is that you write more of them, and add to the set as more problems manifest. But by definition, a single set of rules will never be robust to every possible event.
Second, the need for a model implies that the designer of the set of regulators has to have deep knowledge of all aspects of the system, and to understand the non-linear dynamics of interconnected systems. But our regulatory framework for finance is set up in such a way that this is impossible. Banks alone are regulated by multiple agencies, each focusing on a different part of the business, while various types of markets have their own overseers, and none of these effectively reach beyond national borders. New meta-regulators like the Financial Stability Oversight Council in the US and the Financial Stability Board globally are attempting to get there, but it is much too early to know how well they are doing. Worse still, even the most sophisticated model will be prone to similar problems as law – by dealing with known risks, models by definition don’t take into account risks that lie beyond the designer’s awareness or ability to simulate.
So if effective regulation requires a perfect model of the financial system that is so complicated that seemingly no one has a handle on the whole thing, let alone an ex-ante understanding of all possible bad events, how should we deal with the real-world complexity of the financial system?
Perrow: eliminate complexity in the system itself
The most rational answer is to make the system less complex. A simpler financial system will presumably be less likely to fail catastrophically, while also reducing the scale of necessary regulation.
Charles Perrow wrote nearly three decades ago that complex systems, which he identifies as those marked by tight coupling and complex interactions, are inherently vulnerable to failure by accidents. In fact, these accidents are so predictable that he named them “normal accidents” (the title of his landmark study).
Since the financial system is a textbook case of tight coupling and complex interactions, the natural question is how best to reduce the risk of accidental system failure. Perrow’s more recent book addresses this question, though not before Perrow asserts that the financial crisis was a result of malfeasance that was abetted by the complexity of the system itself, rather than an inevitable system accident. Moreover, he points to the concentration of economic power as a key faultline to be addressed since it creates dangerous dependencies (a pernicious form of tight coupling) and creates power dynamics that work against regulation.
Perrow’s recommendations for reducing risk in complex systems are consistent with his normal accident framework. He points to the need for effective regulation as the most important factor, but he also acknowledges the political obstacles that are likely to make reform a long and difficult process. Given that, he calls instead for two related measures to reduce the vulnerability of complex systems. First, a reduction in the concentration of power is crucial in order to lessen systemic risk (an idea echoed by Hansen, as I discussed here). Second, one-way relationships of dependency, which tie individuals and companies to more powerful organizations, must be replaced with more equal and reciprocal arrangements.
So what would a Perrow-esque (Perrovian?) financial system look like? He doesn’t say, but I’d have to think it would be consistent with the rising chorus of demands to break up the biggest banks roughly along the lines of Glass-Steagall. Doing so would reduce the complexity of individual institutions, and would also significantly reduce the concentration of economic power these organizations currently enjoy.
That’s the easy part. What is less clear is how to address the problem of interconnectedness. One of Perrow’s insights is that tight couplings eliminate slack (the extra capacity that absorbs energy) in densely connected systems, and are the primary reason that bad outcomes can propagate so quickly across them. So while an industry of smaller banks and trading firms would be good, those firms would remain embedded in trading networks. As former Chancellor of the Exchequer Alistair Darling wrote in this week’s FT, Lehman Brothers was a standalone investment bank that nonetheless had a disastrous global impact when it failed.
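The role of slack in stopping a cascade can be sketched with a deliberately simple model (mine, not Perrow’s): a shock travels along a chain of firms, each of which absorbs what its buffer allows and passes the residual downstream. The same shock hitting the same number of firms causes far more failures when the coupling is tight, i.e. when buffers are thin:

```python
def propagate(shock, buffers):
    """Pass a shock along a chain of firms. Each firm's buffer is its
    slack; a firm whose buffer is smaller than the incoming shock fails
    and transmits the remainder to the next firm in the chain."""
    failures = 0
    for slack in buffers:
        if shock <= slack:
            break            # enough slack here to absorb the shock
        failures += 1        # firm overwhelmed; cascade continues
        shock -= slack
    return failures

tight = propagate(10, [1, 1, 1, 1, 1])   # tightly coupled: little slack
loose = propagate(10, [6, 6, 6, 6, 6])   # same chain with ample slack
print(tight, loose)
```

With thin buffers every firm in the chain fails (`tight` is 5); with ample slack the cascade dies at the first firm (`loose` is 1). The structural point survives the toy setup: breaking up banks changes the nodes, but it is the coupling between them that governs propagation.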
While it is difficult to imagine a financial system with smaller banks in the current political environment, it is impossible to imagine it with fewer connections. By implication, structural fixes to the system itself are unlikely to go far enough to eliminate the scope for normal accidents so long as the financial system remains so tightly interconnected.
Haldane: eliminate complexity in the regulatory framework
The Bank of England’s Andrew Haldane approaches the problem from the other side, by focusing on reducing complexity in regulation rather than in the financial system it governs.
This is not to say that Haldane argues against simplifying the banks – quite the opposite. In an inversion of Perrow’s approach, Haldane notes toward the end of this typically erudite 2012 speech (summary) that restructuring of banking institutions along the clear lines of Glass-Steagall would be ideal, but that the political reality is that debate over mandating “has reached a stalemate.”
Instead, Haldane focuses more closely on the ways in which regulation could be restructured to be simpler in order to be more robust to the unexpected and less subject to model error. He provides a number of proposals for how this might work, including using simpler metrics to reduce regulators’ reliance on banks’ own risk models (a terrible legacy of the roaring 1990s) and recalibrating the role of the regulator to include more judgment from experience rather than box-ticking. He also proposes a sort of complexity tax to be levied on institutions large enough to create “complexity externalities.”
There is much more to say about Haldane’s ideas, and they will be central to the next complexity post. Until then, I’ll close by noting that none of these views is well-represented in our current regulatory environment. While we do have an increasingly complicated (some would say Byzantine) regulatory framework for banks, the complexity of the institutions themselves has far outstripped the ability of regulators to keep up, let alone create a perfect Ashby model. And while there is mounting political pressure to break up the banks, it has yet to overcome the political power of the banks themselves to resist this change, let alone to address the problem of tight coupling. Meanwhile, Haldane remains a bit of a lone voice, though that is changing.
Up next: the failure of expertise, and a potential solution