2.1 The Dilemma of Risk Regulation: How Much Togetherness Between Regulators and Regulatees and How Much Information Asymmetry?

How socially close or distant should regulators and regulatees be in high-hazard industries (defined as systems or processes where malfunctions can create serious societal harm)? Closeness and high interdependence between regulators and regulatees can enable regulators to overcome otherwise disabling information asymmetry and draw on the technical and operational expertise of the regulatees, while the latter can rely on regulators to provide them with formal and informal authorisation for their continued ‘social licence’ to operate. But any such social closeness runs the risk of ‘regulatory capture’ by producer interests (by creating close-knit policy communities that may include revolving doors between regulator and regulatee positions or shared conceptual or cultural outlooks). Such relationships typically lead to charges of lack of regulatory independence from alternative industries, from social movements challenging what they see as inherently unsafe activities, and from political parties and advocacy groups committed to the outright prohibition of certain technologies (such as nuclear energy, GM foods or human gene-editing) rather than regulation.

So is there an ineluctable policy dilemma between risk regulation that is well-informed but lacks credible independence and regulation that is socially distanced from producer interests but hampered by significant information asymmetry? This question represents one of the fundamental issues in the study of risk and regulation, and this chapter consequently identifies a set of broad recipes for limiting capture that have been debated over the past thirty years. It does so by reflecting in particular on the intellectual journey in teaching and research on the subject at the London School of Economics and Political Science (LSE) over the past three decades. What does that journey reveal about what were considered to be high-hazard industries and what were the most salient recipes for regulating them? Accordingly, we revisit the start of the journey by giving a brief account of the ‘state of the art’ as viewed in the early 1990s. We then turn to four recurring recipes for dealing with regulatory capture in high-hazard industries, noting variations within these recipes that emerged as the journey went on. We conclude by considering the state of the art as viewed in the early 2020s and the extent to which perspectives have changed over the past thirty years.

2.2 Where the Journey Began: The Risk Regulation World of the Early 1990s

Three decades ago, the discussion of high-hazard industries was particularly shaped by the aftermath of the 1986 meltdown of Reactor No. 4 at the Chernobyl nuclear power plant in Ukraine, which preceded the collapse of the Soviet Union, and, from the mid-1990s, by rising concern over the spread and transmissibility of ‘mad cow disease’ (BSE), first identified in the late 1980s and peaking in the early 1990s. In debates over how to handle such hazards, much attention was paid to Perrow’s (1984) work on ‘normal accidents’, which called for the abandonment of some high-hazard industries (notably nuclear power, following the 1979 meltdown at the Three Mile Island nuclear plant in Pennsylvania). But alongside Perrow’s abolitionist approach, alternative ideas developed about how to institutionalise safety and ‘high-reliability organisations’ (La Porte 1991; Sagan 1993; Weick 1989) rather than abandoning or outlawing high-hazard processes. Many studies exploring such issues followed prominent ‘man-made’ disasters of that time, among them the methyl isocyanate leak at the Union Carbide Bhopal chemical plant in 1984 (resulting in over 15,000 deaths on some estimates), the launch disaster of the Challenger space shuttle in 1986, the sinking of the ferry Herald of Free Enterprise in 1987 and the Piper Alpha oil-rig fire of 1988. Other high-hazard industries explored through safety-culture lenses included air traffic control systems, drug approval processes, and the application of pesticides. More generally, a central and much-discussed contribution was Ulrich Beck’s Risk Society (1992), written in the aftermath of Chernobyl, which explored the changing nature of risk and underlying anxieties about its management.

LSE responded to and helped to shape the risk regulation debate in the 1990s in several ways, including an interdisciplinary seminar on the handling of risk that led to a social science contribution to the Royal Society’s second publication on risk management in the early 1990s (Royal Society 1992); an interdisciplinary master’s programme on regulation (comprising elements of economics, law, sociology, and political science) that developed in the mid-1990s; and various research projects that led up to the formation of LSE’s interdisciplinary Centre for the Analysis of Risk and Regulation at the end of the decade. Those developments embodied at least three academic concerns that bore on the broader themes of risk regulation noted above:

  (a) The critiques of ‘classical’ regulation that emerged in economics and law in the 1980s (e.g., in the work of Judge Stephen Breyer (1982)), and ideas about alternative styles of regulation, in particular Ayres and Braithwaite’s (1992) ideas of ‘enforced self-regulation’ and ‘responsive regulation’, along with ‘management-based regulation’, as ways of surmounting the limitations of classical regulation. Those much-discussed ideas, reflected in the formal design of many regulatory systems, involved a combination of significant credible sanctions for repeat or extreme offenders together with the encouragement of regulated organisations to ‘own’ their own distinctive approaches to handling risk and hazard. The claim was that such a regulatory approach not only incentivised regulated organisations to move from compliance-seeking box-ticking to vigorous management of their own safety regimes but also made the iterative relationship with the regulator less adversarial and more cooperative, thereby reducing the regulatory challenges associated with information asymmetries and low-trust regulator–regulatee relationships.

  (b) The development of ideas about the social construction of risk and hazard that challenged the concept of risk as objectively calculable independently of social context (like measuring speed with a speedometer, in contrast to subjective estimates of speed). In the early 1990s, this ‘speedometer’ view of risk was still embraced in the engineering world, and it was linked with the idea that risk tolerability could be derived from observation of risks voluntarily undertaken by humans (e.g., in extreme sports or driving behaviour). The work of Douglas and Wildavsky (1982) and their followers on risk perception had presented an all-out challenge to that ‘objective’ view of risk in the early 1980s, and it was arguably that element that made the social science contribution to the 1992 Royal Society report on risk (orchestrated by LSE) so controversial to the Royal Society’s distinguished engineers that it was downgraded to a publication that was not an official report.

  (c) The related development of other new ideas about how organisations and societies handled risk, particularly in Michael Power’s ‘Audit Society’ work (1994, 1997), which offered an account of the social dynamics behind the rise of ‘audit’ as a dominant programmatic idea and set of technical practices for administrative control based on the ideas and practices of financial audit. By offering such a perspective on the ‘explosion’ of audit-based approaches to risk regulation developing at that time and by emphasising the likely negative consequences of audit-related ‘rituals of verification’, the work of Michael Power and his followers highlighted some of the possible unintended consequences of regulation and thereby provided a new angle on a classical theme in social science (Merton 1936) for the analysis of regulation.

2.3 Four Recurring Recipes for Limiting Regulatory Capture in High-Hazard Industries

None of those concerns or analytic approaches that animated LSE’s explorations of risk and regulation at the outset of its journey thirty years ago have wholly disappeared. The interest in the ‘audit explosion’ broadened into a concern with the rise of risk management (Power 2007, 2016), and there continues to be interest in (the construction of) technologies that seek to establish the risk appetite and enforcement strategies of regulators (‘risk-based regulation’, Baldwin and Black 2016) or to make risks ‘calculable’ (Mennicken and Espeland 2019), in processes of institutional risk ‘attenuation’ (Rothstein 2003), and in cross-sectoral and cross-national variation. Attention continued to be paid to the prerequisites and limitations of ‘responsive regulation’ and other models of enforced self-regulation. Further, there has been underlying continuity in the kinds of recipes on offer for handling or overcoming the dilemma of ‘distance’ versus ‘togetherness’ in regulators’ relationships with regulatees. Those recipes are:

  (a) ‘Techno-regulation’: This recipe rests on using physical and digital architecture to reduce opportunities for deviant or unsafe conduct and to supplement official rules, as in the case of medical equipment that can only be used in prescribed ways (e.g., single-use products or apparatus that cannot be disconnected). There is nothing new about the basic idea of fail-safe systems (a traditional example is the so-called dead man’s handle in electric trains, dating back to the late nineteenth century), and there were antecedents of what is now called ‘nudge’ (following the title of Thaler and Sunstein’s 2008 best-seller), the term used to denote the changes of behaviour that can be produced by careful framing of choices, for instance in IT architecture. But technological development since the 1990s has changed that techno-regulatory risk landscape, not only by creating new potential hazards but also by opening up new potential for using robots, algorithms (based on big data and machine learning), and other non-human elements in regulatory processes to check, supplement or even replace human discretion (see Yeung and Lodge 2019).

  (b) ‘Super-bureaucrats’: This recurring recipe aims to make regulatory bureaucracies better or smarter by making them less prone to regulatory capture or other common flaws associated with regulatory institutions. Thirty years ago, Breyer (1993) was calling for a ‘super-regulator’ to limit (what he saw as) the inconsistencies and reactive ‘tombstone’ quality of much risk regulation in the USA. That meta-regulation approach developed to some extent over the following three decades, in recurring efforts to create ‘better regulation’ frameworks, mainly through codes of conduct setting out procedural desiderata rather than through additional layers of regulatory oversight. More recently, the weaponisation of network industries as part of international economic warfare (a source of risk far less discussed thirty years ago) has dramatically changed the character of cyber-regulation. At the same time, building on a model established particularly for bioethics a generation ago, a new epistemic breed of ethics advisors has entered the world of risk regulation to anticipate and analyse likely future ethics issues, rather than developing standards for current practices, supplementing traditional econocratic and legal expertise.

  (c) ‘People power’: A third continuing recipe for countering producer capture in regulatory systems is to invoke lay community participation (such as citizens’ juries, town hall meetings, and similar processes) to assess regulatory standards and monitor regulatory behaviour. Back in the 1990s, Shrader-Frechette (1991) was just one of numerous advocates of using community input to challenge regulators over their handling of regulatees and to establish what risks were considered tolerable or not (e.g., in deciding when to apply the precautionary principle). Shrader-Frechette was writing at a time when the Internet hardly existed, let alone modern social media. In today’s digital age, the ‘people power’ approach she was advocating has greater potential and lower costs, but also new associated hazards. Indeed, variants of the people power approach have become part of the regulatory furniture since the 1990s. A prominent example was the use of citizen panels to deal with GM foods, both in the UK in the late 1990s and subsequently in other international settings (see Pimbert and Barry 2021). By the 2010s, the people power approach was being used in the form of ‘challenge panels’ to inform regulators’ decision-making and in regulatees’ use of ‘engagement panels’ (with firms negotiating directly with stakeholders over business plans before those plans go to regulators for approval) (Heims and Lodge 2018). Another variant has been the growing interest in ‘crowd-sourcing’ input through online means, whether by reducing the cost of providing input (Balla and Daniels 2007) or by establishing dedicated platforms (such as the UK ‘red tape challenge’, initially trialled between 2011 and 2014 with limited results and briefly revived in 2020; Lodge and Wegrich 2015).

  (d) ‘Strict liability’ and tort versus criminal law: A fourth recurring recipe for dealing with the dilemmas associated with regulator–regulatee relationships is based on the design of legal processes, notably rules of evidence relating to culpability and the use of criminal rather than civil law (with the consequent imposition of fines and penalties), to offset regulatory capture or similar producer-dominated behaviour. The imposition of strict liability on producers of defective or unsafe products or services (i.e., penalties and liability that do not require evidence of intention or mental state (mens rea) on the part of the risk producers) is a long-running issue in risk regulation. A related issue concerns the rules of evidence for proof of negligence, for example in the field of medical risks, where decisions of judges in some state supreme courts in the USA in the 1960s removed the necessity for testimony from other medical practitioners in proving medical negligence, thereby heralding a new era of ‘defensive medicine’ with its associated costs and benefits. Similar issues repeatedly arise over culpability in the handling of financial risk, for instance in efforts to impose strict liability on senior managers in financial firms for regulatory misconduct on the part of their subordinates. It is in this context, and arguably even more prominently in the field of competition law, that there has been a growing emphasis across jurisdictions on deterrence, linking individual accountability for wrongdoing to criminal sanctions rather than relying primarily on tort law or on sanctions against businesses seen as simply ‘costing in’ potential fines.

None of those four broad recipes have gone away thirty years later. Variants of each of them keep emerging, whether in the idea of criminalising actions previously regulated only by tort law, in new ‘fail-safe’ mechanisms based on technologies intended to complement if not replace human judgement, in calls for new super-regulators, or in new variations on the ‘people power’ theme.

2.4 From Mad Cows to Corona: So Where Are We Now?

We suggested earlier that LSE’s approach to risk regulation three decades ago was shaped by events such as the Chernobyl disaster and concerns about the regulation of food safety in view of ‘mad cow disease’. Thirty years later, concerns with nuclear risks and other disasters produced by corporate and regulatory failings are still central to risk regulation debates, particularly in the aftermath of the 2011 Fukushima disaster ('t Hart 2013), though risks associated with genomics have not (yet) attracted the attention that was anticipated three decades ago in connection with the sequencing of the human genome. A decade into LSE’s risk regulation journey, the overnight collapse of one of the largest corporations in the USA (Enron in 2001) highlighted the importance of financial risks emerging from accounting scandals. Similar themes emerged in the context of the German payment processor Wirecard in 2020. More generally, financial transactions have increasingly been defined as ‘high-hazard’ operations, especially following the bank collapses of the 2008 global financial crisis. In recent years another major new risk of concern has been the ‘weaponising’ of the cyber-world, together with broader concerns about the future development and deployment of artificial intelligence, which create interdependent large technical systems far beyond the network technologies of the past in telecommunications, power, or transport (Hughes 1983). And the COVID-19 pandemic brought other high-hazard processes and regulations into contention, for example in the interface between hospitals and care homes and the trade-off between healthcare system collapse and keeping basic supply lines open (Hood 2022).

Such developments suggest that new hazards for risk regulation will keep emerging, along with new adaptations of the recipes for handling the associated regulatory dilemmas.

Despite the tendency in the literature to scale new heights of hyperbole (for instance in phrases such as ‘mega-crises’ (Helsloot et al. 2012) and ‘super-wicked issues’ (Levin et al. 2012)), none of the issues that preoccupied LSE debates over risk and regulation three decades ago have altogether disappeared from view, and the same goes for the four recurring recipes for dealing with the regulatory capture/information asymmetry issues in the handling of risk. It is not so much that the debate has fossilised as that the basic recipes for dealing with regulatory capture have to be set into an ever-changing political and technological context. Part of that change in context relates to alterations in the ‘epistemic community’ of risk regulation scholars themselves. The stark social divide between the UK Royal Society’s engineers and social scientists over risk perception and management in the early 1990s is arguably much less prominent today, with much more acceptance of culturally constructed risk perceptions (Kahan 2012), although it has by no means completely disappeared. The geographic focus has shifted too: LSE’s debates of thirty years ago mainly concerned UK and US national regulation (e.g., in Hood et al.’s (2001) work on the institutional fragmentation of risk regulation regimes), with much less attention paid than today to transboundary coordination of risk issues and of national regulatory decisions (Cabane and Lodge 2022).

In conclusion, the dilemma between regulatory independence and the capacity to penetrate information asymmetry in handling high-hazard industries and processes seems unlikely to be resolved over the next thirty years. Nor are the four recurring recipes for coping with that dilemma likely to disappear. Rather, the challenge will be to develop and adapt those recipes to changing conditions, as new high-hazard industries and processes emerge and new opportunities develop for rebalancing political authority and regulatory expertise.