Designing Legal Systems for an Algorithm Society

Liquid Legal – Humanization and the Law

Part of the book series: Law for Professionals (LP)

Abstract

The Information Age challenges the competency and cultural acceptability of traditional legal systems. Reciprocally, some AI methods are themselves in need of public oversight and stronger social trust. Legal systems and information technology are, however, positioned to help each other; each realm has specific tools to address the problems faced by the other. If properly integrated, their partnership can enhance the law’s efficacy as well as reduce AI’s potential dangers.

This work reaches what may seem a paradoxical conclusion: that the strongest contribution to humanizing the law may be to redirect some attention away from humans and their individual choices. Doing so encourages a legal/AI framework that focuses less on individual choices or their marketplace aggregation, and more on the environments in which people actually live, the behaviors or outcomes produced inside those settings, and the measure of those outcomes against explicitly stated justice goals. In an increasingly algorithmic society, AI tools should enable stronger legal problem-solving; but ex ante and post hoc legal oversight should also ensure algorithmic outcomes that are more just as well as more accurate.

This Chapter reflects on what the tools of AI, combined with the law, may offer: a process of understanding social environments more realistically, designing them more deliberatively, and seeking defined ends for a differently imagined humanity.

Tom Barton is Professor Emeritus of law at California Western School of Law in San Diego, California.



Notes

  1.

    As Balkin puts it, “behind the robots, AI agents, and algorithms are social relations between human beings and groups of human beings. So the laws we need are obligations of fair dealing, nonmanipulation, and nondomination between those who make and use the algorithms and those who are governed by them.” (Balkin 2017)

  2.

    “The Algorithmic Society is a way of governing populations. By governance, I mean the way that people who control algorithms analyze, control, direct, order, and shape the people who are the subjects of the data.” (Balkin 2017)

  3.

    As legal philosopher Lon Fuller wrote a generation ago, whenever the evolution of problem-solving methods cannot keep pace with the sophistication of social problems, people will turn to tyranny (Fuller 1965). In modern times, the tyrant may take the form not of a human despot, but of a social self-blinding and abdication to unconstrained, unaccountable algorithmic decisions.

  4.

    These principles are associated with liberalism—the political and economic principles that grew out of Enlightenment thought and that overlap with the equality and autonomy ideas explored here. Lest it be misunderstood, I pursue the inquiry of this Chapter out of deep concern for the well-being of liberalism, if not its survival. Western heritage ideas and institutions for social governance based on atomistic individualism and invisible market hands are under serious stress. Illiberal forces undermining free expression and democracy are spreading and growing more flagrant in their disregard for Western values. The goals here are to enhance the viability of legal institutions and community respect for the rule of law, and to offer fresh outlooks on fundamental liberal ideas that better accord with contemporary culture and standards of justice. A true liberal, valuing the inherent dignity and rights of all human beings, need not be fiercely libertarian or disdain the distribution of outcomes. Far from it: a committed liberal can be deeply concerned for the genuineness of social opportunities and mobility, and work toward guarantees of universal minimal levels of material well-being as well as human rights. It is toward those ends that I suggest reform of liberal ideas.

  5.

    M. Ethan Katsh highlights crucial connections between legal systems and information:

    Law can be looked at in many ways, but in every incarnation, information is a central component. As one lawyer recently wrote, “from the moment we lawyers enter our offices, until we turn off the lights at night, we deal with information.” Information is the fundamental building block that is present and is the focus of attention at almost every stage of the legal process. Legal doctrine, for example, is information that is stored, organized, processed, and transmitted. Legal judgments are actions that involve obtaining information, evaluating it, storing it, and communicating it. Lawyers have expertise in and have control over a body of legal information. … Indeed, one way of understanding the legal process is to view information as being at its core and to see much of the work of participants as involving communication. In this process, information is always moving—from client to lawyer, from lawyer to jury, from judge to the public, from the public to the government, and so on. (Katsh 1995).

  6.

    Although I develop the analogy between the Industrial Revolution and the Information Age, we might have looked to an earlier example suggested by M. Ethan Katsh. The invention of the printing press enabled a systematization of information which spawned dramatic changes in legal systems. Katsh summarizes:

    The embodiment of law in printed form did not simply replace law in [handwritten] form. Rather, it replaced a system of dispute resolution that had often involved written law and oral tradition working together. Printed law, therefore, emphasized decision making according to rules more than some previous systems because attention was focused on rules in a way that had not occurred earlier. In this respect, it is important to recognize that print also brought about a narrowing of the judge’s focus. Judges in earlier periods had been able at times to escape rule-oriented decisions because their attention was not bound to a printed or even a written text. They had options that were not available to later generation of judges whose decision had to be made “by the book.” (Katsh 1989)

  7.

    “Kant held that free will is essential; human dignity is related to human agency, the ability of humans to choose their own actions.” https://en.wikipedia.org/wiki/Dignity#Kant.

  8.

    “The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead.” (Kroll et al. 2017).

  9.

    Hannah Bloch-Wehba, for example, lists a variety of current governmental uses of algorithms but adds a cautionary note:

    Government decision-making is increasingly automated. Cities use machine-learning algorithms to track gunshots, determine where to send police on patrol, and fire ineffective teachers. State agencies use algorithms to predict criminal behavior, interpret DNA evidence, and allocate Medicaid benefits. Courts decide, using “decision-support” tools, whether a suspect poses a risk, eligibility for pretrial release, and how harsh a sentence to impose. The federal government uses algorithms to put individuals on immigrant and terrorist watchlists, make policy decisions about whether and how to change Social Security, and catch tax evaders. … But increasing automation may also make government less participatory and open to public oversight and input. (Bloch-Wehba 2020) (citations omitted).

  10.

    “[T]ransparency may be undesirable because it defeats the legitimate protection of consumer data, commercial proprietary information, or trade secrets.” (Kroll et al. 2017) Furthermore, such disclosure is not necessarily required for adequate accountability: “Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process.” Id.

  11.

    Addressing risks while environments are still shapeable—to prevent risks turning into actual injuries or other more serious problems—has long been the domain of “Preventive Law” or “Proactive Law” as it is known in Europe. See, e.g., Brown (1950), Barton (2009) and Berger-Walliser (2012). AI-enabled predictions could greatly assist in discerning risks and enabling early interventions to avert problems as they become more foreseeable.

  12.

    Examples include Law Geex (www.lawgeex.com/platform) and Legal Sifter (www.legalsifter.com).

  13.

    Sean Semmler and Zeeve Rose describe “Beagle,” one such software development effort:

    Beagle is an A.I. tool for contract review that is primarily targeted at non-lawyers. Beagle is designed for users who need to review and manage contracts, but lack the expertise to do it themselves or the money to hire an attorney. First, users upload their contracts to the platform. Then, the natural language processing system identifies key clauses for review. This is done by identifying which clauses are used most often (for the type of contract at hand) and analyzing how this contract deviates from the norm. It also has a built-in communication system where users can interact with each other and discuss their documents. In addition to the system’s ability to learn from the individuals who use the tool, the system is able to learn the personal preferences of different users, and incorporate those preferences in future documents. (Semmler and Rose 2017)
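    A toy sketch can make the frequency-and-deviation idea concrete. Everything below is invented for illustration; the clause labels, the mini-corpus, and the set-difference heuristic are assumptions, not Beagle's actual implementation:

```python
from collections import Counter

def norm_clauses(corpus, min_share=0.5):
    # Clauses appearing in at least `min_share` of contracts of this type
    # form the "norm" against which a new contract is compared.
    counts = Counter(clause for contract in corpus for clause in set(contract))
    return {clause for clause, c in counts.items() if c / len(corpus) >= min_share}

def review(contract, corpus):
    # Compare one contract against the norm for its type: report common
    # clauses it lacks, and clauses that are unusual for this contract type.
    norm = norm_clauses(corpus)
    present = set(contract)
    return {
        "missing_expected": sorted(norm - present),
        "unusual_additions": sorted(present - norm),
    }

# Invented mini-corpus of clause labels for one contract type.
corpus = [
    ["parties", "term", "payment", "confidentiality"],
    ["parties", "term", "payment", "governing-law"],
    ["parties", "term", "payment", "confidentiality", "governing-law"],
]

print(review(["parties", "payment", "arbitration"], corpus))
```

    Flagging both directions, expected clauses that are absent and additions that are rare for the contract type, mirrors the "deviates from the norm" review described above.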

  14.

    Concerning regulation, David Freeman Engstrom and Daniel E. Ho capture both the challenge and the potential:

    First, the new algorithmic enforcement tools hold important implications for the accountability of agency enforcement activities. It remains unclear whether the new tools will degrade or enhance legal and political accountability relative to the status quo. On the one hand, the technical opacity and “black box” nature of the more sophisticated AI-based tools may erode overall accountability by rendering agency enforcement decisions even more inscrutable than the human judgments of dispersed agency enforcement staff. But the opposite might also prove true: formalizing and making explicit agency priorities could render an agency’s enforcement decision making relatively more tractable compared to pools of agency enforcement staff. (Engstrom and Ho 2020)

  15.

    “AI causes unpredictable injuries because of its ability to exhibit surprising behavior, also known as emergent behavior, which is itself a product of AI’s aforementioned ability to learn rules from training data autonomously. … As AI grows even more sophisticated, it will even become difficult to fix or understand AI behavior ex post facto.” (Yoshikawa 2019) (citations omitted).

  16.

    Yoshikawa, for example, details legal clumsiness and inadequacy in addressing injuries based on doctrines of tortious design; tortious use; strict liability for design defects; and public regulation applied to specific industries or generalized AI application (the author calls for no-fault social insurance against AI injuries) (Yoshikawa 2019). Balkin, however, holds out more hope for the application by analogy of nuisance, fiduciary concepts, and pollution control (Balkin 2017).

  17.

    Effective “near neighbor analysis,” say Lau and Biedermann, should address two distinct algorithmic determinations: “identification,” which pertains to whether the algorithm has built proper classifications; and “individuation,” which pertains to whether it has fit particular data properly within those categories. They write: “Underlying identification is the assumption of likeness, meaning that objects can be categorized together based on the existence of a common set of properties. In contrast, underlying individualization is the assumption of discernible and ascertainable uniqueness.” (Lau and Biedermann 2020)
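    The two determinations can be illustrated with a minimal nearest-neighbor sketch. The feature vectors, risk labels, and Euclidean metric below are invented for illustration; they are not drawn from Lau and Biedermann:

```python
from collections import Counter
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbors(query, labeled_points, k=3):
    # "Individuation": fitting one particular item within the categories,
    # by ranking labeled examples by closeness to the query.
    return sorted(labeled_points, key=lambda p: euclidean(query, p[0]))[:k]

def classify(query, labeled_points, k=3):
    # Majority vote among the k nearest neighbors.
    votes = Counter(label for _, label in nearest_neighbors(query, labeled_points, k))
    return votes.most_common(1)[0][0]

# "Identification": the categories themselves. Here, two invented risk
# classes over made-up features (e.g., prior record, offense severity).
training = [
    ((0.1, 0.2), "low-risk"),
    ((0.2, 0.1), "low-risk"),
    ((0.3, 0.3), "low-risk"),
    ((0.8, 0.9), "high-risk"),
    ((0.9, 0.7), "high-risk"),
    ((0.7, 0.8), "high-risk"),
]

print(classify((0.25, 0.20), training))  # close to the low-risk cluster
print(classify((0.85, 0.80), training))  # close to the high-risk cluster
```

    Whether the labels rest on a sound "common set of properties" is the identification question; whether a given query genuinely belongs among its nearest neighbors is the individuation question.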

  18.

    S.C. Code Ann. §§ 14-11-10, 14-11-15 (Law. Co-op. Supp. 1992), as amended, “Establishment of Master-in-Equity Court.” Corpus Juris Secundum, CJS EQUITY § 41: “A state constitution may also be a source of a particular court’s equity powers, in which case the courts’ equitable authority cannot be diminished by statute.”

  19.

    “[Adam Smith’s] Theory Of Moral Sentiments was a real scientific breakthrough. It shows that our moral ideas and actions are a product of our very nature as social creatures. It argues that this social psychology is a better guide to moral action than is reason. It identifies the basic rules of prudence and justice that are needed for society to survive, and explains the additional, beneficent, actions that enable it to flourish.” https://www.adamsmith.org/the-theory-of-moral-sentiments.

References

  • Balkin J (2017) 2016 Sidley Austin distinguished lecture on big data law and policy: the three laws of robotics in the age of big data. Ohio State Law J 78:1217–1241

  • Barton TD (2009) Preventive law and problem solving: lawyering for the future. Vandeplas, Lake Mary, Florida

  • Barton TD (2016) Re-designing law and lawyering for the information age. Notre Dame J Law Ethics Pub Policy 30:1–36

  • Barton TD (2020) Artificial intelligence: designing a legal platform to prevent and resolve legal problems. In: Jacob K, Schindler D, Strathausen R (eds) Liquid legal: toward a common platform. Liquid Legal Institute, Springer, Switzerland

  • Berger-Walliser G (2012) The past and future of proactive law: an overview of the development of the proactive law movement. In: Berger-Walliser G, Ostergaard K (eds) Proactive law in a business environment. DJCF, Sweden

  • Bloch-Wehba H (2020) Access to algorithms. Fordham Law Rev 88:1265–1314

  • Brown LM (1950) Preventive law. Greenwood, Westport, Conn

  • Burk DL (2021) Algorithmic legal metrics. Notre Dame Law Rev 96:1147–1203

  • Dignity. https://en.wikipedia.org/wiki/Dignity#Kant

  • Engstrom DF, Ho DE (2020) Algorithmic accountability in the administrative state. Yale J Regul 37:800–854

  • Froomkin AM (2015) Regulating mass surveillance as privacy pollution: learning from environmental impact statements. Univ Ill Law Rev 2015:1713–1790

  • Fuller LL (1965) Irrigation and tyranny. Stanford Law Rev 17:1021–1042

  • Hadfield GK (2017) Rules for a flat world: why humans invented law and how to reinvent it for a complex global economy. Oxford University Press

  • Harari YN (2015) Sapiens: a brief history of humankind. HarperCollins, New York

  • Katsh ME (1989) The electronic media and the transformation of law. Oxford University Press, Oxford

  • Katsh ME (1995) Law in a digital world. Oxford University Press, Oxford

  • Kim NS (2019) Consentability: consent and its limits. Cambridge University Press, Cambridge

  • Kroll J (2018) The fallacy of inscrutability. Philos Trans R Soc 376:20180084

  • Kroll J, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2017) Accountable algorithms. Univ Pa Law Rev 165:633–705

  • Lau T, Biedermann E (2020) Assessing AI output in legal decision-making with nearest neighbors. Penn State Law Rev 124:609–653

  • Law Geex. www.lawgeex.com/platform

  • Legal Sifter. www.legalsifter.com

  • Lemley M, Casey B (2019) Remedies for robots. Univ Chic Law Rev 86:1311–1396

  • Lessig L (2001) The future of ideas: the fate of the commons in a connected world. Random House, New York

  • Pagallo U (2018) Apples, oranges, robots: four misunderstandings in today’s debate on the legal status of AI systems. Philos Trans R Soc 376:20180168

  • Radavoi CN (2020) The impact of artificial intelligence on freedom, rationality, the rule of law and democracy: should we not be debating it? Tex J Civ Lib Civ Rights 25:107–129

  • Semmler S, Rose Z (2017) Artificial intelligence: application today and implications tomorrow. Duke Law Technol Rev 16:85–98

  • Skinner BF (1972) Beyond freedom and dignity. Bantam, New York

  • Strathausen R (2020) Call for papers. In: Jacob K, Schindler D, Strathausen R (eds) Humanization and the law: a call to action for the digital age. Liquid Legal Institute, Springer, Switzerland

  • White JB (1994) Imagining the law. In: Sarat A, Kearns TR (eds) The rhetoric of law. University of Michigan, Ann Arbor

  • Winfield AFT, Jirotka M (2018) Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philos Trans R Soc 376:20180085

  • Yoshikawa J (2019) Sharing the costs of artificial intelligence: universal no-fault social insurance for personal injuries. Vanderbilt J Entertain Technol Law 21:1155–1187


Author information

Correspondence to Thomas D. Barton.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Barton, T.D. (2022). Designing Legal Systems for an Algorithm Society. In: Jacob, K., Schindler, D., Strathausen, R., Waltl, B. (eds) Liquid Legal – Humanization and the Law. Law for Professionals. Springer, Cham. https://doi.org/10.1007/978-3-031-14240-6_5


  • DOI: https://doi.org/10.1007/978-3-031-14240-6_5


  • Print ISBN: 978-3-031-14239-0

  • Online ISBN: 978-3-031-14240-6

