Artificial Intelligence: Designing a Legal Platform to Prevent and Resolve Legal Problems

  • Chapter
Liquid Legal

Part of the book series: Law for Professionals ((LP))

Abstract

Legal systems evolve procedures designed to resolve legal problems. Gradually, however, problems are molded to be well-suited to the procedures available to resolve them. Legal problems and procedures thus co-evolve, mutually adjusting as new sorts of problems challenge existing procedures, or as procedural innovations prompt us to see problems (and ultimately ourselves) in fresh ways. This ongoing relationship between legal procedures and legal problems may be at a historically significant moment, one calling for thoughtful social response and re-design of traditional legal platforms. Legal problems have become more difficult in the Information Age, and traditional legal procedures may not be keeping pace. The evolved procedures of legal systems are not designed to deal with the complexity, breadth, or volatility of modern problems. Traditional legal approaches should thus be strengthened through incorporating new problem-solving methods that offer greater sophistication and versatility. Artificial intelligence offers one set of potentially powerful tools to augment legal capabilities. This Chapter examines both the advantages and disadvantages of using AI to resolve legal disputes. On the positive side, AI may reveal patterns of human behavior that facilitate early interventions to prevent problems from arising, and increase the effectiveness of legal systems in coping with problems of great complexity. Yet AI creates its own risks. First, the historical databases on which AI algorithms are trained may perpetuate social discrimination. Second, AI’s immense pattern recognition and explanatory powers may, ironically, eventually divert legal and moral attention from social issues. Too many issues that now are addressed normatively by the law may be perceived as belonging to a realm of scientific or psychological explanation that is impervious to conscious social control or design. As a result, we may come to constrict both the range of legal problems and our ethical imagination.

Notes

  1.

    The processes of machine learning, data mining, and pattern recognition are intertwined to form what we here term “artificial intelligence” (Simon et al. 2018; Surden 2014).

  2.

    As Matthew Scherer observes:

    The sources of public risk that characterized the twentieth century-- such as nuclear technology, mass-produced consumer goods, industrial-scale pollution, and the production of large quantities of toxic substances--required substantial infrastructure investments. This simplified the regulatory process. The high cost of building the necessary facilities, purchasing the necessary equipment, and hiring the necessary labor meant that large corporations were the only nongovernmental entities capable of generating most sources of public risk. Moreover, the individuals responsible for installing, operating, and maintaining the infrastructure typically had to be at the physical site where the infrastructure was located. The physical visibility of the infrastructure-- and of the people needed to operate it--made it extremely unlikely that public risks could be generated clandestinely. Regulators thus had little difficulty determining the “who” and “where” of potential sources of public risk. (Scherer 2016) (citations omitted)

  3.

    According to Julie Cohen:

    Platforms--including online marketplaces, desktop and mobile computing environments, social networks, virtual labor exchanges, payment systems, trading systems, and many, many more--have become the sites of ever-increasing amounts of economic activity and also of ever-increasing amounts of social and cultural activity. The emergence of platform-based business models has reshaped work, finance, information transmission, entertainment, social interaction, and consumption of goods and services, and has destabilized the locally embedded systems that previously mediated those activities in many different types of communities. Legal and economic constructs based on the idea of “markets” --whether in goods and services or in speech and ideas--have yet to adapt in response. (Cohen 2017)

  4.

    Julie Cohen puts it slightly differently: “Law for the platform economy is already being written-- not via discrete, purposive changes, but rather via the ordinary, uncoordinated but self-interested efforts of information-economy participants and the lawyers and lobbyists they employ.” (Cohen 2017).

  5.

    According to Frank Pasquale:

    Flexibility is especially important for agencies regulating fast-moving fields. It will, of necessity, “break” both the brute-force prediction models and the expert systems models of devotees of artificial intelligence in law. That is a feature, not a bug, of judicial and agency discretion. Many past efforts to rationalize and algorithmatize the law have failed, for good reason: there is no way to fairly extrapolate the thought processes of some body of past decisionmaking to all new scenarios. For example, the introduction of a “grid” of preprogrammed factors in social security disability determinations could easily have been understood as a prelude to automation of such decisions. But very quickly forms of discretion started entering into the grid to do justice to the infinite variety of factual scenarios presented by sick and disabled claimants. (Pasquale 2019)

  6.

    Separately, Frank Pasquale and Jack Balkin focus accountability on the persons designing and applying AI, rather than on the algorithms. “In order for legal automation to truly respect rule of law principles, the adage ‘a rule of law, not of men’ must be complemented by a new commitment--to a ‘rule of persons, not machines.’ Without attributing algorithmic judgments and interpretations to particular persons and holding them responsible for explaining those judgments, legal automation will undermine basic principles of accountability.” (Pasquale 2019). As Balkin elaborates:

    In the Algorithmic Society, the central problem of regulation is not the algorithms, but the human beings who use them, and who allow themselves to be governed by them. Algorithmic governance is the governance of humans by humans using a particular technology of analysis and decision-making. … Why is the problem the humans, and not the robots? First, the humans design the algorithms, program them, connect them to databases, and set them loose. Second, the humans decide how to use the algorithms, when to use them, and for what purpose. Third, humans program the algorithms with data, whose selection, organization, and content contains the residue of earlier discriminations and injustices. Fourth, although people talk about what robots did or what AI agents did, or what algorithms did, this way of speaking misses an important point. These technologies mediate social relations between human beings and other human beings. Technology is embedded into--and often disguises--social relations. (Balkin 2017)

  7.

    Jack Balkin uses a “pollution” or environmental externality analogue in noting caution about enhanced AI analysis: “The Algorithmic Society increases the rapidity, scope, and pervasiveness of categorization, classification, and decision; in doing so it also increases the side effects of categorization, classification, and decision on human lives. These side effects are analogous to the increased levels of pollution caused by increased factory activity.” (Balkin 2017).

  8.

    This example was discussed at JURIX 2018, the 31st International Conference on Legal Knowledge and Information Systems, Groningen, Netherlands, Dec 12, 2018.

  9.

    This example was taken from Intraspexion software developed by Nick Brestoff, described at https://intraspexion.com/.

  10.

    “Machine learning can take place in a number of ways. These include ‘supervised learning,’ where the learning algorithm is given inputs and desired outputs with the goal of learning which rules lead to the desired outputs; ‘unsupervised learning,’ where the learning algorithm is left on its own to determine the relationships within a dataset; and ‘reinforcement learning,’ where the algorithm is provided feedback on its performance as it navigates a data set. Machine learning has been applied to better translate documents, to provide users with personalized content, and to make healthcare treatment predictions.” (Simon et al. 2018).
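
    The first two modes of learning described in this quotation can be made concrete in a few lines of code. The following is a minimal illustrative sketch (our own, not drawn from Simon et al.), using the scikit-learn library and invented toy data:

    ```python
    # Minimal sketch contrasting supervised and unsupervised learning.
    # All data here is invented purely for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Hypothetical inputs: two numeric features per case.
    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y = np.array([0, 0, 1, 1])  # desired outputs (labels), known in advance

    # Supervised learning: the algorithm sees inputs AND desired outputs,
    # and learns a rule mapping one to the other.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.85, 0.75]]))  # applies the learned rule: [1]

    # Unsupervised learning: the algorithm sees only inputs and is left
    # on its own to find structure (here, two clusters) in the data.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)  # cluster assignments discovered without any labels
    ```

    Reinforcement learning is omitted from the sketch because it requires an interactive environment that supplies performance feedback, which does not reduce to a few self-contained lines.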

  11.

    Concerning Barocas and Selbst’s final point about the difficulty of finding a doctrinal or institutional foundation for challenging the inaccuracy or unjust effects of an algorithm, see also Huq (2019) and Scherer (2016). Scherer concludes: “It does not appear that any existing scholarship examines AI regulation through the lens of institutional competence--that is, the issue of what type(s) of governmental institution would be best equipped to confront the unique challenges presented by the rise of AI.” [citations omitted]

  12.

    As analyzed by Andrew Selbst, “Crime mapping based on historical data can lead to more arrests for nuisance crimes in neighborhoods primarily populated by people of color. These effects are an artifact of the technology itself, and will likely occur even assuming good faith on the part of the police departments using it. Meanwhile, predictive policing is sold in part as a ‘neutral’ method to counteract unconscious biases when it is not simply sold to cash-strapped departments as a more cost-efficient way to do policing.” (Selbst 2017). Selbst also suggests at least a partial remedy: creation of a mandatory “algorithmic impact statement,” akin to an environmental impact statement (Selbst 2017).

  13.

    As Balkin describes it, data collection results in an aggregated “identity” for every person, meaning that initial errors may accumulate: “[P]eople’s identities--including the positive and negative characteristics attributed to them--are constructed and distributed through the interaction of many different databases, programs and decisionmaking algorithms. And in this way, people’s algorithmically constructed identities and reputations may spread widely and pervasively through society, increasing the power of algorithmic decision-making over their lives. As data becomes a common resource for decision-making, it constructs digital reputation, practical opportunity, and digital vulnerability.” (Balkin 2017).

  14.

    Barocas and Selbst (2016) describe the “training” process for an AI algorithm:

    What a model learns depends on the examples to which it has been exposed. The data that function as examples are known as “training data”--quite literally, the data that train the model to behave in a certain way. The character of the training data can have meaningful consequences for the lessons that data mining happens to learn. As computer science scholars explain, biased training data leads to discriminatory models. This can mean two rather different things, though: (1) if data mining treats cases in which prejudice has played some role as valid examples to learn from, that rule may simply reproduce the prejudice involved in these earlier cases; or (2) if data mining draws inferences from a biased sample of the population, any decision that rests on these inferences may systematically disadvantage those who are under-or overrepresented in the dataset. Both can affect the training data in ways that lead to discrimination, but the mechanisms--improper labeling of examples and biased data collections--are sufficiently distinct that they warrant separate treatment. (Barocas and Selbst 2016) [citations omitted]

    Difficulties in constructing training modules for the AI algorithms are also discussed in Pasquale and Cashwell (2018).
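
    The second mechanism Barocas and Selbst identify, inference from a biased sample, can be demonstrated numerically. The sketch below is our own illustration under invented assumptions (synthetic data and two hypothetical groups whose true patterns differ), not an implementation from any of the cited works:

    ```python
    # Minimal sketch of biased sampling: a model trained mostly on one
    # group performs systematically worse on the underrepresented group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Synthetic group whose true decision boundary sits at `shift`."""
        X = rng.normal(loc=shift, scale=1.0, size=(n, 1))
        y = (X[:, 0] > shift).astype(int)  # ground truth differs by group
        return X, y

    # Biased training sample: group A is vastly overrepresented.
    Xa, ya = make_group(1000, shift=0.0)  # group A: 1000 examples
    Xb, yb = make_group(20, shift=2.0)    # group B: only 20 examples
    model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                     np.hstack([ya, yb]))

    # Balanced evaluation: accuracy on group B is far lower, because the
    # learned rule reflects group A's pattern, not group B's.
    for name, shift in [("A", 0.0), ("B", 2.0)]:
        Xt, yt = make_group(500, shift)
        print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
    ```

    Note that no prejudice or bad faith enters anywhere in this toy pipeline; the disparate accuracy is produced entirely by the composition of the training data.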

  15.

    In environmental cost-benefit decision-making, Laurence Tribe calls this neglect of variables that are difficult to weigh, like aesthetics or the sense of natural majesty, “the dwarfing of soft variables” (Tribe 1973).

  16.

    The possibility of AI spawning social anomie, or sense of purposelessness, was suggested by Rachel Sirany, personal communication, June 10, 2019 at California Western School of Law, San Diego, California.

References

  • Balkin J (2016) Information fiduciaries and the first amendment. UC Davis Law Rev 49:1183–1234

    Google Scholar 

  • Balkin J (2017) 2016 Sidley Austin distinguished lecture on big data law and policy: the three laws of robotics in the age of big data. Ohio State Law J 78:1217–1241

    Google Scholar 

  • Barocas S, Selbst A (2016) Big data’s disparate impact. Calif Law Rev 104:671–732

    Google Scholar 

  • Barton TD (1983) Justiciability: a theory of judicial problem-solving. Boston Coll Law Rev 24:505–634

    Google Scholar 

  • Barton TD (1985) Common law and its substitutes: the allocation of social problems among alternative decisional institutions. N C Law Rev 63:519–534

    Google Scholar 

  • Barton TD (1999a) Therapeutic jurisprudence, preventive law, and creative problem solving: an essay on harnessing emotion and human connection. Psychol Public Policy Law 5:921–943

    Article  Google Scholar 

  • Barton TD (1999b) Law and science in the enlightenment and beyond. Soc Epistemol 13:99–112

    Article  Google Scholar 

  • Barton TD (2016) Re-designing law and lawyering for the information age. Notre Dame J Law Ethics Public Policy 30:1–36

    Google Scholar 

  • Berger-Walliser G, Barton TD, Haapio H (2017) From visualization to legal design: a collaborative and creative process. Am Bus Law J 54:347–392

    Article  Google Scholar 

  • Brennan-Marquez K, Henderson SE (2019) Artificial intelligence and role-reversible judgment. J Crim Law Criminol 109:137–164

    Google Scholar 

  • Calabresi G, Bobbitt P (1978) Tragic choices: the conflicts society confronts in the allocation of tragically scarce resources. Norton, New York

    Google Scholar 

  • Cohen JE (2017) Law for the platform economy. UC Davis Law Rev 51:133–203

    Google Scholar 

  • Frydlinger D, Cummins T, Vitasek K, Bergman J (2016) Unpacking relational contracts: the practitioner’s go-to guide for understanding relational contracts. International Association of Contracts and Commercial Management White Paper, www.IACCM.com

  • Fuller LL (1965) Irrigation and tyranny. Stanford Law Rev 17:1021–1042

    Article  Google Scholar 

  • Haapio H, de Rooy R, Barton TD (2018) New contract genres. IRIS 2018 Proceedings and Jusletter, Feb 2018

    Google Scholar 

  • Hadfield GK (2017) Rules for a flat world: why humans invented law and how to reinvent it for a complex global economy. Oxford University Press, New York

    Google Scholar 

  • Huq AZ (2019) Racial equity in algorithmic criminal justice. Duke Law J 68:1043–1134

    Google Scholar 

  • Lessig L (2001) The future of ideas. Random House, New York

    Google Scholar 

  • Pasquale F (2019) A rule of persons, not machines: the limits of legal automation. George Wash Law Rev 87:1–54

    Google Scholar 

  • Pasquale F, Cashwell G (2018) Prediction, persuasion, and the jurisprudence of behaviourism. Univ Tor Law J 68:63–81

    Article  Google Scholar 

  • Posner R (1981) The economics of justice. Harvard University Press, Cambridge, MA

    Google Scholar 

  • Scherer MU (2016) Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harv J Law Technol 29:353–400

    Google Scholar 

  • Selbst AD (2017) Disparate impact in big data policing. Ga Law Rev 52:109–195

    Google Scholar 

  • Simon M, Lindsay AF, Sosa L, Comparato P (2018) Lola v Skadden and the automation of the legal profession. Yale J Law Technol 20:234–310

    Google Scholar 

  • Solum LB (1992) Legal personhood for artificial intelligences. N C Law Rev 70:1231–1287

    Google Scholar 

  • Stevenson MT, Slobogin C (2018) Algorithmic risk assessments and the double-edged sword of youth. Wash Univ Law Rev 96:681–704

    Google Scholar 

  • Surden H (2014) Machine learning and law. Wash Law Rev 89:87–115

    Google Scholar 

  • Tribe L (1973) Technology assessment and the fourth discontinuity: the limits of instrumental rationality. South Calif Law Rev 46:617–660

    Google Scholar 

  • White JB (1994) Imagining the law. In: Sarat A, Kearns TR (eds) The rhetoric of law. University of Michigan, Ann Arbor

    Google Scholar 

Author information

Corresponding author

Correspondence to Thomas D. Barton.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Barton, T.D. (2020). Artificial Intelligence: Designing a Legal Platform to Prevent and Resolve Legal Problems. In: Jacob, K., Schindler, D., Strathausen, R. (eds) Liquid Legal. Law for Professionals. Springer, Cham. https://doi.org/10.1007/978-3-030-48266-4_4

  • DOI: https://doi.org/10.1007/978-3-030-48266-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48265-7

  • Online ISBN: 978-3-030-48266-4

  • eBook Packages: Law and Criminology (R0)
