In the U.S. civil justice system, the latest technologies bring both promise and potential pitfalls. Reform advocates hope that new tools can help make the system fairer and more affordable for people seeking legal assistance. But small mistakes can spark huge consequences.
Those risks are particularly acute with large language models, which serve as the basis for generative AI tools such as ChatGPT. Generative AI is already used in law offices, where it has transformed the work of legal support staff by allowing them to sift through thousands of documents in minutes. For example, one major international law firm, Allen & Overy, announced last year that it had partnered with the AI platform Harvey to streamline the firm’s research and document analysis work.
Critics, however, caution that these systems are embedded with bias from the human-created databases used to train them. They’re also prone to simply making things up. While large language models can understand and generate natural language, they can also fill search gaps with “hallucinated” information.
Just ask Michael Cohen, Donald Trump’s former attorney and fixer, who recently sent his lawyers AI-generated citations from legal cases that did not exist — a practice that is “always a bad idea,” quipped Supreme Court Chief Justice John Roberts in his year-end report on the state of the federal judiciary. In the report, Roberts acknowledged AI’s potential but also advised against its incautious use.
David Colarusso, co-director of the Legal Innovation and Technology Lab at Suffolk University Law School, points to Cohen as a cautionary tale about large language model AI, which generates answers by predicting the most probable words to follow a prompt. People get into trouble with general purpose chatbots, he said, “using them as if they were somehow some specialized research tool, when they aren’t.” Meanwhile, even those who think this technology holds great promise, including the American Bar Association, are calling for more transparency surrounding the black box of data used to train these systems. (People “want to see how the sausage was made,” Colarusso said.) That means using specialized software to analyze AI systems, ascertain how their data is processed, and detect any biases.
Experts watching this emerging technology believe that the time saved will lower attorneys’ billable hours and thus the cost of obtaining legal assistance, making the justice system more accessible.
For instance, instead of a junior lawyer laboriously searching through thousands of emails for relevant information, software can do the same job on a vastly quicker timescale. It’s “definitely more cost-effective,” Colarusso said. A commentary by John Villasenor, a senior fellow at the Brookings Institution, concurs. “Law firms that effectively leverage emerging AI technologies,” Villasenor writes, “will be able to offer services at lower cost, higher efficiency, and with higher odds of favorable outcomes in litigation.”
In eviction hearings, for example, the current inequality is glaring: Studies cited in the Fordham Law Review in 2010 showed that in many courts, the majority of landlords have legal representation, while only a small percentage of tenants do. In immigration court, fewer than 40 percent of immigrants have legal representation in removal proceedings. Debt collection lawsuits, which dominate civil court dockets, present an equally dismal picture. A 2020 report by the Pew Charitable Trusts found that in those cases, fewer than 10 percent of defendants had legal representation, leaving them less likely to settle or win their cases.
In the best-case scenario, technology could change this equation by making attorneys more affordable for low-income people, and even let lawyers offer pro bono work more readily.
The jury is still out on AI’s potential impact. In a 2021 paper in the Berkeley Technology Law Journal, researchers argued that machine learning could vastly reduce the time spent searching through court transcripts to examine judicial decision-making and identify potential bias in the courtroom. On the other hand, they suggested that AI could actually perpetuate discrimination in such analyses of the justice system if algorithms are trained on datasets that reflect bias, such as those overestimating recidivism rates among minority defendants. And early last year, the American Bar Association issued guidelines for use of the technology, urging that such systems have human oversight, that organizations be accountable for their use, and that system developers provide adequate transparency.
Meanwhile, some law schools are developing other tools that do not use AI to address gaps in access to legal assistance. They include the University of Arizona, which jointly hosts a program called Innovation for Justice with the University of Utah’s business school. The program, also known as i4J, has created a free online calculator to estimate the cost of providing emergency shelter, medical care, child welfare services, and other social services to evicted people — information that can then be accessed by policymakers. The program also developed a portal for the Nevada state courts where domestic violence victims can easily file for protective orders or find safe shelter. Such tools are sorely needed, as nearly 90 percent of victims do not have a lawyer guiding them through the legal system. In fine-tuning this approach, the i4J program works closely with courts and legal services to “understand the needs of the people who will be using this technology,” says director Stacy Butler, “and make sure that we onboard a technology that aligns with those needs.”
Therein lies the kicker: Will we be able to get this technology right — or at least good enough that it makes our justice system more navigable and affordable for everyone? Or will we simply accentuate, and perhaps accelerate, the inequality in that system? Only time will tell.