Do AI Algorithms Have a Place in Law?
June 22, 2020 - 8 minute read

The development of artificial intelligence (AI) has opened up numerous possibilities. But should they all come to fruition? Around the world, algorithms are making probation decisions and predicting whether a person will commit a crime. Opponents of this use of AI are calling for more human oversight.
How Predictive Algorithms Affect the Probation Experience in Philadelphia
To see AI at work in legal systems, you don’t have to visit tech hubs like San Francisco or Beijing; these algorithms are already more widespread than most people would assume.
Darnell Gates is currently on probation in Philadelphia. Before being released from jail in 2018, he had served time for driving a car into a house and, later, for threatening his former domestic partner with violence. After an algorithm deemed him “high risk,” he was ordered to visit a probation office once a week. That requirement eventually stretched to every two weeks, then to once a month.
During all of this time, Mr. Gates never realized the monumental role that AI played in his rehabilitation — until The New York Times told him about it in an interview. His response? “You mean to tell me I’m dealing with all this because of a computer?”
But Gates certainly isn’t alone in his predicament. Created by a professor at the University of Pennsylvania, this algorithm has been shaping the experiences of probationers in Philadelphia for more than five years.
Is Automating Life-Altering Decisions Right?
The Philadelphia probation algorithm is just one of many making life-changing decisions about people in the US and Europe. And authorities are leveraging these predictive algorithms for more than probation rules; they’re also being used to set prison sentences and to direct police patrols.
In Britain, an algorithm is being used to rate which teenagers could potentially become criminals. In the Netherlands, one is flagging welfare fraud risks. Berlin-based watchdog AlgorithmWatch has identified similar use cases in 16 European countries. In the US, the practice is even more widespread.
Per the Electronic Privacy Information Center, almost every American state is employing some form of legal-governance algorithm. As the practice proliferates, United Nations investigators, lawyers, and affected communities are becoming more outraged by this growing dependence on automation for law and order. Why? Because they believe it’s stripping transparency from legal processes.
It’s not exactly clear how each system makes its choices. Are they based on age? Gender? Race? That’s difficult to say; many countries and states don’t require algorithm creators to disclose their formulas. Unsurprisingly, opponents of this use of automation worry that biases are being baked into the decision-making process.
Ideally, these algorithms would cut government costs, reduce burdens on understaffed agencies, and eliminate human bias. But opponents believe that governments aren’t showing much interest in that last goal; a recent UN report warns that governments risk “stumbling zombie-like into a digital-welfare dystopia.”
A Black Box That Eliminates Bias or Promotes It?
At its most basic level, a predictive algorithm uses historical data and statistical techniques to estimate the probability of a future event. Thanks to advances in computing power and the explosion of available data, these models are now more powerful, and more widely deployed, than ever.
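To make that concrete, here’s a minimal sketch of how such a risk model works, assuming a simple logistic-regression classifier and entirely made-up features and records. The actual formulas used in Philadelphia and elsewhere have not been publicly disclosed, and real systems typically use more complex methods:

```python
# Minimal sketch of a predictive risk model: logistic regression trained on
# historical records to estimate the probability of a future event.
# The features and labels below are entirely hypothetical -- this is NOT the
# Philadelphia model, whose formula has not been made public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age at release, number of prior offenses]
X_train = np.array([
    [22, 4], [45, 0], [31, 2], [19, 5], [52, 1], [28, 3], [38, 0], [24, 6],
])
# 1 = reoffended within two years, 0 = did not (made-up labels)
y_train = np.array([1, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Scoring a new individual: the model returns a probability, and a policy
# threshold turns that probability into a "high risk" / "low risk" label.
new_person = np.array([[30, 2]])
risk = model.predict_proba(new_person)[0, 1]
label = "high risk" if risk >= 0.5 else "low risk"
print(f"Predicted risk: {risk:.2f} -> {label}")
```

Note that the consequential choice here isn’t only the model itself but the threshold: where the line between “high risk” and “low risk” is drawn is a policy decision, not a statistical one.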
The private sector uses these tools constantly. Whether it’s predicting an individual’s likelihood of getting sick, causing a car accident, defaulting on a loan, or clicking an internet ad, algorithms are employed everywhere these days. Sitting on a vast mountain of data about the public, governments are unsurprisingly eager to use them as well.
But back in Pennsylvania, implementation is proving more troublesome and complex than officials initially expected. The state has mandated the development of an algorithm to aid courts in deciding sentences after someone is convicted.
Todd Stephens, one of its state representatives, is part of the commission working to make this happen. He explains, “We walked into a hornet’s nest I didn’t even know existed.”
The commission’s original proposal had the algorithm leaning heavily on data provided by local county probation departments. But many community groups and the American Civil Liberties Union protested the plan, fearing it would extend predictive algorithms’ reach in harmful ways. In response, the commission opted for a simpler implementation based on software already used in the state’s courts.
Unfortunately, even if the government disclosed how the algorithm arrives at its decisions, the math behind it is too complex for a layperson to readily comprehend. Many of the algorithms used by the Philadelphia criminal justice system were created by Richard Berk, Professor of Statistics and Criminology at the University of Pennsylvania.
There’s no denying that the algorithm would be hard for a layperson to understand. But Dr. Berk argues that human judgment suffers from the same problem: “All machine learning algorithms are black boxes, but the human brain is also a black box. If a judge decides they are going to put you away for 20 years, that is a black box.”
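The “black box” point can be made concrete with another hypothetical sketch. Ensemble methods such as random forests (a technique Dr. Berk has used in his published forecasting work) produce a risk score by aggregating hundreds of decision trees, so no single human-readable rule explains any given score. The data below is the same made-up set as before:

```python
# Sketch of why such models are called "black boxes": a random forest
# aggregates many decision trees, so the output is just a number with no
# single human-readable rationale behind it. Data is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.array([
    [22, 4], [45, 0], [31, 2], [19, 5], [52, 1], [28, 3], [38, 0], [24, 6],
])
y_train = np.array([1, 0, 1, 1, 0, 1, 0, 1])

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# The score is an average over 500 trees; no one tree "explains" the result.
print(forest.predict_proba(np.array([[30, 2]]))[0, 1])
print(len(forest.estimators_))  # 500 individual trees inside the ensemble
```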
Fleeting Controversy or a Problem That’s Here to Stay?
Dr. Berk believes that controversy around legal predictive algorithms will fade as their usage becomes more widespread. He sees them as akin to the algorithms used in commercial airliners’ automatic piloting systems. “Automatic pilot is an algorithm,” he explains. “We have learned that automatic pilot is reliable, more reliable than an individual human pilot. The same is going to happen here.”
Of course, it will take more than that to convince people whose futures are at stake, such as Mr. Gates: “I can’t explain my situation to a computer. But I can sit here and interact with you, and you can see my expressions and what I am going through.”
What do you think of these predictive algorithms? Do they have a place in law? Or should they be eschewed in favor of a more human touch? As always, let us know your thoughts in the comments below!