Over the past few years, the U.S. criminal justice system has become increasingly reliant on artificial intelligence and computer algorithms to predict a defendant’s likelihood of recidivism. Because these technologies are scientifically derived, they must be impartial and accurate, right?
A 2016 investigation by ProPublica found that an algorithm used in the U.S. to influence prison sentencing was racially biased, predicting that black defendants pose a higher risk of recidivism than they actually do.
During his time in office, U.S. Attorney General Eric Holder voiced concerns about these technologies to the U.S. Sentencing Commission, stressing that “we need to make sure the use of aggregate data analysis won’t have unintended consequences.”
“Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” Holder said.
Now, at a time when government agencies and the private sector continue pouring money into AI development, this example highlights that while we might expect technology to be immune to our innate human biases, the reality is far more complicated.
According to experts, users should not assume that AI systems are fair or free of bias, especially when their algorithms are trained on human-created datasets.
“The point that I always try to make is you have to look under the hood, so to speak,” Ann Cavoukian, a professor at Ryerson University and the former information and privacy commissioner for the Province of Ontario, told The Globe Post. “You have to investigate the algorithms being used, and more importantly, what is framing those algorithms in terms of the training datasets being used and the potential for bias inherent in them.”
Because AI algorithms are designed to perceive patterns in human decision making, they can pick up the implicit biases of their creators and of the data they are trained on.
“There’s the phrase ‘bias in, bias out,’” Sarah Riley, a PhD student in information science at Cornell University, told The Globe Post. “[AI algorithms] are ultimately designed to perceive patterns in human decision making, so a system trained on biased data will yield biased decisions.”
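To make the “bias in, bias out” dynamic concrete, here is a minimal sketch, assuming entirely synthetic data and a hypothetical group attribute rather than any real system: a simple classifier trained on deliberately skewed historical labels ends up scoring two groups differently even though their underlying behavior is identical.

```python
# A toy illustration of "bias in, bias out" on synthetic data.
# Everything here is hypothetical: the group attribute, thresholds, and
# numbers are invented for the example and do not describe any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying behavior.
group = rng.integers(0, 2, n)      # a sensitive attribute: 0 or 1
true_risk = rng.random(n)          # the same distribution for both groups

# Biased historical labels: group 1 was flagged more often at the same true risk.
past_label = (true_risk + 0.15 * group + rng.normal(0, 0.1, n)) > 0.6

# A model trained on those labels reproduces the skew it was shown.
X = np.column_stack([true_risk, group])
model = LogisticRegression().fit(X, past_label)

# Identical true risk, different predicted risk depending on group.
for g in (0, 1):
    score = model.predict_proba([[0.5, g]])[0, 1]
    print(f"group {g}: predicted risk at the same true risk = {score:.2f}")
```

In this sketch the disparity comes entirely from the labels the model learned from, which is the point Riley makes: the system has no way to distinguish genuine patterns from the biases baked into its training data.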
The criminal justice system is not the only realm in which the implementation of these algorithms has backfired, creating tension between government agencies, technology companies, and directly affected citizens.
Microsoft’s 2016 attempt at using artificial intelligence to engage with millennials on Twitter went awry after Tay, its chatbot, began spewing anti-Semitic and racist comments at users.
A LinkedIn search feature also made headlines after it suggested male names over those of potential female candidates, since its suggestions were automatically generated from the patterns of past searches.
Considering that the implementation of many of these AI algorithms is fairly new, experts identified a variety of factors that currently enable these discriminatory practices.
A lack of laws designed specifically to protect against discrimination arising from big data and machine learning is a problem, experts agreed. Because “laws never keep up with tech,” Cavoukian said, individuals currently have few opportunities to challenge decisions based on algorithmic determinations that they feel shouldn’t apply to them.
Darrell West, vice president and director of governance studies at the Brookings Institution, also argued that a lack of repercussions for discriminatory conduct allows misconduct to occur.
“Many internet platforms are exempt from discrimination because they are not liable for what people do on their sites,” West told The Globe Post. “That encourages discrimination because there are no consequences to bad behavior.”
A misunderstanding of privacy rights also makes it challenging for citizens to understand how these algorithms work and how personal data feeds into their operation.
“Privacy is all about control. It’s about personal control relating to the uses of your personal information, how it’s used and to whom it’s disclosed,” Cavoukian said. “Under the existing system of AI, you not only lack all control, but you lack an ability to understand how the decisions were made because of the lack of transparency and accountability.”
For many citizens, the idea of engaging with AI algorithms and code can also feel intimidating and foreign. Riley said many people may be uncomfortable having conversations about AI algorithms because of their mathematical and esoteric nature.
“People are busy and they have stuff to do, and a lot of this is happening in very opaque ways,” Riley said. “It’s not clear how these systems are influencing people’s lives and the transaction cost of figuring out exactly what’s happening and how these systems are operating is really, really high.”
Looking ahead, researchers and computer scientists face the challenge of creating cutting-edge technology that does not reproduce decades-old patterns of institutional bias and discrimination.
With this difficult challenge in mind, experts proposed a variety of strategies to increase accountability and decrease discrimination as AI algorithms spread.
Approaching the problem from a legal framework, West encouraged citizens to “push for the adoption of anti-discrimination laws so that biased behavior is prohibited” and companies face greater accountability for their actions.
Cavoukian urged citizens who are concerned about these discriminatory algorithms to align themselves with advocates and experts in artificial intelligence to break down convoluted algorithms and put them into layman’s terms.
“This stuff doesn’t appear in English, as you know. It’s like a foreign language to us,” Cavoukian said. “You’ve got to get someone who can look under the hood, because absent that, it’s not going to mean anything to you.”
Riley said the conversation should shift away from algorithm design and toward data procurement, training and implementation. She also called for a broader public awareness campaign so citizens understand how pervasive AI algorithms are becoming.
“Once there’s a fallout I think more people will feel like they have skin in the game, but in a lot of ways, the problem seems very remote,” Riley said. “I think the first step is just making people aware of how many of their decisions are being made increasingly by machines rather than by humans.”