The reasoning frameworks of artificial intelligence used in criminal justice and health care systems prompt us to rethink how AI can be built to help foster an equitable society.

We interact daily with algorithms that over time predict and inform our actions. Spam filters in e-mail and real-time mapping technologies on our cell phones are two examples of technologies that learn from our actions and add convenience to our lives.

However, algorithms and predictive models in health care and criminal justice can change the trajectory of someone's life.

WATCH: Ethics, removing bias among the biggest AI trends for the 2020s

In the criminal justice system, if an algorithm introduces bias, it can lead to wrongful arrests and convictions or delayed parole. And in health care, AI could change how providers recommend a certain plan of care for patients. 

Criminologist Renée Cummings is the historic first Data Activist in Residence at the School of Data Science at the University of Virginia, and Hiwot Tesfaye is a senior data scientist at SAS. Both are working in their respective fields to address some of these issues.

Solving AI biases in criminal justice with 'algorithmic authenticity'

With an educational background in criminal justice, criminal psychology, therapeutic jurisprudence, substance abuse treatment, rehabilitation, and terrorism studies, Cummings says her work has always been about how the mind works and how that knowledge might generally reduce criminal behavior.

Listen to 'Humanity in AI' with Renée Cummings

Early in her criminal justice career, Cummings came across assessment tools used by corrections departments to assist in sentencing and parole decisions. The systems led Cummings to ask a lot of questions: What data is being used? Who created these tools? How did they determine their recommendations?

“That was my entrée into artificial intelligence, looking at algorithmic decision-making systems, whether or not they were fair, whether or not they were accountable,” explains Cummings. “They certainly weren't transparent at the moment, and they were presenting major challenges and really frustrating due process.”

Cummings uses the term “algorithmic authenticity” to describe the mindful process of acknowledging where biases exist and exploring the principles of equity, diversity and inclusion when developing creative technology solutions. She uses this process to evaluate and improve the ways big data and AI are used in policing.

“We've got to look at debiasing the data and debiasing the mind,” says Cummings. “So, it's a combination of a technological approach and a thinking approach. And if we were to get both of those really aligned, then we will definitely see the kinds of systems that will create the kinds of legacies that we could be proud of.”

Amid privacy laws, AI can still influence decisions within health care

While AI and machine learning algorithms in criminal justice may disproportionately and wrongly flag certain racial and ethnic groups, similar technologies in health care may fail to represent the attributes of an entire population.

In fact, research published in the Proceedings of the National Academy of Sciences of the US found that algorithms trained primarily on male X-ray data are worse at reading chest X-rays for female patients, and vice versa. The same concern applies to skin cancer detection algorithms, which can have a harder time detecting skin cancer in darker-skinned patients.
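One way such gaps come to light is by evaluating a model's performance separately for each demographic group rather than only in aggregate. The sketch below is a hedged illustration of that idea in Python, assuming a hypothetical trained binary chest X-ray classifier (`model`), a test DataFrame that carries a "sex" column, and true labels; none of these names or data come from the studies mentioned above.

```python
# Minimal sketch of disaggregated evaluation (all names are illustrative
# assumptions): report metrics per subgroup so that gaps such as
# "worse chest X-ray performance on female patients" become visible.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_report(model, X_test: pd.DataFrame, y_test: pd.Series,
                    group_col: str = "sex") -> pd.DataFrame:
    features = X_test.drop(columns=[group_col])
    # Predicted probability of the positive class, kept aligned by row index.
    scores = pd.Series(model.predict_proba(features)[:, 1], index=X_test.index)
    preds = (scores >= 0.5).astype(int)  # simple fixed decision threshold
    rows = []
    for group, frame in X_test.groupby(group_col):
        idx = frame.index
        rows.append({
            "group": group,
            "n": len(idx),
            "auc": roc_auc_score(y_test.loc[idx], scores.loc[idx]),
            "recall": recall_score(y_test.loc[idx], preds.loc[idx]),
        })
    return pd.DataFrame(rows)  # one row of metrics per demographic group
```

A noticeable gap in AUC or recall between groups is the kind of signal that suggests one group is under-represented in the training data.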

Listen to Hiwot Tesfaye's 'AI Bias in Health Care' podcast episode

Because individual medical data is, in most cases, closely guarded by privacy laws, it's harder for algorithms to gather enough information to represent an entire population, race, or gender. Exploring these issues requires that we de-identify the data, which can take extra time and effort.
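As an illustration of what that extra effort can involve, the sketch below drops or hashes direct identifiers and coarsens quasi-identifiers before any bias analysis is run. The column names ("name", "ssn", "zip", "birth_date") are hypothetical assumptions, and this is not a substitute for a formal standard such as HIPAA Safe Harbor.

```python
# Minimal de-identification sketch (illustrative column names only).
# Direct identifiers are dropped or hashed, quasi-identifiers are coarsened,
# and analysis columns such as sex or diagnosis are left intact so the data
# can still be checked for representation across groups.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "ssn"]

def deidentify(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    out = df.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    # One-way, salted hash so records can still be linked across tables
    # without exposing the original identifier.
    out["patient_key"] = [
        hashlib.sha256((salt + str(v)).encode()).hexdigest()[:16]
        for v in df["ssn"]
    ]
    # Coarsen quasi-identifiers: keep only birth year and the 3-digit ZIP prefix.
    out["birth_year"] = pd.to_datetime(df["birth_date"]).dt.year
    out["zip3"] = df["zip"].astype(str).str[:3]
    return out.drop(columns=["birth_date", "zip"], errors="ignore")
```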

Tesfaye believes patients should be empowered to question and give feedback on health care decisions supported by AI. Offering this feedback could be the first step toward monitoring the impact of AI and driving accountability.

"I think there is a growing awareness of the pitfalls of these algorithms. And part of ongoing conversations around what legislation could look like in this area of algorithmic fairness, accountability, and transparency is giving people the ability to provide feedback to the system to say this was not accurate, or to provide input back to the system that their experience with this algorithm was terrible and this is the impact that it had on their lives, Tesfaye said.

With these changes, AI has the capability to revolutionize the health care system. This technology can help clinicians work smarter by assisting in decisions about patient care, pinpointing various cancers, and identifying abnormalities on CT scans, among other uses.

Reimagining AI's benefits to society

"If we really want this technology to mature and to and to really imagine new ways of existing together and creating systems that are going to benefit society, then we've got to broaden our cosmology. So, it's not about right or wrong, but there are challenges. And I think AI is up for the challenge because there are so many brilliant and dynamic people working in this space, Cummings said.

Cummings believes that creating a space for intellectual exploration and authentic conversation allows technology designers and others to think through important issues, like:

  • What is the solution going to be?
  • What risks could the technology pose to certain communities?
  • What rights might it suppress in development?

“I always say, we ask questions, and then we get the answers, and the answers make us so uncomfortable that we don’t want to move. But we’ve got to understand that the uncomfortable answers create a space for really coming up with creative solutions,” Cummings said.

What should we do differently?

The history of AI has seen controversy and opposing views alongside remarkable developments, and the work of Cummings and Tesfaye demonstrates that bias in AI should be explored further, particularly in criminal justice and health care.

While AI has potential risks, testing, monitoring, and public oversight can refine the technology in hopes of a more diverse and equitable world.

Developers of AI have a duty to bring issues of bias and unethical use to light. Conversations about what's right and what's not may also be part of the solution, suggests Tesfaye.

“I just want to make sure that people understand that we, as data scientists and people in tech, have the power in our hands to choose what parts of society we want to amplify in the world through our tools and what parts we don't want our tools to learn,” Tesfaye said. “So, I think there's a great deal of responsibility we should feel to ensure that we're not perpetuating historical biases that have been going on for centuries at this point and to really be aware that our tools have the power to solidify and calcify these systemic issues even further.”

One of the general objectives of AI is to help advance a growing technological society. Emphasizing ethical development and good governance practices in the technology may be the key to using AI to help solve society's most pressing issues now and in the future.

Want to learn more? Download this free e-book, which explores the current boundaries of AI, as well as the many ways that modern AI applications can enable better, faster decisions.

About Author

Caslee Sims

I'm Caslee Sims, writer and editor for SAS Blogs. I gravitate toward spaces of creativity, collaboration and community. Whether it's in front of the camera, producing stories, writing them, or sharing and retweeting them, I enjoy the art of storytelling. I share interests in sports, tech, music and pop culture, among others.
