Ethical Challenges for Artificial Intelligence and its Involvement with Health Care

Aug 3, 2022

Artificial intelligence (AI) has been shown to alter the physician-patient experience through its ability to analyze vast amounts of data in images and text. While these tools can play a crucial role in providing more accurate diagnoses and treatment plans for patients, they raise an important question: Is AI sophisticated enough to implement ethical considerations in a medical setting?

According to a 2021 analysis by Frost & Sullivan, AI in healthcare will account for $6.7 billion this year, up from the $811 million reported in 2015. This growth is owed primarily to data mining and its impact on clinical decision-making. AI can ease clinicians' workloads, from chart documentation to radiology reviews, allowing them more time to enhance patient care.

The true problem lies in the possibility that the underlying data carry unconscious bias, so that algorithms are trained on, and learn, biased assumptions about patients.

"Almost every aspect of the AI design process and in many cases aspects of its actual usage have flaws that generate ethical problems," said Nisheeth Vishnu, PhD, a professor of computer science at Yale. This means that in addition to age or disability biases, an already vulnerable population such as individuals with ethnic origins, skin color, or gender, can face even more injustice.

Amazon made headlines over precisely this kind of misuse of AI against a subpopulation. In 2018, Reuters revealed that the e-commerce giant had trialed an AI hiring tool that discriminated against female applicants. Machine-learning specialists uncovered the problem: the system had taught itself to prefer male candidates.

The takeaway? A reliable and valid dataset is only the beginning; it must be paired with transparency from AI developers about a system's shortcomings.

Amazon's computer models were trained on data that came mostly from men, and therefore penalized resumes that included words or phrases such as "women" or "women's soccer team." Although the company disbanded the project, the episode revealed a blind spot in AI that can admit gender bias and undermine fairness in professional settings.
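To make that mechanism concrete, here is a minimal, hypothetical sketch, not Amazon's actual system or data, of how a text classifier trained on skewed hiring history can teach itself to penalize gendered words. All resumes and labels below are invented:

```python
# Hypothetical toy example -- NOT Amazon's system or data. Shows how a
# classifier trained on male-skewed hiring history learns gendered tokens.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented history: the "hired" resumes skew male, so gendered words end up
# correlated with the label even though they say nothing about skill.
resumes = [
    "software engineer java men's rugby captain",    # hired
    "backend developer python chess club",           # hired
    "data scientist statistics men's soccer team",   # hired
    "software engineer java women's soccer team",    # rejected
    "data scientist statistics women's chess club",  # rejected
]
hired = [1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the token "women" receives a negative
# coefficient, i.e., the model penalizes it. The bias comes from the data.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{token:12s} {weight:+.3f}")
```

Nothing in these features measures ability; the model simply absorbs the pattern in its labels, which is why auditing the dataset matters as much as tuning the model.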

In medicine, similar failures can lead AI to inflict unnecessary misdiagnoses or prescribe inappropriate treatments. For instance, if an algorithm was trained predominantly on data from a Caucasian population, it could dispense inaccurate medical orders for marginalized groups such as African Americans or Latinos. At best the output is ineffective; at worst it jeopardizes a patient's safety and perpetuates an already existing narrative of racism in medical care.

Despite the turmoil limited training data can create, there are ways to mitigate such biases. Collecting more diverse and inclusive data on minority populations, along with increasing overall data availability, can improve algorithms for populations that are not equally represented. However, these efforts alone may not be enough to address such a complex problem.

In fact, medical professionals receive no systematic training on bias and debiasing strategies, either as medical students or in research training programs.

By default, clinical thinking relies on intuition and heuristic shortcuts that are prone to cognitive bias. Such biases contributed to diagnostic errors in 36% to 77% of the case scenarios described in 20 publications involving 6,810 physicians, according to a systematic review available through the National Library of Medicine.

"If you look at algorithmic bias as just a technical issue, it will beget engineering solutions — how can you restrict certain fields such as race or gender from the data, for example. But that won’t really solve the problem alone. If the world looks a certain way, that will be reflected in the data, either directly or through proxies, and thus in the decisions.”says Tristan Panch, a primary care physician, president-elect of the HSPH Alumni Association, and co- founder of digital health company Wellframe.

This highlights that the battle to use AI responsibly in medicine is as much an issue of society as it is of algorithms.
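Panch's point about proxies can be illustrated with a small synthetic simulation (all numbers below are invented for the sketch): even when the protected attribute is withheld from training, a correlated feature lets the model reproduce the same disparity.

```python
# Hedged toy demonstration (all data synthetic): dropping a protected field
# does not remove bias when another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setup: a protected attribute and a correlated proxy
# (think of a zip-code-like feature that tracks group membership).
group = rng.integers(0, 2, n)             # protected attribute (0/1)
proxy = group + rng.normal(0, 0.3, n)     # strongly correlated proxy
skill = rng.normal(0, 1, n)               # legitimate feature

# Historical outcomes are biased against group 1.
outcome = (skill - 1.5 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# "Fairness through unawareness": train WITHOUT the protected attribute.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, outcome)

# The disparity survives: predictions still differ sharply by group,
# because the proxy lets the model reconstruct group membership.
preds = model.predict(X)
print("positive rate, group 0:", preds[group == 0].mean())
print("positive rate, group 1:", preds[group == 1].mean())
```

Simply deleting the race or gender column is sometimes called "fairness through unawareness"; as the sketch shows, it fails whenever the remaining features encode the same information.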

As AI spearheads a healthcare revolution that can optimize hospital workflows and readily assess a patient's symptoms, these ethical challenges suggest a gradual introduction in select medical settings before wider adoption in global practice.

Do you believe the performance gains of algorithms can eventually outweigh their existing biases?

Tweet at us! @AILA_Community

ABOUT THE AUTHOR
Kay Raimundo
Intern

Kay Raimundo is a business strategy intern at AILA, where she focuses on a wide spectrum of organizational and creative solutions to enhance the organization. She holds a Bachelor's degree in Journalism from Loyola Marymount University and a Master of Science from the University of Southern California. Kay aspires to integrate diversity and inclusion within the start-up world to establish equitable opportunities for disenfranchised minority communities within and beyond the Los Angeles area.