
Biases in ChatGPT may worsen health inequalities for ethnic minorities


London: Systemic biases in the data used by Artificial Intelligence (AI) models such as ChatGPT in healthcare may worsen health inequalities for ethnic minority populations, scientists have argued.

Epidemiologists from the universities of Leicester and Cambridge said that existing inequalities for ethnic minorities may become more entrenched due to systemic biases in the data used by healthcare AI tools.

AI models need to be ‘trained’ using data scraped from different sources such as healthcare websites and scientific research, they argued in the Journal of the Royal Society of Medicine. 

Evidence, however, shows that ethnicity data are often missing from healthcare research. Ethnic minorities are also underrepresented in research trials.

“This disproportionately lower representation of ethnic minorities in research has evidence of causing harm, for example by creating ineffective drug treatments or treatment guidelines which could be regarded as racist,” said Mohammad Ali, doctoral student in epidemiology at the College of Life Sciences at Leicester.

“If the published literature already contains biases and less precision, it is logical that future AI models will maintain and further exacerbate them,” he added.

The researchers are also concerned that health inequalities could worsen in low- and middle-income countries (LMICs). 

AI models are primarily developed in wealthier countries such as the US and in Europe, and a significant disparity in research and development exists between high- and low-income countries.

The researchers point out that most published research does not prioritise the needs of people in LMICs, who face unique health challenges, particularly around healthcare provision. 

AI models, they said, may provide advice based on data from populations wholly different from those in LMICs.

While it is crucial to acknowledge these potential difficulties, the researchers said, it is equally important to focus on solutions. 

“We must exercise caution, acknowledging we cannot and should not stem the flow of progress,” said Ali.

The researchers suggested ways to avoid exacerbating health inequalities, starting with the need for AI models to clearly describe the data used in their development. They also said work is needed to address ethnic health inequalities in research, including improving the recruitment of ethnic minority participants and the recording of ethnicity information. 

Data used to train AI models should be adequately representative, with key factors such as ethnicity, age, sex and socioeconomic status considered. Further research is also required to understand the use of AI models in the context of ethnically diverse populations.

By addressing these considerations, said the researchers, the power of AI models can be harnessed to drive positive change in healthcare while promoting fairness and inclusivity.


Indo-Asian News Service
