
Researchers Warn: AI Could Pose Patient Risks

Researchers warn that artificial intelligence (AI) could put patients at risk if model development prioritizes accurate outcome prediction over how those predictions change treatment decisions.

Experts cautioned that the technology can produce "self-fulfilling prophecies" when it is trained on historical data that under-represents certain patient groups or reflects past under-treatment of particular conditions.

They emphasized that the research underscores the "critical importance" of bringing "human reasoning" into AI decision-making.

Researchers in the Netherlands examined outcome prediction models (OPMs), which use a patient's individual characteristics, such as medical history and lifestyle, to help doctors weigh the benefits and risks of different treatments.

AI can perform these tasks in real time, further supporting clinical decision-making.

The team then constructed mathematical scenarios to examine how AI could endanger patient health, and found that such models "can cause harm."

The researchers noted that many believe such models could guide treatment decisions by predicting individual patient outcomes, promising personalized, data-driven care.

“However, we show that using prediction models to inform decisions can lead to harm, even when those predictions exhibit good discrimination after deployment,” they wrote.

“These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.”

The paper, published in the data-science journal Patterns, also argues that the focus of AI model development should shift "from predictive accuracy to changes in treatment policy and patient outcomes."

Responding to the risks highlighted in the study, Dr. Catherine Menon, a principal lecturer in the Department of Computer Science at the University of Hertfordshire, said: "AI models can show these problems because they are often trained on historical data that may not adequately account for factors such as the past under-diagnosis or under-treatment of certain medical conditions or demographic groups."

Such models will then accurately predict poor outcomes for patients in these groups.

This creates a "self-fulfilling prophecy": physicians decide against treating patients because of the risks of intervention combined with the AI's prediction of a poor outcome for those patients.

Furthermore, this repeats the same historical mistake: inadequate treatment for these patients leads to persistently poor outcomes.
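To make this feedback loop concrete, here is a minimal toy simulation. It is not from the Patterns paper; the data-generating process, the numbers, and the withhold-treatment policy are all assumptions invented purely for illustration.

```python
# Toy simulation of a "harmful self-fulfilling prophecy" (illustrative only;
# the data-generating process and all numbers are assumptions, not taken
# from the Patterns paper).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical baseline risk score (e.g., a comorbidity index).
risk = rng.normal(size=n)

def poor_outcome_prob(risk, treated):
    # Assumed ground truth: treatment reduces the chance of a poor
    # outcome for every patient, whatever their baseline risk.
    return 1.0 / (1.0 + np.exp(-(risk - 1.5 * treated)))

# The outcome prediction model ranks patients by baseline risk; here we
# use the risk score itself as the model's prediction of a poor outcome.
prediction = risk

# Deployment policy in the scenario described above: clinicians withhold
# treatment from the patients the model flags as likely to do badly.
treated = prediction < np.median(prediction)
outcome = rng.random(n) < poor_outcome_prob(risk, treated)

# Discrimination stays high after deployment: untreated high-risk
# patients do badly, which "confirms" the model's predictions.
print(f"post-deployment AUC: {roc_auc_score(outcome, prediction):.2f}")

# But the flagged (untreated) group is harmed relative to being treated.
flagged = ~treated
print(f"poor-outcome rate when flagged: {outcome[flagged].mean():.2f}")
print(f"...had they been treated:       "
      f"{poor_outcome_prob(risk[flagged], True).mean():.2f}")
```

Under these assumptions the model still looks highly discriminative after deployment, even though the flagged patients were denied a treatment that would have helped them, which is exactly the trap the researchers describe.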

Using these AI models therefore risks entrenching poor outcomes for patients who have historically faced discrimination in healthcare settings because of factors such as race, gender, or education level.

This underlines the critical importance of evaluating AI decisions in context and of applying human reasoning and judgment to AI conclusions, she said.

AI is already used across the NHS in England to help doctors read X-rays and CT scans, easing staff workload and speeding up stroke diagnosis.

In January, Prime Minister Sir Keir Starmer declared that the UK would become an "AI superpower" and suggested that this technology could help address the backlog in the NHS.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, noted that AI outcome prediction models of this kind "aren't currently very widespread in the NHS."

He said such tools are typically used alongside existing clinical management, often to support diagnosis or to speed up processes such as image segmentation.

Ewen Harrison, a professor of surgery and data science and co-director of the Centre for Medical Informatics at the University of Edinburgh, said: "While these tools promise more accurate and personalized care, this research highlights one of a number of troubling issues: predictive models can unintentionally harm patients by shaping treatment decisions."

He gave a hypothetical example: “A hospital implements a new AI system to predict which patients will have a difficult recovery after knee replacement surgery. The tool draws on factors such as age, body mass index, pre-existing conditions, and physical fitness.”

Initially, doctors plan to use the tool to identify which patients would benefit from intensive rehabilitation therapy.

However, because of limited capacity and cost, the hospital decides to reserve intensive rehabilitation for the patients predicted to make the best recoveries.

Patients flagged by the algorithm as having "a poor predicted recovery" receive less attention, fewer physiotherapy sessions, and less encouragement overall.

He said this leads to slower recovery, more pain, and reduced mobility for those patients.
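A sketch of how such an allocation rule might look in code follows; the score function, its feature weights, and the class fields are entirely hypothetical, invented to mirror Professor Harrison's example rather than any real NHS system.

```python
# Hypothetical sketch of a rehabilitation-allocation policy like the one in
# Professor Harrison's example; the model and its weights are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    age: int
    bmi: float
    comorbidities: int   # count of pre-existing conditions
    fitness: float       # assumed scale: 0.0 (low) to 1.0 (high)

def predicted_recovery(p: Patient) -> float:
    # Stand-in for the AI model: higher means a better predicted recovery
    # after knee replacement. Weights are illustrative only.
    return (p.fitness
            - 0.01 * p.age
            - 0.02 * max(p.bmi - 25.0, 0.0)
            - 0.10 * p.comorbidities)

def allocate_intensive_rehab(patients: list[Patient], slots: int) -> list[Patient]:
    # Scarce intensive rehab goes to the best predicted recoveries, so
    # patients flagged as 'a poor predicted recovery' get less support --
    # the feedback loop the article warns about.
    ranked = sorted(patients, key=predicted_recovery, reverse=True)
    return ranked[:slots]
```

Any patient ranked below the cut-off then receives fewer physiotherapy sessions, and their slower recovery feeds back into future training data, appearing to confirm the original prediction.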

"These are genuine problems impacting AI advancement in the UK," Professor Harrison stated.
