The ongoing failure to adequately regulate artificial intelligence-powered large language model tools is jeopardizing human well-being, the World Health Organization said Tuesday.
The WHO lamented that precautions typically taken with any new technology are not being applied consistently to large language models (LLMs), which use AI to analyze data, create content, and answer questions—often incorrectly. Accordingly, the United Nations agency called for rigorous risk assessments to be conducted and corresponding safeguards to be implemented before LLMs become entrenched in healthcare.
The "meteoric public diffusion and growing experimental use" of LLMs—including ChatGPT, Bard, Bert, and other platforms that "imitate understanding, processing, and producing human communication"—in medical settings "is generating significant excitement around the potential to support people's health needs," the WHO noted. However, "it is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity."
"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world," the agency warned.
Specific concerns identified by the WHO include:
- The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially on health-related matters;
- LLMs may be trained on data for which consent for such use was never obtained, and they may not protect sensitive data (including health data) that a user provides to an application to generate a response; and
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.
"While committed to harnessing new technologies, including AI and digital health to improve human health, WHO recommends that policymakers ensure patient safety and protection while technology firms work to commercialize LLMs," the agency added. "WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine healthcare and medicine—whether by individuals, care providers, or health system administrators and policymakers."
The agency reiterated "the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health."
The WHO's warning came just days after an international group of doctors cautioned in the peer-reviewed journal BMJ Global Health that AI "could pose an existential threat to humanity" and demanded a moratorium on the development of such technology pending robust regulation.
"While artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being," the physicians and related experts wrote. "With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing."
Fears about AI's negative implications in healthcare and other arenas appear well-founded. As Common Dreams reported in March, progressives urged the Biden administration to intervene after an investigation showed that Medicare Advantage insurers' use of unregulated AI tools to decide when to end payment for patients' treatments had resulted in the premature termination of coverage for vulnerable seniors.
"Robots should not be making life-or-death decisions," health justice advocate Ady Barkan wrote on social media at the time, as he shared a petition imploring the White House to stop #DeathByAI.