
The interface of Tencent's health app is displayed on a mobile phone in Suqian, China, on January 5, 2023.
WHO Warns Untested AI Tech Could 'Cause Harm to Patients'
The "growing experimental use" of ChatGPT and similar tools in medical contexts should be halted until pressing concerns are addressed and "clear evidence of benefit" is demonstrated, said the United Nations health agency.
The ongoing failure to adequately regulate artificial intelligence-powered large language model tools is jeopardizing human well-being, the World Health Organization said Tuesday.
The WHO lamented that precautions typically taken with regard to any new technology are not being applied consistently when it comes to large language models (LLMs), which use AI to analyze data, create content, and answer questions—often incorrectly. Accordingly, the United Nations agency called for sufficient risk assessments to be conducted and corresponding safeguards implemented before LLMs become entrenched in healthcare.
The "meteoric public diffusion and growing experimental use" of LLMs—including ChatGPT, Bard, Bert, and other platforms that "imitate understanding, processing, and producing human communication"—in medical settings "is generating significant excitement around the potential to support people's health needs," the WHO noted. However, "it is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people's health and reduce inequity."
"Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and thereby undermine or delay the potential long-term benefits and uses of such technologies around the world," the agency warned.
Specific concerns identified by the WHO include:
- The data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
- LLMs generate responses that can appear authoritative and plausible to an end user; however, these responses may be completely incorrect or contain serious errors, especially for health-related responses;
- LLMs may be trained on data for which consent may not have been previously provided for such use, and LLMs may not protect sensitive data (including health data) that a user provides to an application to generate a response; and
- LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.
"While committed to harnessing new technologies, including AI and digital health to improve human health, WHO recommends that policymakers ensure patient safety and protection while technology firms work to commercialize LLMs," the agency added. "WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine healthcare and medicine—whether by individuals, care providers, or health system administrators and policymakers."
The agency reiterated "the importance of applying ethical principles and appropriate governance, as enumerated in the WHO guidance on the ethics and governance of AI for health, when designing, developing, and deploying AI for health."
The WHO expressed its worries just days after an international group of doctors warned in the peer-reviewed journal BMJ Global Health that AI "could pose an existential threat to humanity" and demanded a moratorium on the development of such technology pending robust regulation.
"While artificial intelligence offers promising solutions in healthcare, it also poses a number of threats to human health and well-being," the physicians and related experts wrote. "With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing."
Fears of the negative implications of AI in healthcare and other arenas appear to be well-founded. As Common Dreams reported in March, progressives urged the Biden administration to intervene after an investigation showed that Medicare Advantage insurers' use of unregulated AI tools to determine when to end payments for patients' treatments had resulted in the premature termination of coverage for vulnerable seniors.
"Robots should not be making life-or-death decisions," health justice advocate Ady Barkan wrote on social media at the time, as he shared a petition imploring the White House to stop #DeathByAI.