WHO Calls for Caution When It Comes to Using AI for Health

Artificial intelligence (AI)-generated large language model (LLM) tools such as ChatGPT, BERT, and Bard have gained much public attention for their use for health-related purposes. The World Health Organization (WHO) shared its enthusiasm for the “appropriate” use of these technologies but is calling for caution to protect and promote human well-being, safety, and autonomy, and to preserve public health.

These LLM platforms have been expanding rapidly as users take advantage of features that imitate the understanding and processing of human communication and produce human-like responses. Their growing experimental use for health-related purposes is generating excitement about their potential to support users’ health needs, the WHO reported in a release in May.
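
As a concrete illustration of this kind of experimental use, the snippet below poses a health question to one such model through the OpenAI Python client. This is a minimal sketch, not an endorsement: the model name, prompts, and question are placeholders, and the WHO’s cautions about unverified answers apply in full.

```python
# Minimal sketch of an experimental health-related LLM query, assuming
# the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# set in the environment. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a health information assistant. "
                "Always advise users to consult a clinician."
            ),
        },
        {"role": "user", "content": "What are common symptoms of dehydration?"},
    ],
)

# The answer can look authoritative yet still be wrong -- exactly the
# risk the WHO flags -- so it should not be used without human review.
print(response.choices[0].message.content)
```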

If used appropriately, LLMs can support healthcare professionals, patients, researchers, and scientists. But there are risks, and the WHO stressed that these risks must be examined carefully when LLMs are used to improve access to health information or enhance diagnostic capacity, in order to protect users’ health and reduce inequity. There is concern that the caution that would normally be exercised for any new technology is not being exercised consistently with LLMs, including widespread adherence to the key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation, according to the release.

Abrupt adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI, and delay the potential long-term benefits and uses of these tools globally.

The concerns the WHO shared, which underscore the need for these technologies to be used in safe, effective, and ethical ways, include:

  • Data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness.
  • LLM platforms generate responses that can appear authoritative and plausible to an end user. These responses can also be incorrect or contain errors, especially for health-related questions.
  • The tools may be trained on data for which consent was not provided for such use, and they may not protect the sensitive health data a user provides (a minimal redaction sketch follows this list).
  • LLMs can be misused to generate convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.
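
On the data-protection concern, one common mitigation is to de-identify text before it ever reaches an LLM. The sketch below is a hypothetical, minimal illustration of that idea; the regex patterns and the redact helper are invented for this example and fall far short of a compliant de-identification pipeline.

```python
import re

# Hypothetical patterns for a few common identifiers. A real deployment
# would need a vetted de-identification pipeline, not ad-hoc regexes.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    note = "Patient called from 555-867-5309 on 03/14/2023 about chest pain."
    print(redact(note))
    # Patient called from [PHONE REDACTED] on [DATE REDACTED] about chest pain.
    # Note what slips through: names, addresses, and free-text clues are
    # untouched, which is why ad-hoc redaction alone is insufficient.
```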

The WHO encouraged that these concerns be addressed, and that clear evidence of benefit be measured, before these tools see widespread use in routine healthcare and medicine, whether by individuals, care providers, or health system administrators and policymakers.

Though further evidence is needed to support these concerns, results from a study published in JAMA Internal Medicine in April showed that healthcare professionals preferred responses to patients generated with ChatGPT over physician responses.

In the cross-sectional study of 195 patient questions randomly drawn from a public social media forum, a team of licensed healthcare professionals compared physicians’ and the chatbot’s responses to the publicly asked questions. The chatbot’s responses were not only preferred but were also rated significantly higher for both quality and empathy.
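
For readers curious about the mechanics of such a comparison, the sketch below shows one way paired ratings like these could be analyzed. The ratings here are fabricated at random purely for illustration; they are not the study’s data, and the study’s actual instruments and statistical methods may differ.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Fabricated 1-to-5 quality ratings for 195 paired responses, solely to
# illustrate the shape of the analysis; these are NOT the study's data.
physician = rng.integers(1, 5, size=195)  # hypothetical: skews lower
chatbot = rng.integers(2, 6, size=195)    # hypothetical: skews higher

# Paired, non-parametric test: are chatbot responses rated higher than
# physician responses to the same questions?
stat, p = wilcoxon(chatbot, physician, alternative="greater")

print(f"chatbot rated higher on {np.mean(chatbot > physician):.0%} of questions")
print(f"Wilcoxon signed-rank p-value: {p:.3g}")
```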

The study’s researchers said the results suggest AI assistants may be able to aid in drafting responses to patient questions.
