Columbia Nursing explores patient views on using AI for mental health care

A new Columbia University School of Nursing-led study explores how patients view the use of AI for mental health care.

The survey, which included 500 US-based adults, found that 49.3% of participants believed the use of AI for mental health care would be beneficial.

In particular, African Americans and those with lower self-rated health literacy were more likely to hold this view, while women were less likely to. In addition, 81.6% of participants believed health professionals should be held responsible for the misdiagnosis of mental health conditions.

The study, ‘Patient Perspectives on AI for Mental Health Care: A Cross-sectional Survey Study’, was published in JMIR Mental Health.

Supporting health professionals in developing safe AI tools

Natalie Benda, PhD, an assistant professor at Columbia Nursing who led the study, explained: “This survey comes at a time when AI applications are ubiquitous, and patients are gaining greater access and ownership over their data.

“Understanding patient perceptions of if and how we can appropriately use AI for mental health care is critical.”

The researchers also found that participants were most concerned about incorrect diagnoses by AI tools, the application of inappropriate treatments, the lack of interaction with their mental health care providers, and the negative effects of AI on confidentiality.

In addition, participants said they wanted visibility into, and autonomy over, how AI was being applied, to help them understand the care process and a given tool’s performance.

Recommendations for using AI for mental health care

Based on the findings, the researchers recommend these actions for health professionals when using AI for mental health care:

  • Evaluate AI tools in clinical simulation environments before larger deployment.
  • Promote transparency in AI’s use, communicate its accuracy, and disclose potential risks to patients.
  • Communicate how potential biases have been evaluated and mitigated.
  • Explain how the performance and process for a given task may change with and without AI.
  • Conduct studies to determine what information patient communications should include so that they reflect patient values, support comprehension of key concepts, and foster trust.
  • Increase opportunities for patient autonomy so that patients and health professionals may collaboratively make decisions.