Frequently Asked Questions

Artificial intelligence (AI) refers to computer systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, problem-solving, and language processing. In psychiatry, AI is used for clinical decision support, patient monitoring, and therapeutic interventions.

AI has sometimes been touted as a replacement for clinicians or as a solution to many of the problems we face in psychiatry. However, current AI technology relies primarily on machine learning and is not capable of human-level general intelligence or reasoning.

Given the limitations of current AI technology, the heterogeneous nature of psychiatric illnesses, and our incomplete understanding of the underlying biological processes, it is highly unlikely that AI will be capable of replacing a skilled psychiatrist in the near future. Instead, this technology is best employed as a tool to augment your clinical expertise and streamline your workflow.

For a list of potential applications, please refer to the use case section.

In addition to evaluating the accuracy, reliability, and interoperability of any AI tool, it is important to ensure that the tool raises no privacy or security concerns, and you should always consult any applicable organizational or institutional policies and procedures.

For further direction, please reference the APA's App Evaluation Model and AI Facts Label resources.

The APA's position statement on the use of AI outlines the following principles:

  • AI should function in an augmentative role in treatment and should not replace clinicians.
  • Patients should be educated and informed, in a culturally and linguistically appropriate way, if clinical decisions are being driven by AI.
  • AI-driven systems must safeguard health information, and that information should not be used for unauthorized purposes.
  • AI-driven systems used in health care should be labeled as AI-driven and categorized in a standardized and transparent way for practitioners as "minimal," "medium," "high," and "unacceptable" risk to patients.
  • AI-driven systems should incorporate existing evidence-based practices and standards of care, and AI developers should be held accountable and liable for injury caused by their failure to do so.
  • Research about AI must include investigation regarding algorithmic bias, ethical use, mental health equity, public trust, and effectiveness.
  • The active input of people with lived experience of mental illness and substance use disorder should be solicited in the design and implementation of AI systems for treatment purposes.

View Full Position Statement (.pdf)
