
The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now

  • June 29, 2023
  • What APA is Doing For You

Following the release of GPT-4 in ChatGPT, augmented intelligence (AI) has been in the news more than ever. You may have tried out ChatGPT on your own for something fun (e.g., “Write a joke from the perspective of a cat”) or something serious (e.g., “Write a draft lesson plan for a psychiatry residency program about treatment-resistant depression in adults”). A simultaneous strength and challenge of AI is that “learning” and evolution are core to the technology, making it difficult to define a static role for AI in psychiatric practice now or in the future.

This article provides a general overview of AI and related terminology to prepare APA members to recognize the strengths and limitations of AI, as well as some important guardrails for psychiatrists to consider for use of AI in their practices. This article does not contain legal advice, is not intended to be comprehensive, and does not cover all relevant aspects of AI, nor does it state APA’s position regarding AI or APA’s role in the future of AI.

We will start with the basics. APA recognizes AI’s potentially revolutionary role in automating elements of medicine to advance clinician and patient experience and improve outcomes but urges caution in the application of untested technologies in clinical settings. Please see Darlene King, M.D.’s, recent Psychiatric News Viewpoint, “ChatGPT Not Yet Ready for Clinical Practice,” for more information.

Given the regulatory grey area, the expansive data-use practices of many platforms, and the current lack of an evidence base for many AI applications in healthcare, clinicians need to be especially cautious about using AI-driven tools when making decisions, entering any patient data into AI systems, or recommending AI-driven technologies as treatments.

General Overview of AI and Important Terminology

“Artificial intelligence” is the term commonly used to describe machine-based systems that can perform tasks that otherwise would require human intelligence, including making predictions, recommendations, or decisions. Following the lead of the American Medical Association, we will use the term “augmented intelligence” when referring to AI. Augmented intelligence is a conceptualization that focuses on AI's assistive role, emphasizing that AI ought to augment human decision-making rather than replace it. AI should coexist with human intelligence, not supplant it.

AI works by utilizing “machine learning,” the construction of algorithms and statistical models that can draw inferences about patterns in data to make predictions about future outcomes. Machine learning gives computers the ability to learn and adapt without following explicit programming.
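To make this concrete, here is a minimal illustrative sketch of machine learning in Python. The library (scikit-learn) and the toy data are our own assumptions for illustration; the point is simply that the model estimates its parameters from example data and then makes statistical predictions about inputs it has never seen, rather than following hand-written rules.

```python
# A minimal sketch of supervised machine learning using scikit-learn.
# The data below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Toy training examples: a single numeric feature (hours of sleep)
# paired with a hypothetical binary outcome (1 = elevated screening
# score, 0 = not elevated).
X = [[4], [5], [6], [7], [8], [9]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X, y)  # the "learning" step: parameters estimated from data

# The fitted model now makes a statistical prediction for an unseen input.
print(model.predict([[5.5]]))        # predicted class
print(model.predict_proba([[5.5]]))  # probabilities, i.e., a best guess
```

No rule such as “less than 6.5 hours means elevated” was ever written by the programmer; the threshold is inferred from the data, which is what distinguishes machine learning from explicit programming.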

AI has been around for decades. However, we are hearing about it more now because of the recent release of new or updated platforms, including ChatGPT, a type of AI known as a “large language model” or LLM. LLMs can recognize, summarize, translate, predict, and generate text and other products based on knowledge gained from large datasets of text and other content. ChatGPT is a “generative AI,” mimicking human speech based on the data it was trained on. ChatGPT is not sentient or conscious but predictive, taking its best guess of the next word based on statistical inference.
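The following toy sketch (our own illustration, not how ChatGPT is implemented) shows the underlying idea of next-word prediction by statistical inference: count which words follow which in a body of text, then emit the most probable continuation. Real LLMs replace the counting table with a neural network trained on vastly larger corpora, but the output is still a statistical best guess, not understanding.

```python
# A toy next-word predictor built from bigram counts.
# Illustrative only; the "corpus" is three invented sentences.
from collections import Counter, defaultdict

corpus = ("the patient reported feeling better . "
          "the patient reported feeling anxious . "
          "the patient reported feeling better .").split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("feeling"))  # -> "better" (seen 2 of 3 times)
```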

AI in the Healthcare Industry

Some potential uses of AI within healthcare are to automate administrative tasks such as billing, scheduling, and basic patient communications. AI could be used to take notes for clinicians, provide indicators of potential diagnoses, and deliver other decision-support interventions. AI tools are increasingly used by health systems to automate documentation in electronic health records and trawl clinical data for risk and quality indicators, by payers to automate coverage decisions, and by both consumer and clinical technology companies to replicate human speech in “chatbots” and other interactive modalities.

AI is also increasingly being considered and deployed in patient-facing capacities (e.g., psychotherapy or care navigation chatbots). These kinds of interventions currently lack an evidence base around quality, safety, and effectiveness and can even cause harm (e.g., “An eating disorders chatbot offered dieting advice, raising fears about AI in health”). Clinicians should exercise caution in recommending or incorporating AI-driven tools into their practice. APA’s App Advisor Model can be consulted to help assess key details about an app or other technology.

Important Factors for APA Members to Consider When Using AI

AI has the potential to benefit both clinicians and patients. However, as with any new technology, opportunities must be weighed against potential risks. Below, we have outlined some guardrails for APA members to consider around the use of AI in clinical practice. Please keep in mind that AI technologies are constantly changing and evolving, and much more information will be needed before a full assessment can be made of AI's potential risks and applications in clinical practice. APA member experts are working to develop additional content in this area.

Effectiveness and Safety

There are a number of concerns that clinicians should take into account to ensure safe and effective treatment when using AI in their practices or recommending AI-driven tools to patients. Generative AI systems can promulgate biased information and have been found to fabricate information (e.g., inventing citations to peer-reviewed medical texts). Physicians remain responsible for the care they provide and can be liable for treatment decisions made in reliance on AI that result in patient harm. “Automation bias” refers to the human tendency to implicitly trust information produced by a computer. Physicians should therefore carefully and thoroughly review any AI-guided result or tool, including documentation and clinical decision support, before incorporating it into a treatment plan, and should remain appropriately skeptical of AI output rather than trusting it implicitly. AI is a tool, not a therapy, and physicians are ultimately responsible for clinical outcomes even when guided by AI. For more information on assessing and monitoring the risks of using AI in your practice, please refer to the UNESCO Recommendation on the Ethics of Artificial Intelligence.

Risk of Bias and Discrimination

Because of the data on which LLMs are trained, these models run a significant risk of incorporating existing disparities into clinical decision-making (“garbage in, garbage out”: an AI trained on low-quality or biased data will produce faulty outputs). For instance, AI models that listen to patient visits and assist in notetaking may lack the cultural competence to account for factors such as hearing impairments, accents, or verbal cues, and may propagate disparities that affect care. Racial and other biases can be introduced and propagated in AI-driven systems when structural discrimination shapes outcomes in specific patient populations, complicating the use of race and ethnicity as data points in predicting clinical outcomes. Researchers argue that including race and ethnicity in AI, along with other demographic and health-related social needs data, can be either beneficial or harmful depending on the specific algorithm, patients, and conditions in question. Physicians must be especially attuned to the risk of biased or discriminatory results that affect the clinical care of patients from underrepresented groups.
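The toy sketch below (entirely invented data, for illustration only) shows the mechanism in miniature: if historical labels encode a disparity, a model trained on them will reproduce that disparity for new patients with identical clinical presentations.

```python
# A minimal "garbage in, garbage out" illustration using scikit-learn.
# Data are fabricated solely to show how bias in labels propagates.
from sklearn.linear_model import LogisticRegression

# Features: [symptom_severity, group]. In this toy history, patients in
# group 1 were systematically under-referred, so the labels encode that
# disparity even though severities are identical across groups.
X = [[8, 0], [7, 0], [6, 0], [8, 1], [7, 1], [6, 1]]
y = [1, 1, 1, 0, 0, 0]  # 1 = referred for follow-up

model = LogisticRegression().fit(X, y)

# The model reproduces the historical disparity for identical presentations.
print(model.predict([[7, 0]]))  # -> referred
print(model.predict([[7, 1]]))  # -> not referred, same severity
```

Nothing in the algorithm is “prejudiced”; the bias lives in the training labels, which is why auditing data sources matters as much as auditing models.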

Transparency

Psychiatrists have an ethical duty to “be honest in all professional interactions.” (See Principles of Medical Ethics with Annotations Especially Applicable to Psychiatry (.pdf), Section 2.) Patients have an expectation of honesty from their physicians, and thus “psychiatrists should strive to provide complete information to patients about their health and all aspects of their care, unless there are strong contravening cultural factors or overriding therapeutic factors such as risk of harm to the patient or others that would make full disclosure medically harmful.” (See APA Commentary on Ethics in Practice (.pdf), Topic 3.2.2.) In fulfilling this ethical responsibility, physicians should be transparent with patients about how AI is being used in their practice, particularly if AI is acting in a “human” capacity. For example, if communications to patients are generated using AI, those communications should clearly state that an AI tool was used; more broadly, physicians should give each patient appropriate information about any uses of AI in the practice to avoid unnecessary confusion or fear.

Protecting Patient Privacy

Although there is not yet governmental regulation specific to LLMs in the United States, existing regulatory frameworks still apply. For example, any use of AI in your practice must comply with HIPAA and with state requirements protecting the confidentiality of medical information. We strongly recommend that clinicians avoid entering any patient data into generative AI systems like ChatGPT. The terms and conditions of many of these AI tools grant the vendor access to, and use of, any information entered into them, so entering patients’ medical information into an LLM could violate a physician’s obligations under HIPAA. If you incorporate AI anywhere in your practice in a way that involves protected health information, make sure you have in place a HIPAA-compliant business associate agreement with the AI vendor governing the use of that information. See more information on HIPAA.
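As one purely hypothetical illustration of the kind of safeguard involved, the sketch below strips a few obvious identifier patterns from text before it leaves a local system. The function name and patterns are our own invention, and this is emphatically not a HIPAA de-identification solution; robust de-identification requires vetted tooling, and the safest course remains not entering patient data into consumer AI tools at all.

```python
# Hypothetical sketch: redact a few easily matched identifiers before
# text leaves your system. NOT sufficient for HIPAA de-identification.
import re

def redact_obvious_identifiers(text: str) -> str:
    """Replace a handful of identifier patterns with placeholders."""
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)          # SSN-style
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)  # phone
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # email
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)   # dates
    return text

note = "Pt. J. Doe, DOB 4/12/1987, cell 555-867-5309, reports improved sleep."
print(redact_obvious_identifiers(note))
# -> "Pt. J. Doe, DOB [DATE], cell [PHONE], reports improved sleep."
# Note that the name slips straight through: pattern matching alone
# cannot make free text safe to share.
```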

Takeaways

Overall, physicians should approach AI technologies with caution, particularly being aware of potential biases or inaccuracies; ensure that they are continuing to comply with HIPAA in all uses of AI in their practices; and take an active role in oversight of AI-driven clinical decision support, viewing AI as a tool intended to augment rather than replace clinical decision-making. Please send any questions to the APA policy and practice team at [email protected].
