
Patient Privacy and Security

With the ever-increasing use of online services and passive data collection from smartphones and computers, the framework of contextual integrity describes privacy as the appropriate flow of information in conformance with reasonable expectations, contextual social norms, and the actors involved [1]. For example, HIPAA defines how health information flows in the provision of health care services and requires that patients be informed of the process. Patients accept this because the health benefit of such information flow outweighs the risk of potential data breaches, and limiting the flow would impede the provision of health care services. Accordingly, when we consider new tools or systems that will interface with health care processes, we need to understand the information flows associated with them. This information can be found in privacy policies, business associate agreements, and data use agreements.

Publicly available generative AI tools maintain chat history and user data such as IP addresses. This information is stored on company servers and is subject to the uses outlined in each tool's terms and conditions and privacy policy. Because some terms and conditions permit further model training, long-term data storage, and use of input information for marketing and third-party advertising, publicly available generative AI tools must not be used for clinical documentation or clinical care where a patient could be identified from the submitted information. Entering patient information into a publicly available generative AI application is a HIPAA violation.

Business Associate Agreements (BAAs) provide a more secure way of interacting with generative AI applications, because they allow you to negotiate data use and storage and to ensure HIPAA compliance. Some companies provide a BAA and state HIPAA compliance up front. Even so, it is important to review the BAA, paying particular attention to the data use agreement, data ownership, and the cybersecurity measures provided. A BAA stating that all patient data is owned by the company and stored indefinitely for internal AI development is vastly different from one in which the company does not take ownership of patient data, stores the data only until the provider or patient requests deletion, and does not use provided data to train future AI systems.

Numerous privacy, security, and ethical concerns must be considered when utilizing AI in a healthcare setting. In 2022, Naik and colleagues identified four main ethical concerns regarding AI use in healthcare:

  1. Informed consent
  2. Data privacy (including how data is used and stored)
  3. Safety and transparency
  4. Algorithmic fairness and biases

The decisions made along the development pathway of a machine learning model influence its output. AI that is explainable and transparent provides insight into the factors influencing its output. If we understand a model's limitations, proper safeguards can be built around them.
