April 10, 2023
The Growing Use of ChatGPT by Hospital Staff Leads to the Inadvertent Disclosure of Patient and Technical Information
Our Cybersecurity Tactical Operations Center (CTOC) has recently become aware of several instances in which hospital staff members' use of ChatGPT led to the inadvertent disclosure of personal health information and administrator passwords.
In one case, a physician used ChatGPT to draft an email explaining a surgical procedure to a patient's family. The physician asked ChatGPT to provide treatment options, risks, and a recommendation, and supplied the generative AI tool with the patient's name, address, date of birth, medical condition, and other pertinent details. The physician is believed to have done this multiple times, noting that it saved time and effectively provided a virtual assistant.
Furthermore, the use of ChatGPT by IT teams has become extremely common in the past few weeks. In one case shared with our CTOC, a network administrator provided ChatGPT with a PowerShell script that took an exceedingly long time to execute, along with the prompt: "How can we make the PowerShell script I just provided you perform faster, as it currently takes three to four minutes to run?" The script submitted to ChatGPT included an administrator ID and password. A previous CTOC alert also warned of trojans deployed by malicious actors exploiting end-user excitement around ChatGPT.
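Incidents like the one above can often be prevented by scrubbing credential-like values from a script before it is pasted into any external AI tool. The sketch below is purely illustrative and is not a tool referenced in this alert; the variable-name patterns (`Password`, `AdminID`, and similar) are assumptions, not an exhaustive secret scanner, and a production deployment would pair this with formal data-loss-prevention controls.

```python
import re

# Illustrative patterns for credential-like PowerShell variable assignments.
# These names are assumptions for the example, not a complete secret scanner.
CREDENTIAL_PATTERNS = [
    re.compile(r'(?i)(\$?(?:password|passwd|pwd)\s*=\s*)(["\']).*?\2'),
    re.compile(r'(?i)(\$?(?:user(?:name)?|adminid|login)\s*=\s*)(["\']).*?\2'),
]

def redact_secrets(script_text: str) -> str:
    """Replace quoted values assigned to credential-like variables
    with a placeholder before the text leaves the organization."""
    for pattern in CREDENTIAL_PATTERNS:
        script_text = pattern.sub(r'\1"<REDACTED>"', script_text)
    return script_text

snippet = '$AdminID = "hospital-admin"\n$Password = "Sup3rSecret!"'
print(redact_secrets(snippet))
```

Even a simple pre-submission check like this would have kept the administrator ID and password in the incident above from ever reaching an external service.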
As the number of generative AI tools continues to grow, including tools recently introduced by Microsoft, Meta, Stanford, and others that are as powerful as or more powerful than ChatGPT, these issues will only become more common and complex. Additionally, none of the tools mentioned meets the privacy standards required by HIPAA, NIST CSF, and C2M2. Further, while these tools disclose that they use end-user input for research, it is unclear how or where that input is stored.
Without defined policies, due diligence, and risk assessment, using these tools with patient or other critical information contradicts cybersecurity and privacy best practices. As a result, the CloudWave team has compiled the following steps that healthcare organizations should take to educate users on the risks these tools pose and to communicate their appropriate use within the organization.
- Communication – Email all team members to outline the appropriate use of ChatGPT and other generative AI tools in your organization as soon as possible.
- Business Associate Risk Notification – Advise all business associates of your organization's policy and stance on using AI tools with patient or confidential information. This should include requesting that business associates also educate their end-users and develop appropriate AI cybersecurity policies.
- Security Awareness Training – Formal security awareness training should be considered. This may include a video and quiz that can be tracked via a learning management system. All employees, including physicians, should be required to take this training. This will also help create a defensible position in the event of an inadvertent disclosure.
- C-Suite/Board Notification – Using generative AI tools for research, staff augmentation, and more will continue to be a challenge for healthcare organizations, and establishing a unified boardroom-to-basement strategy and posture is essential. We recommend notifying your senior leadership of the risks and of the need to adopt AI with forethought and understanding.
To learn more about our Cybersecurity Tactical Operations Center or cybersecurity services for healthcare, please contact us at email@example.com. To provide additional education for your teams, join our Cybersecurity Insider Program to get exclusive access to live monthly educational webinars, on-demand training, private YouTube and LinkedIn groups, threat intelligence, and more. Register here.
Kelli Watson, Director of Solutions Delivery & Security Operations, CloudWave