On January 28, the Office of the Information and Privacy Commissioner of Ontario (OIPC) released guidance on the responsible development, procurement, and use of AI scribes. The guidance responds to the growing adoption of AI transcription tools in the healthcare sector and emphasizes the need to maintain privacy, security, and human rights obligations under the Personal Health Information Protection Act, 2004 (PHIPA).
The release coincides with similar guidance issued by the Office of the Information and Privacy Commissioner for British Columbia (BC-IPC), signaling growing regulatory attention to AI accountability in healthcare. However, because Ontario and BC have different privacy legislation, the OIPC and BC-IPC take significantly different approaches.
Ontario’s PHIPA applies broadly to all “health information custodians,” while BC’s legislation distinguishes between private and public sector healthcare organizations. Consequently, the OIPC’s guidance covers all custodians, whereas BC’s applies only to private sector entities. The OIPC guidance also addresses the entire lifecycle of AI scribes, outlining responsibilities for developers, purchasers, and users, while the BC-IPC guidance primarily assists private healthcare providers in complying with BC’s Personal Information Protection Act (PIPA).
Understanding AI Scribes
AI scribes use generative artificial intelligence, speech recognition, and natural language processing to transcribe healthcare appointments and produce clinical notes, summaries, and related documents. Adoption of AI scribe tools is increasing rapidly in Ontario, with the aim of reducing healthcare providers’ administrative workload. Although early implementations suggest potential efficiency gains, AI scribes also introduce new risks to data privacy, security, and clinical accuracy that custodians must address.
Scope and Principles of the New Guidance
The guidance focuses on privacy-related considerations associated with using AI scribes for transcription and documentation.
It aligns with the six core principles for the responsible use of AI previously established by the OIPC in collaboration with the Ontario Human Rights Commission: valid and reliable; safe; privacy-protective; human rights affirming; transparent; and accountable.
Governance and Accountability for AI Scribes
Health information custodians are ultimately responsible for the personal health information they manage, and their obligations under PHIPA continue when they collect, use, and disclose such information via an AI system like an AI scribe. The OIPC advises custodians to create a robust governance and accountability framework to ensure ongoing compliance with PHIPA in the context of AI scribe usage.
A governance and accountability framework should encompass the following:
- AI Governance Committee: Establish an AI governance committee and a risk management framework rooted in PHIPA accountability. This framework should be integrated into existing governance structures and processes.
- Data Minimization: Implement PHIPA’s data minimization principles throughout the lifecycle of the AI scribe. This includes evaluating the necessity of retaining full audio recordings or transcripts and limiting personal health information shared with vendors.
- Privacy Impact Assessments: Conduct privacy impact assessments prior to introducing AI scribes and update them as changes occur in purposes, systems, or risks. Additionally, implement threat risk assessments and AI-specific evaluations, regularly refreshing these assessments as required.
- Policies, Procedures, and Practices: Maintain well-documented policies, procedures, and practices that are regularly reviewed to reflect legal changes, OIPC guidance, and technological advancements. This also includes user agreements and training for human oversight of AI outputs.
- Transparency and Inquiries: Develop patient-facing transparency materials and establish a process for responding to patient inquiries regarding the AI system.
- Accuracy and Human Oversight: Implement procedures to identify inaccuracies or biases, with adequately trained people overseeing the quality and performance of the AI system. This is crucial because common AI-generated errors, such as hallucinations or transcription mistakes, can be magnified in a healthcare context: errors in clinical notes or other materials that practitioners rely on can compromise the quality of care and harm patient safety and well-being.
- Consent to AI Scribe Use: Ensure that patients who decline or withdraw consent to AI scribe use receive the same quality of care as those who consent. Where information collected through an AI scribe is used to develop or improve AI models, custodians should also address potential biases that may arise from excluding non-consenting patients’ data.
Considerations for Developing and Procuring AI Scribes
Custodians have the option to either develop or procure AI systems, including AI scribes. Each approach presents distinct risks that custodians must evaluate.
When developing AI scribes internally, custodians should ensure the system is secure, trained on lawful data, generates accurate and explainable outputs, and includes measures for bias mitigation and cybersecurity. Continuous monitoring and specialized assessments will be critical.
For custodians procuring AI scribes from external vendors, thorough due diligence is essential. Custodians should confirm that vendors meet their expectations and will agree to obligations regarding intended use, lawful training data, model validation, monitoring, explainability, and incident reporting.
To fulfill their PHIPA obligations, custodians should negotiate robust contractual protections and confirm a vendor’s commitment to performance and ongoing oversight. This may include specific contractual limits regarding the vendor’s access to and usage of personal health information, data retention and destruction protocols, subcontractor controls, security safeguards, and specific breach notification commitments in line with PHIPA, alongside continuous performance monitoring.
Looking Ahead
As AI scribe technologies advance, custodians should anticipate increased regulatory scrutiny and heightened expectations for governance, transparency, and human oversight. Custodians will need to monitor legal and regulatory developments, update their internal frameworks accordingly, and pursue innovation and efficiency gains without compromising PHIPA compliance or the protection of human rights.
The authors would like to extend their gratitude to Sulayman Syed, articling student, for his assistance in preparing this legal update.