Beware of AI in Meetings: Privacy Risks and Responsibilities
In today’s digital landscape, the integration of artificial intelligence into our workspaces has become increasingly prevalent. While AI tools can enhance productivity during meetings, they also introduce significant privacy concerns. This article explores the implications of using AI notetaking and transcription tools, highlighting the need for awareness and caution.
The Rise of AI-Powered Tools
Meetings may not be as private as you think. Beyond the colleagues or family within earshot, AI-based tools such as Fireflies.ai, Otter.ai, Trint, and Fathom may be capturing the discussion. Some record openly, with visible indicators on video conferencing platforms, while others operate quietly in the background. Either way, experts warn that the lack of transparency surrounding their use poses a considerable privacy risk.
“We’re in an era where there’s almost weekly news about AI-related mishaps or data breaches,” remarks Nicolas Joubert, a Winnipeg-based partner at the law firm MLT Aikins. Despite these concerns, the popularity of AI notetakers has surged, especially among boards, medical professionals, and young workers. These users appreciate the convenience these tools offer, allowing them to focus on discussions and later access concise summaries, thus saving time.
Advantages and Risks of AI Tools
Kael Campbell, president of Red Seal Recruiting Solutions Ltd., shares that his firm has utilized the HoneIt interviewing platform for four years. He finds the transcripts produced to be more thorough than handwritten notes, allowing for detailed follow-ups: “I often miss specific comments, but now I can refer back to full transcripts for client inquiries.”
However, the advantages come with significant risks. Experts note that AI tools generate vast amounts of data, often containing inaccuracies and sensitive information. They record everything, from casual chatter about the weather to personal anecdotes, alongside the essential points, which can lead to unintended disclosures.
Moreover, these tools cannot recognize when sensitive or confidential discussions occur and may continue recording, distributing private information broadly. “When in-camera discussions are mistakenly shared, it creates significant problems,” explains Teresa Scassa, Canada Research Chair in information law and policy at the University of Ottawa.
Accuracy and Data Privacy Concerns
The tools are also prone to inaccuracies. Poor audio quality or unfamiliar terminology can lead them to misrepresent what was said, and AI systems can also "hallucinate," generating content that was never actually spoken. Such issues extend beyond performance: recordings and transcripts may be stored in cloud services, creating the potential for leaks or breaches.
“It’s essential to consider where your data is stored, whether it’s processed or sold to third parties, and if it’s used to train other AI models,” Joubert cautions. Each user should critically assess their understanding of data handling practices before adopting these tools.
Lessons Learned from Data Breaches
Often, meeting participants are unaware a recording was made until sensitive information is shared publicly. An example from Ontario in September 2024 illustrates this point. A hospital meeting discussing patient information was inadvertently recorded through the Otter.ai service, as a former doctor used his personal email and device to participate. The resulting summary was sent to 65 recipients, prompting the hospital to alert the privacy commissioner and tighten its policies.
Joubert and Scassa suggest that this incident serves as a crucial reminder for those utilizing AI meeting tools: Always review your settings and notify participants whenever such tools are in use. Red Seal Recruiting Solutions makes it a priority to inform attendees and verify the accuracy of the outputs before sharing them. While their clients are generally comfortable with AI, the company is prepared to revert to traditional notetaking if any concerns arise.
Final Thoughts on AI Usage
Before implementing AI tools, it is vital to thoroughly understand their terms and ensure compliance with current regulations. Many large providers may shift the risk onto the user, making it crucial to understand your responsibilities and the potential consequences of using these tools. “Just like you wouldn’t want to broadcast your personal information on a busy street, the same caution applies to digital spaces,” Joubert warns.
In conclusion, while AI-powered meeting tools can offer valuable support and save time, it is imperative to approach their use with caution and awareness of the associated privacy risks. By being informed and proactive, users can enjoy the benefits of these technologies while safeguarding their sensitive information.
This report by The Canadian Press was first published March 1, 2026.
Tara Deschamps, The Canadian Press