On August 1, 2025, Insight Assurance hosted a webinar examining the security, compliance, and privacy concerns emerging from widespread AI tool adoption. While AI offers efficiency gains, it also introduces new risk categories that many organizations aren’t prepared to manage. The session brought together experts across data protection, AI security, and compliance standards to explore common risks and actionable steps toward safer AI integration.

 

Panelists included:

 

  • Mario Vlieg, Senior Manager of ISO Services, Insight Assurance.
  • Rui Serrano, Data Protection Officer, Insight Assurance.
  • Marwan Omar, Chief AI Officer, Insight Assurance.
  • Dan Le, CEO & CISO, Red Cup IT. 

Let’s walk through the key points from the session, including practical guidance on policy development, vendor vetting, risk mitigation, and frameworks like ISO/IEC 42001.

 

Shadow AI Use Is Already Happening

Dan Le kicked off the conversation by pointing out that AI use often starts before organizations are even aware of it. “Whether you’re aware of it or not, your employees are already going to the free tools … doing that on their corporate computers or on the side using their phones to help them work better and faster,” he explained.

 

This behavior can unintentionally expose sensitive data like PII or PHI. To mitigate that, Dan shared that his team uses browser restrictions, firewalls, and enterprise controls to block unvetted AI platforms. “The third or fourth person that tried [to access a tool] would get a message that says, ‘Hey, please contact IT for approval.’”

 

Security Risks Require More Than Just Optimism

Marwan Omar warned that rushing to adopt AI tools, especially open-source models, can introduce significant risks. “Some of these models might be malicious,” he said. “This would be equivalent to poisoning a well, because if you train an AI model on a poisoned dataset, all of the predictions … could be flipped.”

 

He stressed the importance of understanding what vendors are doing with your data. “Are they using user data to retrain or fine-tune their models? Are they storing it indefinitely? Are they encrypting that data?”

 

To reduce exposure, he recommended isolated testing environments and emphasized that AI adoption should never trade security for speed. “Everybody is looking for faster, better, and cheaper. Well, these three things are great, but we should be looking at the other aspects of it.”

 

AI Agents: Convenient but Risky

The conversation also touched on automated workflows powered by AI models that can execute actions based on prompts. While these AI agents boost productivity, Mario Vlieg cautioned that using them in public cloud instances may result in data exposure or compliance violations.

 

“Using a public AI engine to upload sensitive information — corporate or personal — is in itself a breach.” Mario Vlieg explained. He encouraged organizations to instead set up private, cloud-based environments. “Whatever you upload, it is yours to control, to erase, and so on. That is the right way to handle it from a corporate standpoint.”

 

Governance Begins With Policy and Frameworks

Data Protection Officer Rui Serrano highlighted a positive trend: Most webinar attendees reported having or planning to implement an AI usage policy. He emphasized that even organizations that believe they don’t use AI are likely exposed through unofficial employee activity.

 

Both Serrano and Vlieg pointed to ISO/IEC 42001 as a leading standard for managing AI-specific risks. Unlike traditional frameworks focused on IT security (e.g., ISO/IEC 27001 or SOC 2), ISO/IEC 42001 offers a governance system tailored to AI. However, certification doesn’t equate to legal compliance — organizations must still consider regulatory environments like the EU AI Act.

 

The panel also recommended AI red teaming and using frameworks like MITRE ATLAS to validate models and surface security gaps before production deployment.

 

Key Takeaways

Here’s what organizations should remember when evaluating AI tools and risks:

 

  • AI tools are often in use, even without formal approval, requiring visibility and internal policy.
  • Vendor due diligence is critical. Review privacy policies, encryption practices, and data retention.
  • Test open-source AI tools in sandbox environments to prevent poisoned data or models.
  • ISO/IEC 42001 offers a structured, AI-specific management framework to support responsible use.
  • Tools like MITRE ATLAS and red teaming help stress test models before deployment.
  • Vet AI agents and embedded tools carefully to prevent inadvertent data exposure.

 

Want to learn more? Watch the full webinar to explore how to reduce AI-related risk and navigate your compliance journey with confidence.