
Protecting innovation: Key takeaways from Vention’s 2025 CTO Craft Con presentation
Artificial intelligence is transforming the way businesses operate, but with great innovation comes great responsibility.
That was the guiding theme of Vention’s presentation at CTO Craft Con this week. In "Protecting innovation: Mitigating the hidden risks of emerging tools," Glyn Roberts, Vention’s UK CTO of Digital Solutions, explored the delicate balance between embracing AI and managing its risks.
Missed the talk? Here’s what you need to know.

The wake-up call
Glyn opened with a confession: Back in early 2024, Vention felt well-prepared for AI adoption. With strong ISO 27001 policies, data classification protocols, and robust security training, we had every reason to be confident. But subtle shifts — faster project turnarounds, inconsistent AI-generated content, and a rise in AI-driven meeting summaries — prompted a closer look.
A company-wide survey confirmed it. Employees across all departments had already begun experimenting with AI to boost their productivity. While Vention encourages workflow improvements, this raised a new challenge: how to ensure responsible AI use without stifling innovation.
The risk landscape
While AI unlocks efficiencies, it also introduces risks:
- Data sensitivity and security: Many AI tools process data externally, raising concerns about cross-border data transfer and regulatory compliance.
- Regulatory pitfalls: Non-compliance with GDPR, HIPAA, and industry-specific regulations can erode trust and lead to legal consequences.
- Unintentional data exposure: AI features are quietly embedded in mainstream tools like Microsoft 365, Notion, Slack, and Figma, potentially compromising sensitive company information.
- Troublesome terms of service: Some AI tools, meeting note-takers among them, include concerning clauses, such as storing user data for up to two years and granting broad third-party access.
Implementing AI governance
Recognising the need for structured oversight, we established a cross-department AI governance group to:
- Develop an AI usage policy: A dynamic framework outlining risks, approved use cases, and alignment with security policies.
- Approve AI tooling: A proactive process that ensures only vetted tools are used while maintaining flexibility for innovation.
- Train employees: Understanding AI increases responsible usage. We leveraged publicly available training resources and incentivised completion with badges and benefits.
- Maintain a central knowledge base: A regularly updated repository clarifies approved tools, acceptable use cases, and risk factors.

Balancing innovation with control
Outright AI bans don't work; employees will use AI regardless, often without oversight. Instead, the focus should be on responsible enablement:
- An AI use case framework: Clear guidelines on permissible applications, ensuring human accountability.
- ISO standards for AI governance: Leveraging frameworks such as ISO 27001, 27701, 37301, and 42001 for compliance and risk management.
- R&D and experimentation: Hackathons, pilot programmes, and knowledge-sharing initiatives to encourage ethical AI adoption.
How to turn AI into an asset, not a liability
To summarise, Glyn outlined the key steps organisations should take for AI to fuel innovation without compromising security:
- Establish a formal AI policy: Without clear guidelines, employees will define their own.
- Identify and monitor AI use cases: Prevent unintended risks before they escalate.
- Implement a structured approval process: Ensure only vetted AI tools are adopted.
- Educate employees continuously: Understanding AI fosters responsible use.
- Stay vigilant for red flags: Monitor AI tools operating outside approved guidelines.
With structured AI governance, businesses can harness AI’s transformative potential while mitigating hidden risks. The future of AI-driven innovation depends on finding the right balance between freedom and control.
Is your business ready?
Source: PR@ventionteams.com