Guide to AI Cyber Risk Oversight: Familiar Principles, Sharper Questions
AI can create significant benefits and competitive advantage. But those benefits depend on trust: from customers, employees, partners and regulators. That trust requires that the risk of malicious interference with AI is managed well. AI cyber risk management is sometimes presented as if it demands a completely new discipline. We believe boards should treat AI cyber risk through the same governance lens they already use for other technology: understand what matters most, set risk appetite, assign accountability, oversee suppliers, prepare for incidents, and ask for evidence that controls work. The UK Cyber Governance Code of Practice[1] is helpful because it reinforces that cyber governance is a board responsibility. The emergence of AI cyber risks does not replace that model; it sits within it.
That does not mean nothing changes. AI can increase speed, scale and opacity. It can introduce new dependencies on model providers and external services. OWASP (the Open Worldwide Application Security Project), through its guidance on risks in large language model applications, has highlighted several recurring AI security weaknesses that boards should recognise[2]. These include sensitive information disclosure, where use of public AI tools can create data privacy or intellectual property risks; prompt injection, where manipulated inputs can influence outputs or connected tools; data and model poisoning, where training or reference data is corrupted; excessive agency, and supply chain weaknesses, where dependence on external models or providers introduces hard-to-see risk.
Boards do not need a separate checklist for AI cyber risk. A better approach is to start with the questions they should already be asking about any important technology:
Who is accountable? - Assign clear executive accountability, on a use case level, but expect joined up working across technology, business, legal, compliance, risk and security teams.
What is our AI cyber risk tolerance? - Decide how much risk the organisation is willing to accept around data exposure, manipulated outputs, service disruption, unauthorised AI actions, and weak traceability.
Is AI cyber risk reflected in our strategy? - Understand how disruption to AI systems, model providers or key suppliers would affect cyber resilience, ensure the strategy is properly resourced.
Do controls, monitoring, incident readiness and assurance cover AI? - Identity and access management should apply to AI environments and agents. Logging and monitoring should cover AI data flows. Request evidence that AI is integrated into incident response planning, control testing and assurance activity.
Are we discussing this regularly at board level? - Encourage a regular board dialogue with the executive team on AI cyber risk. Request a coherent view of cyber- and AI-related regulation.
With that baseline established, boards should challenge their executive team with a smaller set of AI-specific questions. This keeps the discussion grounded in familiar cyber governance, while recognising four areas where AI needs closer attention.
1. Do we have an inventory of our AI tools and use-cases?
Ask management to define where AI is used across the organisation: the data it relies on, the models in use, the applications or workflows it supports, any agents that can act, the suppliers involved, and how the organisation avoids and identifies shadow AI.
2. Do our policies reflect AI use?
Existing policies may need to be updated to address AI, or the organisation may need a dedicated policy for acceptable AI use. Boards should also agree escalation paths for policy exceptions and for cases that exceed agreed risk tolerance.
3. Are our people equipped to manage AI cyber risk?
Does the workforce understand the risks of using AI? Is training tailored to roles? Is responsibility clear when humans deploy or oversee AI systems? The UK AI Cyber Security Code of Practice[3] calls for awareness of AI security threats and role-based training, and the UK National Cyber Security Centre says managers and board members need enough understanding to engage meaningfully on AI risk[4].
4. Are we testing AI against malicious interference?
AI systems should not only be tested before deployment and monitored continuously after launch; they should also be tested regularly for how they could be manipulated by adversaries. Ask whether higher-risk AI use cases are tested against typical attack paths.² The purpose is not to assess model quality or business outcomes, but to understand whether adversaries could extract information, corrupt system behaviour, misuse connected tools, or disrupt availability.
5. Do we know what needs protecting?
The impact analysis should start with a familiar lens, but extend it to cover AI-specific exposure:
Confidentiality: could sensitive data, prompts, credentials or intellectual property leak?
Integrity: could data, models, instructions or outputs be altered or corrupted?
Availability: could the service fail, be disrupted or become too costly to operate?
Authority: could the system or agent act beyond what it should be allowed to do?
Traceability: can we see what happened, who did it, and what the system relied on?
AI cyber risk does not require a parallel oversight approach. It requires boards to apply the same principles they already use in cyber governance, while probing more deeply where AI creates additional exposure.
[1] UK Government, Cyber Governance Code of Practice, 2025, https://assets.publishing.service.gov.uk/media/67ffbb30b73354468d135556/Cyber_Governance_Code_of_Practice_-_one_page_summary.pdf
[2] OWASP, 2025, https://genai.owasp.org/llm-top-10/
[3] UK Government, Code of Practice for the Cyber Security of AI, 2025, https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice/code-of-practice-for-the-cyber-security-of-ai
[4] UK National Cyber Security Centre, Guidance AI and cyber security: what you need to know, 2024, https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know