Artificial Intelligence is no longer a futuristic promise; it’s a present-day reality reshaping how organizations approach compliance and cybersecurity. From automating routine tasks to enabling real-time monitoring, AI is driving a fundamental shift: compliance is becoming continuous, proactive, and more strategic than ever before.
We asked industry leaders to share their perspectives on AI’s impact: where it’s delivering real value, where it’s introducing new risks, and what it means for the future of compliance and cybersecurity.
AI as a Catalyst for Efficiency and Judgment
Darshana highlights how AI is subtly yet profoundly reshaping compliance and cybersecurity. Rather than replacing human decision-making, AI is streamlining repetitive tasks like evidence collection, control mapping, and documentation, freeing teams to focus on strategic risk assessment and informed judgment.
AI is now part of the compliance and cybersecurity world, and honestly, in ways most of us didn’t fully expect.
It might not be a large, standalone shift where everything suddenly looks different. It’s much more subtle than that. It’s showing up inside the day-to-day. The kind of work that used to take hours (evidence collection, mapping controls, drafting documentation, reviewing environments) is now getting done in minutes. Not because the work has changed, but because the way it’s executed has.
Take something as routine as a regulatory update: what used to involve multiple rounds of back-and-forth, rewriting, and alignment across teams can now be kickstarted almost instantly. There’s structure, there’s direction, there’s momentum right from the start.
And that’s really the point. AI isn’t replacing decision-making. It’s adding another layer to how decisions get made.
It speeds things up, removes a lot of the operational drag, and brings a level of consistency that’s hard to achieve manually. It also creates space. Space for teams to step back from repetitive work and spend more time on what actually matters: understanding risk, applying context, and making informed calls. That’s where the real value is.
Faster execution, better visibility, more consistency, and the ability to scale without everything becoming heavier or more complex. Compliance starts to feel continuous instead of periodic. Security becomes more proactive instead of reactive.
But the core doesn’t change. AI supports. It doesn’t own.
The judgment, the accountability, the final call – that still sits with people. And the teams that get this balance right are the ones that will move faster without losing control.
Darshana emphasizes that AI’s true value lies in enabling continuous compliance and proactive security. By automating routine processes, teams can shift from periodic reviews to real-time monitoring, ensuring compliance is dynamic and responsive rather than static and reactive.
AI’s Role in Solving the Human Layer of Cybersecurity
Mahmoud Lotfy is a cybersecurity professional and the founder of Excera, an AI-powered, personalised security awareness training platform. He addresses a long-standing challenge in cybersecurity: the ineffectiveness of generic security awareness training. AI is revolutionizing this space by tailoring training to individual roles, threats, and organizational contexts, transforming compliance from a checkbox exercise into a genuine defense mechanism.
How AI is finally fixing the oldest problem in cybersecurity
Every compliance team knows the uncomfortable truth about security awareness training. People click through it as fast as possible, collect the certificate, and forget everything by Tuesday. The module was designed for everyone, which means it was designed for no one. A CFO at a regional bank and a junior analyst in the same organisation sit through identical content, despite facing entirely different threats, carrying entirely different access, and responding to entirely different psychological triggers.
This has been the state of security awareness for thirty years. Frameworks have matured, tools have become sophisticated, budgets have grown. And the human layer has stayed stubbornly broken.
AI is changing that, but not in the way most people are talking about.
The conversation about AI in cybersecurity is dominated by detection and response: faster threat identification, automated SOC workflows, smarter anomaly detection. All important. But the more significant shift, and the one receiving far less attention, is what AI makes possible on the human side.
For the first time, it is possible to generate a security awareness session that is specific to this person. Their role, their organisation, their near-misses, their exact regulatory exposure, in their language, reflecting their actual threat landscape. Not a module selected from a library. A session that did not exist before they logged in.
That is a fundamentally different product from anything the market has offered before. For organisations operating under frameworks like ISO 27001, GDPR, or NIST, where regulators are increasingly asking not just for completion rates but for evidence of genuine behavioural change, it is the difference between a checkbox and a defence.
The human layer has always been the hardest problem in cybersecurity. It is also the last one AI is getting around to solving properly.
Mahmoud underscores that AI’s impact extends beyond detection and response. By personalizing training, organizations can finally address the human factor in security, aligning with regulatory demands for behavioral change rather than mere completion rates.
Yevhen explores the dual role of AI in compliance: as a powerful tool for automation and as a new governance challenge. While AI enhances efficiency in monitoring and risk assessment, it also introduces regulatory obligations, such as those outlined in the EU AI Act and ISO/IEC 42001.
AI isn’t just a tool for compliance teams; it’s becoming the subject of compliance itself. That shift changes everything.
For most of my career, compliance and cybersecurity were about controlling human behavior and system configurations. Audit trails, access controls, policy sign-offs. AI breaks that model. You can’t audit a neural network the way you audit a policy. You can’t write a control for a system whose outputs you can’t fully predict.
What I see in practice, working with organizations on AI governance frameworks and ISO/IEC 42001 implementation, is a two-speed reality.
On one side, AI is genuinely improving compliance operations. Automated monitoring flags anomalies in transactions, contracts, and access patterns faster than any manual review cycle. AI-assisted risk assessments can process volumes of data that would take a team of auditors weeks. In cybersecurity, behavioral AI detects lateral movement and insider threats that signature-based tools miss entirely. These are real gains.
On the other side, every organization deploying AI for compliance or security is simultaneously creating new governance obligations they often don’t see coming. The EU AI Act classifies several AI-driven compliance and security tools as high-risk systems under Annex III. That triggers obligations: conformity assessments, human oversight mechanisms, data governance requirements, and audit trails for the AI systems themselves. Most organizations are not ready for that.
The uncomfortable truth is this: the same AI that makes your compliance team more efficient may require its own compliance program.
What leading organizations are doing differently is treating AI governance as an internal controls problem, not a technology problem. That means mapping AI systems against COSO’s risk and control frameworks, assigning ownership, and building audit-ready documentation, not just deploying tools and hoping for the best.
The organizations that will win the next five years aren’t just the ones using AI for compliance. They’re the ones that can demonstrate, to a regulator or an auditor, that their AI operates within defined risk tolerance and with appropriate human accountability.
That’s the standard the EU AI Act is moving toward. ISO/IEC 42001 is the management system framework that gets you there.
AI is transforming compliance and cybersecurity. But transformation without governance is just risk at scale.
Yevhen advises organizations to integrate AI governance into their existing risk and control frameworks, ensuring accountability and transparency. The future belongs to those who can demonstrate that their AI systems operate within defined risk tolerances and human oversight.
From Periodic to Continuous Compliance
Johnathan highlights AI’s transformative potential in shifting compliance from periodic reviews to continuous, real-time monitoring. By automating data gathering and horizon scanning, AI enables compliance teams to focus on analysis and strategic decision-making.
Ask any compliance officer what they spend most of their time on and you’ll hear a variation of the same answer: data gathering, report generation, chasing teams for information, manually checking things that really shouldn’t require a human to check. It’s not that these tasks don’t matter (they do); it’s that doing them manually absorbs the bandwidth that should be going toward actual analysis and judgment.
Transaction monitoring, sanctions screening, gap analysis, first-draft risk assessments: these are exactly the kinds of high-volume, pattern-recognition tasks that AI handles well. And when you free up a compliance team from the grind of producing those outputs manually, something shifts. People start thinking instead of processing. The conversation changes from ‘here’s what the data says’ to ‘here’s what it means and here’s what we should do about it.’
That shift matters. A lot. One of the more fundamental changes AI enables is moving compliance from periodic to continuous. For years, the rhythm of compliance monitoring was dictated by review cycles: monthly, quarterly, annually. That meant risks could develop in the gaps. An unusual pattern of client behaviour, a cluster of threshold breaches, a process that had quietly drifted out of line with policy: none of these would necessarily surface until the next scheduled review.
Real-time monitoring changes the model entirely. Issues are flagged when they occur, not weeks later. And for firms operating under regulatory frameworks that require them to demonstrate robust systems and controls, being able to show that your compliance infrastructure is genuinely live, not just retrospectively reported, is a material difference when a regulator comes knocking.
Regulatory horizon scanning, tracking what’s coming before it lands, has always been on the compliance to-do list. In practice, it usually meant someone subscribing to a few regulatory newsletters and trying to summarise the relevant bits for a quarterly board report. Better than nothing, but not much better.
AI tools that use natural language processing can now monitor FCA publications, consultation papers, parliamentary debates, industry body responses, and international regulatory developments continuously, filtering for what’s actually relevant to a specific business model and flagging it in real time. The practical effect is that you stop reacting to regulation after it’s been finalised and start shaping your procedures and controls in anticipation of where things are heading.
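The filtering step described above can be sketched as a toy relevance scorer. Real horizon-scanning tools use NLP models rather than keyword matching, and the business profile, feed item format, and threshold below are all hypothetical illustrations:

```python
# Illustrative sketch: triaging regulatory feed items against a firm's
# business profile. Naive substring matching stands in for the NLP
# relevance models a real horizon-scanning tool would use.

# Hypothetical weighted terms describing one firm's regulatory exposure.
BUSINESS_PROFILE = {
    "consumer credit": 3.0,
    "consumer duty": 3.0,
    "operational resilience": 2.0,
    "outsourcing": 1.0,
}

def relevance_score(text: str, profile: dict) -> float:
    """Sum the weights of profile terms appearing in the item text."""
    lowered = text.lower()
    return sum(weight for term, weight in profile.items() if term in lowered)

def triage(feed_items: list, threshold: float = 2.0) -> list:
    """Keep only items relevant enough to warrant human review, best first."""
    scored = [
        {**item, "score": relevance_score(item["title"] + " " + item["summary"],
                                          BUSINESS_PROFILE)}
        for item in feed_items
    ]
    return sorted(
        (i for i in scored if i["score"] >= threshold),
        key=lambda i: i["score"],
        reverse=True,
    )

items = [
    {"title": "CP on Consumer Duty outcomes", "summary": "consumer credit firms"},
    {"title": "Wholesale market data review", "summary": "benchmark administrators"},
]
flagged = triage(items)  # only the Consumer Duty item clears the threshold
```

The point of the design is the profile, not the matching: the same feed produces different alerts for different business models, which is what makes the output actionable rather than another newsletter.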
The firms I see handling regulatory change most confidently aren’t the ones with the biggest compliance teams. They’re the ones that spotted the direction of travel early and had already started adapting. That’s the edge that good horizon scanning gives you and AI makes it operationally realistic in a way it never was before.
Here’s where I’d push back on some of the more breathless takes about AI transforming compliance: it doesn’t replace the hardest part of the job. It never will.
The UK’s financial regulatory environment is principles-based. The FCA sets outcomes and standards (Consumer Duty, SM&CR, operational resilience) and expects firms to use judgment in determining how to meet them. There is no rulebook that says ‘if situation X, do Y.’ The interpretation of what ‘good’ looks like for a specific firm, with a specific client base, offering specific products, is a professional judgment call. Every time.
AI can tell you that your customer complaint data shows an uptick in a particular product area. It can cross-reference that against your vulnerable customer framework and flag a potential Consumer Duty concern. What it cannot do is decide how serious that concern is, whether it reflects a systemic failure or a blip, and what the right response looks like given everything else going on in the business. That requires someone who understands the regulatory intent, knows the business, and can be held accountable for the call they make.
The FCA has been explicit about this. Their position is that existing governance frameworks, including SM&CR, already establish individual accountability for AI-assisted decisions. The regulator isn’t introducing a separate AI rulebook. They’re making clear that the humans using these tools are still on the hook for the outcomes.
I think about AI in compliance the way I think about any good infrastructure investment: it doesn’t make average people exceptional, but it does let exceptional people work at a scale and speed they couldn’t reach otherwise. A strong compliance professional with the right AI tooling can cover more ground, spot more risks, and respond faster than the same person buried in spreadsheets. That’s the opportunity.
There are real challenges to navigate (data quality, model governance, the risk of over-trusting an automated output), and firms that charge ahead without thinking carefully about those things will create problems for themselves. But the answer to those challenges isn’t to hold back. It’s to be deliberate.
The compliance function has spent too long being the department that says no, files reports, and waits for something to go wrong. AI gives it the tools to be something more useful: a function that sees around corners, moves early, and adds genuine strategic value to the business it sits within. That’s worth getting right.
Johnathan notes that while AI accelerates compliance operations, it does not replace human judgment, especially in principles-based regulatory environments like the UK’s. The key is to use AI as a force multiplier for exceptional professionals, not a substitute for accountability.
The Hidden Risks of AI in Compliance
Kyle warns that AI’s acceleration of compliance processes risks creating a “regression” in understanding and ownership. He cautions against over-reliance on AI-generated outputs without critical validation, particularly when sensitive data is involved.
What is the value of a policy no one has ever read? It may satisfy an auditor. It may close a finding. But if the people it governs have never engaged with it, it has no practical value. A policy that exists only to pass an audit is a liability dressed as a control. That is the problem AI is accelerating, and it is not the one anyone is talking about.
THE REGRESSION NOBODY IS TALKING ABOUT
AI is helping us move forward and backward at the same time. Policy documentation, gap analyses, framework interpretation: all significantly faster now. That acceleration matters, especially for lean teams under audit pressure.
But people are prompting for outcomes, not understanding. The goal becomes the deliverable, not the knowledge behind it. Most prompts are never revisited. The context is lost the moment the output is accepted. We are living in the season of instant gratification, and compliance is not immune to it.
AI can be used strategically, but it requires purposeful intent. When used as an aid, to refine documentation, stress-test a control narrative, or pressure-check a gap analysis, it is genuinely powerful. The mistake is treating it as a replacement for the thinking that compliance work demands.
THE DATA SHARING PROBLEM NO ONE WANTS TO TALK ABOUT
Perhaps the most underappreciated risk AI has introduced is what we are willingly handing over. These tools became trusted almost overnight, and with that trust came an extraordinary willingness to share sensitive information. PII, PHI, banking data, internal audit findings, control narratives tied to specific system configurations. It goes in because it generates a useful answer, and most users never stop to ask where it goes next.
The problem is compounded by the fact that many users are not subject matter experts. They trust the output because it sounds authoritative. That combination, sensitive data going in and unverified answers coming out, creates real exposure.
There has never been a greater need for rigorous vendor due diligence. Organizations need to understand exactly where ingested data is stored, whether it is used to train underlying models, how it is retained, and who it may be shared with. That is not a technical question. It is a governance question, and it belongs in every vendor assessment process right now.
FRAMEWORKS ARE ALREADY BEHIND
ISO 42001, published in December 2023, is the world’s first international standard focused on AI management systems. But December 2023 feels like a different era in AI terms. The tools, the capabilities, the risks, the threat surface are all materially different from what informed that standard. ISO 42001 is a solid foundation. It is not a ceiling.
The frameworks set the baseline. The real security and compliance value comes from within the organization. A certificate on the wall means very little if the thinking behind it was outsourced to a prompt.
A FINAL THOUGHT WORTH SITTING WITH
The same tools we use to review and rely on as evidence can be used to manufacture it. AI can generate policies, reports, audit artifacts, and security documentation that look entirely legitimate. As we build more of our compliance assurance on AI-assisted outputs, we have to ask ourselves: how confident are we that what we are looking at is real?
Stay curious. Stay skeptical. Challenge the output. Validate the source. The tools have changed. The responsibility has not.
Kyle stresses the importance of rigorous vendor due diligence and skepticism toward AI outputs. Compliance teams must ensure that AI is used as an aid, not a replacement, for the thinking and accountability that define effective governance.
Rethinking Security Architecture with AI
Jona argues that AI’s greatest impact lies in addressing the structural flaws in traditional security stacks. By leveraging network visibility and behavioral analysis, AI enables real-time detection and continuous compliance critical for modern, credential-based attacks.
There’s a structural problem at the center of how most organizations approach security, and it rarely gets discussed directly: the detection stack is architecturally inverted. The layer with the most honest, tamper-resistant data about what’s actually happening in an environment sits at the bottom, chronically underutilized. On top of it, security teams stack alerting systems, endpoint agents, and log pipelines that were designed to catch a threat model that has largely moved on. AI doesn’t fix a broken architecture automatically. Deployed at the right layer, though, it enables something qualitatively different: security that actually matches how modern attacks operate.
THE DETECTION STACK
The Stack Is Designed for Attacks That No Longer Happen
The conventional enterprise security stack was built for a world where attackers introduced detectable artifacts into environments: malware binaries, suspicious executables, known-bad IP addresses. Security teams wrote rules against these artifacts. Threat intelligence feeds kept the rules current. The model worked for that threat model. That world has largely been left behind: 82% of detections in 2025 were malware-free (CrowdStrike 2026 Global Threat Report). Attackers authenticate with stolen credentials and operate through native administrative tooling, legitimate software organizations already trust and can’t block. There is no binary to scan. No hash to match. No signature to write.
The techniques are well-documented in the security research community: credential theft, lateral movement through trusted protocols, privilege escalation through legitimate admin interfaces. Every one of them leaves clear evidence in network traffic long before they surface in endpoint logs, if they surface at all.
The market is beginning to price this in. The network detection and response (NDR) sector sits at approximately $3.89 billion in 2025, projected to reach $5.82 billion by 2030, as enterprises recognize that behavioral intelligence at the network layer closes the gap that signature-based tooling structurally cannot.
NETWORK VISIBILITY
The Network Is the Most Honest Source of Truth
Every lateral movement, every credential abuse, every covert command-and-control channel leaves packets on the wire. You can tamper with logs. You can disable an endpoint agent. You cannot alter the physics of network communication. This makes the network layer the one surface that remains honest regardless of how sophisticated the attacker is, provided you’re observing it passively and continuously. The engineering challenge has historically been scale: passively observing network traffic in a production environment generates a volume of data no human team can reason over in real time. This is precisely where AI changes the equation.
Behavioral models learn what normal looks like for a specific environment: which systems communicate, at what volume, on what schedule, with what protocols. Once that baseline exists, deviations surface immediately. A credential-based attack moving laterally generates characteristic patterns: unusual authentication sequences, atypical service-to-service communication, directory queries that don’t fit the operational profile of the account involved. These signals don’t require signatures. They require context. AI maintains that context at a scale and speed no human team can match.
The numbers bear this out. The average breach lifecycle in 2025 was 241 days, a nine-year low, driven by AI-powered detection; organizations deploying security AI extensively cut that by 80 more days and saved nearly $1.9M on average (IBM Cost of a Data Breach Report 2025). Meanwhile, the average attacker breakout time in 2025, from initial access to lateral movement, was 29 minutes, with the fastest observed at 27 seconds (CrowdStrike 2026 Global Threat Report). Any detection process that routes through a human analyst before containment has already failed at that speed.
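The baselining idea can be sketched in a few lines, assuming hourly connection counts per service pair as the only feature. Production NDR models use far richer signals (protocols, timing, identity context); the service names and z-score threshold here are illustrative:

```python
# Minimal sketch of behavioural baselining at the network layer: learn a
# per-service-pair volume baseline from history, then flag deviations and
# never-before-seen communication paths.
import statistics

def build_baseline(history: dict) -> dict:
    """Mean and stdev of hourly connection counts per (src, dst) pair."""
    return {
        pair: (statistics.mean(counts), statistics.pstdev(counts) or 1.0)
        for pair, counts in history.items()
    }

def anomalies(observed: dict, baseline: dict, z: float = 3.0) -> list:
    """Flag new paths, or volumes beyond z standard deviations from the mean."""
    flagged = []
    for pair, count in observed.items():
        if pair not in baseline:
            flagged.append((pair, "new communication path"))
            continue
        mean, stdev = baseline[pair]
        if abs(count - mean) / stdev > z:
            flagged.append((pair, f"volume deviation ({count} vs mean {mean:.0f})"))
    return flagged

history = {("web", "db"): [100, 110, 95, 105], ("app", "cache"): [50, 55, 45, 50]}
base = build_baseline(history)
# A workstation talking to the domain controller for the first time, plus
# a volume spike on an established path: both surface without signatures.
alerts = anomalies({("web", "db"): 400, ("workstation-7", "dc-1"): 12}, base)
```

Note that neither alert required knowing anything about the attacker’s tooling; the model only needed to know what this environment normally looks like.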
Compliance Without the Audit Theater
CISOs have known for a long time that the annual compliance audit is fundamentally broken. Your environment is in continuous motion: new deployments, configuration changes, third-party integrations, infrastructure scaling. The snapshot an auditor captures reflects your posture for approximately one day out of 365. Everything else is extrapolation.
What’s become technically achievable now is continuous compliance: not as a feature tacked onto a security product, but as a natural output of the same visibility that powers security. If you have a real-time map of how data moves through your environment, you have the raw material for continuous compliance validation: which systems handle sensitive data, where data crosses regulatory boundaries, which connections violate your segmentation policies.
When a new deployment introduces a data flow that shouldn’t exist under your compliance framework, the system catches it at deploy time rather than months later during an audit.
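That deploy-time check amounts to a policy gate over observed data flows. A minimal sketch, in which the zone policy, flow records, and data classes are all hypothetical:

```python
# Sketch of a deploy-time compliance gate: compare the data flows a new
# deployment introduces against a segmentation / data-handling policy.

# Hypothetical policy: which data classes each zone may receive.
POLICY = {
    "analytics": {"anonymised"},
    "payments": {"anonymised", "cardholder"},
    "eu-prod": {"anonymised", "pii"},
}

def violations(flows: list) -> list:
    """Return a human-readable finding for each flow the policy forbids."""
    findings = []
    for flow in flows:
        allowed = POLICY.get(flow["dest_zone"], set())
        if flow["data_class"] not in allowed:
            findings.append(
                f"{flow['source']} -> {flow['dest_zone']}: "
                f"'{flow['data_class']}' not permitted in this zone"
            )
    return findings

new_flows = [
    {"source": "checkout-svc", "dest_zone": "payments", "data_class": "cardholder"},
    {"source": "report-job", "dest_zone": "analytics", "data_class": "pii"},
]
findings = violations(new_flows)  # only the PII-to-analytics flow is flagged
```

Run as a pipeline step, a non-empty findings list blocks the deploy, which is what turns audit evidence from a retrospective scramble into a by-product of normal operations.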
Compliance reports stop being a scramble of evidence collection and start being an automated output of infrastructure you’re already running for security. For CISOs defending security posture to a board, this is a meaningful shift from periodic snapshot to continuous visibility, from reactive remediation to real-time enforcement.
Frameworks like NIS2, DORA, and emerging AI-specific regulations are already encoding real-time monitoring and continuous risk management as enforceable technical requirements. The annual audit model isn’t just inefficient; it’s becoming legally insufficient.
IDENTITY & ACCESS
The Attack Surface Nobody Is Fully Watching
The fastest-growing and least-understood attack surface in enterprise environments isn’t endpoints or applications. It’s non-human identities: service accounts, API tokens, OAuth access grants, CI/CD pipeline credentials, AI agent sessions. The 2025 State of Non-Human Identities report puts the average enterprise ratio at approximately 92 non-human identities per human employee, exceeding 500:1 in heavily automated environments. 97% of non-human identities carry excessive privileges; 71% are never rotated within recommended timeframes; 44% of tokens have already been exposed in code repositories, project management tools, or collaboration platforms; and only 15% of organizations are highly confident in their ability to detect attacks through this surface (NHIMG 2025 State of NHI Report).
Non-human identities are increasingly the path of least resistance for attackers, precisely because security programs built around human account monitoring are architecturally blind to machine-to-machine communication. Compromising a service account with broad access, or abusing an AI agent session with elevated permissions, generates no human behavioral signal. It does generate network-layer anomalies that diverge from the established baseline of that identity’s normal activity.
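The rotation and excessive-privilege problems cited above lend themselves to a simple hygiene check. A sketch, where the identity records, field names, and 90-day threshold are illustrative assumptions rather than any particular tool’s API:

```python
# Illustrative non-human-identity hygiene audit: flag service credentials
# that are stale, or that hold privileges never observed in actual use.
from datetime import datetime, timedelta

MAX_CREDENTIAL_AGE = timedelta(days=90)  # assumed rotation policy

def audit_nhi(identities: list, now: datetime) -> list:
    """One finding per stale credential and per over-privileged identity."""
    findings = []
    for ident in identities:
        if now - ident["last_rotated"] > MAX_CREDENTIAL_AGE:
            findings.append(f"{ident['name']}: credential not rotated in 90+ days")
        unused = set(ident["granted_scopes"]) - set(ident["used_scopes"])
        if unused:
            findings.append(f"{ident['name']}: unused privileges {sorted(unused)}")
    return findings

now = datetime(2025, 6, 1)
identities = [
    {
        "name": "ci-deploy-token",
        "last_rotated": datetime(2024, 1, 15),
        "granted_scopes": ["repo:write", "deploy", "admin"],
        "used_scopes": ["repo:write", "deploy"],
    },
]
report = audit_nhi(identities, now)  # stale rotation plus an unused 'admin' scope
```

Comparing granted scopes against observed use is the key move: it is the same baseline-versus-behaviour logic applied to identity rather than traffic.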
As AI agents proliferate across enterprise environments (making API calls, accessing databases, orchestrating workflows autonomously), the machine identity surface will grow faster than any manual governance program can track. Applying the same behavioral intelligence to machine traffic that we apply to human activity isn’t optional. It’s the next necessary layer of detection.
ADVERSARIAL AI
Both Sides Are Running the Same Playbook
AI-enabled adversary operations rose 89% year-over-year in 2025 (CrowdStrike 2026 Global Threat Report). Targeted phishing campaigns now synthesize publicly available information about individuals to produce personalized messages at scale that are statistically indistinguishable from legitimate correspondence. Automated reconnaissance pipelines scan public attack surfaces continuously, identifying vulnerabilities faster than most patch cycles. Adversaries are using AI to probe behavioral detection models, crafting activity patterns designed to stay within learned baselines while advancing toward the objective. AI raises the capability floor for both sides simultaneously. Defenders who deploy AI on top of weak fundamentals will find their attack surface has grown (more automation, more non-human identities, more AI agent sessions) without their detection depth growing to match.
The organizations positioned to benefit are those treating AI as an amplifier for strong security fundamentals: zero trust access controls, strict identity governance, deep observability at the infrastructure level, and detection anchored in continuous behavioral analysis rather than periodic signature matching. AI makes those foundations intelligent and fast. It doesn’t substitute for building them.
WHAT COMES NEXT
The Shape of What’s Coming
The convergence already underway will define the security market through the end of this decade. Network detection, extended detection and response, SIEM, and compliance automation are collapsing into unified platforms where security telemetry and compliance evidence cease to be separate data problems. The same behavioral model that detects an anomaly also validates that the system is operating within its compliance envelope. The same data that surfaces a threat generates the audit trail.
For security teams, the implication is clear: the market is moving toward infrastructure, not tooling. Point products built around specific artifact types (malware scanners, signature-based detection, log correlators) are facing structural disruption from platforms that operate continuously at the behavioral layer. The regulatory environment is accelerating that transition by making continuous monitoring a legal requirement rather than a best practice.
The future of security is passive, continuous, and context-aware: systems that understand your environment deeply enough to recognize what doesn’t belong, operating at the speed of the network rather than the speed of the analyst. That’s not a prediction. For the teams building and operating it today, it’s already the architecture replacing what came before.
Jona predicts a convergence of security and compliance platforms, where behavioral intelligence replaces periodic audits. The future, he asserts, belongs to systems that operate at the speed of the network, not the analyst.
Conclusion
AI is revolutionizing compliance, but it’s not a magic bullet. The real opportunity lies in using AI to handle the repetitive, data-heavy work, freeing teams to focus on what matters most: judgment, strategy, and risk management.
The leaders in this space won’t just adopt AI; they’ll integrate it thoughtfully, ensuring it enhances, rather than replaces, the human expertise that drives trust and accountability. The future of compliance isn’t about choosing between AI and people. It’s about making them work together, smarter and faster than ever before.