The Evolution of Trust: Reflections from the WEF 2026 Annual Cybersecurity Meeting
- Anna Collard

From firewalls to runtime governance
I’ve just returned from the World Economic Forum’s annual cyber meeting in Switzerland. This was my fifth time at the meeting, and although the geopolitical situation, the AI race and the adversaries feel no less daunting than in years past, I sensed a more positive, optimistic energy in the room.
“Where are the agents, and what are they doing?”
That question was asked in different ways across nearly every session and captured one of the main themes of the week. We are entering an era where AI agents interact with other autonomous systems at machine speed, with minimal human involvement. Cybersecurity teams are no longer just managing users, endpoints and applications. They now need visibility into thousands of agents acting on behalf of employees, vendors and customers: agents that make decisions, access APIs, move data and trigger workflows autonomously.
The problem is that our governance models weren’t built for this. They assume static environments: known identities, predictable workflows, human-paced decision-making. Agentic environments break most of those assumptions.
Runtime Governance: The New Security Layer
What we need is continuous, real-time “runtime governance”: oversight that happens while systems are operating, not only before deployment through audits and policy reviews. The challenge, however, is that some runtime monitoring approaches consume up to 20% of system resources. That raises a difficult but unavoidable question:
“How do we continuously govern autonomous systems without making them economically unsustainable to run?”
Organisations are caught between two pressures: real-time behavioural monitoring on one side; speed, efficiency and scale on the other. Solving that tension is becoming the central design challenge for AI security architecture.
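One way to picture that tension: a runtime monitor can sample routine agent actions rather than inspecting every one, while still inspecting every high-risk action. This is a minimal illustrative sketch, with invented action names and thresholds, not any product’s design:

```python
import random

# Hypothetical sketch: sampled runtime governance to bound monitoring overhead.
# Instead of inspecting 100% of agent actions (expensive), inspect a fraction,
# but always inspect actions deemed high-risk. All names here are assumptions.

HIGH_RISK = {"delete_data", "transfer_funds", "change_permissions"}

def should_inspect(action: str, sample_rate: float = 0.05) -> bool:
    """Always inspect high-risk actions; sample the rest at sample_rate."""
    if action in HIGH_RISK:
        return True
    return random.random() < sample_rate

def monitor(actions: list[str]) -> list[str]:
    """Return the subset of actions selected for deep inspection."""
    return [a for a in actions if should_inspect(a)]
```

Tuning `sample_rate` is exactly the economic dial discussed above: higher coverage, higher cost.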
Anthropic’s recently released paper on Constitutional Classifiers++ (on my reading list!) describes a more cost-effective, practical, production-ready approach to safety classification. It argues that safety doesn’t have to come at the cost of massive latency or poor user experience. (Hat tip to Vinh Nguyen for his expert insights on this.)

The Mythos Moment
If I had earned a dollar every time Mythos was mentioned during the week, I could probably have paid for a decent Swiss dinner, which, in South African Rand, is saying something. Beyond the hype, the reaction from organisations that had actually tested it showed people who were genuinely impressed, and in some cases slightly unsettled, by the speed and scale at which these systems operate. One participant summarised it bluntly:
“Three weeks of Mythos equals ten pentesters working for a year.”
It is worth keeping a cool head here. Whether the “three weeks equals ten pentesters for a year” claim holds up perfectly under rigorous benchmarking is almost secondary to the broader point: the economics and timelines of offensive security are changing dramatically. At $500k per month for tokens, Mythos is not something most organisations can afford, but it signals clearly where the market is heading. Attack capability is becoming faster, cheaper and increasingly autonomous. Meanwhile, defensive governance still feels fragmented and immature by comparison.
But it also presents a real opportunity for defenders: using AI to test continuously before deployment so vulnerable systems never ship in the first place, and using AI to support red teaming and automate defence. In the US, agentic AI is already deployed in autonomous response mode by SMEs, simply because they don’t have the resources not to. In the EU, those same defensive agents are still largely in discovery mode.
AI Cyber Defence for the 99%
One of the most energising threads of the week was a shift away from the assumption that better security means bigger AI. One of the ideas explored was for smaller, purpose-built, constrained defensive models, not giant frontier systems, but focused agents designed for specific tasks that organisations can actually afford: monitoring credential misuse, detecting anomalous behaviour, constraining risky actions, identifying phishing patterns, enforcing runtime policies.
The presenter called this “AI cyber defence for the 99%.” I loved that framing. If offensive AI becomes democratised, defensive AI must follow. Schools, hospitals, municipalities and SMEs across the Global South are increasingly exposed to sophisticated AI-enabled threats without access to equally sophisticated defensive capability. Smaller, affordable, constrained defensive AI may become one of the most important security equalisers of the next decade, particularly when combined with accessible cyber insurance and incident response for underserved communities.
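A constrained defensive agent of the kind described above can be imagined as an explicit allow-list of actions with hard conditions on the risky ones. This is a toy sketch under my own assumptions (action names, confidence threshold), not any vendor’s implementation:

```python
# Toy sketch of a constrained defensive agent: it may only take actions from
# an explicit allow-list, and riskier actions need high detection confidence,
# otherwise they are escalated. All names and rules are illustrative.

ALLOWED_ACTIONS = {"quarantine_file", "revoke_token", "alert_analyst"}

class PolicyViolation(Exception):
    """Raised when an agent attempts an action outside its mandate."""

def execute(action: str, target: str, confidence: float) -> str:
    if action not in ALLOWED_ACTIONS:
        raise PolicyViolation(f"action not permitted: {action}")
    # Constrain a risky action: only revoke tokens on high-confidence detections.
    if action == "revoke_token" and confidence < 0.9:
        return f"escalated to human: {action} on {target}"
    return f"executed: {action} on {target}"
```

The point of the constraint is affordability and safety at once: a narrow agent is cheaper to run and far easier to govern than an open-ended one.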
Identity Is the New Perimeter
Identity, not just human identity but machine identity, emerged as a central and recurring concern: API tokens, agent permissions, non-human identities, autonomous systems operating with broad access privileges. As AI agents increasingly act on behalf of humans, they inherit permissions, context and authority.
Several discussions explored “Zero Trust for agents” and even the idea that future AI systems may require something resembling operational identities or reputation systems, almost like passports for autonomous agents. If we don’t develop stronger governance around those identities, including observability, accountability and behavioural constraints, we risk building systems that are highly capable but dangerously poorly governed.
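The “passport for agents” idea could look something like short-lived, narrowly scoped credentials that are verified on every call, with deny-by-default semantics. A hypothetical sketch, not a real protocol or standard; field names and lifetimes are my own:

```python
import time

# Hypothetical zero-trust check for agent identities: every request is
# verified against a short-lived, narrowly scoped credential. Field names
# and the 5-minute default lifetime are illustrative assumptions.

def issue_credential(agent_id: str, scopes: set[str], ttl_s: int = 300) -> dict:
    """Mint a credential granting only the listed scopes, expiring after ttl_s."""
    return {"agent": agent_id, "scopes": set(scopes), "expires": time.time() + ttl_s}

def authorize(cred: dict, requested_scope: str) -> bool:
    """Deny by default: valid only if unexpired and the scope was explicitly granted."""
    if time.time() >= cred["expires"]:
        return False
    return requested_scope in cred["scopes"]
```

Short lifetimes and explicit scopes give exactly the observability and behavioural constraint the discussion called for: an agent can do what it was issued a “passport” for, and nothing else, and not for long.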
From Cybersecurity to Cognitive Security
Conversations also touched broader societal concerns. Scams, fraud, AI-enabled deception and manipulation came up repeatedly, as did the underreporting of cyber-enabled crime: only about 15% of these crimes are reported to law enforcement. The line between cybersecurity, cognitive security and societal resilience is blurring. The focus is slowly moving from protecting systems only to protecting trust itself.
Human Judgement as a Scarce Resource
One sentence from the week stayed with me: “Human judgement becomes a scarce and valuable resource.” We cannot build for “humans in the loop” on every decision; that won’t scale. It’s more along the lines of “human on the loop”: humans defining constraints, governing exceptions, and stepping in when systems drift beyond acceptable boundaries. The challenge is designing governance that preserves meaningful human judgement in environments operating faster than humans can realistically process.
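“Human on the loop” can be read as: the system runs autonomously inside an approved envelope and is held for a person only when it drifts outside it. A minimal sketch, with envelope limits invented purely for illustration:

```python
# "Human on the loop" sketch: autonomous operation inside an approved
# envelope; anything outside it is held for human review instead of a
# human approving every decision. Limits are invented for illustration.

APPROVED_ENVELOPE = {"max_records_accessed": 1000, "max_api_calls": 50}

def route_decision(records_accessed: int, api_calls: int) -> str:
    """Proceed autonomously within the envelope; otherwise hold for a human."""
    within = (records_accessed <= APPROVED_ENVELOPE["max_records_accessed"]
              and api_calls <= APPROVED_ENVELOPE["max_api_calls"])
    return "proceed_autonomously" if within else "hold_for_human_review"
```

Scarce human judgement is then spent only on the exceptions, which is the part that actually scales.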

Secure-to-Market, Not Just First-to-Market
The tension between speed and safety surfaced in almost every session. The current AI race still heavily rewards being first to market, faster deployment, faster scale, faster capability. But several discussions challenged whether that model is sustainable.
A poorly governed AI agent can become an autonomous actor operating at machine speed with access to sensitive systems and decision-making authority. The cost of insecure deployment is rising fast.
Secure-to-market doesn’t mean slowing innovation. It means building governance, observability and resilience into systems from the start, not bolting them on after problems emerge. Aviation, automotive and pharmaceuticals all went through this maturation. AI governance will too. And as that happens, markets, regulators and customers will increasingly reward organisations that can demonstrate trustworthy, governable systems, not just the fastest or most powerful ones.
The Five Eyes agencies made this point concretely last week, releasing joint guidance warning that agentic AI will likely misbehave and amplify organisations’ existing vulnerabilities, recommending slow, careful adoption rather than rapid rollout. And from a regulatory standpoint, the EU Cyber Resilience Act (coming into effect in 2027) will make security by design a legal requirement, not a nice-to-have. That’s not a constraint on innovation; it’s a long-overdue baseline.
So what next?
The right response to all of this isn’t panic. Adversaries have always had capabilities that challenged defenders, that’s not new. What’s needed now is the same thing it’s always been: calm focus, clear priorities and governance that keeps pace with capability.
The organisations that will navigate this well are the ones that stay calm, stay strategic, and build security into their systems from the ground up.




