A view from the Rubicon and the quandary of agentic AI

Agentic AI is approaching an inflection point. Learn why organizations need robust governance frameworks and ZTA to prepare for this change.

Apr 2, 2026 • 6 Minute Read


In January of 49 BC, the Roman governor Gaius Julius Caesar was ordered to disband his army and cede power back to the Senate. Instead, he defied that order and crossed the Rubicon River, the act we still cite as the point of no return. But it was only after the crossing that he seized power and had himself named Dictator in Perpetuity. The governor had become Caesar the dictator.

Agentic AI is heading toward its own Rubicon in the form of intended and unintended security boundary violations. Think of it this way:

  • Intended agent actions and permissions = Governor Gaius Julius Caesar
  • Actual agent threat of escalation = Caesar, Dictator in Perpetuity
  • Your company's policy = Roman law
  • Users and administrators = the Roman Senate
  • Security boundaries = the Rubicon River

In this way, agentic AI will affect world powers, nation-states, global corporations, and individuals with potentially catastrophic results.

The problem with AI agents and the limitations of Human-in-the-Loop

AI agents can take thousands of actions per second and access a multitude of corporate resources. Once access is granted, an agent can drift from organizational intent and even decline to relinquish its privileges. This can lead to devastating consequences if organizations aren't prepared to maintain control.

This is the problem presented from an unintended perspective. It gets worse when we consider agentic AI from a malicious perspective.

Because we still don't know how best to prevent and defend against the coming storm of agentic misuse and abuse, most of the sessions I attended at RSAC (the RSA Conference) offered little more than platitudes about protecting your systems and establishing guardrails, without exploring meaningful ways to do so.

The concept and practice of Human-in-the-Loop (HITL) was offered as one defense against the oncoming onslaught. But in the most informative session I attended, the panelists shared some tough truths.

Francis deSouza, COO and President of Security Products at Google Cloud; Emma Smith, Global CISO at Vodafone; and Shaun Khalfan, Sr. VP and CISO at PayPal, were unanimously sour on the HITL approach because it’s too slow.

Even though AI governance certifications, regulations, and frameworks either suggest or demand HITL, these panelists agreed that, given the volume and velocity of agentic AI, HITL is unsustainable and untenable.

Instead, they proposed that humans on or at the loop are necessary. While that reframing is a useful problem-posing exercise, we are nowhere near translating humans on or at the loop into purposeful architecture, let alone a design implementation.

As a certified ISO 42001 AI management systems lead implementer, my thoughts immediately turned to how organizations maintain their ISO 42001 certification (and others) when they aren’t applying HITL to critical processes. 

After the session, I put this question to the Google Cloud COO, who told me we will all have to evolve. The “we” I assume he was referring to includes the certification creators, the certifiers, and the certified. I agree.

We are in an agentic AI arms race between bad actors and negligent actors on one side of the equation, and defenders and protectors on the other. Most of the sessions I attended recognized this, but basically stuck to a regurgitated admonition to impose guardrails and keep humans in the loop.

What can we do now?

The agentic arms race is approaching an inflection point. Here’s what we can do about it now.

First, apply existing AI governance frameworks, even as the velocity, volume, and sophistication of agentic AI continue to evolve.

It's tempting to jump to sexy technical tools to reduce the threats posed by agentic AI. But doing so means you focus only on aspects of the attack surface, while leaving major weaknesses unaddressed across your enterprise.

When working with organizations, especially those already encountering problems, I always ask to see their policy first. Without a high-level, general representation of leadership's expectations for AI use and its limitations, you cannot adequately define the baselines, guidelines, standards, and procedures that govern where and how technical tools will be used.
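To make that concrete, here is a minimal, hypothetical sketch of how a single leadership-level policy statement might be decomposed into a machine-readable baseline that technical tools can enforce. All names, classifications, and rules are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical sketch: decomposing a leadership-level AI policy statement
# into an enforceable baseline. All names and values are illustrative.

AI_POLICY = {
    "statement": "Agents may only act on data classified Internal or below.",
    "baseline": {
        "max_data_classification": "internal",   # ceiling set by leadership
        "allowed_agent_actions": ["read", "summarize"],
        "requires_logging": True,
    },
}

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}

def action_permitted(action: str, data_classification: str) -> bool:
    """Check a proposed agent action against the policy-derived baseline."""
    baseline = AI_POLICY["baseline"]
    within_ceiling = (
        CLASSIFICATION_RANK[data_classification]
        <= CLASSIFICATION_RANK[baseline["max_data_classification"]]
    )
    return within_ceiling and action in baseline["allowed_agent_actions"]

print(action_permitted("read", "internal"))        # True: within the baseline
print(action_permitted("write", "confidential"))   # False: exceeds the ceiling
```

The value of the exercise is traceability: every technical control should be derivable from a policy statement leadership has actually endorsed.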

In my consultative discussions and workshops, I draw on a few generic AI governance guides, along with two frameworks that provide agentic-specific control guidance.

Implementation and use of agentic AI will not improve poorly managed technology and security governance; it will simply make an organization faster at producing bad outcomes on a larger scale.

Zero trust will still outscore your adversary

Lastly, I am continually impressed by how passionately some naysayers dismiss the effectiveness of zero trust architecture (ZTA), especially when they understand only one or two of its many elements.

In a separate RSAC session, I asked the deputy CISO at Anthropic whether they're identifying ways to apply ZTA in their work with the United States Cybersecurity and Infrastructure Security Agency (CISA). (I have a natural interest because I am one of the CISA trainers who support and guide ZTA implementation across all federal agencies under Executive Order 14028 and OMB Memorandum M-22-09.)

As I was asking the question, I noticed a fellow panelist shifting in his seat, his hand shooting up to interject. When the deputy CISO finished answering, the panelist said it was a great question and a great answer. Then he went on to state that zero trust is broken because it relies on Role-Based Access Control (RBAC), which in turn relies on OAuth, and that you can break the OAuth token.

The statement that zero trust amounts to nothing more than OAuth was not only dangerously reductive but also highly inaccurate. Since it is not the first time I’ve heard this, I believe it is important to address it.

Zero trust details that shouldn’t be ignored

ZTA is composed of five pillars: Identity, Data, Networks, Applications and Workloads, and Devices. RBAC and OAuth are only two of many mechanisms within ZTA's Identity pillar; Attribute-Based Access Control (ABAC), for example, is another.

There are also four measurable levels of maturity that take a progressively granular approach to how access is maintained. At the most mature implementation of ZTA, your zero trust Policy Decision Point (PDP) uses all six questions of existence (who, what, when, why, where, and how) to grant access, with entitlements built and removed dynamically.
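As a minimal sketch of that idea, the following hypothetical PDP grants access only when every one of the six contextual attributes checks out. The attribute names, values, and rules are illustrative assumptions, not drawn from CISA guidance or any specific product:

```python
# Hypothetical sketch of a zero trust Policy Decision Point (PDP) that
# evaluates all six "questions of existence" before granting a short-lived
# entitlement. Every attribute and rule here is illustrative only.

from datetime import datetime, timezone

def pdp_decision(request: dict) -> bool:
    """Grant access only if every contextual attribute checks out."""
    checks = [
        request["who"] in {"agent-payments-01"},   # verified identity
        request["what"] == "read:ledger",          # narrowly scoped action
        9 <= request["when"].hour < 17,            # expected time window (UTC)
        request["why"] == "reconciliation-job",    # declared purpose
        request["where"] == "us-east-1",           # expected location
        request["how"] == "mtls",                  # attested channel
    ]
    return all(checks)

request = {
    "who": "agent-payments-01",
    "what": "read:ledger",
    "when": datetime(2026, 4, 2, 14, 30, tzinfo=timezone.utc),
    "why": "reconciliation-job",
    "where": "us-east-1",
    "how": "mtls",
}
print(pdp_decision(request))  # True; change any one attribute and access is denied
```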

RBAC is only concerned with who and some elements of what. If an OAuth token is compromised, a ZTA is designed to minimize the attacker's blast radius by preventing them from moving laterally across other systems.
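One common containment pattern, sketched below with entirely hypothetical names and values, is to bind credentials to a single audience and a short lifetime, so that a stolen token cannot be replayed against other systems:

```python
# Hypothetical sketch of blast-radius containment: a leaked token is
# narrowly scoped, short-lived, and bound to a single resource, so it
# cannot be replayed laterally. All values are illustrative.

import time

def mint_token(subject: str, resource: str, ttl_seconds: int = 300) -> dict:
    """Issue a credential bound to one subject, one resource, and a short TTL."""
    return {
        "sub": subject,
        "aud": resource,                  # audience restriction: one resource only
        "exp": time.time() + ttl_seconds, # short expiry limits the replay window
    }

def accept(token: dict, resource: str) -> bool:
    """A resource rejects tokens minted for anything other than itself."""
    return token["aud"] == resource and token["exp"] > time.time()

token = mint_token("agent-payments-01", "ledger-api")
print(accept(token, "ledger-api"))  # True: the intended use
print(accept(token, "hr-api"))      # False: lateral movement is refused
```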

Production ZTA systems are becoming more sensitive along exactly these lines. While I was in San Francisco, my debit card was declined after its second use. When I returned the fraud department's call, I asked what indicators of compromise they had used. All three indicators they gave me were false positives, but I was pleased to learn that each mapped to the six questions of existence operating in their ZTA policy.

Agentic AI has not outgrown ZTA, but ignorance of what ZTA entails is like a weed that will quickly overgrow your security posture. What separates agentic AI from previous technology use cases is autonomous action, greater volume, increased velocity, and widespread access. The principles of determining your attack surface and protect surface remain the same.

I recommend all readers take a moment to review the CISA Zero Trust Maturity Model and then evaluate their own environment against it.

Conclusion: Goodbye, governor. Hello, Caesar.

We have no choice but to use AI to defend against AI. As we establish operating boundaries in the form of administrative AI policies, much as the Roman Senate ordered Gaius Julius Caesar to disband his army and return to Rome, we need to prepare the proper actions and environment to enforce them.

Beginning with what you know and the tools you have, define granular, discrete rules of least privilege and integrate your agentic workflows into your ZTA implementation. ZTA treats every access request as potentially hostile and assumes the environment is already breached. With this approach, we will be better prepared when agentic AI crosses the Rubicon.
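To close with something concrete, here is a deny-by-default, least-privilege rule set for a hypothetical agentic workflow. The agents, resources, and actions are invented for illustration; the point is that anything not explicitly enumerated is refused:

```python
# Hedged sketch of deny-by-default least privilege for an agentic workflow.
# Agent, resource, and action names are hypothetical.

LEAST_PRIVILEGE_RULES = {
    # (agent, resource) -> explicitly allowed actions; everything else is denied
    ("report-agent", "sales-db"): {"read"},
    ("report-agent", "report-store"): {"write"},
}

def authorize(agent: str, resource: str, action: str) -> bool:
    """Deny by default: only enumerated (agent, resource, action) tuples
    are permitted, mirroring ZTA's assume-breach stance."""
    return action in LEAST_PRIVILEGE_RULES.get((agent, resource), set())

print(authorize("report-agent", "sales-db", "read"))    # True: enumerated
print(authorize("report-agent", "sales-db", "delete"))  # False: default deny
```

Start small, enumerate explicitly, and let your policy decision point do the refusing.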

Dr. Lyron H. Andrews

Dr. Lyron H. Andrews is a nationally recognized cybersecurity and AI leader, educator, and technology executive whose career spans more than three decades at the intersection of security, AI, cloud, and leadership development. As a Pluralsight Author Fellow, Dr. Andrews is one of Pluralsight’s most distinguished experts, helping organizations and professionals build the critical skills needed to navigate today’s increasingly complex digital world. Dr. Andrews has held influential leadership roles across the public and private sectors, including Network Manager for the NYC Department of Education, Senior Director of IT at BMG, and Dean of Technology at BNY Mellon. He is a sought-after instructor and speaker for cybersecurity and AI governance. Dr. Andrews co-authored two (ISC)² certification publications (CISSP and CCSP) and developed the Business established Service Taxonomy (BeST) Framework, a human-centered approach to aligning services, technology, and mission outcomes. His doctoral research at Columbia University explored how critical thinking thrives in environments that cultivate the essential conditions for deep learning and organizational transformation. Dr. Andrews has an Ed.D. and M.S. from Columbia University and a portfolio of globally recognized certifications—including AIMS implementer, TAISE, CCZT, CISSP, CCSP, CISM, CRISC, SSCP, and CCSK—demonstrating the technical rigor and trusted expertise that enterprises rely on to strengthen cyber resilience.
