Title: “Zero Trust for AI Agents”
Introduction: The Call for Zero Trust in AI
At the recent RSAC 2026, a consensus emerged among key industry leaders: zero trust must extend to AI agents. Microsoft, Cisco, CrowdStrike, and Splunk each highlighted the urgent need for enhanced security measures as AI becomes increasingly integrated into enterprise operations. With 79% of organizations already employing AI agents, the stakes have never been higher.
The Monolithic Agent Problem
Traditional enterprise AI agents often operate within a monolithic container, where all components trust each other. This design poses significant security risks:
– **Shared Credentials**: OAuth tokens and API keys coexist in the same environment as the agent’s code.
– **Compromised Security**: A single successful prompt injection can expose every credential and capability the agent holds.
– **Lack of Ownership**: Security and development teams often disagree over who is responsible for AI agent security, leaving it effectively unowned.
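The monolithic anti-pattern can be caricatured in a few lines of Python. The class, tool, and token names below are invented for illustration; the point is only that secrets and LLM-steerable code share one process:

```python
class MonolithicAgent:
    """Hypothetical all-in-one agent: planning, tools, and secrets together."""

    def __init__(self):
        # Anti-pattern: long-lived credentials live in the same process as
        # the code an LLM can steer. (Values are placeholders.)
        self.crm_token = "tok-demo"
        self.billing_key = "key-demo"

    def run_tool(self, tool_call: str) -> str:
        # One successful prompt injection can steer execution here...
        if tool_call == "dump_env":
            # ...and every credential the process holds leaks at once.
            return f"{self.crm_token} {self.billing_key}"
        return "ok"

agent = MonolithicAgent()
```

Because everything trusts everything else, there is no boundary for the injected instruction to cross: the same process that parses untrusted input also holds the keys.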
Innovations in AI Security Architecture
Two companies have taken different approaches to address the security gaps:
1. **Anthropic’s Managed Agents**:
– **Decoupled Architecture**: Separates the brain (decision-making) from the hands (execution) and maintains an append-only session log.
– **Credential Safety**: Credentials are stored in an external vault, minimizing exposure even if the agent is compromised.
2. **Nvidia’s NemoClaw**:
– **Layered Security**: Implements multiple security layers, monitoring all actions within the agent’s environment.
– **Visibility vs. Autonomy**: High observability comes at the cost of requiring operator approval for every action, which can be resource-intensive.
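A minimal sketch of the decoupled "brain vs. hands" pattern with an append-only session log might look like the following. All class and method names are hypothetical; neither vendor's actual API is shown:

```python
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    _entries: list = field(default_factory=list)

    def append(self, event: str) -> None:
        self._entries.append(event)   # append-only: no update or delete path

    def replay(self) -> list:
        return list(self._entries)    # returns a copy, so history can't be mutated

class Brain:
    """Decides what to do; never touches credentials or side effects."""
    def plan(self, goal: str) -> str:
        return f"call:send_email:{goal}"

class Hands:
    """Executes one vetted action at a time; holds no long-lived secrets."""
    def execute(self, action: str, log: SessionLog) -> str:
        log.append(action)            # every action is recorded before it runs
        return f"executed {action}"

log = SessionLog()
result = Hands().execute(Brain().plan("weekly report"), log)
```

The separation means a compromised "brain" can only emit action requests, and the append-only log preserves a tamper-evident record of what the "hands" actually did.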
Identifying the Credential Proximity Gap
Both architectures represent a step forward, but they diverge significantly on credential management:
– **Anthropic**: Eliminates credentials from the execution environment entirely, reducing the risk of credential theft.
– **NemoClaw**: While it constrains the blast radius, it still allows some credentials to reside within the sandbox, increasing potential exposure.
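The structural difference can be contrasted in a short sketch: the sandboxed agent names an action, and a broker running outside the sandbox attaches the secret and performs the call, so the token never crosses the boundary. The allow-list, secret store, and action names here are assumptions for illustration:

```python
SECRETS = {"crm": "tok-real-123"}    # lives outside the sandbox
ALLOWED = {"crm.read_contacts"}      # policy: actions the agent may request

def broker(action: str) -> str:
    """Runs outside the sandbox; the token never enters the agent's environment."""
    if action not in ALLOWED:
        raise PermissionError(action)
    token = SECRETS["crm"]
    return f"performed {action} (token len={len(token)})"

def sandboxed_agent() -> str:
    # Even a fully compromised agent can only request allow-listed
    # actions; it holds no token to exfiltrate.
    return broker("crm.read_contacts")
```

Policy gating (the NemoClaw-style approach) would instead hand a scoped token into the sandbox and constrain its use; structural removal never lets the token in at all.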
Practical Takeaways for Organizations
As organizations navigate the complexities of AI agent security, several priorities emerge:
– **Audit Existing Agents**: Identify and flag any agents using shared service accounts or holding OAuth tokens in their execution environment.
– **Require Credential Isolation**: Ensure that RFPs specify whether vendors remove credentials structurally or gate them through policy.
– **Test Session Recovery**: Verify that session state persists even after a sandbox failure to mitigate data-loss risks.
– **Staff for Observability**: Choose security architectures that align with your staffing capabilities for effective monitoring.
– **Monitor for Indirect Prompt Injection**: Stay informed about vendor roadmaps addressing this critical security vector.
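As a starting point for the audit item above, a few lines of Python can flag token-like variable names in an agent's execution environment. The regex and sample data are illustrative, not an exhaustive audit:

```python
import re

# Flag environment variables whose names suggest long-lived credentials.
TOKEN_PATTERN = re.compile(r"(TOKEN|SECRET|API_KEY|PASSWORD)", re.IGNORECASE)

def flag_suspect_vars(env: dict) -> list:
    """Return a sorted list of variable names that look like credentials."""
    return sorted(name for name in env if TOKEN_PATTERN.search(name))

# Hypothetical snapshot of an agent container's environment:
sample_env = {
    "PATH": "/usr/bin",
    "CRM_OAUTH_TOKEN": "...",
    "BILLING_API_KEY": "...",
}
print(flag_suspect_vars(sample_env))  # → ['BILLING_API_KEY', 'CRM_OAUTH_TOKEN']
```

In practice you would run the scan against `os.environ` inside each agent's container; any hit is a candidate for moving the secret into an external vault.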
Conclusion: The Future of AI Security
The push for zero trust in AI agents is no longer theoretical. As the monolithic default becomes a liability, organizations must adapt to mitigate risks associated with AI deployment.
If your organization is looking to enhance its AI security posture, consider partnering with BlockNova. Our services include AI consulting, AI agent architecture, self-hosted LLM/AI agent hosting, and server hosting to help you navigate this evolving landscape effectively.