The Center for Internet Security (CIS), in collaboration with Astrix Security and Cequence Security, has launched three new AI-focused Companion Guides to help enterprises secure rapidly evolving environments involving large language models (LLMs), AI agents, and Model Context Protocol (MCP) systems.
The guides extend the widely adopted CIS Critical Security Controls to modern AI architectures, addressing emerging risks such as data leakage, ungoverned agent autonomy, credential misuse, and unsafe tool execution. Each guide targets a specific layer of the AI stack, offering practical and prioritized recommendations aligned with real-world deployment scenarios.
“We translated the CIS Controls into concrete steps that help teams secure AI systems across the model, agent, and protocol layers.” – Curtis Dukes, Executive Vice President and General Manager, Security Best Practices, CIS
The AI LLM Companion Guide focuses on securing language models, particularly prompt handling, context management, and exposure of sensitive data. The AI Agent Companion Guide provides controls for managing autonomous and semi-autonomous agents, emphasizing governed access, safe execution, and operational oversight. The MCP Companion Guide, meanwhile, addresses protocol-level risks, including secure tool access, non-human identity management, and auditable interactions.
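The guides themselves are prose recommendations rather than code, but the agent- and protocol-level controls they describe map onto familiar engineering patterns. As a minimal sketch, not taken from the guides, here is what a governed tool-execution gatekeeper might look like in Python: a hypothetical agent host checks a per-agent allowlist before invoking any tool and writes an audit record for every decision. All names (TOOL_ALLOWLIST, guarded_tool_call, the agent IDs) are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical allowlist mapping each agent identity to the tools it may invoke.
TOOL_ALLOWLIST = {
    "report-agent": {"search_docs", "summarize"},
    "ops-agent": {"restart_service"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("tool_audit")


def guarded_tool_call(agent_id: str, tool_name: str, arguments: dict, registry: dict):
    """Execute a tool on behalf of an agent only if the allowlist permits it,
    emitting an audit record for both allowed and denied calls."""
    allowed = tool_name in TOOL_ALLOWLIST.get(agent_id, set())
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool_name,
        "arguments": arguments,
        "decision": "allow" if allowed else "deny",
    }))
    if not allowed:
        raise PermissionError(f"{agent_id} is not authorized to call {tool_name}")
    return registry[tool_name](**arguments)


if __name__ == "__main__":
    # Stub tool registry standing in for real MCP-exposed tools.
    registry = {"search_docs": lambda query: f"results for {query!r}"}
    print(guarded_tool_call("report-agent", "search_docs", {"query": "CIS Controls"}, registry))
```

The same pattern extends to credential handling: the gatekeeper, rather than the agent, would hold whatever tokens a tool requires, keeping non-human identities out of the model's context and reducing the credential-misuse risk the guides call out.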
According to Dukes, the initiative aims to bring clarity to organizations navigating the complexities of securing AI systems by translating established security frameworks into actionable guidance.
Jonathan Sander of Astrix Security highlighted the growing importance of securing AI agents and non-human identities, while Shreyans Mehta of Cequence Security emphasized the need for visibility and governance as AI systems increasingly interact with enterprise applications and APIs.
Together, the guides provide a unified framework for organizations to apply existing security controls to AI-driven environments, enabling responsible adoption while maintaining governance, scalability, and operational resilience.
