The OWASP Foundation is thrilled to announce the launch of the Agentic Security Initiative, a new effort within the OWASP LLM and Generative AI Security Project, to tackle the unique security challenges posed by autonomous AI agents. The initiative builds on the project best known for its Top 10 List for Large Language Models (LLMs) and sets the stage for collaborative research to develop actionable guidance and best practices to help secure the emerging architectures and use cases of agentic LLM and Gen AI applications.
Understanding the Initiative
Agentic Gen AI systems are revolutionizing the way we approach complex tasks by introducing capabilities such as planning, tool usage, reflection, dynamic adaptation, and memory. These systems extend design patterns first used in other strands of AI (e.g. reinforcement learning) to harness the power of Generative AI, providing levels of functionality and autonomy beyond previous generations of agents. In doing so, they extend the scope of excessive agency, which we first introduced in our Top 10 Risks for LLM/Gen AI Applications 2023-24 (v1.1) and updated for 2025, and build upon the high-level risks identified in the OWASP AI Security Solutions for 2025 Guide.
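To make these capabilities concrete, here is a minimal, framework-agnostic sketch of the plan-act-reflect loop that underlies agentic systems. All names here (`plan`, `act`, `reflect`) are illustrative assumptions, not the API of any particular framework:

```python
# A minimal sketch of the plan / act / reflect loop behind agentic systems.
# Method names are illustrative, not any specific framework's API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # persistent long-term memory

    def plan(self) -> list:
        # Decompose the goal into subtasks (in practice, an LLM call).
        return [f"research {self.goal}", f"draft answer for {self.goal}"]

    def act(self, step: str) -> str:
        # Execute a subtask, e.g. via a tool or external API (stubbed here).
        return f"completed: {step}"

    def reflect(self, result: str) -> bool:
        # Self-evaluate the result and decide whether to continue.
        return result.startswith("completed")

    def run(self) -> None:
        for step in self.plan():
            result = self.act(step)
            self.memory.append(result)  # memory feeds future planning
            if not self.reflect(result):
                break                   # dynamic adaptation would go here

agent = Agent(goal="summarise agentic security risks")
agent.run()
print(agent.memory)
```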
Advanced frameworks like LangGraph, AutoGPT, and CrewAI offer programmatic access to these capabilities and create different modalities of autonomy: from single-agent automation of tools and constrained agentic workflows, to fully autonomous conversational multi-agent systems in which patterns of reflection and adaptation can give rise to new emergent behaviour. However, these exciting new developments come with new risks.
For instance:
- Planning, Refinement, and Adaptation: Agents can decompose tasks into subtasks and refine their strategies through self-reflection and dynamic adaptation, potentially evading guardrails and safety measures. Long-term memories can aid the evolution of adversarial strategies, rendering safeguards ineffective.
- Memory and Environment: Persistent and contextual memory introduces new attack surfaces, particularly when coupled with external vector stores and real-time data access. The latter becomes a possibility with the emergence of on-device LLMs and agents, for instance Gemini on Android devices.
- Tool Use: The ability of agents to autonomously call external APIs or execute code introduces potential vulnerabilities at the interface between autonomy and external systems (see the sketch after this list).
- Multi-Agency and Conversational Autonomy: Multi-agent systems and conversational autonomy challenge our notion of human-in-the-loop oversight and how to scale it.
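As a hedged illustration of the tool-use risk above, the sketch below contrasts a dispatcher that would execute whatever tool the model requests with one confined by an allowlist. The tool names and inputs are hypothetical, chosen only to show the attack surface:

```python
# Illustrative sketch of the tool-use attack surface: an agent that
# executes model-chosen tool calls. Tool names and inputs are hypothetical.
import subprocess

def search_docs(query: str) -> str:
    return f"docs matching {query!r}"

def run_shell(cmd: str) -> str:
    # Dangerous: arbitrary code execution if the model is manipulated,
    # e.g. via prompt injection in retrieved content.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

TOOLS = {"search_docs": search_docs, "run_shell": run_shell}

# An allowlist confines the agent to low-risk tools.
ALLOWED = {"search_docs"}

def dispatch(tool_name: str, argument: str) -> str:
    if tool_name not in ALLOWED:
        raise PermissionError(f"tool {tool_name!r} is not permitted")
    return TOOLS[tool_name](argument)

print(dispatch("search_docs", "agentic security"))  # allowed
# dispatch("run_shell", "cat /etc/passwd")          # raises PermissionError
```

Even this simple guard illustrates the design tension: the tighter the allowlist, the less autonomy the agent retains.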
Why This Matters
As agentic systems continue to grow in both complexity and adoption, securing these technologies is critical to ensuring their safe use. Furthermore, the emergence of autonomous, adaptive multi-agent systems raises the fundamental question of how best to scale human oversight, and ultimately AI itself.
The initiative aims to provide a concrete, fact-based compass for the new agentic landscape: its threat model, mitigations, and guidance, backed by code examples using the new frameworks.
This aligns with OWASP's commitment to making security concrete, actionable, and accessible via guidelines, best practices, tooling, and other resources.
Key Goals and Deliverables
The initiative’s goal is to empower developers, security professionals, and decision-makers with an understanding of the threats, how they relate to existing taxonomies, and the tools and knowledge to secure agentic systems effectively. The OWASP LLM and Generative AI Security Project will focus on:
- Threat Modeling using an agreed Agentic Reference Architecture, identifying concrete misconfigurations and vulnerabilities, and investigating new attack surfaces. This will cover emerging supply-chain risks, including tooling dependencies and on-device embedded agents and models.
- Mitigations and Recommendations, in our Securing Agentic Systems Guide, which will offer actionable guidance for developers and security professionals building and assuring agentic systems, as well as for CISOs and decision-makers preparing to deploy and manage them. We plan to place particular emphasis on the risks of long-term memory and the challenges of scaling human oversight in multi-agent environments, exploring strategies such as triadic adaptation and intelligent monitoring with anomaly detection (a minimal sketch of the latter follows this list).
- Agentic Security Landscape, mapping our efforts to, and contributing to, existing work and taxonomies such as the OWASP AI Exchange, NIST, and MITRE ATLAS, whilst documenting the tools and offerings included in the OWASP AI Security Solutions Landscape and directory that help safeguard agentic systems.
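As a taste of what intelligent monitoring with anomaly detection might look like, here is a minimal sketch that flags bursts of risky tool calls in an agent's action log. The event format, window size, and threshold are assumptions for illustration only, not guidance from the forthcoming guide:

```python
# Hedged sketch: anomaly detection over an agent's tool-call log,
# flagging windows where risky calls exceed a simple rate threshold.
from collections import deque

class ToolCallMonitor:
    def __init__(self, window: int = 20, max_rate: float = 0.5):
        self.recent = deque(maxlen=window)  # rolling window of tool calls
        self.max_rate = max_rate            # tolerated fraction of risky calls

    def observe(self, tool_name: str, risky_tools: set) -> bool:
        """Record a call; return True if the current window looks anomalous."""
        self.recent.append(tool_name)
        risky = sum(1 for t in self.recent if t in risky_tools)
        return (len(self.recent) == self.recent.maxlen
                and risky / len(self.recent) > self.max_rate)

monitor = ToolCallMonitor()
for event in ["search_docs"] * 15 + ["run_shell"] * 15:
    if monitor.observe(event, risky_tools={"run_shell"}):
        print("anomaly: unusually high rate of risky tool calls")
        break
```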
Initial Deliverables:
The first major artifacts, including the ASI Threat Model and Vulnerabilities, as well as the Securing Agentic Systems Guide, are scheduled for review in January 2025, with publication targeted for February 2025.
Join the Working Group
This initiative thrives on community collaboration, which feeds our ambition to empower builders, defenders, and decision-makers with high-quality, timely guidance.
We already have highly regarded researchers, developers, and security professionals from well-known organisations joining the effort. Whether you're a developer, security professional, or researcher, your expertise can help shape the future of secure agentic AI.
For more information and to join the working group, visit the OWASP Agentic Security Initiative page.
Together, we can secure the future of agentic systems and scale AI safely!