Announcement

OWASP Top 10 for Agentic Applications – The Benchmark for Agentic Security in the Age of Autonomous AI

Introducing the OWASP Top 10 for Agentic Applications – our community’s actionable framework for securing autonomous, tool-using AI systems. Built at global scale, informed by real incidents, and grounded in work already adopted across industry, this release marks a pivotal moment in turning insight into action and advancing the security of agentic applications at the pace of innovation.


Article, Featured

OWASP Agentic AI Taxonomy in Action: From Theory to Tools

As OWASP’s Agentic Security Initiative (ASI) gains momentum, its impact is already being felt across the AI security landscape. The Agentic AI – Threats and Mitigations taxonomy is now powering real-world developer tools that embed security into the workflows of AI builders and red teams. In this post, we highlight three standout tools—PENSAR, SPLX.AI Agentic Radar, and AI&ME—that are adopting the OWASP ASI taxonomy to help teams test, defend, and build secure agentic systems. This growing ecosystem is also informing the development of the forthcoming OWASP Top 10 for Agentic AI. Join us at DEF CON and Black Hat to help shape what’s next.


Events

Recap from the OWASP Gen AI Security Project’s NYC Insecure Agents Hackathon

Creating an insecure agent is surprisingly easy. New tools and frameworks make building AI agents relatively simple, yet those agents remain prone to several of the threats outlined in the Agentic AI – Threats and Mitigations guide released in February. The OWASP Gen AI Security Project recently hosted a hackathon in NYC with the goal of building insecure agents. In this blog post, we recap the event and the most common security findings from the submissions.
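To make that concrete, here is a minimal, hypothetical sketch of the kind of insecure agent the hackathon encouraged: a tool-calling loop that trusts model output verbatim and hands it to a shell tool with no allowlisting, validation, or approval. The call_llm placeholder and the TOOL:<name>:<args> convention are illustrative assumptions, not any particular framework’s API.

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; returns the model's reply."""
    raise NotImplementedError("wire up a model provider here")

# Excessive agency: the agent is handed a tool that runs arbitrary shell commands.
TOOLS = {
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
}

def agent_step(user_input: str) -> str:
    # The model decides whether to call a tool; its reply is trusted verbatim.
    reply = call_llm(
        f"User asked: {user_input}\n"
        "Answer directly, or reply 'TOOL:<name>:<args>' to use a tool."
    )
    if reply.startswith("TOOL:"):
        _, name, args = reply.split(":", 2)
        # No allowlist, no argument validation, no human approval:
        # a prompt-injected instruction flows straight into the shell.
        return TOOLS[name](args)
    return reply
```

A handful of lines like these was enough for many submissions, which is exactly why the taxonomy’s threats (prompt injection into tool calls, excessive agency, missing approval gates) showed up so consistently in the findings.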


Article

Securing AI’s New Frontier: The Power of Open Collaboration on MCP Security

As AI systems begin interacting with live tools and data via the Model Context Protocol (MCP), new security risks emerge that traditional approaches can’t fully address. This post summarizes key insights from the OWASP GenAI Security Project’s latest research on securing MCP, offering practical, defense-in-depth strategies to help developers and defenders build safer agentic AI applications in real time.
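As one illustration of the defense-in-depth idea, the sketch below shows a framework-agnostic guard placed in front of MCP-style tool invocations: deny-by-default allowlisting, per-tool argument validation, and a human-approval gate for sensitive actions. The tool names and the guard_tool_call helper are hypothetical assumptions for this example; they are not part of the MCP specification or SDK.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolPolicy:
    allowed: bool = False                                   # deny by default
    validate: Callable[[dict], bool] = lambda args: True    # per-tool argument check
    needs_approval: bool = False                            # gate sensitive actions

# Hypothetical policies for two example tools; anything unlisted is denied.
POLICIES = {
    "read_file": ToolPolicy(
        allowed=True,
        validate=lambda a: not str(a.get("path", "")).startswith("/etc"),
    ),
    "send_email": ToolPolicy(allowed=True, needs_approval=True),
}

def guard_tool_call(name: str, args: dict[str, Any], approved: bool = False) -> None:
    """Raise before the agent runtime dispatches a disallowed tool call."""
    policy = POLICIES.get(name, ToolPolicy())
    if not policy.allowed:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    if not policy.validate(args):
        raise ValueError(f"arguments rejected for tool '{name}': {args}")
    if policy.needs_approval and not approved:
        raise PermissionError(f"tool '{name}' requires human approval")
```

In this sketch, an agent runtime would call guard_tool_call before dispatching any tool request, so a prompt-injected call to an unlisted or unapproved tool fails closed; the research covers this alongside other layers such as authentication, least-privilege scoping, and monitoring.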

