OWASP GenAI Security Project Expands AI Security Frameworks Ahead of RSA 2026, Celebrates Continued Sponsor Support

New resources, a full week of RSA programming and growing industry adoption mark a milestone moment for the open-source AI security community

WILMINGTON, Del. — March 19, 2026 — The OWASP GenAI Security Project (genai.owasp.org), a leading global open-source and expert community of more than 25,000 members dedicated to delivering practical guidance and tools for securing generative and agentic AI, today released its latest landscape guides for LLM and agentic security. As enterprises across every sector accelerate adoption of generative AI, the security frameworks and guidelines developed by OWASP’s GenAI Security Project have become foundational references for practitioners, policymakers and vendors alike.

The OWASP GenAI Security Project continues to grow, with expanding support from across the security industry that reflects the urgency organizations place on getting AI security right. The project welcomes several new sponsors — Apiiro, Capsule, F5, Fujitsu, NeuralTrust, Starseer, Straiker and Tellus Digital — whose contributions help sustain the peer-reviewed, openly licensed research the community depends on. Additionally, several sponsor alumni have been acquired by the industry’s largest players, including SPLX by Zscaler, Pangea by CrowdStrike, Calypso AI by F5, Lakera by Check Point, and Prompt Security by SentinelOne, underscoring the foundational role the OWASP GenAI Project’s frameworks have played in shaping the AI security market.

The Q2 2026 Updated Landscape Guide for LLM and Agentic Security expands the project’s widely referenced AI Security Solutions Landscape — mapping the full LLM and GenAI lifecycle across development, testing, deployment and governance — with two key additions: updated vendor and tooling ecosystem documentation, and a new agentic red teaming taxonomy that provides a structured, lifecycle-wide framework for identifying, measuring, mitigating and governing AI risk through coordinated adversarial testing, defensive validation and continuous feedback loops.

Released ahead of the upcoming RSA Conference 2026, these guides join a growing body of peer-reviewed, openly licensed resources seeing rapid industry uptake, including:

  • OWASP Top 10 for Agentic Applications for 2026 – A globally peer-reviewed framework that identifies the most critical security risks facing autonomous and agentic AI systems. 
  • Guide for Secure MCP Server Development – Actionable guidance for securing Model Context Protocol (MCP) servers, which are the critical connection point between AI assistants and external tools, APIs, and data sources.
  • OWASP SBOM/AIBOM Generator – An open-source tool designed to enhance AI supply chain transparency and security by generating AI Bills of Materials (AIBOMs), also known as AI Software Bills of Materials (AI SBOMs), ML-BOMs, or SBOMs for AI.
  • OWASP Vendor Evaluation Criteria for AI Red Teaming – A practical guide for organizations assessing vendors that offer AI red teaming services or automated testing tools.

The GenAI Security Project will once again have a strong presence at RSA Conference 2026 in San Francisco, March 23–26, with four opportunities for attendees and community members to engage, learn and connect with project leaders and peers:

  • OWASP GenAI Security RSAC ’26 Kickoff Party (Monday, March 23 | 6:30–9:00 p.m. | James Bong Building, Market St, San Francisco) — Hosted by Straiker.ai, this networking event connects project leaders, experts and peers. Open to all RSA attendees and community members at no cost.
  • OWASP GenAI Security Jungle Party of the Century (Monday, March 23 | 6:30–9:00 p.m. | DigitalJungleSF, 972 Mission St) — Cap off your RSAC day with drinks, light bites and conversation with community members and project leaders. Open to all at no cost.
  • OWASP GenAI Security Summit 2026 (Wednesday, March 25 | 8:30 a.m.–12:30 p.m. | Moscone South, Room 303) — Bringing together practitioners and CISOs to share community-driven research, best practices and real-world insights on securing LLMs, GenAI and AI-assisted development. Requires ExpoPlus pass.
  • OWASP GenAI Security Open Workshop & Agentic Hackathon (Wednesday, March 25 | 2:00–6:30 p.m. | DigitalJungleSF, 972 Mission St) — A hands-on deep dive into agentic security challenges, featuring organizations implementing the OWASP Agentic Top 10 and a live hackathon using the FinBot Agentic AI Capture the Flag application. Open to all at no cost.

Scott Clinton, Co-Chair and Co-Founder, OWASP GenAI Security Project, said: “AI and agentic systems are no longer emerging technology. They are production reality, and the security community is still racing to catch up. The resources we’re releasing ahead of RSA represent our most comprehensive view yet of what organizations need to build and deploy AI safely. We look forward to bringing those conversations to San Francisco.”

About OWASP GenAI Security Project

The OWASP GenAI Security Project is a global, open-source initiative dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including large language models (LLMs), agentic AI systems, and AI-driven applications. Our mission is to empower organizations, security professionals, AI practitioners, and policymakers with comprehensive, actionable guidance and tools to ensure the secure development, deployment, and governance of generative AI systems. Visit our site to learn more.

 

Media Contact 

Tanner Skotnicki 

Force4 Technology Communications  

tanner@force4.co

From our new sponsors:

 

Satoshi Imai, Ph.D., Head of Data & Security Research Laboratory, Fujitsu Limited, said: “The OWASP GenAI Security Project provides critical open frameworks that help organizations understand and mitigate the emerging risks of generative and agentic AI. At Fujitsu, we leverage these mappings and best practices in the development of our own security technologies, including Agentic AI Security Scanning and LLM Vulnerability Scanning solutions. We are also proud of our contributions to OWASP guidelines and actively incorporate them into our security evaluation methodologies. Supporting this project reflects our commitment to advancing open, practical security practices for the next generation of AI-powered systems.”

Tim Schulz, CEO at Starseer, said: “We work on model-level security every day, and the OWASP GenAI frameworks are consistently what we point practitioners toward when they ask where to start. Starseer sponsors this project because it’s producing the kind of open, practical guidance the industry actually needs right now.”

Joan Vendrell Farreny, CEO & Co-Founder at NeuralTrust, said: “Being sponsors and contributors to the OWASP Top 10 for Agentic AI Security is something we are very proud of. This initiative closely aligns with our mission to make AI agents safe, governable, and ready for enterprise adoption.”

Lidan Hazout, Co-Founder & CTO, Capsule Security, said: “Agentic AI is introducing a new attack surface built around autonomy, memory, skills, tool use, and decision-making at machine speed. The OWASP Top 10 for Agentic AI Security is an important milestone in helping organizations make sense of the shift and respond with greater clarity. Capsule Security is proud to be a contributing member to this project that brings together the security community around one of the most important challenges in technology today – a mission that drives us as we think about the future of AI security.”

Louise Scully, F5, said: “The OWASP GenAI Security Project is doing important work to help the industry address the real security challenges emerging with generative and agentic AI. We’re proud to continue supporting the community effort to develop practical guidance for securing AI systems.”

Venkata Sai Kishore Modalavalasa, Chief Architect and Engineering Leader, Straiker.AI, said: “What stands out about the OWASP GenAI Security Project is its ability to turn community-driven expertise into usable frameworks, hands-on resources, and practical guidance. These efforts help teams navigate the real security challenges of generative and agentic AI. At Straiker, and through my work as an active contributor to the project, I see firsthand how important it is to give defenders, builders, and security leaders practical ways to learn by doing, alongside open frameworks that are shaping how the industry approaches AI security.”

 
