Red Teaming & Evaluation
This project establishes comprehensive AI Red Teaming and evaluation guidelines for Large Language Models (LLMs), addressing security vulnerabilities and bias while building user trust. By collaborating with partners and leveraging real-world testing, the initiative will provide a standardized methodology for AI Red Teaming, including benchmarks, tools, and frameworks to strengthen cybersecurity defenses.
- #team-genai-redteam
- GitHub
- Initiative Charter
What’s New
GenAI Red Teaming Guide
This guide outlines the critical components of GenAI Red Teaming, with actionable insights for cybersecurity professionals, AI/ML engineers, Red Team practitioners, risk managers, and adversarial attack researchers.
OWASP AI Summit @ RSAC 2024 – AI Red Teaming Panel
This panel explores both leveraging Red Teaming to secure LLM applications and the potential of GenAI to enhance red teaming exercises in cybersecurity.
OWASP Gen AI Incident & Exploit Round-up, Q2’25
This is not an exhaustive list, but a semi-regular round-up of notable GenAI incidents and exploits from Q2 (March–June) 2025.
The OWASP Top 10 For LLM Team Delivers New Security Guidance To Help Prepare And Respond To Deepfake Threats
The OWASP Top 10 for LLM team is excited to announce the release of the Guide for Preparing and Responding to Deepfake Events.
Research Initiative – Securing and Scrutinizing LLMs in Exploit Generation
Challenge: Limited actionable data currently exists on how different LLMs are being leveraged in exploit generation, and on what mechanisms can be used to detect such use.
Get Started
Weekly initiative meeting
Mondays, 9:30–10:30 AM PDT