- GEN AI SECURITY
- Initiatives
Red Teaming & Evaluation
This project establishes comprehensive AI Red Teaming and evaluation guidelines for Large Language Models (LLMs), addressing security vulnerabilities, bias, and user trust. By collaborating with partners and leveraging real-world testing, the initiative will provide a standardized methodology for AI Red Teaming, including benchmarks, tools, and frameworks to boost cybersecurity defenses.
- #team-genai-redteam
- Github
- Initiative Charter
What’s New
GenAI Red Teaming Guide
This guide outlines the critical components of GenAI Red Teaming, with actionable insights for cybersecurity professionals, AI/ML engineers, Red Team practitioners, risk managers, adversarial attack researchers,
OWASP AI Summit @ RSAC 2024 – AI Red Teaming Panel
This panel explores leveraging both Red Teaming to Secure LLM apps and the potential of GenAI for red teaming exercises to enhance cybersecurity. The panel will
Announcing the OWASP Gen AI Red Teaming Guide
The OWASP Top 10 for LLM and Generative AI project , genai.owasp.org, team is thrilled to unveil the Gen AI Red Teaming Guide which provides a
Research Initiative: AI Red Teaming & Evaluation
Red Teaming: The Power of Adversarial Thinking in AI Security (AI hackers, tech wizards, and code sorcerers, we need you!) This is your invitation and an
Get Started
Weekly
Tuesday
9:30 AM PDT
10:30 AM PDT
Weekly initiative meeting.