Red Teaming & Evaluation

Testing GenAI systems through adversarial red teaming methods.

This project establishes comprehensive AI Red Teaming and evaluation guidelines for Large Language Models (LLMs), addressing security vulnerabilities and bias while strengthening user trust. By collaborating with partners and leveraging real-world testing, the initiative provides a standardized methodology for AI Red Teaming, including benchmarks, tools, and frameworks to strengthen cybersecurity defenses.

What's New?

This guide outlines the critical components of GenAI Red Teaming, with actionable insights for cybersecurity professionals, AI/ML engineers, Red Team practitioners, risk managers, adversarial attack …

This panel explores both leveraging Red Teaming to secure LLM applications and the potential of GenAI to enhance red teaming exercises for cybersecurity. The panel …

The OWASP Top 10 for LLM and Generative AI project (genai.owasp.org) team is thrilled to unveil the GenAI Red Teaming Guide, which provides …

Red Teaming: The Power of Adversarial Thinking in AI Security (AI hackers, tech wizards, and code sorcerers, we need you!) This is your invitation and …

Meetings

Open Meeting – Agentic Security Working Group
Weekly initiative meeting, Tuesdays, 9:30 AM – 10:30 AM PDT


Initiative Leads

Sonu Kumar – Initiative Leader
Jason Ross – Core Team Member, Initiative Leader
