AI Threat Intelligence and Response
Limited actionable data exists on how different LLMs are being leveraged in exploit generation. This initiative explores the capabilities and risks associated with generating exploits for day-one vulnerabilities using various large language models (LLMs), including those lacking ethical guardrails.
- # team-llm_ai-cti
- GitHub
- Initiative Charter
What’s New
GenAI Incident Response Guide 1.0
The OWASP GenAI Security Project commissioned this GenAI Incident Response guide to help fill this need by providing security practitioners with guidelines and best practices for …
OWASP LLM Exploit Generation v1.0
This paper examines the practical implications of large language models (LLMs) in offensive cybersecurity, moving beyond theoretical possibilities to assess their real-world effectiveness. The research, conducted …
Guide for Preparing and Responding to Deepfake Events
Deepfakes (hyper-realistic digital forgeries) have gained significant attention as the rapid development of generative AI has made it easier to produce convincingly realistic videos and audio recordings that …
OWASP Gen AI Incident & Exploit Round-up, Q2’25
OWASP Gen AI Incident & Exploit Round-up, Q2 (Mar-Jun) 2025. About the Round-up: This is not an exhaustive list, but a semi-regular blog where we aim …
The OWASP Top 10 For LLM Team Delivers New Security Guidance To Help Prepare And Respond To Deepfake Threats
The OWASP Top 10 for LLM team is excited to announce the release of the Guide for Preparing and Responding to Deepfake Events. This comprehensive resource …
Research Initiative – Securing and Scrutinizing LLMs in Exploit Generation
Challenge: Currently, limited actionable data exists on how different LLMs are being leveraged in exploit generation, and what mechanisms can be used to detect and …