AI Threat Intelligence and Response

Limited actionable data exists on how different LLMs are being leveraged in exploit generation. This initiative explores the capabilities and risks of generating exploits for day-one vulnerabilities using various Large Language Models (LLMs), including those lacking ethical guardrails.

What's New?

The OWASP GenAI Security Project commissioned this GenAI Incident Response guide to help fill this need by providing security practitioners with guidelines and best practices.

This paper examines the practical implications of large language models (LLMs) in offensive cybersecurity, moving beyond theoretical possibilities to assess their real-world effectiveness.

Deepfakes—hyper-realistic digital forgeries—have gained significant attention as the rapid development of generative AI has made it easier to produce convincingly realistic videos and audio recordings.

OWASP Gen AI Incident & Exploit Round-up, Q2 (Mar-Jun) 2025: This is not an exhaustive list, but a semi-regular blog round-up.

The OWASP Top 10 for LLM team is excited to announce the release of the Guide for Preparing and Responding to Deepfake Events.

Challenge: Currently, limited actionable data exists on how different LLMs are being leveraged in exploit generation and what mechanisms can be used for detection.

Getting Started

Meetings

Weekly

Monday

9:30-10:30 AM PDT

Open Meeting – AI Threat Intelligence

Weekly initiative meeting.


Initiative Leads

Bryan Nakayama

Core Team Member

Rachel James

Core Team Member
