AI Threat Intelligence and Response

Little actionable data exists on how different LLMs are being leveraged in exploit generation. This initiative explores the capabilities and risks of generating exploits for day-one vulnerabilities using various Large Language Models (LLMs), including models that lack ethical guardrails.


Quick access to meetings and collaboration groups
Weekly: Open Meeting – AI Threat Intelligence
Monday, 9:30 AM – 10:30 AM PDT

Weekly initiative meeting.


Initiative Leads

Bryan Nakayama

Core Team Member

Rachel James

Core Team Member

Community of Contributors

Explore a global network of volunteers improving evaluations, patterns, and defenses for autonomous systems.