Initiatives

The goal of initiatives within the project is to address specific areas of education and research, creating practical, executable resources and insights in support of the overall project goals through focused working groups. Each initiative charter is reviewed and approved as outlined in the OWASP Top 10 for LLM Project governance.

AI Cyber Threat Intelligence

Limited actionable data exists on how different LLMs are being leveraged in exploit generation. This initiative aims to explore the capabilities and risks associated with generating exploits for day-one vulnerabilities using various Large Language Models (LLMs), including those lacking ethical guardrails.

Secure AI Adoption and Governance

The Secure AI Adoption Initiative forms a Center of Excellence (CoE) to enhance security frameworks, governance policies, and cross-departmental collaboration for Large Language Models (LLMs) and generative AI. Through strategic planning, training, and the development of standardized protocols, the initiative ensures that AI applications are adopted safely, ethically, and securely within organizations.


AI Red Teaming & Evaluation

This project establishes comprehensive AI Red Teaming and evaluation guidelines for Large Language Models (LLMs), addressing security vulnerabilities, bias, and user trust. By collaborating with partners and leveraging real-world testing, the initiative will provide a standardized methodology for AI Red Teaming, including benchmarks, tools, and frameworks to boost cybersecurity defenses.

Risk and Exploit Data Gathering, Mapping

This initiative gathers real-world data on vulnerabilities and risks associated with Large Language Models (LLMs), supporting the update of the OWASP Top 10 for LLMs. In addition, this initiative maintains mappings between the Top 10 for LLM and other security frameworks. Through a robust data collection methodology, the initiative seeks to enhance AI security guidelines and provide valuable insights for organizations to strengthen their LLM-based systems.

Agentic Security Initiative

The Agentic Security Research Initiative explores the emerging security implications of agentic systems, particularly those utilizing advanced frameworks (e.g., LangGraph, AutoGPT, CrewAI) and novel capabilities like Llama 3’s agentic features.

Resources

LLM and Generative AI Security Center of Excellence Guide

As generative AI technologies evolve and integrate into various aspects of business and society, the need for robust governance, security, and policy management becomes...

Guide for Preparing and Responding to Deepfake Events

Deepfakes—hyper-realistic digital forgeries—have gained significant attention as the rapid development of generative AI has made it easier to produce convincingly realistic videos and audio...

LLM Applications Cybersecurity and Governance Checklist 1.0 – French

The OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist is for leaders across executive, tech, cybersecurity, privacy, compliance, and legal areas, DevSecOps,...

