Whitepapers/Guides

Resources, Top 10 for LLM

The OWASP Top 10 for LLM & Generative AI (2025)

This update provides a refreshed and comprehensive resource addressing the most significant risks, vulnerabilities, and mitigations for securing generative AI and LLM applications across their entire development, deployment, and management lifecycle. Whether you work with RAG-based applications, agentic architectures, or complex LLM integrations, this list is essential reading for developers, […]

Resources, Top 10 for LLM

Top 10 2025 Risks and Mitigations for LLMs and Generative AI Applications

The OWASP Top 10 for Large Language Model Applications began in 2023 as a community-driven effort to highlight and address security issues specific to AI applications. Since then, the technology has continued to spread across industries and applications, and so have the associated risks. As

Resources

OWASP LLM Exploit Generation v1.0

This paper examines the practical implications of large language models (LLMs) in offensive cybersecurity, moving beyond theoretical possibilities to assess their real-world effectiveness. The research, conducted by the CTI Layer Team at OWASP Top Ten For LLMs, explores the ability of LLMs such as GPT-4o, Claude, and DeepSeek-R1 to exploit vulnerabilities in the OWASP

Resources, Initiatives

Agentic AI – Threats and Mitigations

Agentic AI represents an advancement in autonomous systems, increasingly enabled by large language models (LLMs) and generative AI. While agentic AI predates modern LLMs, its integration with generative AI has significantly expanded its scale, capabilities, and associated risks. This document is the first in a series of guides from the OWASP Agentic Security Initiative (ASI)

Resources, Initiatives

LLM and Gen AI Data Security Best Practices

The rapid proliferation of Large Language Models (LLMs) across various industries has highlighted the critical need for advanced data security practices. As these AI systems become more sophisticated, they bring with them unprecedented risks, including potential breaches of sensitive information and challenges in meeting stringent data protection regulations. This white paper outlines a comprehensive set

Resources, Initiatives

GenAI Red Teaming Guide

This guide outlines the critical components of GenAI Red Teaming, with actionable insights for cybersecurity professionals, AI/ML engineers, Red Team practitioners, risk managers, adversarial attack researchers, CISOs, architecture teams, and business leaders. The guide emphasizes a holistic approach to Red Teaming in four areas: model evaluation, implementation testing, infrastructure assessment, and runtime behavior analysis.

Resources, Initiatives

LLM and Generative AI Security Solutions Landscape – Q1, 2025

Updated for Q1, 2025 – The LLM and Generative AI Security Solutions Landscape is tailored for a diverse audience comprising developers, AppSec professionals, DevSecOps and MLSecOps teams, data engineers, data scientists, CISOs, and security leaders who are focused on developing strategies to secure Large Language Models (LLMs) and Generative AI applications. It provides a reference

Resources, Initiatives

OWASP Top 10 for LLM Applications 2025

The OWASP Top 10 for Large Language Model Applications started in 2023 as a community-driven effort to highlight and address security issues specific to AI applications. Since then, the technology has continued to spread across industries and applications, and so have the associated risks. As LLMs are embedded more deeply in everything from customer interactions to internal operations, developers and security professionals are discovering new vulnerabilities—and ways to counter them.

Resources, Initiatives

LLM and Generative AI Security Solutions Landscape

The LLM and Generative AI Security Solutions Landscape is tailored for a diverse audience comprising developers, AppSec professionals, DevSecOps and MLSecOps teams, data engineers, data scientists, CISOs, and security leaders who are focused on developing strategies to secure Large Language Models (LLMs) and Generative AI applications. It provides a reference guide of the solutions available

Resources, Initiatives

LLM and Generative AI Security Center of Excellence Guide

As generative AI technologies evolve and integrate into various aspects of business and society, the need for robust governance, security, and policy management becomes paramount. Establishing a Center of Excellence (COE) for Generative AI Security aims to bring together diverse groups such as security, legal, data science, operations, and end-users to foster collaboration, develop best
