AI Security Solutions Landscape

The landscape includes traditional and emerging security controls addressing LLM and Generative AI risks in the OWASP Top 10. It is not a comprehensive list or an endorsement but a community resource of open source and proprietary solutions. Contributions are open and reviewed for accuracy.

Watch the video
Pangea
Secure authentication, with support for adaptive threat intelligence, built specifically to protect access to your AI application, protect your users, and your organization.
Protect AI
Recon runs automated and comprehensive penetration testing attacks on your LLM powered applications, to help protect you from unique security threats and vulnerabilities. It has
Microsoft
Defender for Cloud AI-SPM identifies vulnerabilities and misconfigurations in generative AI apps on Azure OpenAI, Azure Machine Learning, and Amazon Bedrock, providing actionable recommendations and
Eroun&Company
RedTeam solution to automate detection of malicious prompt attack vulnerabilities against LLM
Decisionbox
Decisionbox makes LLM applications learn from data by transforming zero-shot prompts into fine-tuned machine learning classifiers.
Citadel AI
Citadel Lens is a tool for multilingual, automated red teaming and evaluation of LLM applications.
AIFT
Vulcan is an LLM risk and vulnerability testing solution that enables AI project teams to perform automatic red teaming at scale.
Brand Engagement Networks
Red Teaming / Security Testing in the AI CI/CD. The SPLX.ai platform provides continuous testing, guard rail assessments, domain specific test scenarios, AI Inventory which
KELA
AiFort by KELA is an automated, intelligence-led red teaming platform designed to protect GenAI applications. AiFort allows organizations full protection through test simulations of their
AIShield,Powered by Bosch
AIShield Watchtower automates model and notebook discovery, performing thorough vulnerability scans to identify risks like hard-coded secrets, PII exposure, outdated libraries, serialization attacks, and unsafe
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
SpiceDB
Open source, Google Zanzibar-inspired permissions database for scalably storing and querying fine-grained authorization data.
Palo Alto Networks
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage,
modelscan
ModelScan is an open source project from Protect AI that scans models to determine if they contain unsafe code.
Infotect Security
IWS scans outbound response traffic in real time for undesirable content and confidential data at layer 4. It is a paradigm shift in web security,
Cisco Systems, Inc.
Cisco AI Runtime secures GenAI apps to address threats like prompt injections, sensitive data loss, and compliance concerns. Deploy guardrails around safety, privacy, relevancy, and
AISheild,Powered by Bosch
AIShield Guardian functions as an AI firewall and guardrail, providing secure access control, sensitive data protection, and live monitoring. It safeguards interactions between applications and

IronCore Labs Cloaked AI

IronCore Labs
Encrypts vector embeddings stored in databases while still allowing kNN/aNN searches and preventing vector inversion attacks.
TrojAI
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.

LLM Vulnerability Scanner

Garak.ai
Garak helps you discover weaknesses and unwanted behaviors in anything using language model technology. With garak, you can scan a chatbot or model and quickly
Microsoft
Microsoft Security provides capabilities to discover, protect, and govern AI applications. Data Security, AI Security Posture Management, AI Threat Protection, AI governance and more.
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
SpiceDB
Open source, Google Zanzibar-inspired permissions database for scalably storing and querying fine-grained authorization data.
Microsoft
Defender for Cloud AI-SPM identifies vulnerabilities and misconfigurations in generative AI apps on Azure OpenAI, Azure Machine Learning, and Amazon Bedrock, providing actionable recommendations and

Prisma Cloud AI-SPM

Palo Alto Networks
Prisma Cloud AI-SPM helps organizations discover, classify, protect and govern AI-powered applications. It provides visibility into the entire AI ecosystem including model, applications and resources,

Seezo Security Design Review

Seezo.io
Seezo leverages LLMs to provide context-specific security requirements to developers before they start coding

StrideGPT

Stride GPT
A threat model helps identify and evaluate potential security threats to applications / systems. It provides a systematic approach to understanding possible vulnerabilities and attack

Mitre ATLAS

Mitre
ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against Al-enabled systems based on real-world
Pillar Security
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization
Securiti
Securiti Data Command Center provides unified intelligence, controls, and orchestration for enabling the safe use of data and AI across hybrid multi-clouds. Enterprises rely on

Unstructured.io

Unstructured.io
Unstructured is the leading provider of LLM data preprocessing solutions, empowering organizations to transform their internal unstructured data into formats compatible with large language models
Securiti
Securiti Data Command Center provides unified intelligence, controls, and orchestration for enabling the safe use of data and AI across hybrid multi-clouds. Enterprises rely on
AIandMe
AIandMe provides an end-to-end platform for testing, securing, and monitoring LLM-based AI systems—combining automated adversarial testing, real-time protection, and human-in-the-loop audits to ensure reliable, compliant,
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
SpiceDB
Open source, Google Zanzibar-inspired permissions database for scalably storing and querying fine-grained authorization data.
Infosys
The Infosys Responsible AI Toolkit (Technical Guardrail) is API Based solution designed to ensure the ethical and responsible development of AI applications. By integrating safety,

Prisma Cloud AI-SPM

Palo Alto Networks
Prisma Cloud AI-SPM helps organizations discover, classify, protect and govern AI-powered applications. It provides visibility into the entire AI ecosystem including model, applications and resources,
TrojAI
TrojAI Detect secures AI behavior at build time. The AI security platform continuously red teams AI models to find security weaknesses in AI, ML, and

Operant 3D Runtime Defense

Operant AI
Operant provides runtime application defense with threat detection and remediation, automated policy enforcement, and in-line PII redaction. It secures cloud-native environments, protecting APIs, data flows,
Pangea
Utilize Pangea's Sanitize service to ensure that malicious scripts, malicious links, profanity, and regulated PII are not submitted in prompt inputs, prompt responses, or in
Pangea
Pangea's Authorization service is an access control engine that integrates with any AI application through easy-to-use APIs and SDKs. It is used to enforce access
Pangea
Secure authentication, with support for adaptive threat intelligence, built specifically to protect access to your AI application, protect your users, and your organization.
Pangea
Protect your users and application by redacting sensitive info from prompt inputs, prompt responses, and contextual data, using Pangea's Redact service.
Pangea
Prompt inputs, responses, and data ingestion from external sources can all be evaluated for malicious content with Pangea's Data Guard to protect LLMs and users

PurpleLlama CodeShield

Meta
CodeShield is an effort to mitigate against the insecure code generated by LLMs. CodeShield is a robust inference time filtering tool engineered to prevent the
Pangea
Pangea's Prompt Guard service utilizes a deep understanding of prompt templates, heuristics and trained models to detect direct or indirect prompt injection attacks and jailbreak
Cisco Systems
Cisco AI Validation assesses AI applications and models for security and safety vulnerabilities. We automatically analyze a model’s risk across hundreds of attack techniques and
Mend AI
Mend AI provides a shift-left solution for securing AI-driven applications. It enables discovery of shadow AI, security and compliance analysis through code scanning and red-teaming,
Aqua Security
Aqua facilitates secure application development and runtime protection by addressing vulnerabilities outlined in the OWASP Top 10 for LLM applications.
Trail of Bits
Fickling can help securing AI/ML codebases by automatically scanning pickle files contained in models. Fickling hooks the pickle module and verifies imports made when loading
Pillar Security
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization
Securiti
Securiti Data Command Center provides unified intelligence, controls, and orchestration for enabling the safe use of data and AI across hybrid multi-clouds. Enterprises rely on
AIandMe
AIandMe provides an end-to-end platform for testing, securing, and monitoring LLM-based AI systems—combining automated adversarial testing, real-time protection, and human-in-the-loop audits to ensure reliable, compliant,
Eroun&Company
RedTeam solution to automate detection of malicious prompt attack vulnerabilities against LLM
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
KELA
AiFort by KELA is an automated, intelligence-led red teaming platform designed to protect GenAI applications. AiFort allows organizations full protection through test simulations of their
Straiker Inc
Secure AI Applications using two products. Ascend AI provides pentesting/red teaming across all layers of the applications. Defend AI provides visibility, guardrails for AI applications.
AIM Intelligence
AIM Supervisor integrates AIM RED for automated AI vulnerability testing, AIM GUARD for real-time threat detection and mitigation, and AIM Benchmark for comprehensive safety evaluations,
Adversa AI
Adversa AI's Red Teaming platform provides automated security testing of Generative AI systems, identifying all possible vulnerabilities like jailbreaks, prompt injections, and adversarial attacks to
Infosys
The Infosys Responsible AI Toolkit (Technical Guardrail) is API Based solution designed to ensure the ethical and responsible development of AI applications. By integrating safety,
Dynamo AI
DynamoGuard offers real-time guardrailing for GenAI, customizable in natural language and capable of running in the cloud, hybrid, on-prem or fully on edge devices to

Prisma Cloud AI-SPM

Palo Alto Networks
Prisma Cloud AI-SPM helps organizations discover, classify, protect and govern AI-powered applications. It provides visibility into the entire AI ecosystem including model, applications and resources,
TrojAI
TrojAI Detect secures AI behavior at build time. The AI security platform continuously red teams AI models to find security weaknesses in AI, ML, and
modelscan
ModelScan is an open source project from Protect AI that scans models to determine if they contain unsafe code.
Meta
CyberSecEval is an extensive benchmark suite under Meta PurpleLlama, designed to evaluate various cybersecurity risks of LLMs, including several listed in the OWASP Top-10 for
Cisco Systems
Cisco AI Validation assesses AI applications and models for security and safety vulnerabilities. We automatically analyze a model’s risk across hundreds of attack techniques and
Enkrypt AI
Enkrypt AI secures enterprises against generative AI risks with its comprehensive security platform that detects threats, removes vulnerabilities, and monitors the latest insights on security,
Aqua Security
Aqua facilitates secure application development and runtime protection by addressing vulnerabilities outlined in the OWASP Top 10 for LLM applications.
Harmbench
HarmBench is a new evaluation framework for automated red teaming and robust refusal.
Prompt Security
Prompt Fuzzer is an interactive, open-source tool that empowers developers of GenAI applications to evaluate and enhance the resilience and safety of their system prompts.
Pillar Security
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
Preamble
Preamble provides runtime guardrails for RAG, LLMs, and AI agents by enforcing safety, privacy, security, and compliance policies while mitigating real-time risks to ensure secure,
Infosys
The Infosys Responsible AI Toolkit (Technical Guardrail) is API Based solution designed to ensure the ethical and responsible development of AI applications. By integrating safety,

Prisma Cloud AI-SPM

Palo Alto Networks
Prisma Cloud AI-SPM helps organizations discover, classify, protect and govern AI-powered applications. It provides visibility into the entire AI ecosystem including model, applications and resources,
TrojAI
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.

Operant 3D Runtime Defense

Operant AI
Operant provides runtime application defense with threat detection and remediation, automated policy enforcement, and in-line PII redaction. It secures cloud-native environments, protecting APIs, data flows,
Palo Alto Networks
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage,

PurpleLlama CodeShield

Meta
CodeShield is an effort to mitigate against the insecure code generated by LLMs. CodeShield is a robust inference time filtering tool engineered to prevent the
Cisco Systems, Inc.
Cisco AI Runtime secures GenAI apps to address threats like prompt injections, sensitive data loss, and compliance concerns. Deploy guardrails around safety, privacy, relevancy, and
Aqua Security
Aqua facilitates secure application development and runtime protection by addressing vulnerabilities outlined in the OWASP Top 10 for LLM applications.

IronCore Labs Cloaked AI

IronCore Labs
Encrypts vector embeddings stored in databases while still allowing kNN/aNN searches and preventing vector inversion attacks.
Securiti
Securiti Data Command Center provides unified intelligence, controls, and orchestration for enabling the safe use of data and AI across hybrid multi-clouds. Enterprises rely on
Infotect Security
IWS scans outbound response traffic in real time for undesirable content and confidential data at layer 4. It is a paradigm shift in web security.
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
Microsoft
Microsoft Security provides capabilities to discover, protect, and govern AI applications. Data Security, AI Security Posture Management, AI Threat Protection, AI governance and more.
Preamble
Preamble provides runtime guardrails for RAG, LLMs, and AI agents by enforcing safety, privacy, security, and compliance policies while mitigating real-time risks to ensure secure,
Cloudsine Pte Ltd
WebOrion® Protector Plus is a GenAI firewall, built to protect GenAI applications against cyber threats. Its ShieldPrompt™ add-on offers an advanced level of protection, including
Straiker Inc
Secure AI Applications using two products. Ascend AI provides pentesting/red teaming across all layers of the applications. Defend AI provides visibility, guardrails for AI applications.
Dyana
Dyana is a sandbox environment using Docker and Tracee for loading, running and profiling a wide range of files, including machine learning models, ELF executables,
F5
F5 AI Gateway is an advanced security solution that protects, accelerates, and observes AI-powered applications.
Dynamo AI
DynamoGuard offers real-time guardrailing for GenAI, customizable in natural language and capable of running in the cloud, hybrid, on-prem or fully on edge devices to
Knostic
Knostic identifies data leakage from LLM-powered enterprise search and provides need-to-know based access controls, ensuring employees receive only the information necessary for their roles, thereby

Prisma Cloud AI-SPM

Palo Alto Networks
Prisma Cloud AI-SPM helps organizations discover, classify, protect and govern AI-powered applications. It provides visibility into the entire AI ecosystem including model, applications and resources,
TrojAI
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.
Palo Alto Networks
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage,
Blueteam AI
Blueteam AI Gateway is a network-layer appliance that intercepts traffic to AI models and discovers AI use, safeguards data from leaking, and governs safe and
Aim Security
The Aim AI Security Platform enables enterprises to secure every AI interaction throughout their AI adoption journey, from AI applications used directly by employees to

Llama Guard

Meta
Llama Guard is a set of LLM system safeguards designed to support developers to detect various common types of violating content across multiple use cases
Cisco Systems, Inc.
Cisco AI Runtime secures GenAI apps to address threats like prompt injections, sensitive data loss, and compliance concerns. Deploy guardrails around safety, privacy, relevancy, and
NRI Secure
AI Blue Team Service provides continuous security monitoring for AI systems, specializing in Large Language Models. It detects AI-specific threats like prompt injection and sensitive
Pillar Security
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization
ZenGuard AI
ZenGuard AI offers a dev-first API platform for the fastest low-latency GenAI guardrails and hassle-free vulnerability testing for AI applications.
AIandMe
AIandMe provides an end-to-end platform for testing, securing, and monitoring LLM-based AI systems—combining automated adversarial testing, real-time protection, and human-in-the-loop audits to ensure reliable, compliant,
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
KELA
AiFort by KELA is an automated, intelligence-led red teaming platform designed to protect GenAI applications. AiFort allows organizations full protection through test simulations of their
Preamble
Preamble provides runtime guardrails for RAG, LLMs, and AI agents by enforcing safety, privacy, security, and compliance policies while mitigating real-time risks to ensure secure,
Straiker Inc
Secure AI Applications using two products. Ascend AI provides pentesting/red teaming across all layers of the applications. Defend AI provides visibility, guardrails for AI applications.
Infotect Security
IWS scans outbound response traffic in real time for undesirable content and confidential data at layer 4. It is a paradigm shift in web security,
Dynamo AI
DynamoGuard offers real-time guardrailing for GenAI, customizable in natural language and capable of running in the cloud, hybrid, on-prem or fully on edge devices to
AISheild,Powered by Bosch
AIShield Guardian functions as an AI firewall and guardrail, providing secure access control, sensitive data protection, and live monitoring. It safeguards interactions between applications and
TrojAI
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.

Operant 3D Runtime Defense

Operant AI
Operant provides runtime application defense with threat detection and remediation, automated policy enforcement, and in-line PII redaction. It secures cloud-native environments, protecting APIs, data flows,
Palo Alto Networks
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage,
Blueteam AI
Blueteam AI Gateway is a network-layer appliance that intercepts traffic to AI models and discovers AI use, safeguards data from leaking, and governs safe and
Aim Security
The Aim AI Security Platform enables enterprises to secure every AI interaction throughout their AI adoption journey, from AI applications used directly by employees to
Protect AI
Enable detection and response across all enterprise LLM applications.
Lakera
Lakera is an AI Application Firewall that protects against prompt attacks, data loss, and inappropriate content. Lakera integrates with a single line of code and
Meta
PromptGuard is a lightweight, low-latency model for detecting prompt injections and jailbreaks. The model sees significant iteration driven by community adoption and feedback, making it
Cisco Systems
Cisco AI Validation assesses AI applications and models for security and safety vulnerabilities. We automatically analyze a model’s risk across hundreds of attack techniques and
Aqua Security
Aqua facilitates secure application development and runtime protection by addressing vulnerabilities outlined in the OWASP Top 10 for LLM applications.
Brand Engagement Networks
Red Teaming / Security Testing in the AI CI/CD. The SPLX.ai platform provides continuous testing, guard rail assessments, domain specific test scenarios, AI Inventory which
Noma Security
"Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers , End-to-End Visibility: Scanning notebooks, source code, and other
Cranium
Whether organizations are builders and/or consumers of AI, Cranium offers a comprehensive platform that enables complete security, compliance, and trust across the entire AI supply
Infotect Security
IWS scans outbound response traffic in real time for undesirable content and confidential data at layer 4. It is a paradigm shift in web security,
Dynamo AI
DynamoGuard offers real-time guardrailing for GenAI, customizable in natural language and capable of running in the cloud, hybrid, on-prem or fully on edge devices to
Unbound Security
Unbound AI gateways solves for guardrails, prompt injection, and jailbreaking attacks while helping customers create routing policies based on data sensitivity. For example, prompts containing
Palo Alto Networks
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage,
Blueteam AI
Blueteam AI Gateway is a network-layer appliance that intercepts traffic to AI models and discovers AI use, safeguards data from leaking, and governs safe and
Aim Security
The Aim AI Security Platform enables enterprises to secure every AI interaction throughout their AI adoption journey, from AI applications used directly by employees to
Cisco Systems
Cisco AI Validation assesses AI applications and models for security and safety vulnerabilities. We automatically analyze a model’s risk across hundreds of attack techniques and
Pillar Security
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization
AI Verify Foundation
AI Verify is an AI governance testing framework and software toolkit that validates the performance of AI systems against a set of internationally recognized principles
Securiti
Securiti Data Command Center provides unified intelligence, controls, and orchestration for enabling the safe use of data and AI across hybrid multi-clouds. Enterprises rely on

Lasso Secure Gateway for LLMs

Lasso Security
Lasso Security is a Secure Gateway for LLMs and provides Anomaly Detection, Insecure Output Handling, Prompt Injection Detection, Data & Knowledge Protection, Hallucination Detection, Supply-Chain
Scroll to Top