ProtectAI – Layer
Enable detection and response across all enterprise LLM applications.
LLM07: Insecure Plugin Design
A threat model helps identify and evaluate potential security threats to applications / systems. It provides a systematic approach to understanding possible vulnerabilities and attack vectors. Use this tab to generate a threat model using the STRIDE methodology.
ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against Al-enabled systems based on real-world attack observations and realistic demonstrations from Al red teams and security groups.
Lakera is an AI Application Firewall that protects against prompt attacks, data loss, and inappropriate content. Lakera integrates with a single line of code and offers no-code policy configuration for enterprise-wide security.
Llama Guard is a set of LLM system safeguards designed to support developers to detect various common types of violating content across multiple use cases including multilingual, image reasoning, or on-device deployments.
Pangea’s Authorization service is an access control engine that integrates with any AI application through easy-to-use APIs and SDKs. It is used to enforce access controls to LLMs, contextual data in RAG pipelines, and agent-based operations.
Pangea Authorization Read Post »
Secure authentication, with support for adaptive threat intelligence, built specifically to protect access to your AI application, protect your users, and your organization.
Pangea Authentication Read Post »
Prompt inputs, responses, and data ingestion from external sources can all be evaluated for malicious content with Pangea’s Data Guard to protect LLMs and users from threatening content.
CyberSecEval is an extensive benchmark suite under Meta PurpleLlama, designed to evaluate various cybersecurity risks of LLMs, including several listed in the OWASP Top-10 for LLMs.
Cisco AI Runtime secures GenAI apps to address threats like prompt injections, sensitive data loss, and compliance concerns. Deploy guardrails around safety, privacy, relevancy, and security to govern your AI operations.