Pangea Authentication
Secure authentication, with support for adaptive threat intelligence, built specifically to protect access to your AI application, protect your users, and your organization.
Pangea Authentication Read Post »
Secure authentication, with support for adaptive threat intelligence, built specifically to protect access to your AI application, protect your users, and your organization.
Pangea Authentication Read Post »
Protect your users and application by redacting sensitive info from prompt inputs, prompt responses, and contextual data, using Pangea’s Redact service.
Prompt inputs, responses, and data ingestion from external sources can all be evaluated for malicious content with Pangea’s Data Guard to protect LLMs and users from threatening content.
CodeShield is an effort to mitigate against the insecure code generated by LLMs. CodeShield is a robust inference time filtering tool engineered to prevent the introduction of insecure code generated by LLMs into production systems. LLMs, while instrumental in automating coding tasks and aiding developers, can sometimes output insecure code, even when they have been security-conditioned. CodeShield stands as a guardrail to help ensure that such code is intercepted and filtered out before making it into the codebase.
PurpleLlama CodeShield Read Post »
Pangea’s Prompt Guard service utilizes a deep understanding of prompt templates, heuristics and trained models to detect direct or indirect prompt injection attacks and jailbreak attempts.
Pangea Prompt Guard Read Post »
Cisco AI Validation assesses AI applications and models for security and safety vulnerabilities. We automatically analyze a model’s risk across hundreds of attack techniques and threat categories so you can defend against them.
Cisco AI Validation Read Post »
Mend AI provides a shift-left solution for securing AI-driven applications. It enables discovery of shadow AI, security and compliance analysis through code scanning and red-teaming, and remediation with guardrails and fix suggestions.
Aqua facilitates secure application development and runtime protection by addressing vulnerabilities outlined in the OWASP Top 10 for LLM applications.
Fickling can help securing AI/ML codebases by automatically scanning pickle files contained in models. Fickling hooks the pickle module and verifies imports made when loading a model.
Pillar enables teams to rapidly adopt AI with minimal risk by providing a unified AI security layer across the organization