GuardionAI
GuardionAI provides a realtime & adaptive LLM guardrails API against prompt attacks, data leaks, off-policy behavior, and content violations. The platform allows users to monitor, audit, and refine guardrails through continuous feedback.
GuardionAI provides a realtime & adaptive LLM guardrails API against prompt attacks, data leaks, off-policy behavior, and content violations. The platform allows users to monitor, audit, and refine guardrails through continuous feedback.
CalypsoAI secures GenAI across applications and agents. The CalypsoAI Inference Platform tests, defends, and monitors AI in development and production. With Defend, Red-Team, and Observe, enterprises gain control and confidence in their GenAI deployments.
The CalypsoAI Inference Platform Read Post »
AIandMe provides an end-to-end platform for testing, securing, and monitoring LLM-based AI systems—combining automated adversarial testing, real-time protection, and human-in-the-loop audits to ensure reliable, compliant, and safe AI deployments.
Preamble provides runtime guardrails for RAG, LLMs, and AI agents by enforcing safety, privacy, security, and compliance policies while mitigating real-time risks to ensure secure, reliable AI operations.
Secure AI Applications using two products. Ascend AI provides pentesting/red teaming across all layers of the applications. Defend AI provides visibility, guardrails for AI applications. With both approaches, we take a look at the threat vector at the application layer and not just the models
IWS scans outbound response traffic in real time for undesirable content and confidential data at layer 4. It is a paradigm shift in web security, allowing us to scan responses from LLM models for DLP/Malware.
Insight For Webservers (IWS) Read Post »
AIShield Guardian functions as an AI firewall and guardrail, providing secure access control, sensitive data protection, and live monitoring. It safeguards interactions between applications and LLMs, ensuring safety, compliance, and policy adherence.
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage, harmful model outputs, and model DoS.
Palo Alto Networks AI Runtime Security Read Post »