TrojAI
TrojAI helps secure AI models, applications, and agents across both AI build time and AI runtime.
Noma Security is a comprehensive application security solution for the Data and AI lifecycle. It offers: End-to-End Visibility (scanning notebooks, source code, and other assets to inventory AI/ML resources); AI Security Posture Management (identifying and prioritizing AI/ML risks, including misconfigurations and vulnerable models); Runtime Protection (safeguarding AI systems with an AI Firewall against adversarial prompts); Pre-Runtime Protection (enhancing security through dynamic red-teaming and static analysis of models and code); and Governance and Compliance (ensuring adherence to regulatory and security standards).
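The "end-to-end visibility" idea above (scanning notebooks and source code to inventory AI/ML resources) can be sketched in a few lines. This is an illustrative assumption, not Noma's actual detection logic: the framework names, file extensions, and the `inventory_ai_assets` helper are all hypothetical.

```python
import os
import re

# Illustrative framework names; a real scanner would use a much larger,
# curated ruleset rather than this short regex.
ML_PATTERNS = re.compile(r"\b(torch|tensorflow|sklearn|transformers|keras)\b")

def inventory_ai_assets(root):
    """Walk a source tree and return (path, frameworks) pairs for files
    that reference common ML frameworks."""
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".ipynb")):
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="ignore").read()
            except OSError:
                continue
            hits = sorted(set(ML_PATTERNS.findall(text)))
            if hits:
                findings.append((path, hits))
    return findings
```

The inventory produced this way feeds the posture-management step: once AI/ML assets are enumerated, they can be checked against policy.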
Preamble provides runtime guardrails for RAG, LLMs, and AI agents by enforcing safety, privacy, security, and compliance policies while mitigating real-time risks to ensure secure, reliable AI operations.
The Infosys Responsible AI Toolkit (Technical Guardrail) is an API-based solution designed to ensure the ethical and responsible development of AI applications. By integrating safety, security, explainability, fairness, bias, and hallucination detection into AI workflows, it empowers teams to build trustworthy and accountable AI systems.
Prisma Cloud AI-SPM helps organizations discover, classify, protect, and govern AI-powered applications. It provides visibility into the entire AI ecosystem, including models, applications, and resources, to reduce the risk of data exposure and compliance breaches. By identifying model vulnerabilities and prioritizing misconfigurations, it strengthens the integrity of the AI security framework.
TrojAI Defend protects AI models from evolving threats at runtime, including prompt injection, jailbreaking, DoS attacks, data leakage and loss, and toxic or offensive content.
Operant provides runtime application defense with threat detection and remediation, automated policy enforcement, and in-line PII redaction. It secures cloud-native environments, protecting APIs, data flows, and AI workloads against emerging threats without requiring instrumentation or integrations.
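In-line PII redaction of the kind described above can be sketched with simple pattern matching. This is a minimal assumption-laden example: real products combine ML-based entity recognition with policy engines, and the `PII_PATTERNS` rules and `redact_pii` helper here are illustrative only.

```python
import re

# Illustrative PII patterns; production redaction uses trained entity
# recognizers, not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text):
    """Replace each detected PII span with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because the redaction runs in-line on request and response bodies, sensitive values never reach downstream models or logs.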
Palo Alto Networks AI Runtime Security provides continuous discovery, protection, and monitoring for genAI applications, preventing security risks such as prompt injections, sensitive data leakage, harmful model outputs, and model DoS.
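A runtime prompt-injection screen like the ones these products describe can be sketched as a pre-model filter. The phrasings below are illustrative assumptions, not any vendor's rule set; commercial runtime security uses trained classifiers rather than a short list of regexes.

```python
import re

# Known jailbreak phrasings (illustrative only).
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all |any )?(previous|prior) instructions",
        r"you are now (DAN|in developer mode)",
        r"reveal (your )?(system|hidden) prompt",
    )
]

def screen_prompt(user_input):
    """Return (allowed, reason); block input matching an injection pattern
    before it ever reaches the model."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            return False, f"matched injection pattern: {pattern.pattern}"
    return True, "ok"
```

In a real deployment this check sits in the request path of the genAI application, alongside output-side checks for data leakage and harmful content.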
CodeShield is a robust inference-time filtering tool engineered to prevent insecure code generated by LLMs from reaching production systems. While LLMs are instrumental in automating coding tasks and aiding developers, they can sometimes output insecure code, even when they have been security-conditioned. CodeShield stands as a guardrail that helps ensure such code is intercepted and filtered out before it makes it into the codebase.
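Inference-time filtering in the spirit of CodeShield can be sketched as a scan over generated code before it is accepted. The three rules and the `scan_generated_code` helper below are illustrative assumptions; CodeShield itself ships a far larger analyzer covering many languages.

```python
import re

# A few classic insecure-Python patterns (illustrative, not CodeShield's rules).
INSECURE_RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on dynamic input"),
    (re.compile(r"\bpickle\.loads?\s*\("), "unpickling untrusted data"),
    (re.compile(r"subprocess\.\w+\(.*shell\s*=\s*True"), "shell=True command execution"),
]

def scan_generated_code(code):
    """Return (line_number, warning) findings for a block of generated code;
    an empty list means the code passed the filter."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for pattern, warning in INSECURE_RULES:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings
```

A guardrail of this shape runs after each model completion: non-empty findings cause the suggestion to be rejected or regenerated rather than merged.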
Cisco AI Runtime secures GenAI apps against threats such as prompt injections, sensitive data loss, and compliance violations. Deploy guardrails around safety, privacy, relevancy, and security to govern your AI operations.