Develop & Experiment

PurpleLlama CodeShield

CodeShield is a robust inference-time filtering tool engineered to prevent insecure code generated by LLMs from reaching production systems. While LLMs are instrumental in automating coding tasks and aiding developers, they can still output insecure code, even when they have been security-conditioned. CodeShield stands as a guardrail, helping ensure that such code is intercepted and filtered out before making it into the codebase.
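The inference-time filtering pattern described above can be sketched in a few lines. Note this is a minimal illustration, not CodeShield's actual API: the pattern set and the `guardrail` helper below are hypothetical stand-ins, whereas CodeShield itself uses far richer regex and semantic analyzers across many languages.

```python
import re

# Hypothetical insecure-code patterns for illustration only; a real
# scanner like CodeShield ships a much larger, curated rule set.
INSECURE_PATTERNS = {
    "eval-call": re.compile(r"\beval\s*\("),
    "os-system": re.compile(r"\bos\.system\s*\("),
    "hardcoded-secret": re.compile(r"(?i)(password|api_key)\s*=\s*['\"]"),
}

def scan_generated_code(code: str) -> list[str]:
    """Return the names of insecure patterns found in LLM-generated code."""
    return [name for name, pat in INSECURE_PATTERNS.items() if pat.search(code)]

def guardrail(llm_output: str) -> str:
    """Intercept insecure completions before they reach the codebase."""
    findings = scan_generated_code(llm_output)
    if findings:
        return f"[blocked: {', '.join(findings)}]"
    return llm_output
```

Sitting between the model and the consumer of its output, such a filter lets safe completions pass through unchanged while flagging or blocking risky ones.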


Mend AI

Mend AI provides a shift-left solution for securing AI-driven applications. It enables discovery of shadow AI, security and compliance analysis through code scanning and red-teaming, and remediation with guardrails and fix suggestions.

