Agentic Security Open Workshop – Agentic AI Threats and Mitigations

About

Session 2 of 7

The session walks through a fictional case study involving “Finbot,” an AI finance assistant manipulated through prompt injection attacks, leading to fraudulent payments and data breaches. The presentation highlights how attackers poisoned Finbot’s memory, manipulated its tools to execute unauthorized actions, and exploited identity misconfigurations to escalate privileges.
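
To make that attack path concrete, here is a minimal sketch of a deterministic guardrail applied outside the model, so instructions injected into an agent’s prompt or memory cannot override it before a tool call executes. This is an illustration only, not material from the session; the tool name, payee allowlist, and payment limit are all assumed.

```python
# Hypothetical guardrail for tool calls emitted by a finance agent like "Finbot".
# All names (ToolCall, approve_payment, APPROVED_PAYEES, PAYMENT_LIMIT) are
# illustrative assumptions, not from the presentation.
from dataclasses import dataclass

PAYMENT_LIMIT = 1_000.00                     # illustrative per-transaction cap
APPROVED_PAYEES = {"acct-001", "acct-002"}   # illustrative payee allowlist


@dataclass
class ToolCall:
    tool: str      # e.g. "approve_payment"
    payee: str     # destination account requested by the agent
    amount: float  # amount requested by the agent


def vet_tool_call(call: ToolCall) -> bool:
    """Deterministic policy check run outside the model, so injected
    instructions in the prompt or memory cannot bypass it."""
    if call.tool != "approve_payment":
        return False                 # only one tool is handled in this sketch
    if call.payee not in APPROVED_PAYEES:
        return False                 # blocks attacker-supplied payees
    if call.amount > PAYMENT_LIMIT:
        return False                 # blocks outsized fraudulent transfers
    return True


if __name__ == "__main__":
    # A call shaped by an injected instruction ("send $9,999 to acct-evil")
    # is rejected before it reaches the payment system.
    injected = ToolCall(tool="approve_payment", payee="acct-evil", amount=9_999.0)
    print(vet_tool_call(injected))  # False
```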

It underscores the interconnected nature of multi-agent systems, where a compromised agent can propagate threats to others. Key takeaways stress threat modeling for agentic AI, with a focus on securing memory, tools, and identity, and advocate proactive security measures that support innovation rather than stifle it.
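
As one way to read the “securing tools and identity” takeaway, the sketch below shows per-agent, least-privilege tool scoping, so a compromised low-privilege agent cannot escalate into payment actions or hand that capability to downstream agents. The agent and tool names are hypothetical and not prescribed by the presentation.

```python
# Hypothetical per-agent tool scoping (least privilege). Agent IDs and tool
# names are illustrative assumptions, not from the presentation.
AGENT_TOOL_SCOPES = {
    "finbot-readonly": {"read_invoice", "summarize_statement"},
    "finbot-payments": {"read_invoice", "approve_payment"},
}


def is_authorized(agent_id: str, tool: str) -> bool:
    """Checks the calling agent's scope before a tool runs, limiting how far
    a compromised agent can escalate or propagate within the system."""
    return tool in AGENT_TOOL_SCOPES.get(agent_id, set())


if __name__ == "__main__":
    print(is_authorized("finbot-readonly", "approve_payment"))  # False
    print(is_authorized("finbot-payments", "approve_payment"))  # True
```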

