I’m pleased to announce the creation of a new project to research the most important security risks for the new generation of Artificial Intelligence applications as part of the OWASP Foundation.
Large Language Models (LLMs) are the underlying technology powering transformative AI technologies like OpenAI’s ChatGPT and Google’s Bard. These technologies have stormed onto the scene over the last few months. One thing that’s become clear is that organizations developing using these technologies will have a new and dangerous set of security headaches to contend with.
While there has been a lot written as of late on new LLM-related security threats, there hasn’t been a single, well-organized and vetted resource for coders and security researchers to learn about them. While the OWASP Top 10 Project is an outstanding resource “for developers and web application security” teams, these new LLM-based applications have their own unique set of requirements that differ from standard web apps. That’s why I proposed creating a new OWASP Top 10 List for Large Language Model Applications. The project was just approved by the OWASP board and you can visit the new homepage on the OWASP site. If you’d like to dive in and participate more directly we have a new GitHub repository as well.
If you’re already an OWASP member, we’ve set up a channel on the OWASP Slack Workspace. You can join the discussion on the hashtag#project-top10-for-llm channel.
We will be hosting a kick-off call for people interested in participating. The meeting will be from 9am to 10am pacific time on Wednesday May 31st.
One tap mobile: US: +16468769923,,95013860946# or +16469313860,,95013860946#
Meeting URL: Zoom Link
Meeting ID: 950 1386 0946
Passcode: 256955
Resources
If you’re new to LLM security and you’d like to learn more about security threats to LLMs here are some good resources to start to educate yourself so you can jump in and help with the project. I hope you find them interesting and useful.
- The Hacking of ChatGPT Is Just Getting Started
- Data Poisoning and Its Impact on the AI Ecosystem
- Protecting AI Models from “Data Poisoning”
- Here’s how anyone can Jailbreak ChatGPT with these top 4 methods
- How prompt injection attacks hijack today’s top-end AI – and it’s tough to fix
- Exploring Prompt Injection Attacks
- The Rise of Large Language Models ~ Part 2: Model Attacks, Exploits, and Vulnerabilities
- The Dark Side of Large Language Models: Part 1
- The Dark Side of Large Language Models: Part 2
- AI Injections: Direct and Indirect Prompt Injections and Their Implications
- Don’t blindly trust LLM responses. Threats to chatbots
- Security in the age of LLMs