CO #1 - Your Weekly AI Security Drill: Test Skills, Gain Insights, Stay Ahead!
From AI Exploits to Defense Strategies
🔐 Welcome to the Frontier of AI Security!
In this edition, we're diving deep into the exhilarating world of AI hacking and defense. Imagine trying to outsmart a language model to reveal a hidden password, or crafting the perfect defense to protect AI secrets. We're showcasing a range of interactive labs, competitions, and resources that push the boundaries of AI security. From challenging Capture-the-Flag contests to insightful talks and handbooks, we're bringing you the latest and most exciting developments in AI and cybersecurity.
🧠 Test Your Skills in AI Deception and Defense
Immersive Labs' Prompt Injection Lab: A thrilling challenge where you try to trick a language model into divulging a secret password. Dive into this mind-bending puzzle and unlock new levels of understanding AI vulnerabilities.
👉 Immersive Labs Prompt InjectionI’m at level 8 right now - paused to finish this edition, will go back to finish it later tonight!
Lakera's Gandalf CTF: A mini Capture-the-Flag event that empowers developers to build secure AI applications. Engage in this competitive arena to test and enhance your AI security skills. 👉 Lakera Gandalf CTF
DoubleSpeak: Your goal is to discover and submit the bot's name. Find out if you can! First 5 levels are free.
👉 DoubleSpeak
🏆 Competitions and Events:
IEEE SaTML 2024 LLM CTF: A competition where you can assume the role of either a defender or attacker, trying to protect or extract secrets from a Large Language Model. With a prize pool and recognition opportunities, this is a must-try for AI security enthusiasts!
📅 Key Dates:16 Nov: Registration opens - It’s still open!
15 Jan: Defense submission deadline - you have less than a month!
18 Jan: Reconnaissance phase begins
25 Jan: Evaluation phase starts
29 Feb: Deadline for phases
4 Mar: Winners announced
Did I mention the prizes? $2000, $1000, and $500 for the top 3 defense, and top 3 attack teams ($7000 total)
📚 Essential AI Security Reading:
LLM Security Playbooks and Handbooks: Both of these resources are locked behind a give-me-your-email-or-else form, but they're a treasure trove of knowledge for anyone starting to get serious about AI security. Dive deep into strategies and techniques with Lakera's Prompt Injection Handbook and LLM Security Playbook.
👉 Prompt Injections Handbook
👉 LLM Security Playbook"LLM Hacker's Handbook": This one is an online handbook made by Forces Unseen, it’s comprehensive guide for those looking to understand, exploit and defend Large Language Models.
👉 LLM Hacker's Handbook
🖋️ In-Depth Insights:
Embrace The Red Talks: Don't miss Johan Rehberger's talk on "Prompt Injections in the Wild," and the insightful presentation on custom malware using GPT. These talks offer real-world insights into AI vulnerabilities and defense strategies.
👉 Prompt Injections in the Wild Talk
👉 Malicious ChatGPT Agents Talk
Joseph Thacker's Essay on AI Hacking Agents: Explore the future of AI offensive use case in Joseph Thacker’s AI Hacking Agents Will Outperform Humans essay. I 100% agree with all the points Joseph makes in that essay, in fact, this is one of the reasons I started this newsletter.
Joseph is a rare combination of independent thinking, deep knowledge and perfect execution, subscribe to his newsletter for more thought-provoking content.
👉 Subscribe To ‘Thacker Thoughts’ Here
👋 End of CO #1!
If you liked what you read here, I'd really appreciate if you helped spread the word! 🙌Please feel free to share this with any friends or colleagues who might be interested.
And remember, next Monday brings another edition of Context Overflow, packed with more insights and adventures in the realm of AI security.
Until next time!
Samy Ghannad