CO #7 - $25 Million stolen with deepfake, AI who prefer war, ChatGPT Account Takeover, and Google's
Hi everyone,
First off, I owe you all an apology. Last week was rough on me health-wise, and despite my best efforts, I couldn't send out the newsletter. I even drafted it, but couldn't cross the finish line. Sorry for the ghosting, and thanks for sticking around. 🙏
This week, we're bouncing back with a heavyweight issue, packed with stories that underline the ever-evolving dance between cybersecurity and AI.
Let's dive in!
Table of Contents:
The Deepfake Deception Dilemma 🎭
AI’s Achilles Heel - Prompt Injection 🛡️
AI-Generated Voices in Robocalls Now Illegal 🚫
The Threat of AI in Warfare 🌐
OnlyFake’s Frighteningly Real IDs 🆔
ChatGPT Account Takeover Exposed 💥
Freelancer Faux Pas: AI in Job Scams? 🤖
Innovative Defense: AudioSeal’s Watermarking Wonders 🔊
Tackling AI Safety: Gradient-Based Language Model Red Teaming 🎯
🎯 Highlights of This Edition:
1. The Deepfake Deception Dilemma 🎭
A multinational firm in Hong Kong was scammed out of HK$200 million (~ $25.5 Million) thanks to deepfake tech.
Imagine sitting on a video call, and the person you're talking to looks and sounds exactly like your CFO, but it's not really your CFO.
This isn't sci-fi; it happened. It's a stark reminder of how real the threat of deepfakes has become.
The sophistication of this scam is a wake-up call for all of us in cybersecurity; We don’t have any real defenses (I mean technical defenses) against these attacks yet; however it doesn’t mean we are totally defenseless either:
Add more checks (in-person checks, multiple signatures/approvals, secondary secure channel confirmations, etc) to the decision-making critical paths in your organization.
Does it slow things down? Yes.
Is it inconvenient? Also Yes.
But do you know what’s even more inconvenient? Losing 25 Million Dollars.
Who would’ve thought one day “more bureaucracy” would be the (hopefully temporary) answer? sigh
Another thing I’m wondering about is the infrastructure for an attack like this. What does it look like? The preparation, software, and hardware stack, and the execution, how does it work exactly?
Drop me a line if you know!
Read more about this incident.
2. AI’s Achilles Heel - Prompt Injection 🛡️
Prompt Injection is a big deal in LLM security. It's tricking AI into saying or revealing things it shouldn’t.
These two articles dive deep into how we can protect against it, emphasizing the need for roles-based APIs and secure system prompt designs. The author also included curl
commands you can use with OpenAI API to test things yourself!
It's a must-read for developers and cybersecurity pros - don’t miss it.
Improve LLM Security Against Prompt Injection - Part 1, Part 2.
3. AI-Generated Voices in Robocalls Now Illegal 🚫
The FCC's move against AI-generated voices in robocalls marks a significant step towards clamping down on misleading communications.
I’m so happy to see governments and legislative bodies are not only moving fast to respond to AI Security needs, but they’re also doing it effectively. Cheers to that, and more to come!
4. The Threat of AI in Warfare 🌐
This paper explores how AI could escalate conflicts in military and diplomatic arenas. It seems like AI agents might lean towards escalatory actions, raising red flags about their integration into high-stakes decision-making.
One interesting part (that made me laugh, and also scared me) was the agents’ “sudden, hard-to-predict spikes of escalation” with launching nuclear attacks being a very feasible option.
I guess we need to retrain them on anger management data. or maybe add a “therapist” layer. sheesh.
5. OnlyFake’s Frighteningly Real IDs 🆔
OnlyFake’s neural network is churning out fake IDs so real they could pass for your driver's license. At $15 a pop, it’s an affordable service to bypass KYC.
Not a bit surprised about this - this will only get worse.
6. ChatGPT Account Takeover Exposed 💥
Well, this is not exactly “AI Security”, it falls more under web security.
I’m including it for two reasons, 1- it affected an AI-related product, and 2- as a reminder that vulnerabilities from all other attack surfaces are still very much relevant.
7. Freelancer Faux Pas: AI in Job Scams? 🤖
A developer’s tweet revealed a sneaky trend: freelancers using AI to auto-respond to job postings, without reading them. (Again, anyone’s surprised?)
James Potter, owner of meals.chat (an app that tracks your macro and calories by looking at the photos of your food), posted a job on a freelance platform.
He, however, added a small detail to the job description:
And he caught one!
Combine this with that invisible prompt injection technique I covered in previous issues and watch the stream of AI-generated responses tell on themselves!
8. Innovative Defense: AudioSeal’s Watermarking Wonders 🔊
Meta's AudioSeal uses watermarks to tell if an audio clip is AI-generated, adding a layer of security against voice cloning.
See? We’re getting there, we’re making progress. One step at a time.
It may not be very practical right now since scammers are not going to watermark their audio and deepfakes, but it’s a great step in the right direction.
Discover AudioSeal’s innovation
You can also see the code, and even install it here.
9. Tackling AI Safety: Gradient-Based Language Model Red Teaming 🎯
Google and Anthropic are pushing the envelope with Gradient-Based Red Teaming (GBRT), an automated approach to discover vulnerabilities in AI models. GBRT leverages the model's own gradient information to generate prompts that hit the right spot. This is huge since it has the potential to partially, if not completely automate the process which leads to better, faster, and more comprehensive testing.
Check out the research
GitHub Repo
🚀 Call to Action:
Don't let the conversation end here!
If you found value in this issue, consider sharing ContextOverflow with someone who’d appreciate it - it’d mean a lot to me.
Stay tuned for more insights, and stay secure!
Until next time,
Samy