My New Project - InjectPrompt
Check out my new blog for content focused on AI Jailbreaks, Prompt Injections, and System Prompt Leaks
Apr 15, 2025 • David Willis-Owen
Claude Sonnet 3.7 Jailbreak
How to One-Shot Jailbreak Claude Sonnet 3.7 in March 2025
Mar 15, 2025 • David Willis-Owen
Jailbreaking Grok 3 | DeepSeek, ChatGPT, Claude & More
How easy is it to jailbreak frontier LLMs in 2025?
Mar 8, 2025 • David Willis-Owen
Is Github Copilot Poisoned? Part 2
Scaling up my experiment to detect IOCs in larger code models
Feb 22, 2025 • David Willis-Owen
How Secure Is DeepSeek?
Can we trust Chinese models with our personal data?
Feb 8, 2025 • David Willis-Owen
Is Github Copilot Poisoned?
How to test code-suggestion models for Indicators of Compromise
Jan 25, 2025 • David Willis-Owen
AI Poisoning - Is It Really A Threat?
Is the web too big to prevent AI models from being poisoned?
Jan 9, 2025 • David Willis-Owen
AIBlade
Cutting Edge AI Security