AIBlade
My New Project - InjectPrompt
Check out my new blog for content focused on AI Jailbreaks, Prompt Injections, and System Prompt Leaks
Apr 15, 2025 • David Willis-Owen
Claude Sonnet 3.7 Jailbreak
How to One-Shot Jailbreak Claude Sonnet 3.7 in March 2025
Mar 15, 2025 • David Willis-Owen
Jailbreaking Grok 3 | DeepSeek, ChatGPT, Claude & More
How easy is it to jailbreak frontier LLMs in 2025?
Mar 8, 2025 • David Willis-Owen
10:54
Is GitHub Copilot Poisoned? Part 2
Scaling up my experiment to detect IOCs in larger code models
Feb 22, 2025 • David Willis-Owen
14:51
How Secure Is DeepSeek?
Can we trust Chinese models with our personal data?
Feb 8, 2025 • David Willis-Owen
9:33
Is GitHub Copilot Poisoned?
How to test code-suggestion models for Indicators of Compromise
Jan 25, 2025 • David Willis-Owen
9:19
AI Poisoning - Is It Really A Threat?
Is the web too big to prevent AI models from being poisoned?
Jan 9, 2025 • David Willis-Owen
9:57
AIBlade
Cutting Edge AI Security
© 2026 AIBlade