
Manuel López Pérez · writeups
LLM Security: Threat Modeling and Prompt Injection
Comprehensive analysis of security threats in Large Language Models (LLMs), attack techniques such as prompt injection, and a practical case study from the A.D.I.C. 7 challenge at CyberH2O CTF.