(1) Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs). JCISSE 2026, 13 (1), 7. https://doi.org/10.53735/cisse.v13i1.227.