[1] "Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs)," JCISSE, vol. 13, no. 1, p. 7, Mar. 2026, doi: 10.53735/cisse.v13i1.227.