“Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs).” 2026. *Journal of The Colloquium for Information Systems Security Education* 13 (1): 7. https://doi.org/10.53735/cisse.v13i1.227.