[1]
2026. Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs). Journal of The Colloquium for Information Systems Security Education. 13, 1 (Mar. 2026), 7. DOI:https://doi.org/10.53735/cisse.v13i1.227.