Detecting and Mitigating AI Prompt Injection Attacks in Large Language Models (LLMs). Journal of The Colloquium for Information Systems Security Education, [S. l.], v. 13, n. 1, p. 7, 2026. DOI: 10.53735/cisse.v13i1.227. Available at: https://journal.cisse.info/jcisse/article/view/227. Accessed: 11 May 2026.