Abstract
AI is being interconnected with vital systems at an exponential rate and has been described as the greatest shift in technology since the invention of the Internet. However, the emergence of AI also introduces new critical vulnerabilities into the technology sector. This research discusses the types of prompt injection attacks to which AI can be subjected, what they target, and the possible repercussions of prompt injection. To counteract these attacks, solutions for detecting different types of prompt injection are also discussed, offering mitigations for attacks that can expose critical data, along with the trade-offs between these solutions. This research aims to expose the security issues involving prompt injection that arise from the rapid deployment of experimental AI and how to prevent them.
Open Access License Notice:
This article is © its author(s) and is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). Beginning with Volume 13 (2026), this license is included directly within all published PDFs. For earlier articles, a cover page has been added to indicate the correct licensing terms. Any legacy copyright or pricing statements appearing within the PDF reflect prior print production workflows and do not represent the Journal’s current open access policy. For full details, please see the Journal’s License Terms.