Cybersecurity Threats and Mitigation Strategies in AI Applications
Cover - CISSE Volume 12, Issue 1

Keywords

AI security
cybersecurity
cyber threats
generative AI
explainable AI
data privacy

How to Cite

Cybersecurity Threats and Mitigation Strategies in AI Applications. (2025). Journal of The Colloquium for Information Systems Security Education, 12(1), 7. https://doi.org/10.53735/cisse.v12i1.199

Abstract

The integration of artificial intelligence (AI) into daily life and critical infrastructure has elevated the importance of addressing cybersecurity concerns within AI applications. While AI systems offer numerous benefits, such as enhanced efficiency, automation, and improved decision-making, they also introduce novel vulnerabilities and threats, so ensuring their security and reliability is crucial. This paper investigates key cybersecurity challenges associated with AI, including data privacy, integrity, adversarial attacks, and the ethical implications of AI in security. Additionally, it examines the role of Shapley Additive exPlanations (SHAP)-based explainable AI in promoting transparency, allowing for greater interpretability of AI models and insights into their decision-making processes.
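To make the SHAP idea mentioned above concrete: SHAP assigns each input feature a Shapley value, its average marginal contribution to the model's output over all feature orderings. The following minimal sketch computes exact Shapley values for a hypothetical toy model (the `model` function and the zero baseline are illustrative assumptions, not from the paper; production SHAP tooling approximates this computation for large models).

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical toy model: two linear terms plus one interaction term.
    return 2 * x[0] + 3 * x[1] + x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: features absent from a coalition S are
    replaced by their baseline value."""
    n = len(x)

    def value(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline),
# so every bit of the prediction is accounted for by some feature.
```

The efficiency property shown in the final comment is what makes such attributions useful for auditing an AI system's decisions: each prediction is fully decomposed into per-feature contributions.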


Open Access License Notice:

This article is © its author(s) and is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). Beginning with Volume 13 (2026), this license is included directly within all published PDFs. For earlier articles, a cover page has been added to indicate the correct licensing terms. Any legacy copyright or pricing statements appearing within the PDF reflect prior print production workflows and do not represent the Journal’s current open access policy. For full details, please see the Journal’s License Terms.