Abstract
Cybersecurity awareness training improves knowledge, yet human error continues to drive breaches. AI-enabled attacks such as deepfakes, voice-cloned vishing, and automated spear phishing magnify these risks. This review of 26 studies (2008–2025) introduces a residual-risk framework that measures outcomes beyond average training effectiveness. Residual Insecure Behavior (RIB) captures risky practices that persist after training, while Residual Knowledge Gap (RKG) reflects the knowledge deficits that remain. Across studies, residual risks were substantial: post-training phishing susceptibility often exceeded 10%, and knowledge gaps exceeded 30%. By applying RIB and RKG, researchers can shift their focus from statistical gains to reducing real-world exposure in an AI-driven threat landscape.