Abstract
This paper explores the evolving threat of deepfakes in the context of insider threats, particularly how advanced persistent threats (APTs) leverage AI-generated audio and video to impersonate job applicants and gain access to sensitive systems. While deepfakes have legitimate applications in entertainment, education, and business, they are increasingly weaponized for deception and cyber intrusion. The paper outlines recent incidents, assesses technical vulnerabilities, and evaluates current risk management frameworks such as the NIST Risk Management Framework (RMF). It concludes with policy and technology recommendations to strengthen detection and prevention, especially during remote hiring and onboarding.