Abstract
Artificial Intelligence (AI) has become an increasingly powerful tool across many domains, particularly in image classification and object detection. As AI advances, novel methods of deceiving machine learning models, such as adversarial patches, have emerged. These subtle modifications to images can cause objects to be misclassified, posing a substantial challenge to model reliability. In this paper, we present our research findings and a review of the literature on adversarial examples and object detection.
This research builds on prior work by investigating the impact of small adversarial patches on object detection with YOLOv8. We first explore patterns within images and their influence on model accuracy, then conduct a follow-up study evaluating how adversarial patches, particularly those based on animal patterns, affect YOLOv8's ability to detect objects accurately. We also examine how patterns absent from the model's training data impact its performance, aiming to identify vulnerabilities and enhance the robustness of object detection systems.
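To make the evaluation setup concrete, the sketch below illustrates one way such an experiment can be run: overlaying a patch on an image and comparing YOLOv8 detections on the clean and patched versions. The file names, patch placement, and model weights are illustrative assumptions, not the paper's actual experimental configuration.

```python
# A minimal sketch of the kind of evaluation described above, assuming the
# ultralytics YOLOv8 package. "scene.jpg", "patch.png", and the fixed patch
# position are hypothetical placeholders, not the paper's actual setup.
from PIL import Image
from ultralytics import YOLO

def apply_patch(image_path: str, patch_path: str, position=(50, 50)) -> Image.Image:
    """Paste an adversarial patch onto an image at a fixed position."""
    image = Image.open(image_path).convert("RGB")
    patch = Image.open(patch_path).convert("RGB")
    image.paste(patch, position)
    return image

model = YOLO("yolov8n.pt")  # pretrained YOLOv8 nano weights

clean = Image.open("scene.jpg").convert("RGB")
patched = apply_patch("scene.jpg", "patch.png")

# Compare the class labels detected in the clean vs. patched image.
for name, img in (("clean", clean), ("patched", patched)):
    results = model(img, verbose=False)[0]
    labels = [results.names[int(c)] for c in results.boxes.cls]
    print(name, labels)
```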