CodeWars: Using LLMs for Vulnerability Analysis in Cybersecurity Education
Cover - CISSE Volume 13, Issue 1

Keywords

Cybersecurity Education
Vulnerability Analysis
Secure Software Development
Large Language Models
Cybersecurity Pedagogy
GenAI

How to Cite

CodeWars: Using LLMs for Vulnerability Analysis in Cybersecurity Education. (2026). Journal of The Colloquium for Information Systems Security Education, 13(1), 8. https://doi.org/10.53735/cisse.v13i1.224

Abstract

Large Language Models (LLMs) are increasingly explored as software development tools and could also serve as a supplementary source of varied code examples for pedagogical use. While they can improve productivity, their ability to produce code that is both secure and compliant with Secure Software Development (SSD) practices remains uncertain, raising concerns about their role in cybersecurity education. If LLMs are to be integrated effectively, students must be trained to critically evaluate generated code for correctness and vulnerabilities, prompting an important question: how can LLM-generated code be effectively and securely incorporated into cybersecurity education for teaching vulnerability analysis? This paper introduces CodeWars, a novel teaching methodology that combines LLM-generated and human-written code to examine how students engage with vulnerability detection tasks. CodeWars was implemented as a pilot study with 32 students at Cardiff University and the University of Waikato, in which students analyzed flawed, secure, and mixed-origin code samples. By comparing student approaches, analyses, and perceptions, the study provides insights into how vulnerabilities are detected, how code origins are distinguished, and how SSD practices are applied. Our analysis of student feedback and interviews indicates that CodeWars produced structured and accessible code, simplifying vulnerability identification and offering educators a means to efficiently develop varied SSD teaching applications. These findings illuminate both the advantages and the constraints of employing LLMs in secure coding, and position this study as a foundational step toward the responsible adoption of AI in cybersecurity education.


Open Access License Notice:

This article is © its author(s) and is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). Beginning with Volume 13 (2026), this license is included directly within all published PDFs. For earlier articles, a cover page has been added to indicate the correct licensing terms. Any legacy copyright or pricing statements appearing within the PDF reflect prior print production workflows and do not represent the Journal’s current open access policy. For full details, please see the Journal’s License Terms.