International Journal of
Information and Education Technology

Editor-In-Chief: Prof. Jon-Chao Hong
Frequency: Monthly
ISSN: 2010-3689 (Online)
E-mail: editor@ijiet.org
Publisher: IACSIT Press
 

Open Access | CiteScore: 3.2

IJIET 2026 Vol.16(3): 779-788
doi: 10.18178/ijiet.2026.16.3.2550

Malware Analysis Education Meets LLMs: Understanding Student Use of LLMs in Malware Analysis Education

Orçun Çetin* and Nazlı Bıyıklı
Faculty of Engineering and Natural Sciences, Sabancı University, Istanbul, Turkey
Email: orcun.cetin@sabanciuniv.edu (O.C.); nazlibiyikli@sabanciuniv.edu (N.B.)
*Corresponding author

Manuscript received August 14, 2025; revised September 1, 2025; accepted November 6, 2025; published March 17, 2026

Abstract—Large Language Models (LLMs) are increasingly used in cybersecurity, both as practical tools in real-world tasks like penetration testing and reverse engineering, and as educational aids for students learning complex analysis techniques. While recent research highlights their potential to automate code analysis, deobfuscation, and threat detection, less is known about how students actually use these models during malware analysis courses. To address this gap, we conducted a survey of 37 students enrolled in university-level malware analysis courses. Our findings show that all participants reported using LLMs, primarily for assignments and labs (70.2%) and to better understand course content (59.4%). Students primarily analyzed outputs from Interactive Disassembler Pro (IDA Pro) (83.7%), followed by OllyDbg and Wireshark (43.2%). They mainly used LLMs for advanced static analysis, especially for disassembled code interpretation (37.8%). While 59.4% of students reported no major issues when using LLMs, 27% encountered refusals to respond, primarily due to ethical safeguards built into the models, and others noted inaccurate or overly generic responses and token-size limits. In terms of satisfaction, 67.5% of students reported positive experiences with LLMs, and 81% indicated they were likely to continue using them for cybersecurity-related tasks in the future. These findings suggest the need for responsible integration of LLMs into cybersecurity education through lecturer guidance, ethical transparency, and effective assessment design. Overall, they highlight both the strengths and limitations of LLMs in supporting advanced technical learning.

Keywords—malware analysis, large language model, education, Artificial Intelligence (AI) in cybersecurity, generative AI



Cite: Orçun Çetin and Nazlı Bıyıklı, "Malware Analysis Education Meets LLMs: Understanding Student Use of LLMs in Malware Analysis Education," International Journal of Information and Education Technology, vol. 16, no. 3, pp. 779-788, 2026.


Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).

