Self-MCKD: Enhancing the Effectiveness and Efficiency of Knowledge Transfer in Malware Classification

Bibliographic Details
Published in: Electronics vol. 14, no. 6 (2025), p. 1077
Main Author: Jeong, Hyeon-Jin
Other Authors: Lee, Han-Jin; Kim, Gwang-Nam; Choi, Seok-Hwan
Published: MDPI AG
Description
Abstract: As malware continues to evolve, AI-based malware classification methods have shown significant promise in improving classification performance. However, these methods substantially increase computational complexity and parameter counts, raising the computational cost of training. Their maintenance cost also grows, as frequent retraining and transfer learning are required to keep pace with evolving malware variants. In this paper, we propose an efficient knowledge distillation technique for AI-based malware classification methods called Self-MCKD. Self-MCKD transfers output logits that are separated into the target class and the non-target classes; this separation enables efficient knowledge transfer by assigning weighted importance to the target class and the non-target classes. Self-MCKD also uses small, shallow AI-based malware classifiers as both the teacher and the student models, removing the need for a large, deep teacher model. Experimental results on various malware datasets show that Self-MCKD outperforms traditional knowledge distillation techniques in both the effectiveness and the efficiency of malware classification.
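The logit separation described in the abstract resembles decoupled knowledge distillation, where the KL divergence between teacher and student is split into a target-class term and a renormalized non-target term, each with its own weight. The sketch below illustrates that idea in plain Python; the weight names `alpha`/`beta`, the temperature value, and the exact loss form are illustrative assumptions, not details taken from the paper.

```python
import math

def softmax(logits, temp=4.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def decoupled_kd_loss(teacher_logits, student_logits, target,
                      alpha=1.0, beta=2.0, temp=4.0):
    """Illustrative decoupled-KD loss: one KL term for the binary
    target-vs-rest split and one for the renormalized distribution
    over non-target classes, weighted separately (alpha, beta, and
    temp are placeholder values, not from the paper)."""
    pt = softmax(teacher_logits, temp)
    ps = softmax(student_logits, temp)
    # Binary split: probability mass on the target class vs. all others.
    bt = [pt[target], 1.0 - pt[target]]
    bs = [ps[target], 1.0 - ps[target]]
    target_term = kl(bt, bs)
    # Distribution over the non-target classes only, renormalized.
    nt = [p for i, p in enumerate(pt) if i != target]
    ns = [p for i, p in enumerate(ps) if i != target]
    zt, zs = sum(nt), sum(ns)
    nt = [p / zt for p in nt]
    ns = [p / zs for p in ns]
    nontarget_term = kl(nt, ns)
    return alpha * target_term + beta * nontarget_term
```

With identical teacher and student logits both KL terms vanish, so the loss is zero; weighting `beta` above `alpha` emphasizes the "dark knowledge" carried by the non-target classes.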
ISSN:2079-9292
DOI:10.3390/electronics14061077
Source: Advanced Technologies & Aerospace Database