Self-MCKD: Enhancing the Effectiveness and Efficiency of Knowledge Transfer in Malware Classification

Bibliographic Details
Published in: Electronics vol. 14, no. 6 (2025), p. 1077
Main Author: Jeong, Hyeon-Jin
Other Authors: Lee, Han-Jin; Kim, Gwang-Nam; Choi, Seok-Hwan
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF
Description
Abstract: As malware continues to evolve, AI-based malware classification methods have shown significant promise in improving classification performance. However, these methods substantially increase computational complexity and parameter counts, raising the computational cost of training. Their maintenance cost also grows, since frequent retraining and transfer learning are required to keep pace with evolving malware variants. In this paper, we propose an efficient knowledge distillation technique for AI-based malware classification called Self-MCKD. Self-MCKD transfers output logits that are separated into the target class and the non-target classes; this separation enables efficient knowledge transfer by assigning weighted importance to the target and non-target parts. In addition, Self-MCKD uses small, shallow AI-based malware classifiers as both the teacher and the student model, removing the need for a large, deep teacher. Experimental results on various malware datasets show that Self-MCKD outperforms traditional knowledge distillation techniques in both the effectiveness and the efficiency of its malware classification.
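The logit separation described in the abstract resembles decoupled knowledge distillation, where the teacher's softened distribution is split into a binary target/non-target part and a renormalized distribution over the non-target classes, each weighted separately. A minimal sketch of that idea is below; the function names, the weights `alpha`/`beta`, and the temperature `tau` are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def softened_probs(logits, tau):
    """Temperature-softened softmax over class logits."""
    z = logits / tau
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def decoupled_kd_loss(teacher_logits, student_logits, target,
                      alpha=1.0, beta=2.0, tau=4.0):
    """Sketch of a decoupled KD loss: split the distribution into the
    target class vs. the rest, weight the two KL terms separately."""
    pt = softened_probs(teacher_logits, tau)
    ps = softened_probs(student_logits, tau)

    # Binary distribution: [p(target), p(all non-target)]
    bt = np.array([pt[target], 1.0 - pt[target]])
    bs = np.array([ps[target], 1.0 - ps[target]])
    target_kd = np.sum(bt * np.log(bt / bs))

    # Renormalized distribution over the non-target classes only
    mask = np.arange(len(pt)) != target
    nt = pt[mask] / pt[mask].sum()
    ns = ps[mask] / ps[mask].sum()
    nontarget_kd = np.sum(nt * np.log(nt / ns))

    return alpha * target_kd + beta * nontarget_kd
```

In a self-distillation setting such as the one the abstract describes, the teacher logits would come from a previously trained copy of the same small, shallow classifier rather than from a larger model.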
ISSN:2079-9292
DOI:10.3390/electronics14061077
Source: Advanced Technologies & Aerospace Database