Self-MCKD: Enhancing the Effectiveness and Efficiency of Knowledge Transfer in Malware Classification

Bibliographic Details
Published in: Electronics, vol. 14, no. 6 (2025), p. 1077
Main author: Jeong, Hyeon-Jin
Other authors: Lee, Han-Jin; Kim, Gwang-Nam; Choi, Seok-Hwan
Publisher: MDPI AG
Description
Abstract: As malware continues to evolve, AI-based malware classification methods have shown significant promise in improving malware classification performance. However, these methods lead to a substantial increase in computational complexity and in the number of parameters, raising the computational cost of training. Moreover, their maintenance cost also grows, as frequent retraining and transfer learning are required to keep pace with evolving malware variants. In this paper, we propose an efficient knowledge distillation technique for AI-based malware classification methods called Self-MCKD. Self-MCKD transfers output logits that are separated into the target class and the non-target classes. With this separation of the output logits, Self-MCKD enables efficient knowledge transfer by assigning weighted importance to the target class and the non-target classes. In addition, Self-MCKD uses small, shallow AI-based malware classifiers as both the teacher and student models, removing the need for a large, deep teacher model. Experimental results on various malware datasets show that Self-MCKD outperforms traditional knowledge distillation techniques in both the effectiveness and the efficiency of malware classification.
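The logit-separation scheme described in the abstract resembles decoupled logit distillation: the softened class probabilities are split into a binary target-vs-rest part and a renormalized non-target part, each weighted by its own coefficient. A minimal NumPy sketch under that reading follows; the hyperparameters `alpha`, `beta`, and `T` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def separated_kd_loss(teacher_logits, student_logits, target,
                      alpha=1.0, beta=2.0, T=4.0):
    """Sketch of a target/non-target-separated distillation loss.

    alpha weights the target-class (binary) term, beta the non-target
    term, and T is the softmax temperature -- all hypothetical here.
    """
    pt = softmax(teacher_logits / T)
    ps = softmax(student_logits / T)

    # Binary distribution: probability of the target class vs. everything else.
    bt = np.array([pt[target], 1.0 - pt[target]])
    bs = np.array([ps[target], 1.0 - ps[target]])
    target_kl = np.sum(bt * np.log(bt / bs))

    # Non-target distribution, renormalized over the remaining classes.
    mask = np.arange(len(pt)) != target
    nt = pt[mask] / pt[mask].sum()
    ns = ps[mask] / ps[mask].sum()
    nontarget_kl = np.sum(nt * np.log(nt / ns))

    return alpha * target_kl + beta * nontarget_kl
```

Because the two KL terms are weighted independently, the non-target ("dark knowledge") signal can be emphasized even when the teacher is very confident in the target class, which is the usual motivation for this kind of separation.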
ISSN: 2079-9292
DOI: 10.3390/electronics14061077
Source: Advanced Technologies & Aerospace Database