Towards 3D Acceleration for low-power Mixture-of-Experts and Multi-Head Attention Spiking Transformers

Saved in:
Bibliographic Details
Container / Database: arXiv.org (Dec 7, 2024), p. n/a
Main Author: Xu, Boxun
Other Authors: Hwang, Junyoung; Vanna-iampikul, Pruek; Yin, Yuxuan; Lim, Sung Kyu; Li, Peng
Published in:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest
Description
Abstract: Spiking Neural Networks (SNNs) provide a brain-inspired, event-driven mechanism that is believed to be critical to unlocking energy-efficient deep learning. The mixture-of-experts approach mirrors the parallel distributed processing of nervous systems, introducing conditional computation policies and expanding model capacity without scaling up the number of computational operations. Additionally, spiking mixture-of-experts self-attention mechanisms enhance representation capacity, effectively capturing diverse patterns of entities and dependencies between visual or linguistic tokens. However, there is currently a lack of hardware support for the highly parallel distributed processing required by spiking transformers, which embody a brain-inspired computation paradigm. This paper introduces the first 3D hardware architecture and design methodology for Mixture-of-Experts and Multi-Head Attention spiking transformers. By leveraging 3D integration with memory-on-logic and logic-on-logic stacking, we explore such brain-inspired accelerators with spatially stackable circuitry, demonstrating significant improvements in energy efficiency and latency compared to conventional 2D CMOS integration.
ISSN: 2331-8422
Source: Engineering Database
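The abstract's claim that mixture-of-experts expands model capacity "without scaling up the number of computational operations" follows from conditional routing: each token is dispatched to only one (or a few) experts, so per-token work is independent of the total expert count. The following is a minimal, self-contained sketch of that idea for a single timestep of spiking (binary) activations; it is an illustrative assumption, not the paper's accelerator design, and all sizes, names, and the top-1 routing choice are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen only for illustration.
num_tokens, d_model, d_hidden, num_experts = 8, 16, 32, 4
threshold = 1.0  # firing threshold of the integrate-and-fire output neurons

# Binary spike inputs: one timestep of a spiking transformer's token activations.
spikes_in = (rng.random((num_tokens, d_model)) < 0.2).astype(np.float32)

# Dense router and expert weights (only the activations are binary spikes).
router_w = rng.standard_normal((d_model, num_experts)).astype(np.float32)
expert_w = rng.standard_normal((num_experts, d_model, d_hidden)).astype(np.float32)

# Conditional computation: each token is dispatched to its top-1 expert,
# so multiply-accumulate work per token stays constant even as the number
# of experts (and hence model capacity) grows.
expert_ids = np.argmax(spikes_in @ router_w, axis=-1)

spikes_out = np.zeros((num_tokens, d_hidden), dtype=np.float32)
for e in range(num_experts):
    tok = np.where(expert_ids == e)[0]
    if tok.size == 0:
        continue  # this expert is idle for the current batch of tokens
    membrane = spikes_in[tok] @ expert_w[e]                       # accumulate synaptic input
    spikes_out[tok] = (membrane >= threshold).astype(np.float32)  # emit a spike where threshold is crossed

print("tokens per expert:", np.bincount(expert_ids, minlength=num_experts))
print("output spike rate:", spikes_out.mean())
```

Because only the experts that actually receive tokens do any work, the loop above mirrors the parallel, distributed, event-driven processing the paper targets in hardware; the 3D memory-on-logic and logic-on-logic stacking it proposes addresses how such independent expert and attention-head computations are laid out physically, which this sketch does not model.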