A Stealthy Hardware Trojan Exploiting the Architectural Vulnerability of Deep Learning Architectures: Input Interception Attack (IIA)

Saved in:
Bibliographic Details
Published in: arXiv.org (Jan 27, 2021), p. n/a
Main Author: Odetola, Tolulope A
Other Authors: Hawzhin Raoof Mohammed, Syed Rafay Hasan
Published:
Cornell University Library, arXiv.org
Subjects: Computer vision, Deep learning, Machine learning, Natural language processing, Hardware, Cloud computing, Accelerators, Interception
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2312073009
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2312073009 
045 0 |b d20210127 
100 1 |a Odetola, Tolulope A 
245 1 |a A Stealthy Hardware Trojan Exploiting the Architectural Vulnerability of Deep Learning Architectures: Input Interception Attack (IIA) 
260 |b Cornell University Library, arXiv.org  |c Jan 27, 2021 
513 |a Working Paper 
520 3 |a Deep learning architectures (DLA) have shown impressive performance in computer vision, natural language processing, and other domains. Many DLA rely on cloud computing for classification because of their high computation and memory requirements. Privacy and latency concerns arising from cloud computing have inspired the deployment of DLA on embedded hardware accelerators. To achieve a short time-to-market and gain access to global expertise, state-of-the-art deployment of DLA on hardware accelerators is outsourced to untrusted third parties. This outsourcing raises security concerns, as hardware Trojans can be inserted into the hardware design of the DLA mapped onto the accelerator. We argue that existing hardware Trojan attacks highlighted in the literature lack a qualitative means of establishing how definite the triggering of the Trojan is. Moreover, most inserted Trojans show an obvious spike in the number of hardware resources utilized on the accelerator when the Trojan is triggered or its payload is active. In this paper, we introduce a hardware Trojan attack called the Input Interception Attack (IIA). In this attack, we make use of the statistical properties of layer-by-layer outputs to ensure that, in addition to being stealthy, the IIA is able to trigger with some measure of definiteness. The attack is evaluated on DLA used to classify the MNIST and CIFAR-10 data sets. The attacked designs utilize up to approximately 2% more LUTs compared to the uncompromised designs. Finally, this paper discusses potential defense mechanisms that could be used to combat such hardware Trojan attacks in hardware accelerators for DLA. 
653 |a Computer vision 
653 |a Deep learning 
653 |a Machine learning 
653 |a Natural language processing 
653 |a Hardware 
653 |a Cloud computing 
653 |a Accelerators 
653 |a Interception 
700 1 |a Hawzhin Raoof Mohammed 
700 1 |a Syed Rafay Hasan 
773 0 |t arXiv.org  |g (Jan 27, 2021), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2312073009/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/1911.00783