Model-Free Deep Reinforcement Learning in Software-Defined Networks

Bibliographic Details
Published in: arXiv.org (Sep 3, 2022)
Main Author: Borchjes, Luke
Other Authors: Nyirenda, Clement; Leenen, Louise
Published: Cornell University Library, arXiv.org
Description
Abstract: This paper compares two deep reinforcement learning approaches to cyber security in software-defined networking. Neural Episodic Control to Deep Q-Network is implemented and compared with Double Deep Q-Networks. Both algorithms are implemented in a format similar to that of a zero-sum game. A two-tailed t-test is applied to the game results, namely the number of turns the defender takes to win, and a further comparison is made of the agents' game scores in their respective games. The analysis determines which algorithm performs better in the game and whether the difference between them is statistically significant, i.e., whether one should be preferred over the other. It was found that there is no statistically significant difference between the two approaches.
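
To illustrate the kind of two-tailed t-test comparison the abstract describes, a minimal sketch in Python follows. The turn counts, the use of scipy.stats.ttest_ind, and the 0.05 significance level are illustrative assumptions, not details taken from the paper:

# Minimal sketch of a two-tailed independent-samples t-test on defender win times.
# The turn counts below are hypothetical placeholders, not the paper's data.
import numpy as np
from scipy import stats

# Hypothetical numbers of turns the defender needed to win under each algorithm.
nec2dqn_turns = np.array([12, 15, 11, 14, 13, 16, 12, 15])
ddqn_turns = np.array([13, 14, 12, 15, 14, 13, 15, 14])

# scipy's ttest_ind is two-tailed by default; equal_var=False uses Welch's t-test.
t_stat, p_value = stats.ttest_ind(nec2dqn_turns, ddqn_turns, equal_var=False)

alpha = 0.05  # conventional significance level (an assumption, not from the paper)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: the mean turn counts differ significantly.")
else:
    print("Fail to reject H0: no statistically significant difference.")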
ISSN: 2331-8422
Source: Engineering Database