Reinforcement Learning for Algorithmic Trading in Financial Markets

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Gityforoze, Soheil
Published: ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: In multi-period algorithmic trading, identifying algorithms that are well suited to risk-averse strategies is a challenging task. This study explored the application of model-free reinforcement learning (RL) to algorithmic trading and analyzed the relationship between risk-averse strategies and the implementation of RL algorithms, including Q-learning, Greedy-GQ, and SARSA. The data for this quantitative research comprised one year of E-mini NASDAQ-100 futures (2023-2024). Over 7,500 simulation results substantiated a proof of concept that Q-learning can generate risk-adjusted trading signals in this highly liquid, technology-focused futures market. With an optimized configuration of hyperparameters, including the look-back period, basis function, and reward function, Q-learning delivered nearly twice the returns of the competing RL algorithms. Beyond absolute returns, Q-learning exhibited lower volatility across key risk metrics and outperformed the NASDAQ-100 benchmark by approximately 75 percentage points. These findings suggest that reinforcement learning is a promising artificial intelligence and machine learning framework for alpha-generating strategies in systematic trading.
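The abstract names value-based RL methods (Q-learning, Greedy-GQ, SARSA) applied to trading-signal generation. As a rough, hypothetical illustration of the kind of update involved, the Python sketch below runs a minimal tabular Q-learning loop over a crudely discretized return state and a short/flat/long position on synthetic data; the state design, reward definition, and hyperparameters are assumptions made for illustration, not the dissertation's actual configuration.

import numpy as np

# Illustrative tabular Q-learning for trading signals (assumed setup, not the study's).
# State: sign of the most recent return in {down, flat, up}.
# Action: position in {short, flat, long}, encoded 0/1/2 and mapped to -1/0/+1.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1        # assumed learning rate, discount, exploration

returns = rng.normal(0.0, 0.001, size=10_000)  # synthetic per-period returns as a stand-in

def discretize(r, tol=1e-4):
    """Map a return to a coarse state: 0 = down, 1 = flat, 2 = up."""
    return 0 if r < -tol else (2 if r > tol else 1)

state = discretize(returns[0])
for t in range(1, len(returns)):
    # Epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    position = action - 1                      # -1 short, 0 flat, +1 long
    reward = position * returns[t]             # one-period PnL of the held position
    next_state = discretize(returns[t])
    # Standard off-policy Q-learning (TD) update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)   # learned state-action values; argmax per row gives the trading signal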
ISBN: 9798288882517
Source: ProQuest Dissertations & Theses Global