A Small-Sample Scenario Optimization Scheduling Method Based on Multidimensional Data Expansion

Bibliographic Details
Published in: Algorithms, vol. 18, no. 6 (2025), p. 373
Main Author: Liu, Yaoxian
Other Authors: Zhang, Kaixin; Sun, Yue; Chen, Jingwen; Chen, Junshuo
Published: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: Deep reinforcement learning (DRL) is now widely applied to energy system optimization and scheduling, but DRL methods depend heavily on historical data. Newly commissioned integrated energy systems lack historical operation data, so DRL training samples are insufficient; this readily causes underfitting and inadequate exploration of the decision space and thus reduces the accuracy of the scheduling plan. In addition, conventional data-driven methods struggle to predict renewable energy output accurately when training data are scarce, which further degrades scheduling performance. This paper therefore proposes a small-sample scenario optimization scheduling method based on multidimensional data expansion. First, the daily power curves of PV power plants with measured power are screened on the basis of spatial correlation, meteorological similarity is computed with the multi-kernel maximum mean discrepancy (MK-MMD), and historical renewable output data for the target distributed PV system are generated through capacity conversion. Second, load historical data are generated from existing daily load data of different types using stochastic and simultaneous sampling, yielding the full historical dataset. Next, to address sample imbalance in the small-sample scenario, the scarce samples are augmented by oversampling, and an XGBoost PV output prediction model is established. Finally, the optimal scheduling model is formulated as a Markov decision process and solved with the Deep Deterministic Policy Gradient (DDPG) algorithm. Numerical examples verify the effectiveness of the proposed method.
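
The abstract's first data-expansion step screens source PV plants by meteorological similarity and rescales their measured curves to the target system. The sketch below is an illustrative reading of that step, not the authors' code: the MK-MMD is implemented as a sum of Gaussian kernels over an assumed bandwidth set, and the capacity conversion is a simple capacity-ratio scaling; variable names and data shapes are hypothetical.

```python
"""Minimal sketch: MK-MMD screening of candidate PV plants + capacity conversion."""
import numpy as np

def mk_mmd(X, Y, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    """Squared MK-MMD between two meteorological samples X and Y
    (rows = days, columns = features such as irradiance, temperature),
    using an average of Gaussian kernels over several bandwidths."""
    def kernel_mean(A, B):
        # pairwise squared Euclidean distances between rows of A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.mean([np.exp(-d2 / (2 * s ** 2)).mean() for s in bandwidths])
    return kernel_mean(X, X) + kernel_mean(Y, Y) - 2 * kernel_mean(X, Y)

def expand_target_pv(candidates, target_weather, target_capacity_kw):
    """Pick the most meteorologically similar source plant and rescale its
    measured daily power curves by the capacity ratio (capacity conversion)."""
    best = min(candidates, key=lambda c: mk_mmd(c["weather"], target_weather))
    ratio = target_capacity_kw / best["capacity_kw"]
    return best["daily_power_curves"] * ratio  # pseudo-historical output for the target PV

# Hypothetical usage: two candidate plants with measured power and weather features
rng = np.random.default_rng(0)
candidates = [
    {"capacity_kw": 500.0, "weather": rng.normal(size=(90, 3)),
     "daily_power_curves": rng.random((90, 96)) * 500.0},
    {"capacity_kw": 800.0, "weather": rng.normal(1.0, 1.0, size=(90, 3)),
     "daily_power_curves": rng.random((90, 96)) * 800.0},
]
target_weather = rng.normal(size=(30, 3))
generated = expand_target_pv(candidates, target_weather, target_capacity_kw=200.0)
print(generated.shape)  # (90, 96): generated daily curves for the target system
```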
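For the forecasting step, the abstract combines oversampling of scarce samples with an XGBoost PV output model. The following sketch assumes simple random oversampling of under-represented weather types and a plain XGBoost regressor; the feature set and weather-type labelling are assumptions, not the paper's specification.

```python
"""Minimal sketch: oversample scarce weather-type samples, then fit an XGBoost PV model."""
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

def random_oversample(df, label_col="weather_type", random_state=0):
    """Resample each scarce class up to the size of the largest class."""
    rng = np.random.default_rng(random_state)
    target_n = df[label_col].value_counts().max()
    parts = []
    for _, group in df.groupby(label_col):
        extra = target_n - len(group)
        idx = rng.integers(0, len(group), size=extra)
        parts.append(pd.concat([group, group.iloc[idx]]))
    return pd.concat(parts, ignore_index=True)

# Hypothetical expanded dataset: weather features -> PV power (kW)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "irradiance": rng.random(200) * 1000,
    "temperature": rng.normal(20, 8, 200),
    "humidity": rng.random(200),
    "weather_type": rng.choice(["sunny"] * 8 + ["overcast", "rainy"], size=200),
})
df["pv_power"] = 0.18 * df["irradiance"] + rng.normal(0, 10, 200)

balanced = random_oversample(df)
features = ["irradiance", "temperature", "humidity"]
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(balanced[features], balanced["pv_power"])
print(model.predict(df[features].head()))
```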
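Finally, the scheduling model is cast as a Markov decision process and solved with DDPG. The toy environment below illustrates one possible formulation (state = time, load, PV, storage state of charge; action = continuous battery power; reward = negative grid purchase cost) and uses gymnasium with stable-baselines3 purely as an assumed implementation stack; the paper does not specify its state, action, or cost design in the abstract.

```python
"""Minimal sketch: a toy scheduling MDP solved with DDPG (gymnasium + stable-baselines3)."""
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DDPG

class ToySchedulingEnv(gym.Env):
    """State: [hour, load_kw, pv_kw, soc]; action: battery power in [-1, 1]
    scaled to +/- batt_kw; reward: negative cost of electricity bought from the grid."""
    def __init__(self, load, pv, price, batt_kw=100.0, batt_kwh=400.0):
        super().__init__()
        self.load, self.pv, self.price = load, pv, price
        self.batt_kw, self.batt_kwh = batt_kw, batt_kwh
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)

    def _obs(self):
        return np.array([self.t, self.load[self.t], self.pv[self.t], self.soc],
                        dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.soc = 0, 0.5
        return self._obs(), {}

    def step(self, action):
        p_batt = float(action[0]) * self.batt_kw           # >0 discharge, <0 charge
        self.soc = np.clip(self.soc - p_batt / self.batt_kwh, 0.0, 1.0)
        grid = max(self.load[self.t] - self.pv[self.t] - p_batt, 0.0)
        reward = -self.price[self.t] * grid                # minimize purchase cost
        self.t += 1
        done = self.t >= len(self.load)
        obs = self._obs() if not done else np.zeros(4, dtype=np.float32)
        return obs, reward, done, False, {}

# Hypothetical 24-hour load/PV profiles (kW) and tariff; in practice these would
# come from the expanded historical dataset and the XGBoost PV forecast.
hours = np.arange(24)
load = 80 + 30 * np.sin(hours / 24 * 2 * np.pi)
pv = np.clip(100 * np.sin((hours - 6) / 12 * np.pi), 0, None)
price = np.where((hours >= 8) & (hours <= 21), 0.8, 0.3)

env = ToySchedulingEnv(load, pv, price)
model = DDPG("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)  # short demonstration run
```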
ISSN:1999-4893
DOI:10.3390/a18060373
Source: Engineering Database