A Massively Parallel Implementation of Multilevel Monte Carlo for Finite Element Models

Bibliographic Details
Published in: arXiv.org (May 23, 2023), p. n/a
Main Author: Badia, Santiago
Other Authors: Hampton, Jerrad; Principe, Javier
Published: Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2601723931
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2601723931 
045 0 |b d20230523 
100 1 |a Badia, Santiago 
245 1 |a A Massively Parallel Implementation of Multilevel Monte Carlo for Finite Element Models 
260 |b Cornell University Library, arXiv.org  |c May 23, 2023 
513 |a Working Paper 
520 3 |a The Multilevel Monte Carlo (MLMC) method has proven to be an effective statistical variance-reduction method for Uncertainty Quantification (UQ) in Partial Differential Equation (PDE) models, combining model computations at different levels to create an accurate estimate. Still, the computational complexity of the resulting method is extremely high, particularly for 3D models, which requires advanced algorithms for the efficient exploitation of High Performance Computing (HPC). In this article we present a new implementation of the MLMC method on massively parallel computer architectures, exploiting parallelism within and between the levels of the hierarchy. The numerical approximation of the PDE is performed using the finite element method, but the algorithm is quite general and could be applied to other discretization methods as well, since the focus is on parallel sampling. The two key ingredients of an efficient parallel implementation are a good processor partition scheme and a good scheduling algorithm to assign work to different processors. We introduce a multiple partition of the set of processors that permits the simultaneous execution of different levels, and we develop a dynamic scheduling algorithm to exploit it. Since finding the optimal schedule of distributed tasks on a parallel computer is an NP-complete problem, we propose and analyze a new greedy scheduling algorithm to assign samples, and we show that it is a 2-approximation, the best that may be expected under general assumptions. On top of this result we design a distributed memory implementation using the Message Passing Interface (MPI) standard. Finally, we present a set of numerical experiments illustrating its scalability properties.
653 |a Scheduling 
653 |a Finite element method 
653 |a Message passing 
653 |a Task scheduling 
653 |a Partial differential equations 
653 |a Mathematical analysis 
653 |a Microprocessors 
653 |a Three dimensional models 
653 |a Greedy algorithms 
653 |a Statistical methods 
653 |a Approximation 
653 |a Mathematical models 
653 |a Processors 
653 |a Algorithms 
653 |a Distributed memory 
653 |a Parallel computers 
700 1 |a Hampton, Jerrad 
700 1 |a Principe, Javier 
773 0 |t arXiv.org  |g (May 23, 2023), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2601723931/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2111.11788
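The abstract describes a greedy scheduling algorithm that assigns samples to processors and achieves a 2-approximation of the optimal schedule. As a minimal sketch of that general idea, one can use classic greedy list scheduling, which assigns each task to the currently least-loaded processor and is well known to be a 2-approximation of the optimal makespan. This is an illustrative assumption on my part, not the paper's actual algorithm, which additionally handles MLMC levels and a multiple partition of the processor set:

```python
import heapq

def greedy_schedule(task_costs, n_procs):
    """Greedy list scheduling: assign each task to the least-loaded
    processor. Returns (assignment, makespan). Illustrative sketch only;
    the paper's dynamic scheduler is more elaborate."""
    # Min-heap of (current load, processor id).
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = {}
    for task, cost in enumerate(task_costs):
        load, p = heapq.heappop(heap)      # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + cost, p))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# Example: four sample costs on two processors.
assignment, makespan = greedy_schedule([3.0, 2.0, 2.0, 1.0], 2)
```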