Inference Plans for Hybrid Particle Filtering

Bibliographic Details
Published in: arXiv.org (Dec 14, 2024), p. n/a
Main Author: Cheng, Ellie Y
Other Authors: Atkinson, Eric; Baudart, Guillaume; Mandel, Louis; Carbin, Michael
Published: Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3095811744
003 UK-CbPIL
022 |a 2331-8422 
024 7 |a 10.1145/3704846  |2 doi 
035 |a 3095811744 
045 0 |b d20241214 
100 1 |a Cheng, Ellie Y 
245 1 |a Inference Plans for Hybrid Particle Filtering 
260 |b Cornell University Library, arXiv.org  |c Dec 14, 2024 
513 |a Working Paper 
520 3 |a Advanced probabilistic programming languages (PPLs) using hybrid particle filtering combine symbolic exact inference and Monte Carlo methods to improve inference performance. These systems use heuristics to partition random variables within the program into variables that are encoded symbolically and variables that are encoded with sampled values, but the heuristics are not necessarily aligned with the developer's performance evaluation metrics. In this work, we present inference plans, a programming interface that enables developers to control the partitioning of random variables during hybrid particle filtering. We further present Siren, a new PPL that enables developers to use annotations to specify inference plans the inference system must implement. To assist developers with statically reasoning about whether an inference plan can be implemented, we present an abstract-interpretation-based static analysis for Siren for determining inference plan satisfiability. We prove the analysis is sound with respect to Siren's semantics. Our evaluation applies inference plans to three different hybrid particle filtering algorithms on a suite of benchmarks. It shows that the control provided by inference plans enables speedups of 1.76x on average and up to 206x to reach a target accuracy, compared to the inference plans implemented by default heuristics; the results also show that inference plans improve accuracy by 1.83x on average and up to 595x with less or equal runtime, compared to the default inference plans. We further show that our static analysis is precise in practice, identifying all satisfiable inference plans in 27 out of the 33 benchmark-algorithm evaluation settings. 
653 |a Semantics 
653 |a Performance evaluation 
653 |a Static code analysis 
653 |a Random variables 
653 |a Probabilistic inference 
653 |a Programming languages 
653 |a Monte Carlo simulation 
653 |a Algorithms 
653 |a Annotations 
653 |a Heuristic 
653 |a Filtration 
653 |a Coding 
653 |a Heuristic methods 
653 |a Benchmarks 
700 1 |a Atkinson, Eric 
700 1 |a Baudart, Guillaume 
700 1 |a Mandel, Louis 
700 1 |a Carbin, Michael 
773 0 |t arXiv.org  |g (Dec 14, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3095811744/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2408.11283
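
The abstract above describes hybrid particle filtering, in which some random variables are kept as symbolic (exact) distributions while others are represented by sampled particle values. The sketch below is not Siren code and does not reproduce the paper's inference-plan interface; it is a minimal, hypothetical Python illustration of one such partition, in the style of a Rao-Blackwellized particle filter: per particle, a discrete regime variable s is sampled, while the latent state x stays symbolic as a Gaussian (mean, variance) updated exactly by Kalman steps. All names (Q, R, P_SWITCH, step) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical switching linear-Gaussian model. Per time step, the
# regime s_t is particle-encoded (sampled); the latent state x_t is
# symbolically encoded as a Gaussian (m, v) and updated exactly.
Q = {0: 0.1, 1: 2.0}   # process-noise variance per regime (assumed)
R = 0.5                # observation-noise variance (assumed)
P_SWITCH = 0.1         # probability of switching regime (assumed)

def step(particles, y):
    """One hybrid particle-filter step: sample s, exact update of x."""
    weights = np.empty(len(particles))
    for i, p in enumerate(particles):
        # Sampled side of the partition: draw the regime transition.
        if rng.random() < P_SWITCH:
            p["s"] = 1 - p["s"]
        q = Q[p["s"]]
        # Symbolic side of the partition: exact Gaussian predict step.
        m_pred, v_pred = p["m"], p["v"] + q
        # Particle weight = marginal likelihood of y under the
        # predicted Gaussian, i.e. N(y; m_pred, v_pred + R).
        s2 = v_pred + R
        weights[i] = np.exp(-0.5 * (y - m_pred) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        # Exact Kalman update of the symbolic state.
        k = v_pred / s2                      # Kalman gain
        p["m"] = m_pred + k * (y - m_pred)   # posterior mean
        p["v"] = (1 - k) * v_pred            # posterior variance
    weights /= weights.sum()
    # Multinomial resampling on the sampled side.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [dict(particles[j]) for j in idx]

particles = [{"s": 0, "m": 0.0, "v": 1.0} for _ in range(100)]
for y in [0.3, 0.1, 4.2, 3.9]:   # toy observations
    particles = step(particles, y)
print(np.mean([p["m"] for p in particles]))  # posterior-mean estimate of x_t
```

Keeping x symbolic marginalizes it out exactly, so the Monte Carlo variance falls only on the sampled regime variable; choosing which variables land on each side of that partition is exactly the decision the paper's inference plans expose to the developer, rather than leaving it to heuristics.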