On improving generalization in a class of learning problems with the method of small parameters for weakly-controlled optimal gradient systems
| Published in: | arXiv.org (Dec 11, 2024), p. n/a |
|---|---|
| Main author: | |
| Published: | Cornell University Library, arXiv.org |
| Subjects: | |
| Online link: | Citation/Abstract; full text outside of ProQuest |
| Abstract: | In this paper, we provide a mathematical framework for improving generalization in a class of learning problems related to point estimation for modeling high-dimensional nonlinear functions. In particular, we consider a variational problem for a weakly-controlled gradient system, whose control input enters the system dynamics as a coefficient of a nonlinear term scaled by a small parameter. Here, the optimization problem consists of a cost functional, which gauges the quality of the estimated model parameters at a fixed final time w.r.t. the model-validating dataset, while the time evolution of the weakly-controlled gradient system is guided by the model-training dataset and a version of it perturbed with small random noise. Using perturbation theory, we provide results that allow us to solve a sequence of decomposed optimization problems and to aggregate the corresponding approximate optimal solutions, which are sufficient for improving generalization in this class of learning problems. Moreover, we provide an estimate for the rate of convergence of these approximate optimal solutions. Finally, we present numerical results for a typical nonlinear regression problem. |
| ISSN: | 2331-8422 |
| Source: | Engineering Database |
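The abstract's central device is a gradient flow weakly perturbed by a control term scaled by a small parameter ε, whose solution is approximated by a perturbation expansion. A minimal numerical sketch of that idea is below; it does not reproduce the paper's construction, and all concrete choices (a one-dimensional quadratic training loss, the nonlinearity `sin`, a constant control `u`) are illustrative assumptions. It compares the fully simulated trajectory of θ' = -aθ + ε·u·f(θ) against the first-order expansion θ₀(T) + ε·η(T), where η solves the linearized (variational) equation driven by the unperturbed flow.

```python
import numpy as np

# Illustrative 1-D setup (not from the paper): gradient flow on the
# quadratic training loss J(theta) = 0.5*a*theta^2, weakly perturbed
# by eps * u * f(theta) with f(theta) = sin(theta).
a = 2.0           # curvature of the training loss
u = 1.0           # constant control input, for simplicity
T, n = 1.0, 2000  # final time and number of forward-Euler steps
dt = T / n

def f(theta):
    return np.sin(theta)

def simulate(eps, theta0=1.0):
    """Forward-Euler integration of theta' = -a*theta + eps*u*f(theta)."""
    theta = theta0
    for _ in range(n):
        theta += dt * (-a * theta + eps * u * f(theta))
    return theta

def zeroth_and_first_order(theta0=1.0):
    """Integrate the unperturbed flow theta' = -a*theta together with
    the first-order variational equation eta' = -a*eta + u*f(theta)."""
    theta, eta = theta0, 0.0
    for _ in range(n):
        eta += dt * (-a * eta + u * f(theta))   # linearized correction
        theta += dt * (-a * theta)              # zeroth-order flow
    return theta, eta

theta0_T, eta_T = zeroth_and_first_order()
for eps in (0.1, 0.05, 0.025):
    exact = simulate(eps)
    approx = theta0_T + eps * eta_T  # first-order small-parameter expansion
    print(eps, abs(exact - approx))  # residual shrinks roughly like eps^2
```

Halving ε roughly quarters the residual, which is the O(ε²) behavior one expects from a first-order expansion; the decomposition into an unperturbed problem plus linear correction equations is the generic mechanism behind solving a sequence of simpler subproblems and aggregating their solutions.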