Training Neural Networks to Perform Structured Prediction Task

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2024)
Main Author: Sargordi, Maziar
Published: ProQuest Dissertations & Theses
Subjects:
Description
Abstract: Despite their many successes on challenging tasks, deep neural networks still struggle to learn combinatorial structure, in which multiple discrete outputs have interrelated values governed by constraints, especially when there is not enough data for the model to learn the output structure. Constraint programming, a non-learning approach, focuses on structure: it has a long and successful history of recognizing frequently recurring combinatorial structures and of developing sophisticated algorithms to extract information from them. In particular, we are interested in the relative frequency of a given variable-value assignment within such a combinatorial structure. The constraint programming with belief propagation (CP-BP) framework generalizes this idea by propagating these relative frequencies through a constraint programming model to approximate the marginal probability mass function of each variable. These estimated marginals are used as penalties within the loss function, improving the neural network's learning and sample efficiency. In this thesis, we propose to train a neural network to generate output that conforms to a combinatorial structure expressed as a constraint programming model. This is achieved by computing a loss function that incorporates marginals determined by a constraint programming with belief propagation solver. We argue that this approach offers a more natural integration of constraint programming and neural networks. We provide empirical evidence that training a model in this way significantly improves its performance, especially when limited data is available. Our results on the Partial Latin Square problem show a consistent improvement in model accuracy over existing methods.
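
To make the training setup described in the abstract concrete, below is a minimal sketch, under stated assumptions, of how marginals produced by a CP-BP solver could be folded into a neural network's loss. The tensor names, shapes, and the particular combination of a cross-entropy term with a weighted KL penalty are illustrative assumptions, not the thesis's exact formulation; the CP-BP solver itself is not shown and is assumed to supply precomputed marginal distributions.

```python
import torch
import torch.nn.functional as F

def structured_loss(logits, targets, bp_marginals, lam=1.0):
    """Hypothetical loss sketch: supervised cross-entropy on the discrete
    outputs plus a penalty pulling the network's per-variable distributions
    toward marginals estimated by a CP-BP solver.

    logits:       (batch, n_vars, n_vals) raw network outputs
    targets:      (batch, n_vars) ground-truth value indices
    bp_marginals: (batch, n_vars, n_vals) marginal p.m.f.s from CP-BP
    lam:          weight of the marginal penalty term (assumed hyperparameter)
    """
    # Standard supervised term over all discrete output variables.
    ce = F.cross_entropy(logits.flatten(0, 1), targets.flatten())

    # Penalty term: KL(bp_marginals || predicted distribution), encouraging
    # outputs consistent with the combinatorial structure of the CP model.
    log_probs = F.log_softmax(logits, dim=-1)
    kl = F.kl_div(log_probs, bp_marginals, reduction="batchmean")

    return ce + lam * kl
```

On the Partial Latin Square problem, for instance, each cell of an n-by-n grid would correspond to one output variable and the n candidate symbols to its values, so `logits` and `bp_marginals` would have shape (batch, n*n, n).
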
ISBN:9798293883752
Source: ProQuest Dissertations & Theses Global