Training Neural Networks to Perform Structured Prediction Task

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2024)
Main author: Sargordi, Maziar
Subjects: Computer science; Artificial intelligence
Online access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3254319166
003 UK-CbPIL
020 |a 9798293883752 
035 |a 3254319166 
045 2 |b d20240101  |b d20241231 
084 |a 66569  |2 nlm 
100 1 |a Sargordi, Maziar 
245 1 |a Training Neural Networks to Perform Structured Prediction Task 
260 |b ProQuest Dissertations & Theses  |c 2024 
513 |a Dissertation/Thesis 
520 3 |a Despite their numerous successes on challenging tasks, deep neural networks still struggle to learn combinatorial structure, where multiple discrete outputs are interconnected by constraints, especially when there is not enough data for the model to learn the output structure. Constraint programming, a non-learning algorithmic paradigm, focuses on structure: it has a long and successful history of identifying frequently recurring combinatorial structures and of developing sophisticated algorithms to extract information from them. In particular, we are interested in the relative frequency of a given variable-value assignment within such a combinatorial structure. The constraint programming with belief propagation framework propagates these relative frequencies through a constraint programming model to approximate the marginal probability mass function of each variable. These estimated marginal probabilities are used as penalties within the loss function, improving the neural network’s sample efficiency. In this thesis, we propose to train a neural network to generate output that conforms to a combinatorial structure expressed as a constraint programming model. This is achieved by computing a loss function that incorporates marginals determined by a constraint programming with belief propagation solver. We argue that this approach offers a more natural integration of constraint programming and neural networks. We provide empirical evidence that training the model in this way significantly improves its performance, especially when limited data is available. Our results on the Partial Latin Square problem show consistent improvement in model accuracy over existing methods. 
653 |a Computer science 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2024) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3254319166/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3254319166/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
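The training scheme summarized in the abstract — a standard per-variable loss augmented with penalties derived from CP-BP marginals — can be sketched as follows. This is a minimal illustration only: the function name, the KL-divergence form of the penalty, and the `penalty_weight` coefficient are assumptions, not the thesis's exact formulation, and the CP-BP marginals are assumed to be supplied by an external solver.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def structured_loss(logits, targets, cp_bp_marginals, penalty_weight=1.0):
    """Cross-entropy plus a KL penalty pulling the network's per-variable
    distributions toward marginals estimated by a CP-BP solver.

    logits:          (n_vars, n_values) raw network outputs
    targets:         (n_vars,) ground-truth value indices
    cp_bp_marginals: (n_vars, n_values) marginal pmfs from CP-BP (assumed given)
    """
    probs = softmax(logits)
    n = logits.shape[0]
    # Standard cross-entropy on the ground-truth assignments.
    ce = -np.mean(np.log(probs[np.arange(n), targets] + 1e-12))
    # KL(CP-BP marginals || network distribution), averaged over variables:
    # penalizes outputs that disagree with the combinatorial structure.
    kl = np.mean(np.sum(
        cp_bp_marginals * (np.log(cp_bp_marginals + 1e-12)
                           - np.log(probs + 1e-12)),
        axis=-1))
    return ce + penalty_weight * kl
```

When data is scarce, the KL term supplies a structural training signal even for variables whose ground truth the network cannot yet predict, which is consistent with the abstract's claim of improved sample efficiency.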