Analyzing the Impact of Approximate Arithmetic on Deep Neural Network Predictions

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Garcia, Johnatan
Published: ProQuest Dissertations & Theses
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: In recent times, artificial intelligence has entered our daily lives, helping us solve complicated problems. Some of these problems are large and complex, requiring large models, and as models grow in complexity they demand more computation and energy to train and test. Their execution relies on floating-point arithmetic, whose finite precision means that many computations are not exact; when this happens, computers are forced to round or approximate. Several number formats address this trade-off: single precision offers 24 binary digits of precision, double precision offers 53 bits, and small formats like FP8 may have only 3 or 4 bits. Choosing the right format can drastically reduce the resources needed and allows the precision to be raised or lowered depending on the model's performance.

As it propagates through the model, the error caused by rounding compounds across the different layers and may affect the model's final prediction. If we can analyze these rounding errors, we can then increase or decrease the model's precision to better balance resources and predictions; if we observe almost no error, we can reduce the precision and save time and memory.

In this work, we contribute a software tool that uses the PyTorch C++ API to load models and analyze the impact of the rounding error they produce. We tested our software not only on standard feed-forward models, but on deep learning models as well. The tool is built on our own implementation of a tensor class that allows custom floating-point operations to be performed. With this class, we can compute the relative error, the absolute error, and upper and lower bounds on where the final answer may lie.
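To make the idea above concrete, here is a minimal standalone C++ sketch, not the dissertation's actual software: it emulates a p-bit significand by rounding every arithmetic result, runs a toy two-layer linear network, and reports the absolute and relative error against a double-precision reference. The function names, the weights, and the choice of p values are illustrative assumptions, not taken from the thesis.

```cpp
// Minimal sketch (illustrative, not the dissertation's tool): emulate a
// reduced-precision significand and measure the rounding error it induces.
#include <cmath>
#include <cstdio>
#include <vector>

// Round x to p significand bits with round-to-nearest.
// p = 24 mimics single precision, p = 53 double precision,
// and p = 4 an FP8-like format (hypothetical choices for this demo).
double round_to_precision(double x, int p) {
    if (x == 0.0 || !std::isfinite(x)) return x;
    int e;
    double m = std::frexp(x, &e);  // decompose x = m * 2^e, 0.5 <= |m| < 1
    return std::ldexp(std::nearbyint(std::ldexp(m, p)), e - p);
}

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// One dense layer y = W x, rounding every multiply and every add to p bits,
// so the rounding error accumulates operation by operation.
Vec layer(const Mat& W, const Vec& x, int p) {
    Vec y(W.size(), 0.0);
    for (size_t i = 0; i < W.size(); ++i)
        for (size_t j = 0; j < x.size(); ++j)
            y[i] = round_to_precision(
                       y[i] + round_to_precision(W[i][j] * x[j], p), p);
    return y;
}

int main() {
    // Toy two-layer network with made-up weights.
    Mat W1 = {{0.3, -1.2}, {2.1, 0.7}};
    Mat W2 = {{1.1, 0.4}, {-0.6, 0.9}};
    Vec x  = {0.5, -0.25};

    Vec ref = layer(W2, layer(W1, x, 53), 53);  // double-precision reference
    Vec low = layer(W2, layer(W1, x, 4), 4);    // FP8-like 4-bit significand

    for (size_t i = 0; i < ref.size(); ++i) {
        double abs_err = std::fabs(ref[i] - low[i]);
        double rel_err = abs_err / std::fabs(ref[i]);
        std::printf("y[%zu]: ref=%.9g  low=%.9g  abs=%.3e  rel=%.3e\n",
                    i, ref[i], low[i], abs_err, rel_err);
    }
    return 0;
}
```

Rounding after every multiply and every add is what makes the error compound layer by layer, as the abstract describes; the same rounding hook is the natural place where interval-style upper and lower bounds on the final answer could be attached.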
ISBN:9798290993232
Source: ProQuest Dissertations & Theses Global