Human Factors Engineering in Explainable AI: Putting People First

Bibliographic Details
Published in: International Conference on Cyber Warfare and Security (Mar 2025), p. 313
Main author: Nobles, Calvin
Published by: Academic Conferences International Limited
Description
Abstract: [...]Section 6 presents the conclusion, summarizing key insights and implications. 2. Explainability is typically unnecessary in two key scenarios: (1) when the outcomes have minimal impact and carry no significant consequences, and (2) when the problem is well understood and the system's decisions are considered reliable, as in applications such as advertisement systems and postal code sorting (Adadi & Berrada, 2018; Doshi-Velez & Kim, 2017). [...]evaluating contexts where explanations and interpretations offer meaningful value (Adadi & [...]explanation accuracy requires correctly representing the processes leading to the system's outputs and maintaining fidelity to the AI model's operations (Phillips et al., 2021). [...]the knowledge limits principle asserts that the system should recognize and signal when it is functioning beyond its design parameters or lacks sufficient confidence in its output, safeguarding against inappropriate or unreliable responses in uncertain conditions (Phillips et al., 2021). [...]industries may favor less accurate but more interpretable models.
Source: Political Science Database
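
The knowledge-limits principle summarized in the abstract can be operationalized as a confidence-gated prediction wrapper. The Python sketch below is illustrative only and is not from the paper: the function name predict_with_knowledge_limits, the 0.8 threshold, and the stand-in models are hypothetical, and a max-probability cutoff is just one common way to detect low-confidence outputs.

import numpy as np

def predict_with_knowledge_limits(predict_proba, x, threshold=0.8):
    """Answer only when confident; otherwise signal a knowledge limit.

    predict_proba: any callable returning class probabilities for input x.
    threshold: illustrative confidence cutoff, not a value from the paper.
    """
    probs = np.asarray(predict_proba(x), dtype=float)
    confidence = float(probs.max())
    if confidence < threshold:
        # The system flags that it lacks sufficient confidence instead of
        # emitting a potentially unreliable answer (knowledge limits).
        return {"label": None, "confidence": confidence, "status": "abstain"}
    return {"label": int(probs.argmax()), "confidence": confidence, "status": "ok"}

# Toy usage with stand-in "models" that return fixed probabilities:
print(predict_with_knowledge_limits(lambda x: [0.55, 0.45], x=None))  # abstains
print(predict_with_knowledge_limits(lambda x: [0.97, 0.03], x=None))  # answers

Routing the abstain status to a human reviewer or fallback process is one way such a signal can, in the abstract's words, safeguard against inappropriate or unreliable responses in uncertain conditions.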