Navigating AI conformity: A design framework to assess fairness, explainability, and performance

Saved in:
Bibliographic Details
Published in: Electronic Markets, vol. 35, no. 1 (Dec 2025), p. 24
Main Author: von Zahn, Moritz
Other Authors: Zacharias, Jan; Lowin, Maximilian; Chen, Johannes; Hinz, Oliver
Published: Springer Nature B.V.
Online Access: Citation/Abstract
Full Text
Full Text - PDF
Description
Abstract: Artificial intelligence (AI) systems create value but can pose substantial risks, particularly due to their black-box nature and potential bias towards certain individuals. In response, recent legal initiatives require organizations to ensure that their AI systems conform to overarching principles such as explainability and fairness. However, conducting such conformity assessments poses significant challenges for organizations, including a lack of skilled experts and ambiguous guidelines. In this paper, the authors help organizations by providing a design framework for assessing the conformity of AI systems. Specifically, building upon design science research, the authors conduct expert interviews, derive design requirements and principles, instantiate the framework in an illustrative software artifact, and evaluate it in five focus group sessions. The artifact is designed both to enable a fast, semi-automated assessment of principles such as fairness and explainability and to facilitate communication between AI owners and third-party stakeholders (e.g., regulators). The authors provide researchers and practitioners with insights from the interviews along with design knowledge for AI conformity assessments, which may prove particularly valuable in light of upcoming regulations such as the European Union AI Act.
ISSN: 1019-6781; 1422-8890
DOI: 10.1007/s12525-025-00770-2
Source: ABI/INFORM Global