Game Theory Meets Explainable AI: An Enhanced Approach to Understanding Black Box Models Through Shapley Values

Bibliographic Details
Published in: International Journal of Advanced Computer Science and Applications vol. 16, no. 7 (2025)
First Author: PDF
Publisher: Science and Information (SAI) Organization Limited
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3240918457
003 UK-CbPIL
022 |a 2158-107X 
022 |a 2156-5570 
024 7 |a 10.14569/IJACSA.2025.0160770  |2 doi 
035 |a 3240918457 
045 2 |b d20250101  |b d20251231 
100 1 |a PDF 
245 1 |a Game Theory Meets Explainable AI: An Enhanced Approach to Understanding Black Box Models Through Shapley Values 
260 |b Science and Information (SAI) Organization Limited  |c 2025 
513 |a Journal Article 
520 3 |a The increasing complexity of machine learning models necessitates robust methods for interpretability, particularly in clustering applications, where understanding group characteristics is critical. To this end, this paper introduces a novel framework that integrates cooperative game theory and explainable artificial intelligence (XAI) to enhance the interpretability of black-box clustering models. The framework combines approximated Shapley values with multi-level clustering to reveal hierarchical feature interactions, enabling both local and global interpretability. It is validated through extensive empirical evaluations on two datasets, the Portuguese wine quality benchmark and the Beijing Multi-Site Air Quality dataset, where it demonstrates improved clustering quality and interpretability: features such as density and total sulfur dioxide emerge as dominant predictors in the wine analysis, while pollutants such as PM2.5 and NO2 significantly influence air quality clustering. Key contributions include a multi-level clustering approach that reveals hierarchical feature attribution, interactive visualizations produced with Altair, and a unified interpretability framework validated against state-of-the-art baselines. The framework thus forms a strong basis for interpretable clustering in critical fields such as healthcare, finance, and environmental surveillance, reinforcing its generalizability across domains. The results underline the need for interpretability in machine learning, providing actionable insights for stakeholders in a variety of fields. 
651 4 |a Beijing China 
651 4 |a China 
653 |a Datasets 
653 |a Game theory 
653 |a Air quality 
653 |a Black boxes 
653 |a Machine learning 
653 |a Nitrogen dioxide 
653 |a Explainable artificial intelligence 
653 |a Clustering 
653 |a Sulfur dioxide 
653 |a Validity 
653 |a Computer science 
653 |a Artificial intelligence 
653 |a Optimization techniques 
653 |a Business intelligence 
653 |a Decision making 
653 |a Neural networks 
653 |a Cluster analysis 
773 0 |t International Journal of Advanced Computer Science and Applications  |g vol. 16, no. 7 (2025) 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3240918457/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3240918457/fulltextPDF/embedded/J7RWLIQ9I3C9JK51?source=fedsrch
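
The abstract's central mechanism, approximated Shapley values used to attribute a point's cluster fit to individual features, can be illustrated with a minimal Monte Carlo permutation sketch. This is not the paper's implementation: the centroid-distance value function, the baseline-substitution scheme, and all feature values below are assumptions chosen for illustration only.

```python
import random
from statistics import mean

def coalition_value(point, centroid, baseline, coalition):
    # Illustrative "payoff": negative squared distance to the cluster
    # centroid, where features outside the coalition are replaced by a
    # baseline value (e.g. the dataset mean).
    used = [point[i] if i in coalition else baseline[i] for i in range(len(point))]
    return -sum((u - c) ** 2 for u, c in zip(used, centroid))

def shapley_estimate(point, centroid, baseline, n_samples=500, seed=0):
    # Monte Carlo approximation: average each feature's marginal
    # contribution over random feature orderings (permutations).
    rng = random.Random(seed)
    n = len(point)
    contrib = [[] for _ in range(n)]
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)
        coalition = set()
        prev = coalition_value(point, centroid, baseline, coalition)
        for f in order:
            coalition.add(f)
            cur = coalition_value(point, centroid, baseline, coalition)
            contrib[f].append(cur - prev)
            prev = cur
    return [mean(c) for c in contrib]

# Toy wine-like example with two features (density, total sulfur dioxide);
# the numbers are made up for the sketch.
point = [0.99, 120.0]
centroid = [0.95, 100.0]
baseline = [0.97, 110.0]
phi = shapley_estimate(point, centroid, baseline)
```

By the efficiency property of Shapley values, the per-feature attributions in `phi` sum (up to sampling and floating-point error) to the gap between the full coalition's payoff and the empty coalition's payoff, which is what makes such attributions auditable at both the local and global level.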