Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17–19, 2024, Proceedings, Part I

Print edition price: ¥20,047

  • Editors: Longo, Luca (EDT) / Lapuschkin, Sebastian (EDT) / Seifert, Christin (EDT)
  • Price: ¥28,487 (base price ¥25,898)
  • Springer (published 2024/07/09)
  • Language: ENG
  • ISBN: 9783031637865
  • eISBN: 9783031637872

Description

This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. 

The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on:

Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.

Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.

Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.

Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.

Table of Contents

.- Intrinsically interpretable XAI and concept-based global explainability.
.- Seeking Interpretability and Explainability in Binary Activated Neural Networks.
.- Prototype-based Interpretable Breast Cancer Prediction Models: Analysis and Challenges.
.- Evaluating the Explainability of Attributes and Prototypes for a Medical Classification Model.
.- Revisiting FunnyBirds evaluation framework for prototypical parts networks.
.- CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models.
.- Unveiling the Anatomy of Adversarial Attacks: Concept-based XAI Dissection of CNNs.
.- AutoCL: AutoML for Concept Learning.
.- Locally Testing Model Detections for Semantic Global Concepts.
.- Knowledge graphs for empirical concept retrieval.
.- Global Concept Explanations for Graphs by Contrastive Learning.
.- Generative explainable AI and verifiability.
.- Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendation.
.- Generative Inpainting for Shapley-Value-Based Anomaly Explanation.
.- Challenges and Opportunities in Text Generation Explainability.
.- NoNE Found: Explaining the Output of Sequence-to-Sequence Models when No Named Entity is Recognized.
.- Notion, metrics, evaluation and benchmarking for XAI.
.- Benchmarking Trust: A Metric for Trustworthy Machine Learning.
.- Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI.
.- Conditional Calibrated Explanations: Finding a Path between Bias and Uncertainty.
.- Meta-evaluating stability measures: MAX-Sensitivity & AVG-Sensitivity.
.- Xpression: A unifying metric to evaluate Explainability and Compression of AI models.
.- Evaluating Neighbor Explainability for Graph Neural Networks.
.- A Fresh Look at Sanity Checks for Saliency Maps.
.- Explainability, Quantified: Benchmarking XAI techniques.
.- BEExAI: Benchmark to Evaluate Explainable AI.
.- Associative Interpretability of Hidden Semantics with Contrastiveness Operators in Face Classification tasks.