Explainable Artificial Intelligence : Second World Conference, xAI 2024, Valletta, Malta, July 17-19, 2024, Proceedings, Part III (Communications in Computer and Information Science) (2024)

  • In stock at our partner overseas book distributor. Usually ships within 3 weeks.
    Important notes
    1. Delivery may occasionally be delayed, or the item may become unavailable.
    2. Orders for multiple copies may be shipped in separate deliveries.
    3. Requests for copies in mint condition cannot be accommodated.

    ● On the introduction of 3D Secure and payment by credit card
  • [Arrival delays]
    Due to the global situation, foreign books and secondhand foreign books ordered from overseas may arrive later than the standard delivery time shown.
    We ask for your understanding in advance.
  • ◆ The cover image, band, and other details may differ from the actual item.
  • ◆ The web store price for foreign books differs from the price at our physical stores.
    The foreign-book price is fixed in Japanese yen at the time the order is confirmed.
    If the price of the same foreign book changes after the order is confirmed, the change will not be reflected.
  • Binding: Paperback / Pages: 456 p.
  • Product code: 9783031637995

Full Description

This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. 

The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on:

Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI.

Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI.

Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI.

Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.

Contents

.- Counterfactual explanations and causality for eXplainable AI.

.- Sub-SpaCE: Subsequence-based Sparse Counterfactual Explanations for Time Series Classification Problems.

.- Human-in-the-loop Personalized Counterfactual Recourse.

.- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images.

.- Enhancing Counterfactual Explanation Search with Diffusion Distance and Directional Coherence.

.- CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests.

.- Causality-Aware Local Interpretable Model-Agnostic Explanations.

.- Evaluating the Faithfulness of Causality in Saliency-based Explanations of Deep Learning Models for Temporal Colour Constancy.

.- CAGE: Causality-Aware Shapley Value for Global Explanations.

.- Fairness, trust, privacy, security, accountability and actionability in eXplainable AI.

.- Exploring the Reliability of SHAP Values in Reinforcement Learning.

.- Categorical Foundation of Explainable AI: A Unifying Theory.

.- Investigating Calibrated Classification Scores through the Lens of Interpretability.

.- XentricAI: A Gesture Sensing Calibration Approach through Explainable and User-Centric AI.

.- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution.

.- ConformaSight: Conformal Prediction-Based Global and Model-Agnostic Explainability Framework.

.- Differential Privacy for Anomaly Detection: Analyzing the Trade-off Between Privacy and Explainability.

.- Blockchain for Ethical & Transparent Generative AI Utilization by Banking & Finance Lawyers.

.- Multi-modal Machine learning model for Interpretable Mobile Malware Classification.

.- Explainable Fraud Detection with Deep Symbolic Classification.

.- Better Luck Next Time: About Robust Recourse in Binary Allocation Problems.

.- Towards Non-Adversarial Algorithmic Recourse.

.- Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring.

.- XAI for Time Series Classification: Evaluating the Benefits of Model Inspection for End-Users.
