Assessing, Explaining, and Rating AI Systems for Trust: With Applications in Finance (Synthesis Lectures on Computer Science)

  • Binding: Hardcover
  • Product code: 9783032210388

Description

This book discusses how to assess, explain, and rate the trustworthiness of artificial intelligence (AI) models and systems. The authors use a causality-based rating approach to measure trust in AI models and tools, especially when AI is used to make financial decisions. AI systems are now being deployed at large scale in practical applications, so it is important to define, measure, and communicate metrics that indicate their trustworthiness before they are relied on for critical activities. Despite their growing prevalence, there is a gap in understanding how to assess AI-based systems effectively to ensure they are responsible, unbiased, and accurate. This book provides background on cutting-edge AI trustworthiness for making essential decisions, and readers will learn how to think methodically about explainability, causality, and factors affecting trustworthiness such as bias indication. Additional topics include compliance with regulatory and market demands and an examination of the concept of a "trust score" or "trust rating" for AI systems; these metrics are reviewed, augmented, and applied to multiple AI examples.
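To make the idea of a causality-informed "trust rating" concrete, here is a minimal, hypothetical sketch in Python. It is not the authors' ARC tool or methodology; all names, weights, and data below are invented for illustration. It shows one possible ingredient of such a rating: intervening on a protected attribute, measuring how often a model's predictions change, and combining that bias signal with accuracy into a single 0-1 score.

# Minimal illustrative sketch, NOT the authors' ARC tool: a toy "trust rating"
# that combines accuracy with an intervention-style bias check (flip a
# protected attribute, count how often the prediction changes).
from typing import Callable, Sequence

def bias_indicator(model: Callable[[dict], int],
                   records: Sequence[dict],
                   protected_key: str,
                   flip: Callable[[object], object]) -> float:
    """Fraction of records whose prediction changes when only the protected
    attribute is intervened on (a crude proxy for a causal bias signal)."""
    changed = 0
    for rec in records:
        counterfactual = dict(rec)
        counterfactual[protected_key] = flip(rec[protected_key])
        if model(rec) != model(counterfactual):
            changed += 1
    return changed / len(records) if records else 0.0

def trust_score(accuracy: float, bias: float,
                w_acc: float = 0.5, w_bias: float = 0.5) -> float:
    """Toy 0-1 rating: reward accuracy, penalize sensitivity to the protected attribute."""
    return w_acc * accuracy + w_bias * (1.0 - bias)

if __name__ == "__main__":
    # Hypothetical loan-approval model that (undesirably) keys on 'gender'.
    model = lambda rec: int(rec["income"] > 50_000 and rec["gender"] == "M")
    data = [{"income": 60_000, "gender": "M"}, {"income": 60_000, "gender": "F"},
            {"income": 40_000, "gender": "M"}, {"income": 70_000, "gender": "F"}]
    bias = bias_indicator(model, data, "gender", lambda g: "F" if g == "M" else "M")
    print(f"bias indicator: {bias:.2f}, trust score: {trust_score(0.8, bias):.2f}")

A real rating would, of course, aggregate many such factors (explainability, robustness, compliance indicators) rather than the two used in this toy example.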

Contents

  • Preface
  • AI and Trust for Finance
  • White-box and Black-box Rating in Literature
  • Data and Methods
  • Demonstrating the ARC Tool
  • Discussion and Concluding Remarks

Kausik Lakkaraju is a doctoral candidate at the AI Institute of the University of South Carolina, specializing in evaluating AI systems through causal analysis. He has worked extensively with unimodal (text, image, and numeric/time-series) and multimodal AI models in a variety of forms, including foundation models, rule-based systems, and chatbots, with applications in finance, health, and education. He has worked closely with researchers at JP Morgan Chase Research, leading to a tutorial on AI trustworthiness in finance at ICAIF, and he has received many recognitions, including best poster awards.

Biplav Srivastava, Ph.D., is a Professor of Computer Science at the AI Institute and the Department of Computer Science at the University of South Carolina (USC). With over three decades of AI experience in industry and academia, he directs the 'AI for Society' group, which investigates how to enable people to make rational decisions despite the real-world complexities of poor data, changing goals, and limited resources by augmenting their cognitive limitations with technology. His work in AI spans the sub-fields of reasoning (planning, scheduling), knowledge extraction and representation (ontology, open data), learning (classification, deep, adversarial), and interaction (collaborative assistants), and extends their application to services (process automation, composition) and sustainability (governance: elections, water, traffic, health, power). Dr. Srivastava has been conducting research in trustworthy AI for a decade, introduced the first course on the topic at USC, and has led USC's participation in the National Institute of Standards and Technology (NIST) Artificial Intelligence Consortium (AIC) since its inception in 2024.

