Description
This book discusses how to assess, explain, and rate the trustworthiness of artificial intelligence (AI) models and systems. The authors present a causality-based rating approach for measuring trust in AI models and tools, with a particular focus on AI used to make financial decisions. AI systems are now being deployed at scale in practical applications, so it is important to define, measure, and communicate metrics that indicate their trustworthiness before relying on them for critical activities. Despite the growing prevalence of such systems, there is a gap in understanding how to assess AI-based systems effectively to ensure they are responsible, unbiased, and accurate.

This book provides the background on cutting-edge AI trustworthiness needed to make essential decisions. Readers will learn to think methodically about explainability, causality, and factors affecting trustworthiness, such as indications of bias. Additional topics include compliance with regulatory and market demands and an examination of the concept of a "trust score" or "trust rating" for AI systems; these metrics are reviewed, augmented, and applied to multiple AI examples.
Table of Contents

Preface
AI and Trust for Finance
White-box and Black-box Rating in Literature
Data and Methods
Demonstrating the ARC Tool
Discussion and Concluding Remarks
About the Authors

Kausik Lakkaraju is a doctoral candidate at the AI Institute of the University of South Carolina, specializing in evaluating AI systems through causal analysis. He has worked extensively with unimodal (text, image, and numeric/time-series) and multimodal AI models in a variety of forms, including foundation models, rule-based systems, and chatbots, with applications in finance, health, and education. He has worked closely with researchers at JP Morgan Chase Research, a collaboration that led to a tutorial on AI trustworthiness in finance at ICAIF, and he has received many recognitions, including best poster awards.
Biplav Srivastava, Ph.D., is a Professor of Computer Science at the AI Institute and the Department of Computer Science at the University of South Carolina (USC). With over three decades of AI experience in industry and academia, he directs the 'AI for Society' group, which investigates how to enable people to make rational decisions despite real-world complexities such as poor data, changing goals, and limited resources, augmenting their cognitive limitations with technology. His work in AI spans the sub-fields of reasoning (planning, scheduling), knowledge extraction and representation (ontology, open data), learning (classification, deep, adversarial), and interaction (collaborative assistants), and extends their application to services (process automation, composition) and sustainability (governance: elections, water, traffic, health, power). Dr. Srivastava has been conducting research in trustworthy AI for a decade, introduced the first course on the topic at USC, and has led USC's participation in the National Institute of Standards and Technology (NIST) Artificial Intelligence Consortium (AIC) since its inception in 2024.
