Machine Unlearning for Governance of Foundation Models (Synthesis Lectures on Computer Vision) (2026, x + 140 pp., 240 mm)

  • Binding: Hardcover
  • Product code: 9783032172815

Full Description

This book provides a systematic and in-depth introduction to machine unlearning (MU) for foundation models, framed through an optimization-model-data tri-design perspective and complemented by assessments and applications. As foundation models are continuously adapted and reused, selectively removing unwanted data, knowledge, or model behavior without full retraining poses new theoretical and practical challenges, and MU has therefore become a critical capability for trustworthy, deployable, and regulation-ready artificial intelligence. From the optimization viewpoint, the book treats unlearning as a multi-objective and often adversarial problem that must simultaneously enforce targeted forgetting, preserve model utility, resist recovery attacks, and remain computationally efficient. From the model perspective, it examines how knowledge is distributed across layers and latent subspaces, motivating modular and localized unlearning. From the data perspective, it explores forget-set construction, data attribution, corruption, and coresets as key drivers of reliable forgetting.
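To make the multi-objective framing concrete, the sketch below shows one common baseline formulation from the unlearning literature: gradient ascent on a forget set combined with a retention term on a retain set. The function and parameter names (`unlearn_step`, `lambda_forget`, `lambda_retain`) are illustrative assumptions, not the book's specific method.

```python
# Minimal sketch of a generic multi-objective unlearning update,
# assuming a PyTorch classifier; names and weightings are
# illustrative, not the book's algorithm.
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_batch, retain_batch,
                 lambda_forget=1.0, lambda_retain=1.0):
    """One update that trades off targeted forgetting against utility."""
    fx, fy = forget_batch   # samples whose influence should be removed
    rx, ry = retain_batch   # samples whose behavior should be preserved

    optimizer.zero_grad()
    # Ascend the loss on the forget set (push predictions away from fy)...
    forget_loss = -F.cross_entropy(model(fx), fy)
    # ...while descending the loss on the retain set (preserve utility).
    retain_loss = F.cross_entropy(model(rx), ry)
    total = lambda_forget * forget_loss + lambda_retain * retain_loss
    total.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

In practice the two weights are tuned against each other: too much forgetting pressure degrades retained utility, while too much retention pressure leaves the targeted knowledge recoverable.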

Bridging theory and practice, the book also provides a comprehensive review of benchmark datasets and evaluation metrics for machine unlearning, critically examining their strengths and limitations. The authors further survey a wide range of applications in computer vision and large language models, including AI safety, privacy, fairness, and industrial deployment, highlighting why post-training model modification is often preferred over repeated retraining in real-world systems. By unifying optimization, model, data, evaluation, and application perspectives, this book offers both a foundational framework and a practical toolkit for designing machine unlearning methods that are effective, robust, and ready for large-scale, regulated deployment.
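As a rough illustration of how such evaluations are typically scored, the sketch below computes forget-set and retain-set accuracy for an unlearned model; the loader names are generic assumptions and not tied to any particular benchmark surveyed in the book.

```python
# Minimal sketch of two standard unlearning evaluation quantities:
# accuracy on the forget set vs. accuracy on the retain set.
# Assumes a PyTorch classifier and standard DataLoaders.
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Plain top-1 accuracy over a data loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

# Effective forgetting typically means forget-set accuracy drops toward
# the level expected for data the model never saw, while retain-set
# accuracy stays close to that of the original model:
# forget_acc = accuracy(unlearned_model, forget_loader)
# retain_acc = accuracy(unlearned_model, retain_loader)
```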

Contents

  • Introduction
  • Concept Dissection of MU
  • Algorithmic Foundations of MU
  • Evaluation Metrics and Methods of MU
  • Applications
  • Conclusion and Prospects
