Introduction to Foundation Models (2025. XIII, 310 p. 55 illus. 235 mm)

  • Not in stock. This title will be ordered from the publisher through an overseas book distributor.
    It usually ships within 6 to 9 weeks, though some titles may take longer.
    Important notes:
    1. Delivery may be delayed, and in some cases the title may become unobtainable.
    2. Orders for multiple copies may be shipped in separate deliveries.
    3. Requests for copies in mint condition cannot be accommodated.

  • Binding: Hardcover / Pages: 280 p.
  • Language: ENG
  • Product code: 9783031767692

Full Description

This book offers an extensive exploration of foundation models, guiding readers through the essential concepts and advanced topics that define this rapidly evolving research area. Designed for those seeking to deepen their understanding and contribute to the development of safer and more trustworthy AI technologies, the book is divided into three parts covering the fundamentals of foundation models, advanced topics in foundation models, and safety and trust in foundation models:

Part I introduces the core principles of foundation models and generative AI, presents the technical background of neural networks, delves into the learning and generalization of vision transformers, and concludes with the intricacies of in-context learning in transformers.

Part II introduces automated visual prompting techniques, privacy-preserving prompting of LLMs, and memory-efficient fine-tuning methods, and shows how LLMs can be reprogrammed for time-series machine learning tasks. It explores how LLMs can be reused for speech tasks and how synthetic datasets can be used to benchmark foundation models, and elucidates machine unlearning for foundation models.

Part III provides a comprehensive evaluation of the trustworthiness of LLMs, introduces jailbreak attacks and defenses for LLMs, presents the safety risks of fine-tuning LLMs, introduces watermarking techniques for LLMs, presents robust detection of AI-generated text, elucidates backdoor risks in diffusion models, and presents red-teaming methods for diffusion models.

Mathematical notations are clearly defined and explained throughout, making this book an invaluable resource for both newcomers and seasoned researchers in the field.


Contents

Part I: Fundamentals of Foundation Models
Chapter 1: Foundation Models and Generative AI
Chapter 2: Neural Networks
Chapter 3: Learning and Generalization of Vision Transformers
Chapter 4: Formalizing In-Context Learning in Transformers
Part II: Advanced Topics in Foundation Models
Chapter 5: Automated Visual Prompting
Chapter 6: Prompting Large Language Models with Privacy
Chapter 7: Memory-Efficient Fine-Tuning for Foundation Models
Chapter 8: Large Language Models Meet Time Series
Chapter 9: Large Language Models Meet Speech Recognition
Chapter 10: Benchmarking Foundation Models using Synthetic Datasets
Chapter 11: Machine Unlearning for Foundation Models
Part III: Trust and Safety in Foundation Models
Chapter 12: Trustworthiness Evaluation of Large Language Models
Chapter 13: Attacks and Defenses on Aligned Large Language Models
Chapter 14: Safety Risks in Fine-tuning Large Language Models
Chapter 15: Watermarks for Large Language Models
Chapter 16: AI-Generated Text Detection
Chapter 17: Backdoor Risks in Diffusion Models
Chapter 18: Prompt Engineering for Safety Red-teaming: A Case Study on Text-to-Image Diffusion Models
