Transformers in Action

  • Out of stock. This title will be ordered from the publisher through an overseas book distributor.
    Shipment is normally expected in about 6 to 9 weeks, though some items may take longer.
    Important notes:
    1. Delivery may be delayed, or the title may become unavailable.
    2. When multiple copies are ordered, they ship together once the full quantity is in stock.
    3. Requests for mint-condition copies cannot be accommodated.

  • Binding: Hardcover / 256 p.
  • Language: ENG
  • Product code: 9781633437883

Full Description

Transformer models power the chatbots, coders, and translators reshaping every industry today, yet their architecture, math, and tuning often remain an intimidating black box. Stop copy-pasting tutorials and start truly understanding what happens under the hood. Transformers in Action walks you through every layer with practical Python and clear analogies. Master small, large, and multimodal models, then optimize them for speed and cost. Build solutions that translate, summarize, and generate with confidence, efficiency, and rigor.

  • Layer-by-layer walkthrough: See how attention, embeddings, and positional encodings produce fluent output (a sketch of the core attention computation follows this list).
  • Task adaptation recipes: Fine-tune models for summarization, classification, or translation in minutes.
  • Optimization strategies: Reduce latency, shrink memory, and cut cloud bills without sacrificing accuracy.
  • Reinforcement learning techniques: Refine text generation quality using reward models and policy gradients.
  • Multimodal expansion: Combine text and vision to build next-generation, cross-media applications.
  • Complete code repository: Experiment instantly, tweak hyperparameters, and validate concepts on real datasets.
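To make the first bullet concrete, here is a minimal sketch of scaled dot-product attention, the computation at the heart of every transformer layer. This is an illustrative NumPy example written for this listing, not code from the book; the function name and toy shapes are assumptions.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
    # Illustrative sketch only -- not the book's implementation.
    d_k = Q.shape[-1]
    # Compare every query with every key; scale by sqrt(d_k) so the
    # softmax stays well-behaved as the head dimension grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of the value vectors.
    return weights @ V

# Toy example: 4 tokens, one 8-dimensional attention head.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)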

Transformers in Action, by Quantmate CEO and Chief AI Officer Nicole Koenigstein, offers clear math walkthroughs, annotated Python, and production-ready patterns you can trust.

The journey starts with encoder-only, decoder-only, and encoder-decoder variants, then moves to small language models for constrained environments. Each chapter couples theory with runnable notebooks, visual explanations, and performance benchmarks. Finish knowing exactly when to deploy a lightweight model, how to tune hyperparameters, and how to monitor costs. You will ship faster, safer, and leaner LLM solutions that impress users and stakeholders. 
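As a hedged illustration of how lightweight such a deployment can be, the sketch below runs the small encoder-decoder checkpoint t5-small through the widely used Hugging Face transformers pipeline API. The library choice, model, input text, and generation settings are assumptions for this example, not the book's own code.

from transformers import pipeline

# A small encoder-decoder model that fits constrained environments
# (model choice is an assumption for this sketch, not from the book).
summarizer = pipeline("summarization", model="t5-small")

article = (
    "Transformer models power chatbots, coders, and translators, but their "
    "attention-based architecture can seem like an intimidating black box."
)
result = summarizer(article, max_length=30, min_length=5)
print(result[0]["summary_text"])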

Ideal for software engineers and data scientists who are comfortable with Python and basic machine learning and eager to unlock the power of transformers.

Contents

PART 1: FOUNDATIONS OF MODERN TRANSFORMER MODELS

1 THE NEED FOR TRANSFORMERS 

2 A DEEPER LOOK INTO TRANSFORMERS 

PART 2: GENERATIVE TRANSFORMERS 

3 MODEL FAMILIES AND ARCHITECTURE VARIANTS 

4 TEXT GENERATION STRATEGIES AND PROMPTING TECHNIQUES 

5 PREFERENCE ALIGNMENT AND RAG 

PART 3: SPECIALIZED MODELS 

6 MULTIMODAL MODELS 

7 EFFICIENT AND SPECIALIZED LARGE LANGUAGE MODELS 

8 TRAINING AND EVALUATING LARGE LANGUAGE MODELS

9 OPTIMIZING AND SCALING LARGE LANGUAGE MODELS 

10 ETHICAL AND RESPONSIBLE LARGE LANGUAGE MODELS 
