Full Description
Learn distributed AI hands-on with training frameworks, inference engines, and orchestration tools, and build production-ready training, inference, and serving systems for modern large-scale AI.
Key Features
Understand GPU hardware, high-speed interconnects, and parallelism strategies
Learn distributed training with resource-optimized techniques
Deploy high-performance inference with advanced optimization and memory management
Build production serving stacks with job schedulers, orchestration, and observability
Purchase of the print or Kindle book includes a free PDF eBook
Book Description
As AI models grow to billions and trillions of parameters, distributed systems are essential for training and serving them. Many resources cover fragments of this domain, but none provide a full path from distributed training to inference and production deployment. This book fills that gap with practical, production-focused examples.
It starts with GPU and memory estimation, data preparation, and an overview of GPU architecture, interconnects, and core parallelism strategies. You'll learn training techniques including data parallelism for single and multi-node setups, parameter sharding for memory-efficient scaling, and methods to reduce memory usage in large models.
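For a flavor of the data-parallel techniques the book develops, here is a minimal single-node PyTorch DDP sketch (the toy model, hyperparameters, and training loop are illustrative placeholders, not code from the book):

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE; NCCL is the usual GPU backend
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).to(local_rank)  # stand-in for a real model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for _ in range(10):  # stand-in for a real DataLoader with a DistributedSampler
        batch = torch.randn(32, 1024, device=local_rank)
        loss = ddp_model(batch).pow(2).mean()  # dummy loss for illustration
        optimizer.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
```

Parameter-sharded approaches such as FSDP and DeepSpeed ZeRO, covered in later chapters, replace the replicated model above with shards so that models too large for a single GPU can still be trained.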
The next section covers distributed inference and deployment. You'll build high-performance systems using optimized attention, caching, operator fusion, and router-based designs. You'll deploy on schedulers and container platforms with GPU-aware orchestration and assemble production stacks emphasizing reliability, scalability, and observability.
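As a taste of the inference side, the widely used vLLM offline API looks roughly like this (the model name and sampling settings are placeholders):

```python
from vllm import LLM, SamplingParams

# vLLM handles paged KV caching and continuous batching internally
llm = LLM(model="facebook/opt-125m")  # placeholder model; swap in your own
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(
    ["Summarize distributed inference in one sentence."], params
)
for out in outputs:
    print(out.outputs[0].text)
```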
The final section covers benchmarking, performance tuning, and emerging trends such as mixture-of-experts (MoE) models, edge-cloud coordination, and advanced parallelism. Each chapter includes tested code and debugging guidance.
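To illustrate the kind of measurement benchmarking involves, a crude latency harness might look like the following sketch (generate_fn is a hypothetical placeholder for whatever inference entry point is under test):

```python
import statistics
import time

def benchmark(generate_fn, prompts, warmup=2, iters=10):
    """Crude request-latency harness; generate_fn is a hypothetical
    placeholder for the inference call being measured."""
    for _ in range(warmup):  # warm up CUDA contexts, caches, etc.
        generate_fn(prompts)
    latencies = []
    for _ in range(iters):
        start = time.perf_counter()
        generate_fn(prompts)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "p50_seconds": statistics.median(latencies),
        "p95_seconds": latencies[int(0.95 * len(latencies)) - 1],
        "requests_per_second": len(prompts) / statistics.median(latencies),
    }
```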
By the end, you'll be able to build distributed AI systems that scale from a single GPU to large clusters.
What you will learn
Estimate memory and compute requirements for training and inference (a rough sketch follows this list)
Understand GPU hardware, interconnects, and parallelism strategies
Implement distributed training with parallel and sharded techniques
Build production inference systems with batching and memory management
Deploy via cluster orchestration with optimized GPU scheduling
Create production serving stacks with routing and observability
Benchmark distributed systems using industry-standard methodologies
Explore emerging model trends, distribution strategies, and future paths
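On the first of those points, a common back-of-the-envelope estimate (a general rule of thumb, not a formula taken from the book) is about 16 bytes per parameter for mixed-precision Adam training and 2 bytes per parameter for fp16/bf16 inference weights, before activations and KV cache:

```python
def estimate_training_memory_gb(num_params: float) -> float:
    """Rule of thumb for mixed-precision Adam training:
    2 B fp16 weights + 2 B fp16 grads + 12 B fp32 optimizer state
    (master weights + Adam moments) = ~16 bytes/param, before activations."""
    return num_params * 16 / 1e9

def estimate_inference_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """fp16/bf16 weights only; KV cache and activations come on top."""
    return num_params * bytes_per_param / 1e9

# Example: a 7B-parameter model
print(estimate_training_memory_gb(7e9))   # ~112 GB before activations
print(estimate_inference_memory_gb(7e9))  # ~14 GB of weights
```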
Who this book is for
This book is designed for ML engineers, AI researchers, and DevOps professionals who need to train or serve large AI models at scale. Platform engineers, HPC cluster administrators, and cloud architects will also find it valuable for advancing their skill sets.
A basic understanding of Python and PyTorch is required to get started. Prior experience with distributed systems, cluster schedulers, or container orchestration is helpful but not necessary: the book introduces these concepts from the ground up, beginning with resource estimation, data preparation, and hardware fundamentals.
Table of Contents
Introduction to Modern Distributed AI
GPU Hardware, Networking, and Parallelism Strategies
Distributed Training with PyTorch DDP
Scaling with Fully Sharded Data Parallel (FSDP)
DeepSpeed and ZeRO Optimization
Distributed Inference Fundamentals and vLLM
SGLang and Advanced Inference Architectures
Kubernetes for AI Workloads
Production LLM Serving Stack
Distributed Benchmarking and Performance Optimization