Practical Deep Learning at Scale with MLflow : Bridge the gap between offline experimentation and online production



  • This is a print-on-demand (OD/POD) edition. Orders cannot be cancelled.
  • [Delivery delays]
    Due to global conditions, imported books and used foreign books ordered from overseas may arrive later than the standard lead time shown.
    We apologize for the inconvenience and appreciate your understanding.
  • ◆The cover image and any band shown may differ from the actual product.
  • ◆Web-store prices for foreign books differ from prices at our physical stores.
    The price of a foreign book is the Japanese-yen price at the time the order is confirmed.
    If the price of the same book changes after order confirmation, the change is not reflected.
  • Binding: Paperback / 288 p.
  • Language: ENG
  • Product code: 9781803241333
  • DDC classification: 006.31

Full Description

Train, test, run, track, store, tune, deploy, and explain provenance-aware deep learning models and pipelines at scale with reproducibility using MLflow

Key Features

Focus on deep learning models and MLflow to develop practical business AI solutions at scale
Ship deep learning pipelines from experimentation to production with provenance tracking
Learn to train, run, tune, and deploy deep learning pipelines with explainability and reproducibility

Book Description

The book starts with an overview of the deep learning (DL) life cycle and the emerging Machine Learning Operations (MLOps) field, providing a clear picture of the four pillars of deep learning (data, model, code, and explainability) and the role of MLflow in each of these areas.

From there onward, it guides you step by step in understanding the concept of MLflow experiments and usage patterns, using MLflow as a unified framework to track DL data, code and pipelines, models, parameters, and metrics at scale. You'll also tackle running DL pipelines in a distributed execution environment with reproducibility and provenance tracking, and tuning DL models through hyperparameter optimization (HPO) with Ray Tune, Optuna, and HyperBand. As you progress, you'll learn how to build a multi-step DL inference pipeline with preprocessing and postprocessing steps, deploy a DL inference pipeline for production using Ray Serve and AWS SageMaker, and finally create a DL explanation as a service (EaaS) using the popular Shapley Additive Explanations (SHAP) toolbox.
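The multi-step inference pipeline mentioned above (preprocessing, model prediction, postprocessing) can be sketched in plain Python. This is a minimal illustration of the pattern only: the `predict` function here is a hypothetical stand-in for a real DL model, which in the book would be logged and loaded through MLflow.

```python
# Minimal sketch of a multi-step inference pipeline:
# preprocessing -> model prediction -> postprocessing.
# The "model" is a hypothetical stand-in, not the book's actual code.

def preprocess(text: str) -> str:
    # Normalize raw input before it reaches the model
    return text.strip().lower()

def predict(features: str) -> float:
    # Stand-in for a real DL model's forward pass
    return 1.0 if "good" in features else 0.0

def postprocess(score: float) -> str:
    # Map the raw model score to a business-friendly label
    return "positive" if score >= 0.5 else "negative"

def inference_pipeline(raw_input: str) -> str:
    # Chain the steps so the pipeline is a single callable unit,
    # which is what makes it deployable as one service
    return postprocess(predict(preprocess(raw_input)))

print(inference_pipeline("  This product is GOOD  "))  # -> positive
```

Packaging the three steps behind one callable is what lets the whole pipeline be deployed as a single unit, for example behind Ray Serve or a SageMaker endpoint as described above.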

By the end of this book, you'll have built the foundation and gained the hands-on experience you need to develop a DL pipeline solution from initial offline experimentation to final deployment and production, all within a reproducible and open source framework.

What you will learn

Understand MLOps and deep learning life cycle development
Track deep learning models, code, data, parameters, and metrics
Build, deploy, and run deep learning model pipelines anywhere
Run hyperparameter optimization at scale to tune deep learning models
Build production-grade multi-step deep learning inference pipelines
Implement scalable deep learning explainability as a service
Deploy deep learning batch and streaming inference services
Ship practical NLP solutions from experimentation to production
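To make the hyperparameter optimization item above concrete, here is a toy random-search sketch using only the standard library. The `objective` function is a hypothetical stand-in for "train a model and return validation loss"; in the workflows the book covers, Ray Tune or Optuna would drive this search loop at scale.

```python
# Toy sketch of hyperparameter optimization via random search.
# objective() is a hypothetical stand-in for validation loss;
# real DL workflows would delegate this loop to Ray Tune or Optuna.
import random

def objective(learning_rate: float) -> float:
    # Pretend validation loss, minimized near learning_rate = 0.01
    return (learning_rate - 0.01) ** 2

def random_search(n_trials: int, seed: int = 0):
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(n_trials):
        # Sample the learning rate on a log scale, a common choice for HPO
        lr = 10 ** rng.uniform(-4, -1)
        loss = objective(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = random_search(n_trials=50)
print(f"best lr={best_lr:.4f}, loss={best_loss:.6f}")
```

Schedulers such as HyperBand, also mentioned above, improve on this naive loop by stopping unpromising trials early rather than running every configuration to completion.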

Who this book is for

This book is for machine learning practitioners including data scientists, data engineers, ML engineers, and scientists who want to build scalable, full life cycle deep learning pipelines with reproducibility and provenance tracking using MLflow. A basic understanding of data science and machine learning is necessary to grasp the concepts presented in this book.

Table of Contents

Deep Learning Life Cycle and MLOps Challenges
Getting Started with MLflow for Deep Learning
Tracking Models, Parameters, and Metrics
Tracking Code and Data Versioning
Running DL Pipelines in Different Environments
Running Hyperparameter Tuning at Scale
Multi-Step Deep Learning Inference Pipeline
Deploying a DL Inference Pipeline at Scale
Fundamentals of Deep Learning Explainability
Implementing DL Explainability with MLflow
