LLM Engineer's Handbook : Master the art of engineering large language models from concept to production

  • Our overseas book supplier has this title in stock. It normally ships within about three weeks.
    Important notes
    1. Delivery may occasionally be delayed, or the title may become unobtainable.
    2. If you order multiple copies, they are shipped together once all ordered copies have arrived.
    3. We cannot accept requests for copies in mint condition.

    ● About 3-D Secure and payment by credit card

  • [Notice of arrival delays]
    Due to the global situation, new and second-hand foreign books ordered from overseas may arrive later than the standard delivery times shown.
    We apologize for the inconvenience and ask for your understanding in advance.
  • ◆ The cover image and obi (band) shown may differ from the actual product.
  • ◆ Web store prices for foreign books differ from the prices at our physical stores.
    Foreign book prices are fixed in Japanese yen at the time the order is confirmed.
    If the price of the same title changes after your order is confirmed, the change is not reflected.
  • Binding: Paperback / Pages: 522 p.
  • Language: ENG
  • Product code: 9781836200079
  • DDC classification: 006.35

Full Description

Step into the world of LLMs with this practical guide that takes you from the fundamentals to deploying advanced applications using LLMOps best practices

Key Features

Build and refine LLMs step by step, covering data preparation, RAG, and fine-tuning
Learn essential skills for deploying and monitoring LLMs, ensuring optimal performance in production
Utilize preference alignment, evaluation, and inference optimization to enhance the performance and adaptability of your LLM applications

Book Description

Artificial intelligence has undergone rapid advancements, and Large Language Models (LLMs) are at the forefront of this revolution. This book offers insights into designing, training, and deploying LLMs in real-world scenarios by leveraging MLOps best practices. The guide walks you through building an LLM-powered twin that's cost-effective, scalable, and modular. It moves beyond isolated Jupyter notebooks, focusing on how to build production-grade end-to-end LLM systems.
Throughout this book, you will learn data engineering, supervised fine-tuning, and deployment. The hands-on approach to building the LLM Twin use case will help you implement MLOps components in your own projects. You will also explore cutting-edge advancements in the field, including inference optimization, preference alignment, and real-time data processing, making this a vital resource for those looking to apply LLMs in their projects.
By the end of this book, you will be proficient in deploying LLMs that solve practical problems while maintaining low-latency and high-availability inference capabilities. Whether you are new to artificial intelligence or an experienced practitioner, this book delivers guidance and practical techniques that will deepen your understanding of LLMs and sharpen your ability to implement them effectively.

What you will learn

Implement robust data pipelines and manage LLM training cycles
Create your own LLM and refine it with the help of hands-on examples
Get started with LLMOps by diving into core MLOps principles such as orchestrators and prompt monitoring
Perform supervised fine-tuning and LLM evaluation
Deploy end-to-end LLM solutions using AWS and other tools
Design scalable and modular LLM systems
Learn about RAG applications by building a feature and inference pipeline (see the sketch after this list)
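
The following is a minimal, self-contained Python sketch of the feature-pipeline/inference-pipeline split mentioned in the last item above. It is not code from the book: the toy bag-of-words embedding, the in-memory index, and the build_prompt helper are hypothetical stand-ins for the real embedding model, vector database, and prompt template a production RAG system would use.

import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real feature pipeline would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Feature pipeline (offline): chunk raw documents and index their embeddings.
documents = [
    "The LLM Twin collects the author's posts from blogs and social media.",
    "Supervised fine-tuning adapts a base model to the author's writing style.",
    "The inference pipeline retrieves context and assembles the final prompt.",
]
index = [(chunk, embed(chunk)) for chunk in documents]

# Inference pipeline (online): retrieve the most similar chunk for a query
# and build the prompt that would be sent to the deployed LLM.
def build_prompt(query: str) -> str:
    best_chunk, _ = max(index, key=lambda item: cosine(embed(query), item[1]))
    return f"Context:\n{best_chunk}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How is context added to the prompt?"))

In a production setup like the one the book targets, these two stages would typically run as separate services: the feature pipeline writes embeddings to a vector store on a schedule or stream, and the inference pipeline reads from that store at request time before calling the deployed LLM.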

Who this book is for

This book is for AI engineers, NLP professionals, and LLM engineers looking to deepen their understanding of LLMs. Basic knowledge of LLMs, the Gen AI landscape, Python, and AWS is recommended. Whether you are new to AI or looking to enhance your skills, this book provides comprehensive guidance on implementing LLMs in real-world scenarios.

Table of Contents

Understanding the LLM Twin Concept and Architecture
Tooling and Installation
Data Engineering
RAG Feature Pipeline
Supervised Fine-Tuning
Fine-Tuning with Preference Alignment
Evaluating LLMs
Inference Optimization
RAG Inference Pipeline
Inference Pipeline Deployment
MLOps and LLMOps
