Enhancing LLM Performance: Efficacy, Fine-Tuning, and Inference Techniques (Machine Translation: Technologies and Applications 7) (2025. xxiv, 192 p. 37 illus., 33 illus. in color. 235 mm)


  • This item is not in stock. It will be ordered from the publisher or other sources via an overseas book distributor.
    Shipping is usually expected in about 6 to 9 weeks, though some items may take longer.
    Important notes:
    1. Delivery may be delayed, and in some cases the item may become unavailable.
    2. Orders for multiple copies are shipped together once the full quantity has arrived.
    3. Requests for copies in mint condition cannot be accepted.

  • [About arrival delays]
    Due to global conditions, the arrival of foreign books (new and used) ordered from overseas may be delayed beyond the standard delivery times shown.
    We appreciate your understanding in advance.
  • ◆ The cover image, obi strip, and similar details may differ from the actual product.
  • ◆ The web store price for foreign books differs from the price at our physical stores.
    The price is fixed in Japanese yen at the time your order is confirmed;
    subsequent changes to the price of the same title are not reflected.
  • Binding: Hardcover
  • Product code (ISBN): 9783031857461

Full Description

This book is a pioneering exploration of the state-of-the-art techniques that drive large language models (LLMs) toward greater efficiency and scalability. Edited by three distinguished experts, Peyman Passban, Mehdi Rezagholizadeh, and Andy Way, it presents practical solutions to the growing challenges of training and deploying these massive models. Drawing on their combined experience across academia, research, and industry, the editors provide insights into the tools and strategies required to improve LLM performance while reducing computational demands.

This book is more than just a technical guide; it bridges the gap between research and real-world applications. Each chapter presents cutting-edge advancements in inference optimization, model architecture, and fine-tuning techniques, all designed to enhance the usability of LLMs in diverse sectors. Readers will find extensive discussions of the practical aspects of implementing and deploying LLMs in real-world scenarios. The book serves as a comprehensive resource for researchers and industry professionals, offering a balanced blend of in-depth technical insight and practical, hands-on guidance. It is a go-to reference for students and researchers in computer science and related subfields, including machine learning and computational linguistics.

Contents

  • Introduction and Fundamentals
  • SPEED: Speculative Pipelined Execution for Efficient Decoding
  • Efficient LLM Inference on CPUs
  • KronA: Parameter-Efficient Tuning with Kronecker Adapter
  • LoDA: Low-Dimensional Adaptation of Large Language Models
  • Sparse Fine-Tuning for Inference Acceleration of Large Language Models
  • TCNCA: Temporal CNN with Chunked Attention for Efficient Training on Long Sequences
  • Class-Based Feature Knowledge Distillation
  • On the Use of Cross-Attentive Fusion Techniques for Audio-Visual Speaker Verification
  • An Efficient Clustering Algorithm for Self-Supervised Speaker Recognition
  • Remaining Issues for AI
