The Principles of Deep Learning Theory : An Effective Theory Approach to Understanding Neural Networks



  • Binding: Hardcover / 472 p.
  • Language: English
  • Product code: 9781316519332
  • DDC classification: 006.31

Full Description

This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning.

Contents

Preface; 0. Initialization; 1. Pretraining; 2. Neural networks; 3. Effective theory of deep linear networks at initialization; 4. RG flow of preactivations; 5. Effective theory of preactivations at initialization; 6. Bayesian learning; 7. Gradient-based learning; 8. RG flow of the neural tangent kernel; 9. Effective theory of the NTK at initialization; 10. Kernel learning; 11. Representation learning; ∞. The end of training; ε. Epilogue; A. Information in deep learning; B. Residual learning; References; Index.
