Inference and Learning from Data: Volume 1: Foundations (Textbook, 3 Volumes)



  • Binding: Hardcover / 1010 p.
  • Language: ENG
  • Product code: 9781009218122
  • DDC classification: 160

Full Description

This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets and downloadable MATLAB code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.

Contents

Contents; Preface; Notation; 1. Matrix theory; 2. Vector differentiation; 3. Random variables; 4. Gaussian distribution; 5. Exponential distributions; 6. Entropy and divergence; 7. Random processes; 8. Convex functions; 9. Convex optimization; 10. Lipschitz conditions; 11. Proximal operator; 12. Gradient descent method; 13. Conjugate gradient method; 14. Subgradient method; 15. Proximal and mirror descent methods; 16. Stochastic optimization; 17. Adaptive gradient methods; 18. Gradient noise; 19. Convergence analysis I: Stochastic gradient algorithms; 20. Convergence analysis II: Stochastic subgradient algorithms; 21. Convergence analysis III: Stochastic proximal algorithms; 22. Variance-reduced methods I: Uniform sampling; 23. Variance-reduced methods II: Random reshuffling; 24. Nonconvex optimization; 25. Decentralized optimization I: Primal methods; 26. Decentralized optimization II: Primal-dual methods; Author index; Subject index.
