Deep Learning: Foundations and Concepts (Textbook)
Deep Learning : Foundations and Concepts (1st ed. 2024, published 2023; xx, 649 p.; 600 illus., 400 illus. in color)


  • Limited domestic stock; usually ships within 5-7 days.
    (If the item is out of stock, or for multi-copy orders, it will be sourced from overseas and delivery will take longer.)
  • Binding: Hardcover / 657 pages
  • Product code: 9783031454677

Full Description

This book offers a comprehensive introduction to the central ideas that underpin deep learning. It is intended both for newcomers to machine learning and for those already experienced in the field. Covering key concepts relating to contemporary architectures and techniques, this essential book equips readers with a robust foundation for potential future specialization. The field of deep learning is undergoing rapid evolution, and therefore this book focusses on ideas that are likely to endure the test of time.

The book is organized into numerous bite-sized chapters, each exploring a distinct topic, and the narrative follows a linear progression, with each chapter building upon content from its predecessors. This structure is well-suited to teaching a two-semester undergraduate or postgraduate machine learning course, while remaining equally relevant to those engaged in active research or in self-study.

A full understanding of machine learning requires some mathematical background and so the book includes a self-contained introduction to probability theory. However, the focus of the book is on conveying a clear understanding of ideas, with emphasis on the real-world practical value of techniques rather than on abstract theory. Complex concepts are therefore presented from multiple complementary perspectives including textual descriptions, diagrams, mathematical formulae, and pseudo-code.

Chris Bishop is a Technical Fellow at Microsoft and is the Director of Microsoft Research AI4Science. He is a Fellow of Darwin College, Cambridge; a Fellow of the Royal Academy of Engineering; and a Fellow of the Royal Society.

Hugh Bishop is an Applied Scientist at Wayve, a deep learning autonomous driving company in London, where he designs and trains deep neural networks. He completed his MPhil in Machine Learning and Machine Intelligence at Cambridge University.

"Chris Bishop wrote a terrific textbook on neural networks in 1995 and has a deep knowledge of the field and its core ideas. His many years of experience in explaining neural networks have made him extremely skillful at presenting complicated ideas in the simplest possible way and it is a delight to see these skills applied to the revolutionary new developments in the field." -- Geoffrey Hinton

"With the recent explosion of deep learning and AI as a research topic, and the quickly growing importance of AI applications, a modern textbook on the topic was badly needed. The "New Bishop" masterfully fills the gap, covering algorithms for supervised and unsupervised learning, modern deep learning architecture families, as well as how to apply all of this to various application areas." - Yann LeCun

"This excellent and very educational book will bring the reader up to date with the main concepts and advances in deep learning with a solid anchoring in probability. Theseconcepts are powering current industrial AI systems and are likely to form the basis of further advances towards artificial general intelligence." --  Yoshua Bengio

Contents

Preface.- The Deep Learning Revolution.- Probabilities.- Standard Distributions.- Single-layer Networks: Regression.- Single-layer Networks: Classification.- Deep Neural Networks.- Gradient Descent.- Backpropagation.- Regularization.- Convolutional Networks.- Structured Distributions.- Transformers.- Graph Neural Networks.- Sampling.- Discrete Latent Variables.- Continuous Latent Variables.- Generative Adversarial Networks.- Normalizing Flows.- Autoencoders.- Diffusion Models.- Appendix A Linear Algebra.- Appendix B Calculus of Variations.- Appendix C Lagrange Multipliers.- Bibliography.- Index