Introduction to Deep Learning : Neural Networks, Large Language Models and Agentic AI (Undergraduate Topics in Computer Science) (2. Aufl.)

  • Binding: Paperback
  • Product code: 9783032254597

Full Description

This textbook introduces deep learning in a style that is accessible, rigorous, and grounded in working code. It walks through the most widely used algorithms and architectures step by step, with mathematical derivations kept intuitive and Python examples woven through every chapter. 

The second edition keeps everything from the first, including convolutional networks, LSTMs, Word2vec, RBMs, DBNs, neural Turing machines, memory networks, and autoencoders. It then covers the systems that have reshaped the field since: generative adversarial networks, the transformer architecture and its attention mechanism, the full training pipeline behind modern large language models (LLMs), prompt engineering with real-life guardrail scenarios, parameter-efficient fine-tuning with LoRA, retrieval-augmented generation with vector databases, knowledge graphs, and agentic AI systems illustrated through an industrial case study.
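To give a flavor of the parameter-efficient fine-tuning with LoRA mentioned above, here is a minimal NumPy sketch of the core idea (an illustration only, not code from the book): the pretrained weight matrix is frozen, and training updates only a low-rank pair of matrices added alongside it.

```python
import numpy as np

rng = np.random.default_rng(42)
d_in, d_out, r = 64, 64, 4     # rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x, alpha=8):
    # base path plus a scaled low-rank update; only A and B are trained
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# with B zero-initialized, the adapted model starts out identical to the base model
assert np.allclose(lora_forward(x), W @ x)
```

Instead of updating all `d_in * d_out` entries of `W`, LoRA trains only `r * (d_in + d_out)` parameters, which is what makes fine-tuning large models affordable.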

Topics and features:

Introduces fundamentals of machine learning and mathematical and computational prerequisites for deep learning
Discusses feed-forward neural networks, convolutional networks, and recurrent architectures, and explores the modifications applicable to any neural network
Covers the transformer architecture from first principles, including self-attention, multi-head attention, positional encoding, and a minimal annotated implementation
Reviews open research problems, from hallucinations and quadratic scaling to alignment faking and the interpretability of model internals
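As a taste of the "minimal annotated implementation" of attention that the book promises, a single self-attention head can be sketched in a few lines of NumPy (this sketch is illustrative, not the book's own code):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model); project the inputs into queries, keys, values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (seq_len, seq_len) pairwise similarities
    weights = softmax(scores, axis=-1)  # each row is a distribution over positions
    return weights @ V                  # each output is a weighted mix of values

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Multi-head attention runs several such heads in parallel with separate projections and concatenates their outputs; the quadratic `(seq_len, seq_len)` score matrix is the scaling bottleneck listed among the open problems above.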

This proven, fully revised textbook is written for graduate and advanced undergraduate students of computer science, cognitive science, and mathematics. It should prove equally valuable for readers in linguistics, logic, philosophy, and psychology.

Sandro Skansi is an Associate Professor at the University of Zagreb, Croatia, where he teaches logic, political philosophy, artificial intelligence, and cognitive science. Kristina Šekrst is a research associate at the University of Zagreb and a principal engineer at Preamble AI.

Contents

Part II Transformers, Language Models, and Neural Agents

  • Generative Adversarial Networks
  • Transformers
  • Large Language Models in Practice
  • AI Agents and Retrieval Augmented Generation
  • What We Do Not Know
