Full Description
This book offers a comprehensive introduction to the fundamentals of Markov decision processes and reinforcement learning, using common mathematical notation and language. Its goal is to provide a solid foundation that enables readers to engage meaningfully with these rapidly evolving fields. Topics include finite- and infinite-horizon models, partially observable models, value function approximation, simulation-based methods, Monte Carlo methods, and Q-learning. Rigorous mathematical concepts and algorithmic developments are supported by numerous worked examples. As an up-to-date successor to Martin L. Puterman's influential 1994 textbook, the volume assumes familiarity with probability, mathematical notation, and proof techniques. It is well suited to students, researchers, and professionals in operations research, computer science, engineering, and economics.
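To give a flavor of the last topic mentioned, here is a minimal tabular Q-learning sketch. The chain MDP, its `step` function, and all parameters below are illustrative assumptions for this example, not material taken from the book:

```python
import random

# A hypothetical 4-state chain MDP: states 0-3, action 0 moves left,
# action 1 moves right; reaching state 3 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 4, 2
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1   # discount, step size, exploration rate

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

random.seed(0)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def greedy(s):
    # break ties randomly so early episodes still explore
    best = max(Q[s])
    return random.choice([a for a in range(N_ACTIONS) if Q[s][a] == best])

for _ in range(500):                               # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(N_ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update toward the one-step bootstrapped target
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)   # → [1, 1, 1]: move right in every non-terminal state
```

On this small deterministic chain the learned greedy policy recovers the optimal "move right" behavior; Part III of the book treats such simulation-based methods rigorously, including their extension to function approximation.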
Contents
Preface; 1. Introduction; Part I. Fundamentals: 2. Markov decision process fundamentals; 3. Examples and applications; Part II. Classical Markov Decision Process Models: 4. Finite horizon models; 5. Infinite horizon models: expected discounted reward; 6. Infinite horizon models: expected total reward; 7. Infinite horizon models: long-run average reward; 8. Partially observable Markov decision processes; Part III. Reinforcement Learning: 9. Value function approximation; 10. Simulation in tabular models; 11. Simulation with function approximation; Appendix A. Notation and conventions; Appendix B. Markov chains; Appendix C. Linear programming; Bibliography; Index.