Bandit Algorithms

  • Binding: Hardcover / 536 p.
  • Language: English
  • Product code: 9781108486828
  • DDC classification: 519.3

Full Description

Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
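As an illustration of the multi-armed bandit model described above, here is a minimal sketch of the classic UCB1 rule (one of the upper confidence bound algorithms the book covers). This is not code from the book; the function name, the Bernoulli reward model, and the arm means are illustrative assumptions.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Minimal UCB1 on a Bernoulli bandit: play each arm once, then
    pick the arm maximizing empirical mean plus an exploration bonus.
    `means` are the (unknown to the learner) Bernoulli success rates."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialisation: pull each arm once
        else:
            # index = empirical mean + sqrt(2 log t / pulls)
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return counts, total_reward
```

Over a long enough horizon the pull counts concentrate on the best arm while suboptimal arms are sampled only logarithmically often, which is the behaviour the book's regret analysis makes precise.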

Contents

1. Introduction
2. Foundations of probability
3. Stochastic processes and Markov chains
4. Finite-armed stochastic bandits
5. Concentration of measure
6. The explore-then-commit algorithm
7. The upper confidence bound algorithm
8. The upper confidence bound algorithm: asymptotic optimality
9. The upper confidence bound algorithm: minimax optimality
10. The upper confidence bound algorithm: Bernoulli noise
11. The Exp3 algorithm
12. The Exp3-IX algorithm
13. Lower bounds: basic ideas
14. Foundations of information theory
15. Minimax lower bounds
16. Asymptotic and instance dependent lower bounds
17. High probability lower bounds
18. Contextual bandits
19. Stochastic linear bandits
20. Confidence bounds for least squares estimators
21. Optimal design for least squares estimators
22. Stochastic linear bandits with finitely many arms
23. Stochastic linear bandits with sparsity
24. Minimax lower bounds for stochastic linear bandits
25. Asymptotic lower bounds for stochastic linear bandits
26. Foundations of convex analysis
27. Exp3 for adversarial linear bandits
28. Follow the regularized leader and mirror descent
29. The relation between adversarial and stochastic linear bandits
30. Combinatorial bandits
31. Non-stationary bandits
32. Ranking
33. Pure exploration
34. Foundations of Bayesian learning
35. Bayesian bandits
36. Thompson sampling
37. Partial monitoring
38. Markov decision processes
