Risk-Sensitive Reinforcement Learning via Policy Gradient Search (Foundations and Trends® in Machine Learning)


  • This is a print-on-demand (OD/POD) edition. Orders cannot be cancelled.
  • [Note on delivery delays]
    Due to global circumstances, imported new and used foreign books ordered from overseas may arrive later than the standard delivery times shown.
    We apologize for the inconvenience and appreciate your understanding.
  • ◆The cover image, obi band, etc. may differ from the actual product.
  • ◆Web store prices for foreign books differ from prices at our physical stores.
    The yen price for a foreign book is fixed at the time the order is confirmed;
    if the book's price changes after order confirmation, the change is not reflected.
  • Binding Paperback: paperback edition / 170 p.
  • Language ENG
  • Product code 9781638280262
  • DDC classification 006.31

Full Description

Reinforcement learning (RL) is one of the foundational pillars of artificial intelligence and machine learning. An important consideration in any optimization or control problem is the notion of risk, but its incorporation into RL is a fairly recent development. This monograph surveys recent research on risk-sensitive RL in which policy gradient search is the solution approach. For the setting where risk appears as a constraint, the authors cover popular risk measures based on variance, conditional value-at-risk, and chance constraints, and present a template for policy gradient-based risk-sensitive RL algorithms using a Lagrangian formulation. For the setting where risk is incorporated directly into the objective function, they consider an exponential utility formulation, cumulative prospect theory, and coherent risk measures. Written for novices and experts alike, the text is completely self-contained, yet organized so that expert readers can skip the background chapters. It is a complete guide for students and researchers working on this aspect of machine learning.
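To make the constrained setting concrete, the sketch below illustrates the two ingredients the description mentions: an empirical conditional value-at-risk (CVaR) estimate over sampled returns, and a Lagrangian relaxation that folds a CVaR constraint into a scalar objective. The function names, the lower-tail CVaR convention, and the specific constraint form (CVaR above a threshold) are illustrative assumptions, not the book's notation or algorithm.

```python
import numpy as np

def empirical_cvar(returns, alpha=0.1):
    """Empirical CVaR_alpha of sampled returns: the mean of the worst
    alpha-fraction of outcomes (lower tail, returns to be maximized)."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))  # tail sample count
    return returns[:k].mean()

def lagrangian_objective(returns, lam, threshold, alpha=0.1):
    """Lagrangian relaxation of: maximize mean return subject to
    CVaR_alpha(returns) >= threshold. The multiplier lam prices the
    constraint; a policy gradient method would ascend in the policy
    parameters and descend in lam."""
    mean_ret = np.mean(returns)
    cvar = empirical_cvar(returns, alpha)
    return mean_ret + lam * (cvar - threshold)
```

In a full algorithm of the kind the monograph templates, this scalar objective would be estimated from trajectories sampled under the current policy, with primal updates on the policy parameters and dual updates on `lam`.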

Contents

1. Introduction
2. Markov Decision Processes
3. Risk Measures
4. Background on Policy Evaluation and Gradient Estimation
5. Policy Gradient Templates for Risk-sensitive RL
6. MDPs with Risk as the Constraint
7. MDPs with Risk as the Objective
8. Conclusions and Future Challenges
Acknowledgements
References
