Full Description
This book brings normative ethical theory to AI system development. It shows how artificial intelligence (AI) systems can be aligned with human values by training them to follow human goals and preferences. It first lays out the normative ethical and philosophical foundations of alignment, then provides techniques to implement these values using state-of-the-art methods for aligning general-purpose language systems. All of this is introduced in a straightforward way through the book's original concept of dynamic normativity. The book is useful for advanced students and researchers in the ethics of technology, artificial intelligence, and responsible innovation, as well as for technology professionals and policymakers seeking to direct AI towards common virtues and the public good.
Contents
1. Worldwide AI Ethics, Principles, and Blank Spots
2. AGI, Existential Risks, and the Control Problem
3. Gradient-based Learning and Alignment
4. Roots: Building Philosophical Foundations for AI Alignment
5. Dynamic Normativity: Learning Human Preferences
6. Dynamic Normativity: Aggregating Human Preferences
7. Dynamic Normativity: Impact Mitigation
8. Moral Risk