Information Theory: From Coding to Learning (Text)

  • Binding: Hardcover / Pages: 748 p.
  • Language: ENG
  • Product code: 9781108832908
  • DDC classification: 003.54

Full Description

This enthusiastic introduction to the fundamentals of information theory builds from classical Shannon theory through to modern applications in statistical learning, equipping students with a uniquely well-rounded and rigorous foundation for further study. It introduces core topics such as data compression, channel coding, and rate-distortion theory using a unique finite-blocklength approach, and with over 210 end-of-part exercises and numerous examples, it acquaints students with contemporary applications in statistics, machine learning, and modern communication theory. The textbook presents information-theoretic methods with applications in statistical learning and computer science, such as f-divergences, PAC-Bayes and the variational principle, Kolmogorov's metric entropy, strong data processing inequalities, and entropic upper bounds for statistical estimation. Accompanied by a solutions manual for instructors and additional standalone chapters on more specialized topics in information theory, this is the ideal introductory textbook for senior undergraduate and graduate students in electrical engineering, statistics, and computer science.

Contents

Part I. Information Measures: 1. Entropy; 2. Divergence; 3. Mutual information; 4. Variational characterizations and continuity of information measures; 5. Extremization of mutual information: capacity saddle point; 6. Tensorization and information rates; 7. f-divergences; 8. Entropy method in combinatorics and geometry; 9. Random number generators
Part II. Lossless Data Compression: 10. Variable-length compression; 11. Fixed-length compression and Slepian-Wolf theorem; 12. Entropy of ergodic processes; 13. Universal compression
Part III. Hypothesis Testing and Large Deviations: 14. Neyman-Pearson lemma; 15. Information projection and large deviations; 16. Hypothesis testing: error exponents
Part IV. Channel Coding: 17. Error correcting codes; 18. Random and maximal coding; 19. Channel capacity; 20. Channels with input constraints. Gaussian channels; 21. Capacity per unit cost; 22. Strong converse. Channel dispersion. Error exponents. Finite blocklength; 23. Channel coding with feedback
Part V. Rate-Distortion Theory and Metric Entropy: 24. Rate-distortion theory; 25. Rate distortion: achievability bounds; 26. Evaluating rate-distortion function. Lossy source-channel separation; 27. Metric entropy
Part VI. Statistical Applications: 28. Basics of statistical decision theory; 29. Classical large-sample asymptotics; 30. Mutual information method; 31. Lower bounds via reduction to hypothesis testing; 32. Entropic bounds for statistical estimation; 33. Strong data processing inequality
