CUDA by Example : An Introduction to General-Purpose GPU Programming

  • Binding: Paperback, 320 p.
  • Language: English
  • Product code: 9780131387683
  • DDC classification: 005.275

Full Description

"This book is required reading for anyone working with accelerator-based computing systems."

-From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory

CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required; just the ability to program in a modestly extended version of C.

CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You'll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance.
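To give a flavor of the "modestly extended version of C" mentioned above, here is a minimal vector-addition sketch in CUDA C (not taken from the book; the names and launch configuration are illustrative). The __global__ qualifier marks a function that runs on the GPU, and the <<<blocks, threads>>> syntax launches it across many threads in parallel:

    #include <stdio.h>

    /* Kernel: each GPU thread adds one pair of elements. */
    __global__ void add(const int *a, const int *b, int *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        const int N = 256;
        int a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = i; b[i] = 2 * i; }

        /* Allocate device memory and copy the inputs to the GPU. */
        int *dev_a, *dev_b, *dev_c;
        cudaMalloc((void **)&dev_a, N * sizeof(int));
        cudaMalloc((void **)&dev_b, N * sizeof(int));
        cudaMalloc((void **)&dev_c, N * sizeof(int));
        cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
        cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

        /* Launch N threads in blocks of 64. */
        add<<<(N + 63) / 64, 64>>>(dev_a, dev_b, dev_c, N);

        /* Copy the result back and verify one element. */
        cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
        printf("c[%d] = %d\n", N - 1, c[N - 1]);

        cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
        return 0;
    }

A file like this would typically be compiled with NVIDIA's nvcc compiler from the freely available CUDA Toolkit, for example: nvcc add.cu -o add.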

Major topics covered include

Parallel programming
Thread cooperation
Constant memory and events
Texture memory
Graphics interoperability
Atomics
Streams
CUDA C on multiple GPUs
Advanced atomics
Additional CUDA resources

All the CUDA software tools you'll need are freely available for download from NVIDIA.

http://developer.nvidia.com/object/cuda-by-example.html

Contents

Foreword xiii

Preface xv

Acknowledgments xvii

About the Authors xix

Chapter 1: Why CUDA? Why Now? 1

1.1 Chapter Objectives 2

1.2 The Age of Parallel Processing 2

1.3 The Rise of GPU Computing 4

1.4 CUDA 6

1.5 Applications of CUDA 8

1.6 Chapter Review 11

Chapter 2: Getting Started 13

2.1 Chapter Objectives 14

2.2 Development Environment 14

2.3 Chapter Review 19

Chapter 3: Introduction to CUDA C 21

3.1 Chapter Objectives 22

3.2 A First Program 22

3.3 Querying Devices 27

3.4 Using Device Properties 33

3.5 Chapter Review 35

Chapter 4: Parallel Programming in CUDA C 37

4.1 Chapter Objectives 38

4.2 CUDA Parallel Programming 38

4.3 Chapter Review 57

Chapter 5: Thread Cooperation 59

5.1 Chapter Objectives 60

5.2 Splitting Parallel Blocks 60

5.3 Shared Memory and Synchronization 75

5.4 Chapter Review 94

Chapter 6: Constant Memory and Events 95

6.1 Chapter Objectives 96

6.2 Constant Memory 96

6.3 Measuring Performance with Events 108

6.4 Chapter Review 114

Chapter 7: Texture Memory 115

7.1 Chapter Objectives 116

7.2 Texture Memory Overview 116

7.3 Simulating Heat Transfer 117

7.4 Chapter Review 137

Chapter 8: Graphics Interoperability 139

8.1 Chapter Objectives 140

8.2 Graphics Interoperation 140

8.3 GPU Ripple with Graphics Interoperability 147

8.4 Heat Transfer with Graphics Interop 154

8.5 DirectX Interoperability 160

8.6 Chapter Review 161

Chapter 9: Atomics 163

9.1 Chapter Objectives 164

9.2 Compute Capability 164

9.3 Atomic Operations Overview 168

9.4 Computing Histograms 170

9.5 Chapter Review 183

Chapter 10: Streams 185

10.1 Chapter Objectives 186

10.2 Page-Locked Host Memory 186

10.3 CUDA Streams 192

10.4 Using a Single CUDA Stream 192

10.5 Using Multiple CUDA Streams 198

10.6 GPU Work Scheduling 205

10.7 Using Multiple CUDA Streams Effectively 208

10.8 Chapter Review 211

Chapter 11: CUDA C on Multiple GPUs 213

11.1 Chapter Objectives 214

11.2 Zero-Copy Host Memory 214

11.3 Using Multiple GPUs 224

11.4 Portable Pinned Memory 230

11.5 Chapter Review 235

Chapter 12: The Final Countdown 237

12.1 Chapter Objectives 238

12.2 CUDA Tools 238

12.3 Written Resources 244

12.4 Code Resources 246

12.5 Chapter Review 248

Appendix A: Advanced Atomics 249

A.1 Dot Product Revisited 250

A.2 Implementing a Hash Table 258

A.3 Appendix Review 277

Index 279
