Full Description
The rapid growth of big data from mobile, Internet of Things (IoT), and edge devices, together with the continued demand for higher computing power, has established deep learning as the cornerstone of most artificial intelligence (AI) applications today. Recent years have seen a push towards implementing deep learning on domain-specific AI accelerators that support custom memory hierarchies, variable precision, and optimized matrix multiplication. Commercial AI accelerators have shown superior energy and footprint efficiency compared to GPUs for a variety of inference tasks.
This monograph discusses the roadblocks that must be understood and analyzed to ensure functional robustness in emerging AI accelerators. It presents state-of-the-art practices for structural and functional testing of these accelerators, as well as methodologies for assessing the functional criticality of hardware faults so that test time can be reduced by targeting only the functionally critical faults.
The monograph also highlights recent research on improving the test and reliability of neuromorphic computing systems built with non-volatile memory (NVM) devices such as spin-transfer-torque magnetic RAM (STT-MRAM) and resistive RAM (ReRAM). Also covered are the robustness of silicon-photonic neural networks and the reliability concerns arising from manufacturing defects and process variations in monolithic 3D (M3D) based near-memory computing systems.
Contents
1. Introduction
2. Advances in Robustness Analysis of Von-Neumann Systolic Array-based Accelerators
3. Graph Convolutional Network (GCN) for Criticality Evaluation
4. Neural Twin-driven Robustness Analysis
5. Advances in Testing of Von-Neumann Systolic Array-based Accelerators
6. Robustness of Near-Memory Computing Paradigm
7. Testing and Robustness for Compute-in-Memory AI Accelerators
8. Conclusion
Acknowledgements
References