Full Description
This book addresses the technological contributions and developments of advanced hardware for Machine Learning (ML) computing systems. The authors discuss state-of-the-art progress in this area and related topics, as well as its application to diverse fields. The chapters cover the entire spectrum of research activities, including design and applications, with a focus on high performance and dependable operation. The entire hardware stack (from circuit to architecture, up to the system level) is discussed in detail. The book covers innovative material as well as tutorials, reviews, and surveys of current theoretical and experimental results, design methodologies, and applications across a broad scope, addressing an enlarged readership that includes engineers as well as scientists.
- Discusses a wide range of topics for Machine Learning-based computing systems;
- Covers design and application for the entire hardware stack, with a focus on high performance and dependability;
- Includes innovative material, as well as tutorials, reviews and surveys of current theoretical/experimental results.
Contents
High-Performance Machine Learning Accelerators on FPGA
Floating-Point Arithmetic in Deep Neural Networks: Evaluation and Implementation of Conventional and Emerging Formats with Mixed-Precision Strategies
High-Performance Computing Architectures for ML
High-Performance Domain-Specific Computing Architectures for Machine Learning
Edge AI Training Accelerator Design
Accelerating Machine Learning with Unconventional Architectures
MRAM-Based Energy-Efficient Computing Architectures for Machine Learning Accelerators
Energy-Efficient Data-Aware Computation in Computing-in-Memory Architecture
Stochastic Computing Applied to Morphological Neural Networks
Approximate Computing in Machine Learning: More Than You Bargained For
Edge Computing Meets Giant AI: Innovations in Large Language Model Efficiency
Chiplet-Based Accelerator Design for Scalable Training of Transformer-Based Generative Adversarial Networks
Energy Consumption in Generative AI: Insights from Large Language Model Inference
Towards Sustainable and Responsible Gen AI: An Investigation on Energy-Efficient Computing and Cost-Effective Complementary Components
Low-Power Machine Learning Realization Techniques on Biomedical Wearable Devices
Application of Algorithm and Hardware Co-Design in the Hardware Accelerator of Visual SLAM Front End
Application of Algorithm and Hardware Co-Design in the Hardware Accelerator of Visual SLAM Back End
Hardware-Efficient Designs for Spiking Neural Networks
Understanding Neural Network Fault Tolerance: From the Analog Hardware to the GPU
On the Use of ML Techniques in Safety-Critical Systems
Fault Injection and Tolerance Techniques for Deep Learning Models Deployed on SRAM-Based FPGAs
Dependability Evaluation of Parameters and Variables in Large Language Models (LLMs) to Soft Errors on Memory
Lightweight Algorithm-Based Fault Tolerance (ABFT) for Resilient ML Systems
Using Error Correction Code Schemes in Dependable Machine Learning Systems
Analog Error-Correcting Codes for Dependable In-Memory Computing of Neural Networks
Perturbation-Based Error Tolerance for Large-Scale Networks
Quantization-Aided Cost-Efficient Reliability of CNN Accelerators for Edge AI
Trustworthy AI at the Cloud
Exploring Hardware-Driven Privacy Techniques for Trustworthy Machine Learning
Machine Learning and Hardware Security: The Role of AI for Hardware in the Security Era
Machine Learning Systems for High Performance and Dependability: The Role of FPGA Design



