Cross-Layer Approximation and In-Network Acceleration: Enabling the Next Generation of Sustainable and High-Performance Reconfigurable Systems

  • Binding: Hardcover
  • Product code: 9783032217110

Description

This book presents a novel Cross-Layer Approximation and Distribution architecture and methodology that advances the design of high-performance, high-throughput, energy-efficient, and sustainable reconfigurable computing systems. By leveraging the error tolerance inherent in modern AI and signal processing workloads, it enables performance and energy gains across hardware and software layers. The authors introduce innovative approximate multipliers, dividers, and coarse-grained processing elements for FPGA and CGRA platforms, coupled with an error-resiliency analysis and a heuristic-driven optimization framework that dynamically balance performance and accuracy. Extending beyond conventional architectures, the methodology described also integrates novel In-Network Computing (INC) techniques to bring computation closer to data sources within 5G/6G infrastructures. The result is a cohesive, scalable approach that redefines how energy-efficient and adaptive computing can be achieved across the edge-to-cloud continuum.

  • Describes a unified perspective on how approximation techniques can be applied across multiple abstraction levels;
  • Discusses how FPGAs, CGRAs and In-Network Computing can enable scalable, distributed, and low-latency computation;
  • Introduces architectural designs such as approximate multipliers, dividers, and hybrid SIMD/MIMD processing elements (a generic sketch of the approximate-multiplier idea follows this list).
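
To make the notion of an approximate multiplier concrete, here is a minimal, self-contained C sketch of one common approach: each operand is truncated to its top k significant bits before multiplying, shrinking the partial-product array at the cost of a small, bounded relative error. This is a generic illustration of the technique, not the design proposed in the book; the function name approx_mul, the parameter k, and the test operands are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch only: multiply using just the top k significant bits
 * of each operand, then shift the product back into place. */
static uint32_t approx_mul(uint16_t a, uint16_t b, int k)
{
    if (a == 0 || b == 0)
        return 0;

    int la = 31 - __builtin_clz(a);       /* index of leading 1 in a */
    int lb = 31 - __builtin_clz(b);       /* index of leading 1 in b */

    int sa = (la >= k) ? la - k + 1 : 0;  /* low bits dropped from a */
    int sb = (lb >= k) ? lb - k + 1 : 0;  /* low bits dropped from b */

    /* Small (k x k) multiplication on the truncated operands,
     * followed by a shift to restore the magnitude. */
    return ((uint32_t)(a >> sa) * (uint32_t)(b >> sb)) << (sa + sb);
}

int main(void)
{
    uint16_t a = 1234, b = 5678;
    uint32_t exact  = (uint32_t)a * b;
    uint32_t approx = approx_mul(a, b, 8);

    printf("exact=%u approx=%u rel_err=%.4f\n",
           exact, approx, (double)(exact - approx) / (double)exact);
    return 0;
}
```

For these operands and k = 8, the truncated product differs from the exact result by roughly 0.4%, the kind of accuracy/resource trade-off that the book's error-resiliency analysis and heuristic optimization framework are intended to explore systematically.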

Table of Contents

  • Introduction
  • Background and Preliminaries
  • Related Work
  • Circuit-Level Approximations
  • Architecture-Level Approximations
  • Application-Level and Cross-Layer Approximations
  • Toward Cross-Layer Approximation for the Distributed and In-Network Acceleration of Multi-Kernel Applications
  • Conclusions and Future Work

Zahra Ebrahimi received her B.Sc. and M.Sc. degrees in Computer Engineering from Sharif University of Technology (SUT), Iran, where she also worked as a Research Assistant at the Data Storage, Networks, and Processing Laboratory. She earned her Ph.D. in Computer Science from the Center for Advancing Electronics Dresden (cfaed), Technische Universität Dresden, Germany, in 2025. During her Ph.D., she was awarded two entrepreneurial research fellowships from German ministries for the X-DNet (Software Campus Program, in collaboration with Huawei) and GREEN-DNN (Mission KI startup acceleration) projects. Zahra is currently a Postdoctoral Research Associate at Ruhr-Universität Bochum. Her research focuses on cross-layer approximation, reconfigurable accelerator design, and energy-efficient, high-performance architectures for the edge-to-cloud continuum.

Akash Kumar (SM '13) received the joint Ph.D. degree in electrical engineering and embedded systems from the Eindhoven University of Technology, Eindhoven, The Netherlands, and the National University of Singapore (NUS), Singapore, in 2009. From 2009 to 2015, he was with NUS. From October 2015 until March 2024, he was a Professor with Technische Universität Dresden, Dresden, Germany, where he directed the Chair for Processor Design. Since April 2024, he has been directing the Chair of Embedded Systems at Ruhr-Universität Bochum, Germany. His research interests include the design and analysis of low-power embedded multiprocessor systems and the design of secure systems with emerging nanotechnologies.

