Full Description
This book examines the challenges of, and solutions for, scheduling tasks in distributed cloud and edge computing systems, with particular emphasis on predicting workloads and resources and on optimizing performance and resource utilization through innovative algorithms and methodologies. Across seven comprehensive parts, it explores both theoretical and practical aspects. It first introduces the key concepts of cloud computing, edge computing, and their convergence in distributed cloud-edge systems, laying the groundwork for understanding workload prediction, energy management, and the integration of cloud-edge infrastructures with large artificial intelligence (AI) models. It then presents a detailed examination of workload and resource prediction techniques, followed by task scheduling strategies focused on energy efficiency and performance in settings such as unmanned aerial vehicles (UAVs) and satellite-terrestrial edge networks. Subsequent parts cover the integration of large-scale AI models within cloud-edge systems and introduce innovative practices for new infrastructure in cloud-edge systems. Finally, real-world applications of distributed cloud-edge systems are discussed across various domains. The book is a valuable resource for researchers, engineers, and professionals seeking to advance their knowledge of distributed cloud and edge computing systems and their applications in emerging areas.
Contents
Introduction

Part I. DISTRIBUTED CLOUD AND EDGE COMPUTING SYSTEMS
- Preliminaries

Part II. SINGLE-TASK AND MULTI-TASK PREDICTION
- Multiapplication Workload Prediction with Wavelet Decomposition
- Network Traffic Prediction with Temporal Convolutional Networks and LSTM
- Multivariate Resource Usage Prediction with Frequency-Enhanced and Attention-Assisted Transformer
- LSTM-Based Prediction for Large-Scale Resources and Workloads
- Workload and Resource Prediction with Multi-Head Attention and LSTM
- Spatio-temporal Prediction with Bi-directional and Grid LSTM for Workloads and Resources in Clouds

Part III. TASK SCHEDULING STRATEGIES
- Energy-Efficient Offloading for Static and Dynamic Applications
- Cost-Minimized Offloading and User Association
- Energy-Optimized Partial Computation Offloading
- Cost-Minimized Microservice Migration with Autoencoder-Assisted Evolution
- Cost-Efficient Offloading with Layered Unmanned Aerial Vehicles
- Energy-Minimized Partial Offloading in Satellite-Terrestrial Edge Networks

Part IV. CLOUD AND EDGE SYSTEMS FOR LARGE AI MODELS
- Multimodal Large Models and Their Applications
- Large Prediction Models and Their Applications
- Inference Offloading and Resource Allocation with Large Models

Part V. INNOVATIVE PRACTICES IN CLOUD-EDGE SYSTEMS
- Applications of Cloud-Edge Systems in CDNs
- Applications of Cloud-Edge Systems in Industrial Internet
- Applications of Cloud-Edge Systems in Energy Internet
- Applications of Cloud-Edge Systems in Smart Buildings
- Applications of Cloud-Edge Systems in Smart Transportation

Part VI. INNOVATIVE PRACTICES IN CLOUD-EDGE SYSTEMS: INDUSTRIAL APPLICATIONS
- Applications of Cloud-Edge Systems in Security Monitoring
- Applications of Cloud-Edge Systems in Agricultural Production
- Applications of Cloud-Edge Systems in Ecological Environment
- Applications of Cloud-Edge Systems in Healthcare
- Applications of Cloud-Edge Systems in Smart Education

Part VII. CONCLUSIONS AND OPEN PROBLEMS
- Conclusion
