Optimizing Databricks Workloads : Harness the power of Apache Spark in Azure and maximize the performance of modern big data workloads

  • This is an on-demand (OD/POD) edition. Cancellations cannot be accepted.
  • Binding: Paperback / 230 pages
  • Language: English
  • Product code: 9781801819077
  • DDC classification: 005.7

Full Description

Accelerate computations and make the most of your data effectively and efficiently on Databricks

Key Features

Understand Spark optimizations for big data workloads and how to maximize performance
Build efficient big data engineering pipelines with Databricks and Delta Lake
Efficiently manage Spark clusters for big data processing

Book Description

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud.

In Optimizing Databricks Workloads, you will get started with a brief introduction to Azure Databricks and quickly begin to understand the important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. It also presents real-world scenarios where optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains.
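To give a flavour of the DataFrame-level techniques mentioned above, here is a minimal, hypothetical PySpark sketch of a broadcast join, a common optimization when joining a large table with a small one. It assumes a Databricks notebook where spark is the pre-created SparkSession; the Delta paths, table names, and the region_id column are illustrative only and not taken from the book.

    from pyspark.sql.functions import broadcast

    # Hypothetical inputs: a large fact table and a small lookup table stored as Delta.
    sales = spark.read.format("delta").load("/mnt/demo/sales")      # large table
    regions = spark.read.format("delta").load("/mnt/demo/regions")  # small lookup table

    # Broadcasting the small table to every executor avoids shuffling the large
    # table across the cluster, which typically speeds up the join considerably.
    joined = sales.join(broadcast(regions), on="region_id", how="left")

    joined.write.format("delta").mode("overwrite").save("/mnt/demo/sales_by_region")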

By the end of this book, you will have the toolkit you need to speed up your Spark jobs and process your data more efficiently.

What you will learn

Get to grips with Spark fundamentals and the Databricks platform
Process big data using the Spark DataFrame API with Delta Lake
Analyze data using graph processing in Databricks
Use MLflow to manage machine learning life cycles in Databricks
Find out how to choose the right cluster configuration for your workloads
Explore file compaction and clustering methods to tune Delta tables (see the sketch after this list)
Discover advanced optimization techniques to speed up Spark jobs
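To illustrate the file compaction and clustering item above, the following is a minimal sketch, assuming a Databricks notebook where spark is the pre-created SparkSession and "events" is a hypothetical Delta table with an event_date column:

    # Compact small files and co-locate rows by a frequently filtered column,
    # so queries that filter on event_date read fewer files.
    spark.sql("OPTIMIZE events ZORDER BY (event_date)")

    # Remove data files no longer referenced by the table; the default
    # retention period still applies.
    spark.sql("VACUUM events")

Z-ordering clusters related values into the same files, so compaction and clustering together reduce both the number of files opened and the amount of data scanned by selective queries.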

Who this book is for

This book is for data engineers, data scientists, and cloud architects who have working knowledge of Spark/Databricks and some basic understanding of data engineering principles. Readers will need a working knowledge of Python, and some experience with SQL in PySpark and Spark SQL is beneficial.

Table of Contents

Discovering Databricks
Batch and Real-Time Processing in Databricks
Learning about Machine Learning and Graph Processing in Databricks
Managing Spark Clusters
Big Data Analytics
Databricks Delta Lake
Spark Core
Case Studies
