Data Engineering with Databricks Cookbook : Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake


  • Binding: Paperback / 438 p.
  • Language: ENG
  • Product code: 9781837633357
  • DDC classification: 005.745

Full Description

Work through 70 recipes to implement reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data

Key Features

Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks
Purchase of the print or Kindle book includes a free PDF eBook

Book Description

Written by a Senior Solutions Architect at Databricks, Data Engineering with Databricks Cookbook will show you how to use Apache Spark, Delta Lake, and Databricks effectively for data engineering, starting with a comprehensive introduction to data ingestion and loading with Apache Spark.
What makes this book unique is its recipe-based approach, which will help you put your knowledge to use straight away and tackle common problems. You'll be introduced to various data manipulation and data transformation solutions, find out how to manage and optimize Delta tables, and get to grips with ingesting and processing streaming data. The book will also show you how to diagnose and resolve performance problems in Apache Spark applications and Delta Lake. Advanced recipes later in the book will teach you how to use Databricks to implement DataOps and DevOps practices, as well as how to orchestrate and schedule data pipelines using Databricks Workflows. You'll also go through the full process of setting up and configuring Unity Catalog for data governance.
By the end of this book, you'll be well-versed in building reliable and scalable data pipelines using modern data engineering technologies.

What you will learn

Perform data loading, ingestion, and processing with Apache Spark
Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark
Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
Use Spark Structured Streaming for real-time data processing
Optimize Apache Spark application and Delta table query performance
Implement DataOps and DevOps practices on Databricks
Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
Implement data governance policies with Unity Catalog

Who this book is for

This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming.

Contents

Data Ingestion and Data Extraction with Apache Spark
Data Transformation and Data Manipulation with Apache Spark
Data Management with Delta Lake
Ingesting Streaming Data
Processing Streaming Data
Performance Tuning with Apache Spark
Performance Tuning in Delta Lake
Orchestration and Scheduling Data Pipelines with Databricks Workflows
Building Data Pipelines with Delta Live Tables
Data Governance with Unity Catalog
Implementing DataOps and DevOps on Databricks