Bookseller: California Books, Miami, FL, United States of America
EUR 47,94
Quantity available: More than 20 available
Condition: New.
Bookseller: GreatBookPrices, Columbia, MD, United States of America
EUR 40,36
Quantity available: 4 available
Condition: New.
EUR 55,50
Quantity available: 1 available
PAP. Condition: New. New book. Shipped from the UK. Established seller since 2000.
Bookseller: Romtrade Corp., STERLING HEIGHTS, MI, United States of America
EUR 62,16
Quantity available: 1 available
Condition: New. This is a brand-new US edition. This item may be shipped from the US or any other country, as we have multiple locations worldwide.
Bookseller: GreatBookPrices, Columbia, MD, United States of America
EUR 45,54
Quantity available: 4 available
Condition: As New. Unread book in perfect condition.
EUR 64,90
Quantity available: 1 available
Condition: New. Brand-new original US edition. Customer service! Satisfaction guaranteed.
Bookseller: Rarewaves USA, OSWEGO, IL, United States of America
Original or first edition
EUR 61,84
Quantity available: 8 available
Paperback. Condition: New. 1st ed. Design and implement a modern data lakehouse on the Azure Data Platform using Delta Lake, Apache Spark, Azure Databricks, Azure Synapse Analytics, and Snowflake. This book teaches you the intricate details of the Data Lakehouse Paradigm and how to efficiently design a cloud-based data lakehouse using highly performant, cutting-edge Apache Spark capabilities on Azure Databricks, Azure Synapse Analytics, and Snowflake. You will learn to write efficient PySpark code for batch and streaming ELT jobs on Azure, and you will follow along with practical, scenario-based examples showing how to apply the capabilities of Delta Lake and Apache Spark to optimize performance and to secure, share, and manage a high volume, high velocity, and high variety of data in your lakehouse with ease. The patterns of success that you acquire from reading this book will help you hone your skills to build high-performing, scalable, ACID-compliant lakehouses using flexible and cost-efficient decoupled storage and compute capabilities. Extensive coverage of Delta Lake ensures that you are aware of, and can benefit from, all that this new open-source storage layer can offer. In addition to the deep examples on Databricks, the book covers alternative platforms such as Synapse Analytics and Snowflake so that you can make the right platform choice for your needs. After reading this book, you will be able to implement Delta Lake capabilities, including schema evolution, Change Feed, Live Tables, Sharing, and Clones, to enable better business intelligence and advanced analytics on your data within the Azure Data Platform. What you will learn: implement the Data Lakehouse Paradigm on Microsoft's Azure cloud platform; benefit from the new Delta Lake open-source storage layer for data lakehouses; take advantage of schema evolution, change feeds, live tables, and more; write functional PySpark code for data lakehouse ELT jobs; optimize Apache Spark performance through partitioning, indexing, and other tuning options; and choose between alternatives such as Databricks, Synapse Analytics, and Snowflake. Who this book is for: data, analytics, and AI professionals at all levels, including data architect and data engineer practitioners, as well as data professionals seeking patterns of success by which to remain relevant as they learn to build scalable data lakehouses for their organizations and customers who are migrating to the modern Azure Data Platform.
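As a rough illustration of the kind of batch ELT and schema-evolution work the description above refers to (not an excerpt from the book; the storage paths, column names, and Spark session settings below are hypothetical), a minimal PySpark sketch writing to a Delta table might look like this:

# Minimal sketch of a batch ELT write to Delta Lake with schema evolution.
# Assumes a Spark session with Delta Lake available (e.g. Databricks or delta-spark).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("delta-elt-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Read raw landing-zone data (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@mystorageaccount.dfs.core.windows.net/orders/")

# Light transformation step typical of an ELT job.
clean = (
    raw
    .withColumn("order_date", F.to_date("order_timestamp"))
    .filter(F.col("amount") > 0)
)

# Append into a Delta table, letting new columns merge into the schema.
(
    clean.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # schema evolution on write
    .partitionBy("order_date")       # partitioning for pruning and performance
    .save("abfss://lakehouse@mystorageaccount.dfs.core.windows.net/silver/orders/")
)

Enabling mergeSchema trades strict schema enforcement for flexibility when upstream sources add columns; whether that is appropriate depends on the governance requirements of the target table.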
Bookseller: Rarewaves USA United, OSWEGO, IL, United States of America
Original or first edition
EUR 63,81
Quantity available: 8 available
Paperback. Condition: New. 1st ed.
Bookseller: Rarewaves.com UK, London, United Kingdom
Original or first edition
EUR 65,16
Quantity available: 8 available
Paperback. Condition: New. 1st ed.
Bookseller: ALLBOOKS1, Direk, SA, Australia
EUR 69,30
Quantity available: 2 available
Brand-new book. Fast shipping. Please provide a full street address, as we are not able to ship to a PO Box address.
Bookseller: Ria Christie Collections, Uxbridge, United Kingdom
EUR 64,08
Quantity available: More than 20 available
Condition: New. In.
EUR 60,71
Quantity available: 2 available
Condition: New. pp. 465.
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 53,33
Quantity available: More than 20 available
Condition: As New. Unread book in perfect condition.
EUR 61,39
Quantity available: 2 available
Condition: New. pp. 465.
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
Original or first edition
EUR 70,33
Quantity available: 8 available
Paperback. Condition: New. 1st ed.
Bookseller: GreatBookPricesUK, Woodford Green, United Kingdom
EUR 55,48
Quantity available: More than 20 available
Condition: New.
Bookseller: BargainBookStores, Grand Rapids, MI, United States of America
EUR 63,39
Quantity available: 5 available
Paperback or softback. Condition: New. The Azure Data Lakehouse Toolkit: Building and Scaling Data Lakehouses with Delta Lake, Apache Spark, Azure Databricks and Synapse Analytics, and Snowflake. Book.
Bookseller: Revaluation Books, Exeter, United Kingdom
EUR 67,85
Quantity available: 2 available
Paperback. Condition: Brand New. 487 pages. 10.00 x 7.01 x 0.98 inches. In stock.
Bookseller: Chiron Media, Wallingford, United Kingdom
EUR 63,09
Quantity available: 10 available
PF. Condition: New.
EUR 64,50
Quantity available: 4 available
Condition: New. pp. 465.
Bookseller: CitiRetail, Stevenage, United Kingdom
Original or first edition
EUR 55,63
Quantity available: 1 available
Paperback. Condition: New. Intermediate user level. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability.
Published by Apress, Jul 2022
ISBN 10: 1484282329 ISBN 13: 9781484282328
Language: English
Bookseller: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
EUR 64,19
Quantity available: 2 available
Paperback. Condition: New. New stock. APress in Springer Science+Business Media, Heidelberger Platz 3, 14197 Berlin. 488 pp. English.
Bookseller: Lakeside Books, Benton Harbor, MI, United States of America
EUR 39,17
Quantity available: More than 20 available
Condition: New. Brand new! Not overstocks or low-quality book club editions! Direct from the publisher! We're not a giant, faceless warehouse organization; we're a small-town bookstore that loves books and loves its customers! Buy from Lakeside Books!
Bookseller: Lucky's Textbooks, Dallas, TX, United States of America
EUR 42,81
Quantity available: More than 20 available
Condition: New.
Bookseller: AussieBookSeller, Truganina, VIC, Australia
Original or first edition
EUR 83,44
Quantity available: 1 available
Paperback. Condition: New. Intermediate user level. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
Original or first edition
EUR 51,03
Quantity available: 1 available
Paperback. Condition: New. Intermediate user level. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Published by Springer, Berlin / Apress, 2022
ISBN 10: 1484282329 ISBN 13: 9781484282328
Language: English
Bookseller: moluna, Greven, Germany
EUR 52,37
Quantity available: More than 20 available
Condition: New. This is a print-on-demand item and will be printed for you after your order.
Bookseller: Revaluation Books, Exeter, United Kingdom
EUR 65,10
Quantity available: 1 available
Paperback. Condition: Brand New. 487 pages. 10.00 x 7.01 x 0.98 inches. In stock. This item is printed on demand.
Bookseller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
EUR 64,19
Quantity available: 2 available
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 488 pp. English.
Bookseller: AHA-BUCH GmbH, Einbeck, Germany
EUR 64,96
Quantity available: 1 available
Paperback. Condition: New. Printed after ordering (new stock).