LEARN APACHE SPARK: Build Scalable Pipelines with PySpark and Optimization
This book is designed for students, developers, data engineers, data scientists, and technology professionals who want to master Apache Spark in practice, across corporate environments, public clouds, and modern integrations.
You will learn to build scalable pipelines for large-scale data processing, orchestrating distributed workloads on AWS EMR, Databricks, Azure Synapse, and Google Cloud Dataproc. The content covers integration with Hadoop, Hive, Kafka, SQL, Delta Lake, MongoDB, and Python, as well as advanced techniques in tuning, job optimization, real-time analytics, machine learning with MLlib, and workflow automation.
Includes:
• Implementation of ETL and ELT pipelines with Spark SQL and DataFrames (see the sketch after this list)
• Streaming data processing and integration with Kafka and AWS Kinesis
• Optimization of distributed jobs, performance tuning, and use of Spark UI
• Integration of Spark with S3, data lakes, NoSQL stores, and relational databases
• Deployment on managed clusters in AWS, Azure, and Google Cloud
• Applied Machine Learning with MLlib, Delta Lake, and Databricks
• Automation of routines, monitoring, and scalability for Big Data
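To make the first item concrete, here is a minimal sketch of a batch ETL job using PySpark DataFrames and Spark SQL. The bucket paths, column names, and aggregation are hypothetical placeholders for illustration, not code taken from the book.

```python
# Minimal batch ETL sketch with PySpark DataFrames and Spark SQL.
# Paths, schema, and column names are assumed placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("etl-sketch")
    .getOrCreate()
)

# Extract: read raw CSV data (the S3 path is illustrative).
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3a://example-bucket/raw/orders/")
)

# Transform: filter and aggregate with the DataFrame API.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# The same transformation expressed in Spark SQL.
orders.createOrReplaceTempView("orders")
daily_revenue_sql = spark.sql("""
    SELECT to_date(created_at) AS order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY to_date(created_at)
""")

# Load: write the result as partitioned Parquet.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)

spark.stop()
```

Streaming workloads use the same DataFrame API through spark.readStream, so a transformation like the one above carries over to Kafka or Kinesis sources with few changes.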
By the end, you will master Apache Spark as a professional solution for data analysis, process automation, and machine learning in complex, high-performance environments.
Content reviewed by A.I. with technical supervision.
Keywords: apache spark, big data, pipelines, distributed processing, AWS EMR, Databricks, streaming, ETL, machine learning, cloud integration. Target roles: Google Data Engineer, AWS Data Analytics, Azure Data Engineer, Big Data Engineer, MLOps, DataOps Professional.
"Sinopsis" puede pertenecer a otra edición de este libro.
EUR 6,81 gastos de envío desde Estados Unidos de America a España
Destinos, gastos y plazos de envíoLibrería: California Books, Miami, FL, Estados Unidos de America
Condición: New. Print on Demand. Nº de ref. del artículo: I-9798289704603
Cantidad disponible: Más de 20 disponibles
Librería: Best Price, Torrance, CA, Estados Unidos de America
Condición: New. SUPER FAST SHIPPING. Nº de ref. del artículo: 9798289704603
Cantidad disponible: 2 disponibles
Librería: CitiRetail, Stevenage, Reino Unido
Paperback. Condición: new. Paperback. LEARN APACHE SPARK Build Scalable Pipelines with PySpark and OptimizationThis book is designed for students, developers, data engineers, data scientists, and technology professionals who want to master Apache Spark in practice, in corporate environments, public cloud, and modern integrations.You will learn to build scalable pipelines for large-scale data processing, orchestrating distributed workloads with AWS EMR, Databricks, Azure Synapse, and Google Cloud Dataproc. The content covers integration with Hadoop, Hive, Kafka, SQL, Delta Lake, MongoDB, and Python, as well as advanced techniques in tuning, job optimization, real-time analysis, machine learning with MLlib, and workflow automation.Includes: - Implementation of ETL and ELT pipelines with Spark SQL and DataFrames- Data streaming processing and integration with Kafka and AWS Kinesis- Optimization of distributed jobs, performance tuning, and use of Spark UI- Integration of Spark with S3, Data Lake, NoSQL, and relational databases- Deployment on managed clusters in AWS, Azure, and Google Cloud- Applied Machine Learning with MLlib, Delta Lake, and Databricks- Automation of routines, monitoring, and scalability for Big DataBy the end, you will master Apache Spark as a professional solution for data analysis, process automation, and machine learning in complex, high-performance environments.apache spark, big data, pipelines, distributed processing, aws emr, databricks, streaming, etl, machine learning, cloud integration Google Data Engineer, AWS Data Analytics, Azure Data Engineer, Big Data Engineer, MLOps, DataOps Professional Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability. Nº de ref. del artículo: 9798289704603
Cantidad disponible: 1 disponibles
Librería: Grand Eagle Retail, Mason, OH, Estados Unidos de America
Paperback. Condición: new. Paperback. LEARN APACHE SPARK Build Scalable Pipelines with PySpark and OptimizationThis book is designed for students, developers, data engineers, data scientists, and technology professionals who want to master Apache Spark in practice, in corporate environments, public cloud, and modern integrations.You will learn to build scalable pipelines for large-scale data processing, orchestrating distributed workloads with AWS EMR, Databricks, Azure Synapse, and Google Cloud Dataproc. The content covers integration with Hadoop, Hive, Kafka, SQL, Delta Lake, MongoDB, and Python, as well as advanced techniques in tuning, job optimization, real-time analysis, machine learning with MLlib, and workflow automation.Includes: - Implementation of ETL and ELT pipelines with Spark SQL and DataFrames- Data streaming processing and integration with Kafka and AWS Kinesis- Optimization of distributed jobs, performance tuning, and use of Spark UI- Integration of Spark with S3, Data Lake, NoSQL, and relational databases- Deployment on managed clusters in AWS, Azure, and Google Cloud- Applied Machine Learning with MLlib, Delta Lake, and Databricks- Automation of routines, monitoring, and scalability for Big DataBy the end, you will master Apache Spark as a professional solution for data analysis, process automation, and machine learning in complex, high-performance environments.apache spark, big data, pipelines, distributed processing, aws emr, databricks, streaming, etl, machine learning, cloud integration Google Data Engineer, AWS Data Analytics, Azure Data Engineer, Big Data Engineer, MLOps, DataOps Professional Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Nº de ref. del artículo: 9798289704603
Cantidad disponible: 1 disponibles