Learn how to configure your Hadoop cluster to run optimal MapReduce jobs
About This Book
- Optimize your MapReduce job performance
- Identify your Hadoop cluster's weaknesses
- Tune your MapReduce configuration
Who This Book Is For
If you are a Hadoop administrator, developer, MapReduce user, or beginner, this book is the best choice available if you wish to optimize your clusters and applications. Prior knowledge of creating MapReduce applications is not necessary, but will help you better understand the concepts and the snippets of MapReduce template code.
What You Will Learn
- Learn about the factors that affect MapReduce performance
- Utilize the Hadoop MapReduce performance counters to identify resource bottlenecks
- Size your Hadoop cluster's nodes
- Set the number of mappers and reducers correctly
- Optimize mapper and reducer task throughput and code size using compression and Combiners
- Understand the various tuning properties and best practices to optimize clusters
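As a taste of the tuning properties covered, a Hadoop 2.x mapred-site.xml fragment might look like the following sketch. The property names are standard Hadoop settings; the values shown are illustrative starting points and depend on your cluster's hardware, not recommendations for every deployment:

```xml
<configuration>
  <!-- Number of reduce tasks for a job (tune to cluster capacity) -->
  <property>
    <name>mapreduce.job.reduces</name>
    <value>8</value>
  </property>
  <!-- Compress intermediate map output to cut shuffle I/O -->
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <!-- Memory buffer for map-side sorting, in MB -->
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>256</value>
  </property>
</configuration>
```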
In Detail
MapReduce is the programming model that the Hadoop engine uses to distribute work across a cluster by processing smaller data sets in parallel. It is useful in a wide range of applications, including distributed pattern-based searching, distributed sorting, web link-graph reversal, per-host term vectors, web access log statistics, inverted index construction, document clustering, machine learning, and statistical machine translation.
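The map, shuffle, and reduce phases described above can be sketched in plain Python. This is a conceptual analogy of the data flow (word count, the canonical example), not Hadoop API code; the function names are illustrative:

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in an input record
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key
    return {key: sum(values) for key, values in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
intermediate = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(intermediate))
print(counts["the"])  # 2
```

In a real cluster, the map calls run in parallel on separate input splits and the shuffle moves data over the network, which is why the later chapters on sizing and compression matter.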
This book introduces you to advanced MapReduce concepts and teaches you everything from identifying the factors that affect MapReduce job performance to tuning the MapReduce configuration. Based on real-world experience, it walks you through the Hadoop MapReduce job performance optimization process in a number of clear and practical steps, helping you fully utilize your cluster's node resources to run MapReduce jobs optimally.
Starting with how MapReduce works and the factors that affect MapReduce performance, you will be given an overview of Hadoop metrics and several performance monitoring tools. Further on, you will explore performance counters that help you identify resource bottlenecks, check cluster health, and size your Hadoop cluster. You will also learn about optimizing map and reduce tasks by using Combiners and compression.
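The benefit of a Combiner mentioned above can be sketched in plain Python (again an illustrative analogy, not the Hadoop API): a combiner pre-aggregates map output on the map side, so fewer intermediate records cross the network during the shuffle:

```python
from collections import Counter

def mapper(line):
    # Map: emit a (word, 1) pair for each word
    return [(word, 1) for word in line.split()]

def combine(pairs):
    # Combiner: locally sum the counts per word on the map side,
    # shrinking the data sent over the network during the shuffle
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return list(totals.items())

line = "to be or not to be"
raw = mapper(line)        # 6 intermediate records
combined = combine(raw)   # 4 records: to=2, be=2, or=1, not=1
print(len(raw), len(combined))  # 6 4
```

Because word count's reduce function is commutative and associative, combining locally does not change the final result, only the volume shuffled.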
The book ends with best practices and recommendations on how to use your Hadoop cluster optimally.
Khaled Tannir has been working with computers since 1980. He began programming with the legendary Sinclair ZX81 and later with all of Commodore's home computers (VIC-20, Commodore 64, Commodore 128D, and Amiga 500). He has a Bachelor's degree in Electronics and a Master's degree in Systems Information Architecture, for which he graduated with a professional thesis, and he completed his education with a Research Master's degree.
He is a Microsoft Certified Solution Developer (MCSD) with more than twenty years of technical experience leading the development and implementation of software solutions and giving technical presentations. He works as an independent IT consultant and has worked as an infrastructure engineer, senior developer, and enterprise/solution architect for many companies in France and Canada.
With significant experience in Microsoft .NET/server and Oracle Java technologies, he has extensive skills in online/offline application design, system conversions, and multi-language applications for both the Internet and the desktop. He is always researching new technologies, learning about them, and looking for new adventures between France, North America, and the Middle East. He owns an IT and electronics laboratory with many servers, monitors, open electronics boards such as Arduino, Netduino, Raspberry Pi, and .NET Gadgeteer, and several smartphone devices based on the Windows Phone, Android, and iOS operating systems.
In 2012, he contributed to EGC 2012 (the International Complex Data Mining forum at Bordeaux University, France), where he presented his work on optimizing data distribution in a cloud computing environment in a workshop session. This work aims to define an approach to optimizing the use of data mining algorithms, such as k-means and Apriori, in a cloud computing environment. He is the author of RavenDB 2.x Beginner's Guide (Packt Publishing) and a technical reviewer for a Pentaho and MongoDB transformation and reporting book (Packt Publishing). He aims to obtain a PhD in Cloud Computing and Big Data, and wants to learn more and more about these technologies.
He enjoys taking landscape and night photos, travelling, playing video games, creating fun electronic gadgets with Arduino and .NET Gadgeteer, and, of course, spending time with his wife and family.
You can reach him at: contact@khaledtannir.net