Big Data & Hadoop (Developer)

585 Learners

Logixpedia’s Big Data Hadoop online training is designed to help you become a top Hadoop developer. The course includes:

24 hours of online live instructor-led classes. Depending on the batch you select, this can be:
i) 8 live classes of 3 hours each over weekends, or ii) 12 live classes of 2 hours each on weekdays.
Personal assistance and installation guides for setting up the required environment for assignments and projects.
A live project involving the creation, setup, and troubleshooting of a multi-node Apache Hadoop cluster: setting up and configuring Pig and Hive, configuring Ganglia, and troubleshooting common cluster problems.
Lifetime access to the learning management system, including class recordings, presentations, sample code, and projects.
Lifetime access to the support team (available 24/7) for resolving queries during and after course completion.
Towards the end of the course, you will work on a project. Logixpedia certifies you in the Hadoop Administration course based on this project, which is reviewed by our expert panel. Anyone certified by Logixpedia will be able to demonstrate practical expertise in Hadoop Administration.
Enroll Now

Big Data & Hadoop (Administrator)

Course Details
Logixpedia’s Big Data Hadoop online training is designed to help you become a top Hadoop developer. During this course, our expert instructors will help you:

1. Master the concepts of HDFS and the MapReduce framework

2. Understand Hadoop 2.x Architecture

3. Setup Hadoop Cluster and write Complex MapReduce programs

4. Learn data loading techniques using Sqoop and Flume

5. Perform data analytics using Pig, Hive and YARN

6. Implement HBase and MapReduce integration

7. Implement Advanced Usage and Indexing

8. Schedule jobs using Oozie

9. Implement best practices for Hadoop development

10. Work on a real life Project on Big Data Analytics

11. Understand Spark and its Ecosystem

12. Learn how to work with RDDs in Spark

Who should go for this Hadoop Course?

The market for Big Data analytics is growing across the world, and this strong growth translates into a great opportunity for IT professionals. Here are a few professional IT groups that are benefiting from a move into the Big Data domain:

1. Developers and Architects

2. BI /ETL/DW professionals

3. Senior IT Professionals

4. Testing professionals

5. Mainframe professionals

6. Freshers

Why learn Big Data and Hadoop?

Big Data & Hadoop Market is expected to reach $99.31B by 2022 growing at a CAGR of 42.1% from 2015

Forbes

McKinsey predicts that by 2018 there will be a shortage of 1.5M data experts

Mckinsey Report

Avg salary of Big Data Hadoop Developers is $135k

Indeed.com Salary Data

What are the pre-requisites for the Hadoop Course?

As such, there are no prerequisites for learning Hadoop. Knowledge of Core Java and SQL will be beneficial, but is certainly not mandatory. If you wish to brush up your Core Java skills, Logixpedia offers a complimentary self-paced course, "Java Essentials for Hadoop", when you enroll in the Big Data Hadoop certification course.

How will I do practicals in Online Training?

For practicals, we will help you set up Logixpedia's Virtual Machine on your system with local access. A detailed installation guide for setting up the environment is available in the LMS. In case your system doesn't meet the prerequisites (e.g., 4 GB RAM), you will be given remote access to the Logixpedia cluster for the practicals. For any doubt, the 24/7 support team will promptly assist you. The Logixpedia Virtual Machine can be installed on a Mac or Windows machine, and VM access continues even after the course is over, so that you can keep practicing.

1. Understanding Big Data and Hadoop

Learning Objectives - In this module, you will understand Big Data, the limitations of existing solutions for the Big Data problem, how Hadoop solves the Big Data problem, the common Hadoop ecosystem components, Hadoop Architecture, HDFS, the anatomy of a file write and read, and how the MapReduce framework works.

Topics - Big Data, Limitations and Solutions of Existing Data Analytics Architecture, Hadoop, Hadoop Features, Hadoop Ecosystem, Hadoop 2.x Core Components, Hadoop Storage: HDFS, Hadoop Processing: MapReduce Framework, Different Hadoop Distributions.
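To make the anatomy of a file write and read concrete, here is a minimal Java sketch using the HDFS FileSystem API. The path /user/demo/hello.txt is illustrative, and the Configuration is assumed to pick up a running cluster's fs.defaultFS from the classpath.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml; fs.defaultFS decides the cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/hello.txt");  // illustrative path

        // Write: the client streams data to a pipeline of DataNodes chosen
        // by the NameNode; replication happens along that pipeline.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeBytes("Hello, HDFS!\n");
        }

        // Read: the client asks the NameNode for block locations, then
        // streams the bytes directly from the nearest DataNode.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(file)))) {
            System.out.println(in.readLine());
        }
    }
}
```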

2. Hadoop Architecture and HDFS

Learning Objectives - In this module, you will learn the Hadoop Cluster Architecture, the important configuration files in a Hadoop cluster, data loading techniques, and how to set up single-node and multi-node Hadoop clusters.

Topics - Hadoop 2.x Cluster Architecture - Federation and High Availability, A Typical Production Hadoop Cluster, Hadoop Cluster Modes, Common Hadoop Shell Commands, Hadoop 2.x Configuration Files, Single-Node and Multi-Node Cluster Setup, Hadoop Administration.
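The common shell commands covered here have programmatic equivalents. The sketch below, with hypothetical paths, mirrors hadoop fs -mkdir, -put and -ls through the Java FileSystem API.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Programmatic equivalents of common shell commands such as
// "hadoop fs -mkdir", "hadoop fs -put" and "hadoop fs -ls".
public class HdfsShellEquivalents {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path dir = new Path("/user/demo/input");          // hypothetical path
        fs.mkdirs(dir);                                   // hadoop fs -mkdir -p
        fs.copyFromLocalFile(new Path("data.txt"), dir);  // hadoop fs -put

        for (FileStatus status : fs.listStatus(dir)) {    // hadoop fs -ls
            System.out.printf("%s\t%d bytes\treplication=%d%n",
                status.getPath(), status.getLen(), status.getReplication());
        }
    }
}
```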

3. Hadoop MapReduce Framework

Learning Objectives - In this module, you will understand the Hadoop MapReduce framework and how MapReduce works on data stored in HDFS. You will understand concepts like Input Splits in MapReduce, Combiner & Partitioner, and see demos on MapReduce using different data sets.

Topics - Hadoop 2.x MapReduce Components, YARN MR Application Execution Flow, YARN Workflow, Anatomy of a MapReduce Program, Demo on MapReduce, Input Splits, Relation between Input Splits and HDFS Blocks, MapReduce Use Cases, Traditional Way vs. MapReduce Way, Why MapReduce, Hadoop 2.x MapReduce Architecture, MapReduce: Combiner & Partitioner, Demo on De-identifying a Health Care Data Set, Demo on a Weather Data Set.
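For reference, the canonical word-count job below shows a Mapper, a Reducer, and a Combiner (the Reducer reused on the map side) wired together in a Hadoop 2.x driver; input and output paths come from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: called once per input record; emits (word, 1) pairs.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts per word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // mini-reduce on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```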

4. Backup, Recovery and Maintenance

Learning Objectives - In this module, you will understand day-to-day cluster administration tasks: balancing data in the cluster, protecting data by enabling trash, attempting a manual failover, creating backups within or across clusters, safeguarding your metadata, performing metadata recovery or a manual NameNode failover, restricting HDFS usage in terms of file count and volume of data, and more.

Topics - Key admin commands like Balancer, Trash, Import Checkpoint, and DistCp; data backup and recovery; enabling trash; namespace count quotas and space quotas; manual failover and metadata recovery.
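As one example of protecting data by enabling trash, the sketch below (with a hypothetical path) uses the org.apache.hadoop.fs.Trash API, which honors the fs.trash.interval setting instead of removing blocks immediately.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

// Deleting through the Trash API instead of fs.delete() gives users a
// recovery window (fs.trash.interval, in minutes, set in core-site.xml).
public class SafeDelete {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path victim = new Path("/user/demo/old-data");  // hypothetical path

        // Moves the path under the user's .Trash/Current directory rather
        // than removing the blocks immediately; returns false if trash is off.
        boolean moved = Trash.moveToAppropriateTrash(fs, victim, conf);
        System.out.println(moved ? "Moved to trash" : "Trash disabled; not moved");
    }
}
```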

5. Advanced MapReduce

Learning Objectives - In this module, you will learn advanced MapReduce concepts such as Counters, Distributed Cache, MRUnit, Reduce Join, Custom Input Format, Sequence Input Format and XML parsing.

Topics - Counters, Distributed Cache, MRUnit, Reduce Join, Custom Input Format, Sequence Input Format, XML File Parsing using MapReduce.
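To illustrate Counters, here is a small hypothetical mapper that tallies good and malformed records via enum-based counters instead of failing the task on bad input; the three-field record format is an assumption for the example.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Counters report application-level statistics (e.g. how many records
// were malformed) that the framework aggregates across all tasks.
public class ValidatingMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

    // Enum-based counters appear in the job's counter report by group/name.
    enum RecordQuality { GOOD, MALFORMED }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical format: records should have 3 comma-separated fields.
        String[] fields = value.toString().split(",");
        if (fields.length == 3) {
            context.getCounter(RecordQuality.GOOD).increment(1);
            context.write(value, NullWritable.get());
        } else {
            // Skip bad records but keep a tally instead of failing the task.
            context.getCounter(RecordQuality.MALFORMED).increment(1);
        }
    }
}
```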

6. Pig

Learning Objectives - In this module, you will learn Pig, the types of use cases where Pig fits, the tight coupling between Pig and MapReduce, Pig Latin scripting, Pig running modes, Pig UDFs, Pig Streaming, and testing Pig scripts. Includes a demo on a healthcare data set.

Topics - About Pig, MapReduce vs. Pig, Pig Use Cases, Programming Structure in Pig, Pig Running Modes, Pig Components, Pig Execution, Pig Latin Program, Data Models in Pig, Pig Data Types, Shell and Utility Commands, Pig Latin: Relational Operators, File Loaders, Group Operator, COGROUP Operator, Joins and COGROUP, Union, Diagnostic Operators, Specialized Joins in Pig, Built-in Functions (Eval Functions, Load and Store Functions, Math Functions, String Functions, Date Functions), Pig UDF, Piggybank, Parameter Substitution (Pig Macros and Pig Parameter Substitution), Pig Streaming, Testing Pig Scripts with PigUnit, Aviation Use Case in Pig, Pig Demo on a Healthcare Data Set.
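As a taste of Pig UDFs, the sketch below is a minimal Java EvalFunc that upper-cases a string field; the package name and jar name in the comments are illustrative.

```java
import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A trivial Pig EvalFunc UDF: upper-cases its single string argument.
// Registered in a Pig Latin script (names are illustrative) with:
//   REGISTER myudfs.jar;
//   DEFINE UPPER com.example.Upper();
public class Upper extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;  // Pig convention: null in, null out
        }
        return ((String) input.get(0)).toUpperCase();
    }
}
```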

7. Hive

Learning Objectives - This module will help you understand Hive concepts, Hive data types, loading and querying data in Hive, running Hive scripts, and Hive UDFs.

Topics - Hive Background, Hive Use Case, About Hive, Hive vs. Pig, Hive Architecture and Components, Metastore in Hive, Limitations of Hive, Comparison with Traditional Database, Hive Data Types and Data Models, Partitions and Buckets, Hive Tables (Managed Tables and External Tables), Importing Data, Querying Data, Managing Outputs, Hive Script, Hive UDF, Retail Use Case in Hive, Hive Demo on a Healthcare Data Set.
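To show the shape of a Hive UDF, here is a minimal Java example based on the classic org.apache.hadoop.hive.ql.exec.UDF class; the function, jar, and table names in the usage comments are hypothetical.

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A simple Hive UDF: strips leading/trailing whitespace from a string column.
// Usage from the Hive CLI (names are illustrative):
//   ADD JAR hive-udfs.jar;
//   CREATE TEMPORARY FUNCTION trim_ws AS 'com.example.TrimWhitespace';
//   SELECT trim_ws(name) FROM patients;
public class TrimWhitespace extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;  // propagate NULLs like the built-in functions do
        }
        return new Text(input.toString().trim());
    }
}
```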

8. Advanced Hive and HBase

Learning Objectives - In this module, you will understand advanced Hive concepts such as UDFs, dynamic partitioning, Hive indexes and views, and optimizations in Hive. You will also acquire in-depth knowledge of HBase, HBase Architecture, its running modes and its components.

Topics - Hive QL: Joining Tables, Dynamic Partitioning, Custom Map/Reduce Scripts, Hive Indexes and Views, Hive Query Optimizers, Hive: Thrift Server, User Defined Functions, HBase: Introduction to NoSQL Databases and HBase, HBase vs. RDBMS, HBase Components, HBase Architecture, Run Modes & Configuration, HBase Cluster Deployment.
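A first taste of the HBase client API: the sketch below writes a cell and reads it back by row key, assuming a table named patients with a column family info already exists.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Basic HBase client usage: write a cell, then read it back by row key.
// Table "patients" and its column family "info" are assumed to exist.
public class HBaseBasics {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("patients"))) {

            // Put: row key -> column family:qualifier -> value
            Put put = new Put(Bytes.toBytes("row-001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"),
                          Bytes.toBytes("Jane Doe"));
            table.put(put);

            // Get: fetch the row back and extract the cell we just wrote
            Result result = table.get(new Get(Bytes.toBytes("row-001")));
            byte[] name = result.getValue(Bytes.toBytes("info"),
                                          Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(name));
        }
    }
}
```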

9. Advanced HBase

Learning Objectives - This module will cover advanced HBase concepts. We will see demos on bulk loading and filters. You will also learn what ZooKeeper is all about, how it helps in monitoring a cluster, and why HBase uses ZooKeeper.

Topics - HBase Data Model, HBase Shell, HBase Client API, Data Loading Techniques, ZooKeeper Data Model, ZooKeeper Service, Demos on Bulk Loading, Getting and Inserting Data, Filters in HBase.
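To illustrate HBase filters, the sketch below (using the HBase 2.x client API, with hypothetical table and column names) runs a scan with a SingleColumnValueFilter so matching happens server-side rather than on the client.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Server-side filtering: only rows whose info:city cell equals "Delhi"
// are shipped back to the client, instead of scanning everything locally.
public class FilteredScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("patients"))) {

            Scan scan = new Scan();
            scan.setFilter(new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("city"),
                    CompareOperator.EQUAL, Bytes.toBytes("Delhi")));

            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    System.out.println(Bytes.toString(row.getRow()));
                }
            }
        }
    }
}
```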

10. Processing Distributed Data with Apache Spark

Learning Objectives - In this module, you will learn about the Spark ecosystem and its components, how Scala is used in Spark, and the SparkContext. You will learn how to work with RDDs in Spark. There will be a demo on running an application on a Spark cluster and comparing the performance of MapReduce and Spark.

Topics - What is Apache Spark, Spark Ecosystem, Spark Components, History of Spark and Spark Versions/Releases, Spark as a Polyglot, What is Scala?, Why Scala?, SparkContext, RDD.
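For comparison with the MapReduce word count earlier, here is the same computation expressed as RDD transformations using Spark's Java API; input and output paths are taken from the command line.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

// The same word count as the MapReduce example, expressed as RDD
// transformations; Spark keeps intermediate data in memory instead of HDFS.
public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("word-count");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile(args[0]);      // lazy: nothing runs yet
            JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);                    // shuffle, like MR's reduce
            counts.saveAsTextFile(args[1]);                    // action: triggers the job
        }
    }
}
```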

11. Oozie and Hadoop Project

Learning Objectives - In this module, you will understand how multiple Hadoop ecosystem components work together in a Hadoop implementation to solve Big Data problems. We will discuss multiple data sets and the specifications of the project. This module will also cover a Flume & Sqoop demo, the Apache Oozie Workflow Scheduler for Hadoop jobs, and Hadoop-Talend integration.

Topics - Flume and Sqoop Demo, Oozie, Oozie Components, Oozie Workflow, Scheduling with Oozie, Demo on Oozie Workflow, Oozie Coordinator, Oozie Commands, Oozie Web Console, Oozie for MapReduce, Pig, Hive, and Sqoop, Combined Flow of MR, Pig, and Hive in Oozie, Hadoop Project Demo, Hadoop Integration with Talend.
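Besides the CLI and web console, Oozie workflows can also be driven from Java. The sketch below submits and polls a workflow via the Oozie client API; the server URL, HDFS paths, and addresses are placeholders.

```java
import java.util.Properties;

import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

// Submitting a workflow through the Oozie Java client; the URL, HDFS
// application path and NameNode/ResourceManager addresses are placeholders.
public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

        // Equivalent of job.properties when using the Oozie CLI
        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://localhost:8020/user/demo/workflow");
        conf.setProperty("nameNode", "hdfs://localhost:8020");
        conf.setProperty("jobTracker", "localhost:8032");
        conf.setProperty("queueName", "default");

        String jobId = oozie.run(conf);  // submit and start the workflow
        System.out.println("Started workflow " + jobId);

        // Poll until the workflow leaves the RUNNING state
        while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
            Thread.sleep(5_000);
        }
        System.out.println("Final status: " + oozie.getJobInfo(jobId).getStatus());
    }
}
```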


Logixpedia Certification Process:

Once you have successfully completed the project (reviewed by a Logixpedia expert), you will be awarded Logixpedia's Big Data and Hadoop certificate. Logixpedia certification is recognized in the industry, and we are the preferred training partner for many MNCs, e.g., Cisco, Ford, Mphasis, Nokia, Wipro, Accenture, IBM, Philips, Citi, Mindtree, BNY Mellon, etc.