Learn HDFS, ZooKeeper, Hive, HBase, NoSQL, Oozie, Flume, Sqoop, Spark, Spark RDD, Spark Streaming, Kafka, SparkR, SparkSQL, MLlib, and GraphX From Industry Experts
As humans, we are immersed in data in our everyday lives. According to IBM, the amount of data on the planet doubles every two years. The value that data holds can only be unlocked once we start to identify patterns and trends in it, and traditional computing approaches stop working when data grows this large.
There is massive growth in the Big Data space, and job opportunities are skyrocketing, making this the perfect time to launch your career in the field.
In this specialization, you will learn Hadoop and Spark to drive better business decisions and solve real-world problems.
This is the first course in the specialization. In this course, we start with an introduction to Big Data and then dive into Big Data ecosystem tools and technologies such as ZooKeeper, HDFS, YARN, MapReduce, Pig, Hive, HBase, NoSQL, Sqoop, Flume, and Oozie.
Each topic consists of high-quality videos, slides, hands-on assessments, quizzes and case studies to make learning effective and lasting. With this course, you also get access to a real-world production lab, so you learn by doing.
1.1 Big Data Introduction
1.2 Distributed systems
1.3 Big Data Use Cases
1.4 Various Solutions
1.5 Overview of Hadoop Ecosystem
1.6 Spark Ecosystem Walkthrough
1.7 Quiz
2.1 Understanding the CloudxLab
2.2 Getting Started - Hands on
2.3 Hadoop & Spark Hands-on
2.4 Quiz and Assessment
2.5 Basics of Linux - Quick Hands-On
2.6 Understanding Regular Expressions
2.7 Quiz and Assessment
2.8 Setting up VM (optional)
3.1 ZooKeeper - Race Condition
3.2 ZooKeeper - Deadlock
3.3 Hands-On
3.4 Quiz & Assessment
3.5 How Does Leader Election Happen? - Paxos Algorithm
3.6 Use cases
3.7 When not to use
3.8 Quiz & Assessment
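The race condition that distributed coordination guards against can be sketched in plain Python. This is only a single-machine analogy, not ZooKeeper itself: a `threading.Lock` plays the role that a ZooKeeper lock recipe plays across machines.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # without this lock, concurrent read-modify-write can race
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; often less without it
```

ZooKeeper solves the same read-modify-write problem when the "threads" are processes on different machines that share no memory.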
4.1 Why HDFS or Why not existing file systems?
4.2 HDFS - NameNode & DataNodes
4.3 Quiz
4.4 Advanced HDFS Concepts (HA, Federation)
4.5 Quiz
4.6 Hands-on with HDFS (Upload, Download, SetRep)
4.7 Quiz & Assessment
4.8 Data Locality (Rack Awareness)
5.1 YARN - Why not existing tools?
5.2 YARN - Evolution from MapReduce 1.0
5.3 Resource Management: YARN Architecture
5.4 Advanced Concepts - Speculative Execution
5.5 Quiz
6.1 MapReduce - Understanding Sorting
6.2 MapReduce - Overview
6.3 Quiz
6.4 Example 0 - Word Frequency Problem - Without MR
6.5 Example 1 - Only Mapper - Image Resizing
6.6 Example 2 - Word Frequency Problem
6.7 Example 3 - Temperature Problem
6.8 Example 4 - Multiple Reducer
6.9 Example 5 - Java MapReduce Walkthrough
6.10 Quiz
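The word-frequency examples above all follow the classic map → shuffle → reduce flow. A minimal pure-Python sketch of that model (an illustration only, not Hadoop's actual Java API):

```python
from collections import defaultdict

def mapper(line):
    # emit (word, 1) for every word, like a Hadoop map task
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    # group values by key, like the framework's shuffle & sort phase
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(word, counts):
    # one reduce call per key, like a Hadoop reduce task
    return word, sum(counts)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for line in lines for pair in mapper(line)]
counts = dict(reducer(w, c) for w, c in shuffle(mapped).items())
print(counts["the"])  # 3
```

In real Hadoop the map tasks, the shuffle, and the reduce tasks each run in parallel across machines; the data flow is the same.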
7.1 Writing MapReduce Code Using Java
7.2 Building MapReduce project using Apache Ant
7.3 Concept - Associative & Commutative
7.4 Quiz
7.5 Example 8 - Combiner
7.6 Example 9 - Hadoop Streaming
7.7 Example 10 - Adv. Problem Solving - Anagrams
7.8 Example 11 - Adv. Problem Solving - Same DNA
7.9 Example 12 - Adv. Problem Solving - Similar DNA
7.10 Example 13 - Joins - Voting
7.11 Limitations of MapReduce
7.12 Quiz
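A combiner (topic 7.5) is only safe when the reduce operation is associative and commutative, which is the point of topic 7.3. A small sketch of why summation combines correctly while naively averaging partial averages does not:

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Two unequal "mapper partitions" of the same data.
partition_a, partition_b = data[:3], data[3:]

# Sum is associative and commutative, so combiner-side partial sums
# followed by a reducer-side sum give the same answer as one global sum.
combined = sum([sum(partition_a), sum(partition_b)])
assert combined == sum(data)

# Average is NOT safe to combine this way: the average of partial
# averages differs from the true average when partitions are unequal.
def avg(xs):
    return sum(xs) / len(xs)

naive = avg([avg(partition_a), avg(partition_b)])
true_avg = avg(data)
print(naive == true_avg)  # False
```

This is why averaging jobs typically combine (sum, count) pairs instead, and divide only in the final reducer.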
8.1 Pig - Introduction
8.2 Pig - Modes
8.3 Getting Started
8.4 Example - NYSE Stock Exchange
8.5 Concept - Lazy Evaluation
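Pig's lazy evaluation means statements only build a logical plan; nothing executes until you ask for output with DUMP or STORE. Python generators show the same pattern (an analogy, not Pig itself):

```python
def load(lines):
    for line in lines:
        print("reading", line)  # side effect reveals when work actually happens
        yield line

data = ["a,1", "b,2"]
plan = (line.split(",") for line in load(data))  # nothing printed yet: plan only

rows = list(plan)  # like Pig's DUMP: now the whole pipeline actually runs
print(rows)
```

Deferring execution lets Pig (and, later in this specialization, Spark) optimize the whole plan before touching any data.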
9.1 Hive - Introduction
9.2 Hive - Data Types
9.3 Getting Started
9.4 Loading Data in Hive (Tables)
9.5 Example: Movielens Data Processing
9.6 Advanced Concepts: Views
9.7 Connecting Tableau and HiveServer 2
9.8 Connecting Microsoft Excel and HiveServer 2
9.9 Project: Sentiment Analysis of Twitter Data
9.10 Advanced - Partition Tables
9.11 Understanding HCatalog & Impala
9.12 Quiz
10.1 NoSQL - Scaling Out / Up
10.2 NoSQL - ACID Properties and RDBMS Story
10.3 CAP Theorem
10.4 HBase Architecture - Region Servers, etc.
10.5 HBase Data Model - Column Family Orientation
10.6 Getting Started - Creating Tables, Adding Data
10.7 Advanced Example - Google Links Storage
10.8 Concept - Bloom Filter
10.9 Comparison of NoSQL Databases
10.10 Quiz
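HBase uses Bloom filters so a read can skip store files that cannot contain the requested row key. A toy sketch of the data structure (parameters and class name are illustrative, not HBase's implementation):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: no false negatives, small false-positive rate."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # derive k bit positions from k salted hashes of the item
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means "definitely absent"; True means "possibly present"
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("row-key-42")
print(bf.might_contain("row-key-42"))   # True (never a false negative)
print(bf.might_contain("row-key-999"))  # almost certainly False
```

The trade-off: a few bits per key buy the ability to answer "definitely not here" without reading the file at all.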
11.1 Sqoop - Introduction
11.2 Sqoop Import - MySQL to HDFS
11.3 Exporting to MySQL from HDFS
11.4 Concept - Unbounded Dataset Processing or Stream Processing
11.5 Flume Overview: Agents - Source, Sink, Channel
11.6 Example 1 - Data from a Local Network Service into HDFS
11.7 Example 2 - Extracting Twitter Data
11.8 Quiz
11.9 Example 3 - Creating workflow with Oozie
This is the second course in the specialization. In this course, we start with an introduction to Big Data and Spark and then dive into Scala and Spark concepts like RDDs, transformations, actions, persistence and deploying Spark applications. We then cover Spark Streaming, Kafka, and various data formats like JSON, XML, Avro, Parquet and Protocol Buffers. We conclude the course with very important topics such as DataFrames, SparkSQL, SparkR, MLlib and GraphX.
Each topic consists of high-quality videos, slides, hands-on assessments, quizzes and case studies to make learning effective and lasting. With this course, you also get access to a real-world production lab, so you learn by doing.
1.1 Apache Spark ecosystem walkthrough
1.2 Spark Introduction - Why Spark?
1.3 Quiz
2.1 Scala - Quick Introduction - Access Scala on CloudxLab
2.2 Scala - Quick Introduction - Variables and Methods
2.3 Getting Started: Interactive, Compilation, SBT
2.4 Types, Variables & Values
2.5 Functions
2.6 Collections
2.7 Classes
2.8 Parameters
2.9 More Features
2.10 Quiz and Assessment
3.1 Apache Spark ecosystem walkthrough
3.2 Spark Introduction - Why Spark?
3.3 Using the Spark Shell on CloudxLab
3.4 Example 1 - Performing Word Count
3.5 Understanding Spark Cluster Modes on YARN
3.6 RDDs (Resilient Distributed Datasets)
3.7 General RDD Operations: Transformations & Actions
3.8 RDD lineage
3.9 RDD Persistence Overview
3.10 Distributed Persistence
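The key idea behind RDD operations is that transformations (map, filter) are lazy and only extend the lineage, while actions (collect, count) trigger actual execution. Python generators can simulate this behavior (a conceptual sketch, not the real RDD API):

```python
# Simulating Spark's lazy transformations with generators.
numbers = range(1, 11)

evaluated = []
def traced_double(x):
    evaluated.append(x)  # record when computation actually happens
    return x * 2

doubled = (traced_double(x) for x in numbers)  # "transformation": lazy
big = (x for x in doubled if x > 10)           # chained transformation: still lazy
assert evaluated == []                         # nothing has run yet

result = list(big)  # "action": triggers the whole chain
print(result)       # [12, 14, 16, 18, 20]
```

In Spark, this laziness is what allows the scheduler to see the full lineage, pipeline stages together, and recompute lost partitions.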
4.1 Creating the SparkContext
4.2 Building a Spark Application (Scala, Java, Python)
4.3 The Spark Application Web UI
4.4 Configuring Spark Properties
4.5 Running Spark on Cluster
4.6 RDD Partitions
4.7 Executing Parallel Operations
4.8 Stages and Tasks
5.1 Common Spark Use Cases
5.2 Example 1 - Data Cleaning (Movielens)
5.3 Example 2 - Understanding Spark Streaming
5.4 Understanding Kafka
5.5 Example 3 - Spark Streaming from Kafka
5.6 Iterative Algorithms in Spark
5.7 Project: Real-time analytics of orders in an e-commerce company
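Spark Streaming follows a micro-batch model: the incoming stream is chopped into small batches, each processed like a regular RDD, with running state carried across batches (as `updateStateByKey` does). A plain-Python simulation of that idea (the event format and batches are made up for illustration):

```python
from collections import Counter

# Each inner list is one micro-batch, e.g. one 2-second window of Kafka events.
stream = [
    ["order:book", "order:pen"],  # batch 1
    ["order:book"],               # batch 2
    ["order:pen", "order:pen"],   # batch 3
]

running_totals = Counter()  # state carried across batches
for batch in stream:
    # process the batch like a small RDD, then fold results into the state
    running_totals.update(event.split(":")[1] for event in batch)
    print(dict(running_totals))
```

The e-commerce project in this chapter builds on exactly this pattern, with Kafka supplying the batches and a dashboard consuming the running totals.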
6.1 InputFormat and InputSplit
6.2 JSON
6.3 XML
6.4 AVRO
6.5 How to store many small files - SequenceFile?
6.6 Parquet
6.7 Protocol Buffers
6.8 Comparing Compressions
6.9 Understanding Row Oriented and Column Oriented Formats - RCFile?
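The row-oriented versus column-oriented distinction can be shown with a tiny table (illustrative layout only; Parquet and RCFile add compression and encoding on top):

```python
# Row-oriented: records stored together, as in CSV or Avro files.
rows = [
    ("alice", 34, "IN"),
    ("bob",   29, "US"),
    ("carol", 41, "UK"),
]

# Column-oriented: one array per column, as Parquet or RCFile lay data out.
columns = {
    "name":    [r[0] for r in rows],
    "age":     [r[1] for r in rows],
    "country": [r[2] for r in rows],
}

# An analytic query touching one column reads far less data in columnar
# form, and same-typed values compress much better side by side.
average_age = sum(columns["age"]) / len(columns["age"])
print(average_age)
```

Row formats win for whole-record reads and writes; column formats win for scans that touch a few columns of many rows.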
7.1 Spark SQL - Introduction
7.2 Spark SQL - Dataframe Introduction
7.3 Transforming and Querying DataFrames
7.4 Saving DataFrames
7.5 DataFrames and RDDs
7.6 Comparing Spark SQL, Impala, and Hive-on-Spark
8.1 Machine Learning Introduction
8.2 Applications Of Machine Learning
8.3 MLlib Example: k-means
8.4 SparkR Example
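The k-means algorithm that MLlib implements at scale alternates between assigning points to their nearest center and moving each center to its cluster's mean. A tiny 1-D sketch of that loop (not MLlib's API; data and parameters are made up):

```python
import random

def kmeans_1d(points, k=2, iters=10, seed=0):
    """Tiny 1-D k-means: assign points to nearest center, recompute means."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]  # update step
                   for i, c in enumerate(clusters)]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centers = kmeans_1d(points)
print(centers)  # converges to roughly [1.0, 9.5] for these two clear clusters
```

MLlib runs the same two steps, but with the assignment step distributed across the partitions of an RDD.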
1. Sentiment analysis of "Iron Man 3" movie using Hive and visualizing the sentiment data using BI tools such as Tableau
2. Process the NYSE (New York Stock Exchange) data using Hive for various insights
3. Analyze MovieLens data using Hive
4. Generate movie recommendations using Spark MLlib
5. Churn through the logs of the NASA Kennedy Space Center WWW server using Spark to find useful business and DevOps metrics
6. Write an end-to-end Spark application, starting from writing code on your local machine to deploying it to the cluster
7. Build real-time analytics dashboard for an e-commerce company using Apache Spark, Kafka, Spark Streaming, Node.js, Socket.IO and Highcharts
Our Specialization is exhaustive, and the certificate awarded by us is proof that you have taken a big leap in the Big Data domain.
The knowledge you have gained from working on projects, videos, quizzes, hands-on assessments and case studies gives you a competitive edge.
Highlight your new skills on your resume, LinkedIn, Facebook and Twitter. Tell your friends and colleagues about it.
It is a self-paced course. You will get access to videos, quizzes, hands-on assessments and projects. If you have any doubts during your learning journey, you can post them on the discussion forum, where our experts and community will assist you.
Basics of SQL. You should know the basics of SQL and databases. If you understand filters in SQL, you will be able to follow the course.
Basic programming know-how. If you understand 'loops' in any programming language, and if you are able to create a directory and see what's inside a file from the command line, you are good to grasp the concepts of this course even if you have not really touched programming for the last 10 years! In addition, we will be providing video classes on the basics of Python and Scala.
The tools and components available in the cluster include Hadoop, Spark, Kafka, Hive, Pig, HBase, Oozie, ZooKeeper, Flume, Sqoop, Mahout, R, Linux, Python, Scala, MongoDB, NumPy, SciPy, Pandas, Scikit-learn etc. Again, if you are looking for other tools please contact us at reachus@cloudxlab.com.
If you are unhappy with the product for any reason, let us know within 7 days of purchasing or upgrading your account, and we'll cancel your account and issue a full refund. Please contact us at reachus@cloudxlab.com to request a refund within the stipulated time. We will be sorry to see you go though!
No, we will provide you with access to our online lab and BootML so that you do not have to install anything on your local machine.
We have created a set of Guided Projects on our platform. You may complete these guided projects and earn the certificate for free. Check it out here.
Have more questions? Please contact us at reachus@cloudxlab.com
I started learning 3 months ago and I really gained a lot of information and practical experience. I completed the “Big Data with Spark” course and the learning journey really exceeded my expectations.
The course structure and topics were great, well organized and comprehensive, even the basics of Linux were covered in a very simple way. There were always exercises and hands-on that build better understanding, also the lab environment and provided online tools were great help and let you practice everything without having to install anything on your PC except the web browser.
In addition, for the live sessions, it was really a joy attending them each weekend, our instructor “Sandeep Giri”, besides his great experience and knowledge, he was generous, helpful and patient answering all attendees questions in such a way that he could go for more examples and hands-on or even searching the documentation and try new things, I gained much from other attendees’ questions and the way Sandeep responded to them.
This was a great experience having this course and I’m going for more courses in Big Data and Machine Learning with CloudxLab and I recommend it for all my friends and colleagues who look for better learning.
This course is suitable for everyone. Me being a product manager, I had not done hands-on coding for quite some time. Python was completely new to me. However, Sandeep Giri gave us a crash course to Python and then introduced us to Machine Learning. Also, the CloudxLab’s environment was very useful to just log in and start practising coding and playing with things learnt. A good mix of theory and practical exercises, and specifically the sequence of starting straight away with a project and then going deeper, was a very good way of teaching. I would recommend this course to all.
Must-have for practicing and perfecting Hadoop. To set it up on a PC you need a very high-end configuration, and it will be a pseudo-node setup. For better understanding I recommend CloudxLab.
Machine learning courses, especially the Artificial Intelligence for Managers course, are excellent on CloudxLab. I have attended some of the courses and was able to understand them, as Sandeep Giri sir taught the AI course from scratch and related it to our day-to-day life…
He even takes free sessions to help students and provides career guidance.
His courses are worth it, and even just by watching his YouTube videos anyone can easily crack an AI interview.
They are great. They take care of all the Big Data technologies (Hadoop, Spark, Hive, etc.) so you do not have to worry about installing and running them correctly on your PC. Plus, they have fantastic customer support. Even when I have had problems debugging my own programs, they have answered me with the correct solution in a few hours, and all of this for a more than reasonable price. I personally recommend it to everyone :)