Learn Python, NumPy, Pandas, Scikit-learn, HDFS, ZooKeeper, Hive, HBase, NoSQL, Oozie, Flume, Sqoop, Spark, Spark RDD, Spark Streaming, Kafka, SparkR, SparkSQL, MLlib, Regression, Clustering, Classification, SVM, Random Forests, Decision Trees, Dimensionality Reduction, TensorFlow 2, Keras, Convolutional & Recurrent Neural Networks, Autoencoders, and Reinforcement Learning
Learn HDFS, ZooKeeper, Hive, HBase, NoSQL, Oozie, Flume, Sqoop, Spark, Spark RDD, Spark Streaming, Kafka, SparkR, SparkSQL, MLlib, and GraphX.
In this chapter, we cover the basics of Big Data: the key concepts, common use cases, and an overview of the ecosystem.
This chapter doesn't require any programming or technology background. We believe it is useful for everyone to learn the basics of Big Data. So, jump in!
Happy Learning!
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams. Learn Spark Streaming from industry experts.
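To give a feel for the API, here is a minimal sketch of a streaming word count, assuming a text source on localhost port 9999 (for example, one started with `nc -lk 9999`); the host, port, and batch interval are illustrative choices, not part of the course material.

```python
# A minimal Spark Streaming word count, assuming a text source
# on localhost:9999 (start one with: nc -lk 9999).
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each batch's word counts to the console

ssc.start()             # start receiving and processing data
ssc.awaitTermination()  # run until stopped manually
```

Each micro-batch is turned into an RDD under the hood, which is why familiar transformations such as map and reduceByKey apply unchanged to the stream.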
MapReduce is both a framework and a computing paradigm. With map-reduce, we can break a complex computation down into small tasks that run in parallel across a distributed cluster.
As part of this chapter, we are going to learn how to build MapReduce programs using Java.
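The chapter itself builds these programs in Java; purely as an illustration of the paradigm, here is a minimal word count in plain Python that mimics the map, shuffle-and-sort, and reduce phases a MapReduce framework performs for you. The input lines are made-up sample data.

```python
# An illustrative sketch of the MapReduce paradigm in plain Python.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum all counts emitted for one word.
    return (word, sum(counts))

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# Map: apply the mapper to every input record.
pairs = [pair for line in lines for pair in mapper(line)]

# Shuffle & sort: group intermediate pairs by key, as the framework does.
pairs.sort(key=itemgetter(0))
grouped = groupby(pairs, key=itemgetter(0))

# Reduce: one reducer call per distinct key.
results = [reducer(word, (c for _, c in group)) for word, group in grouped]
print(results)  # [('brown', 1), ('dog', 1), ('fox', 2), ...]
```

In a real MapReduce job the mapper and reducer run on different machines, and the framework handles the shuffle, sorting, and fault tolerance between them.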
Please make sure you follow along with the course instead of just sitting back and watching.
Happy Learning!
Learn how to load and save data with Spark, apply compression, and handle various file formats, from industry experts.
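As a minimal sketch, assuming hypothetical input files such as data/people.json and data/people.csv, loading and saving with the DataFrame reader/writer API might look like this:

```python
# A minimal sketch of loading and saving data with Spark; the paths
# are hypothetical examples, so adjust them to your own files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LoadSave").getOrCreate()

# Read a JSON file; Spark infers the schema from the data.
df = spark.read.json("data/people.json")

# Read a CSV file with a header row and inferred column types.
csv_df = spark.read.csv("data/people.csv", header=True, inferSchema=True)

# Save as Parquet, a compact columnar format, with snappy compression.
df.write.parquet("output/people.parquet", compression="snappy")

# Save as gzip-compressed CSV.
csv_df.write.csv("output/people_csv", header=True, compression="gzip")
```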
Whenever you request a page from a web server, the server records the request in a file called a log.
Web server logs are a gold mine for insights into user behaviour. Data scientists usually look at the logs first to understand how users behave. But since these logs are humongous in size, processing them takes a distributed framework like Hadoop or Spark.
As part of this project, you will learn to parse the text data stored in the logs of a web server using Apache Spark.
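As a starting point, here is a sketch of the kind of parsing involved, assuming logs in the Apache Common Log Format at a hypothetical path data/access.log; the regex, field choices, and status-code count are illustrative, not the project's exact solution.

```python
# A sketch of parsing web server logs with Spark, assuming logs in the
# Apache Common Log Format at a hypothetical path data/access.log.
import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LogParser").getOrCreate()

# Common Log Format: host - user [timestamp] "request" status size
LOG_PATTERN = re.compile(
    r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)')

def parse_line(line):
    """Return a tuple of log fields, or None if the line is malformed."""
    match = LOG_PATTERN.match(line)
    if match is None:
        return None
    host, _, user, timestamp, request, status, size = match.groups()
    return (host, timestamp, request, int(status))

logs = spark.sparkContext.textFile("data/access.log")  # hypothetical path
parsed = logs.map(parse_line).filter(lambda rec: rec is not None)

# Example insight: count requests per HTTP status code.
status_counts = (parsed.map(lambda rec: (rec[3], 1))
                       .reduceByKey(lambda a, b: a + b))
print(status_counts.collect())
```

Malformed lines are simply dropped by the filter here; in the project you would typically also count them to gauge the quality of the parse.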