How to handle Command Line Arguments in Python?

When you run Python programs from the command line, you can pass various arguments to them, and your program can handle those arguments.

Here is a quick snippet of code that I will be explaining later:

import sys
if __name__ == "__main__":
    print("You passed: ", sys.argv)

When you run this program from the command line, you will get output like this:

$ python cmdargs.py
 You passed:  ['cmdargs.py']

Notice that sys.argv is a list of strings containing all the arguments passed to the program, and the first value (at index zero) is the name of the program itself. You can put all kinds of checks on it.
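For example, one common check is to verify that the expected number of arguments was actually passed before using them. Here is a minimal sketch of that idea; the file names and the two-argument requirement are just assumptions for illustration:

import sys

if __name__ == "__main__":
    # sys.argv[0] is the script name, so real arguments start at index 1
    if len(sys.argv) != 3:
        print("Usage: python cmdargs.py <input_file> <output_file>")
        sys.exit(1)
    input_file, output_file = sys.argv[1], sys.argv[2]
    print("Input file:", input_file)
    print("Output file:", output_file)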

Continue reading “How to handle Command Line Arguments in Python?”

How does YARN interact with Zookeeper to support High Availability?

In the Hadoop ecosystem, YARN, short for Yet Another Resource Negotiator, is responsible for resource allocation and job scheduling/management. The Resource Manager (RM), one of the components of YARN, is primarily responsible for accomplishing these tasks by coordinating with the various nodes and interacting with the client.

To learn more about YARN, feel free to visit here.

Architecture of YARN

Hence, the Resource Manager in YARN is a single point of failure: if the Resource Manager goes down for some reason, the whole system is disrupted because resource allocation and job management are interrupted, and thus we cannot run any jobs on the cluster.

To avoid this issue, we need to enable the High Availability (HA) feature in YARN. When HA is enabled, we run another Resource Manager in parallel on another node; this is known as the Standby Resource Manager. The idea is that when the Active Resource Manager goes down, the Standby Resource Manager becomes active and ensures smooth operation of the cluster. And the process continues.

Continue reading “How does YARN interact with Zookeeper to support High Availability?”

Parallel Computing with Dask

Dask collections and schedulers
Source: dask.org

I recently discovered a nice simple library called Dask.

Parallel computing means performing multiple tasks at the same time, either on a single machine or across multiple machines. When the work is spread across multiple machines, it is called distributed computing.

There are various libraries that support parallel computing, such as Apache Spark and TensorFlow. A common characteristic you would find in most parallel computing libraries is the computational graph. A computational graph is essentially a directed acyclic graph, or dependency graph, of tasks.
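To give a quick taste, here is a minimal sketch using dask.delayed to build such a dependency graph lazily and then execute it in parallel; the load and summarize functions are just placeholders for illustration:

from dask import delayed

def load(i):
    # placeholder for reading one chunk of data
    return list(range(i * 10, (i + 1) * 10))

def summarize(chunk):
    return sum(chunk)

# Build the task graph lazily; nothing runs yet
chunks = [delayed(load)(i) for i in range(4)]
partial_sums = [delayed(summarize)(c) for c in chunks]
total = delayed(sum)(partial_sums)

# Execute the whole graph, running independent tasks in parallel
print(total.compute())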

Continue reading “Parallel Computing with Dask”

How to use a library in Apache Spark and process Avro and XML Files

What is Serialization? And why is it needed?

Before we start with the main topic, let me explain a very important idea called serialization and its utility.

Data in RAM is accessed by address, which is why it is called Random Access Memory, whereas data on disk is stored sequentially. On disk, data is accessed using a file name, and the data inside a file is kept as a sequence of bits. So, there is an inherent mismatch between the format in which data is kept in memory and the format in which it is kept on disk. You can watch this video to understand serialization further.

Serialization is converting an object into a sequence of bytes.
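In Python, for instance, the standard-library pickle module does exactly this; the snippet below is a small illustration, and the dictionary is just sample data:

import pickle

record = {"id": 42, "name": "spark-job", "formats": ["avro", "xml"]}

# Serialize: object -> sequence of bytes (ready to write to disk or send over the network)
payload = pickle.dumps(record)
print(type(payload), len(payload))

# Deserialize: bytes -> the original object again
restored = pickle.loads(payload)
print(restored == record)  # True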
Continue reading “How to use a library in Apache Spark and process Avro and XML Files”

How to access databases using Jupyter Notebook

SQL is a very important skill. With it, you can access not only relational databases but also big data systems using Hive, Spark SQL, and so on. Learning SQL could help you excel in various roles such as Business Analyst, Web Developer, Mobile Developer, Data Engineer, Data Scientist, and Data Analyst. Therefore, having access to a SQL client via the browser is very useful. In this blog, we are going to walk through examples of interacting with SQLite and MySQL using a Jupyter notebook.

A Jupyter notebook is a great tool for analytics and interactive computing. You can interact with various tools such as Python, Linux, the file system, Scala, Lua, Spark, R, and SQL from the comfort of the browser. For almost every interactive tool, there is a kernel in Jupyter. Let us walk through how you would use SQL to interact with various databases from the comfort of your browser.
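As a small preview, here is a minimal sketch you could run in a notebook cell using Python's built-in sqlite3 module; the table and rows are made up for illustration, and MySQL would instead need a driver such as mysql-connector-python and its own connection details:

import sqlite3

# An in-memory database keeps the example self-contained
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO users (name) VALUES (?)", [("Alice",), ("Bob",)])
conn.commit()

for row in cur.execute("SELECT id, name FROM users ORDER BY id"):
    print(row)

conn.close()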

Using Jupyter to access databases such as SQLite and MySQL.
Continue reading “How to access databases using Jupyter Notebook”

Getting Started with Apache Airflow

Apache Airflow

When you are building a production system, whether it's a machine learning model deployment or simple data cleaning, you need to run multiple steps with multiple different tools, and you often want to trigger some processes periodically. It is not practical to do this manually more than once. Therefore, you need a workflow manager and a scheduler. In the workflow manager, you define which processes to run and their interdependencies, and with the scheduler, you execute them on a certain schedule.

When I started using Apache Hadoop in 2012, we used to clean the HDFS data with multiple streaming jobs written in Python, plus shell scripts and so on. It was cumbersome to run these manually. So, we started using Azkaban, and later on Oozie came along. Honestly, Oozie was less than impressive, but it stayed due to the lack of alternatives.

As of today, Apache Airflow seems to be the best solution for creating your workflow. Unlike Oozie, Airflow is not really specific to Hadoop. It is an independent tool – more like a combination of Apache Ant and Unix Cron jobs. It has many more integrations. Check out Apache Airflow’s website.
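To give a feel for it, here is a minimal sketch of an Airflow DAG with two dependent tasks; the DAG id, schedule, and commands are made up for illustration, and the exact import path for BashOperator varies between Airflow versions:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_cleanup",          # hypothetical workflow name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",      # run once a day
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting data")
    clean = BashOperator(task_id="clean", bash_command="echo cleaning data")

    # clean runs only after extract succeeds
    extract >> clean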

Continue reading “Getting Started with Apache Airflow”

How to design a large-scale system to process emails using multiple machines [Zookeeper Use Case Study]?

Introduction

As part of this blog, we are going to discuss various approaches to designing a large-scale system and the pros and cons of each.

To get a fair understanding of this post, you should know what distributed computing is, what deadlocks and race conditions are, and the basics of locking in distributed systems and Zookeeper. Let's get started.

Scenario

Consider a situation where we have an email inbox full of emails that need to be processed. For example, we might process the emails to classify each one as spam or non-spam. Another example of processing could be indexing the emails so that they can be searched.

We have an email-processor program running on various machines that are physically distributed from each other.

Email processor program running on distributed systems

Now these machines need to somehow coordinate such that (see the sketch below the list):

  • No email is processed two times
  • No email is left unprocessed
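As a rough illustration of the kind of coordination Zookeeper enables, here is a minimal sketch using the kazoo Python client, in which each worker acquires a Zookeeper lock on an email id before processing it; the host address, znode path, and process_email function are assumptions made for illustration:

from kazoo.client import KazooClient

def process_email(email_id):
    # placeholder for spam classification, indexing, etc.
    print("processing", email_id)

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed Zookeeper address
zk.start()

for email_id in ["email-001", "email-002", "email-003"]:
    # Only one worker across all machines can hold this lock at a time
    lock = zk.Lock("/email-locks/" + email_id, identifier="worker-1")
    if lock.acquire(blocking=False):
        try:
            process_email(email_id)
        finally:
            lock.release()

zk.stop()

A lock alone only prevents two workers from processing the same email at the same moment; a complete design would also record which emails are already done, which is the kind of design question this post explores.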
Continue reading “How to design a large-scale system to process emails using multiple machines [Zookeeper Use Case Study]?”

Introduction to Apache Zookeeper

In the Hadoop ecosystem, Apache Zookeeper plays an important role in coordination amongst distributed resources. Apart from being an important component of Hadoop, it is also a very good concept to learn for a system design interview.

If you would prefer videos with hands-on exercises, feel free to jump in here.

Alright, so let’s get started.

Goals

In this post, we will understand the following:

  • What is Apache Zookeeper?
  • How does Zookeeper achieve coordination?
  • Zookeeper Architecture
  • Zookeeper Data Model
  • Some Hands-on with Zookeeper
  • Election & Majority in Zookeeper
  • Zookeeper Sessions
  • Application of Zookeeper
  • What kind of guarantees does ZooKeeper provide?
  • Operations provided by Zookeeper
  • Zookeeper APIs
  • Zookeeper Watches
  • ACL in Zookeeper
  • Zookeeper Usecases
Continue reading “Introduction to Apache Zookeeper”

Distributed Computing with Locks

Introduction

Having seen how prevalent Big Data is in real-world scenarios, it's time for us to understand how such systems work. This is a very important topic for understanding the principles behind system design and coordination among machines in big data. So let's dive in.

Scenario:

Consider a scenario where there is a data resource, and there is a worker machine that has to accomplish some task using that resource. For example, the worker has to process the data by accessing that resource. Remember that the data source holds a huge amount of data; that is, the data to be processed for the task is very large.

Continue reading “Distributed Computing with Locks”