We are happy to announce a new consolidated playlist, which summarizes the various tools available in the CloudxLab environment, how to use them, and where to learn about them.
This playlist will be incrementally improved as new technologies get installed on the lab.
In this playlist, there is a dedicated slide for each technology. For example, if you want to understand how to use Pandas on the lab, go to the slide named Pandas.
Upon clicking on Pandas, you would be able to see the Pandas guide as follows:
As you can see, this slide contains all the basic information needed, such as:
the purpose of the library
link for the official home page
link for the official documentation
related resources you could use to learn about the library.
instructions on how to use it on the CloudxLab environment.
one or two lines of sample code, such as how to import the library and how to check its version.
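For instance, for Pandas those sample lines would look roughly like the following (a minimal sketch):

import pandas as pd

# check which version of Pandas is installed on the lab
print(pd.__version__)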
We hope that this will be a great starting guide for our users and will make getting started easier.
Python has a really sophisticated way of handling iterations. The only thing it does not have is “GOTO labels”, which I think is a good thing.
Let us compare the three common ways of iteration in Python (while, for, and map) by way of an example. Imagine that you have a list of numbers and you would like to find the square of each number.
nums = [1, 2, 3, 5, 10]
result = []
for num in nums:
    result.append(num * num)
print(result)
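For comparison, here is roughly how the same computation looks with a while loop and with map (a minimal sketch):

# The same computation with a while loop: you manage the index yourself
nums = [1, 2, 3, 5, 10]
result = []
i = 0
while i < len(nums):
    result.append(nums[i] * nums[i])
    i += 1
print(result)

# The same computation with map: apply a function to every element
print(list(map(lambda num: num * num, nums)))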
When you are running Python programs from the command line, you can pass various arguments to the program, and your program can handle them.
Here is a quick snippet of code that I will be explaining later:
import sys
if __name__ == "__main__":
print("You passed: ", sys.argv)
When you run this program from the command line, you will get results like this:
$ python cmdargs.py
You passed: ['cmdargs.py']
Notice that sys.argv is an array of strings containing all the arguments passed to the program, and the first value (at the zeroth index) of this array is the name of the program itself. You can put all kinds of checks on it.
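For example, here is a minimal sketch of such a check, assuming the program expects exactly one argument (a hypothetical filename):

import sys

if __name__ == "__main__":
    # sys.argv[0] is the program name, so a single argument means len(sys.argv) == 2
    if len(sys.argv) != 2:
        print("Usage: python cmdargs.py <filename>")
        sys.exit(1)
    print("You passed:", sys.argv[1])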
In the Hadoop ecosystem, YARN, short for Yet Another Resource Negotiator, holds the responsibility of resource allocation and job scheduling/management. The Resource Manager (RM), one of the components of YARN, is primarily responsible for accomplishing these tasks, coordinating with the various nodes and interacting with the client.
To learn more about YARN, feel free to visit here.
Hence, the Resource Manager in YARN is a single point of failure. If the Resource Manager is down for some reason, the whole system is disrupted due to the interruption in resource allocation and job management, and thus we cannot run any jobs on the cluster.
To avoid this issue, we need to enable the High Availability (HA) feature in YARN. When HA is enabled, we run another Resource Manager in parallel on another node, known as the Standby Resource Manager. The idea is that when the Active Resource Manager goes down, the Standby Resource Manager becomes active and ensures smooth operations on the cluster. And the process continues.
I recently discovered a nice simple library called Dask.
Parallel computing basically means performing multiple tasks in parallel – it could be on the same machine or on multiple machines. When it is on multiple machines, it is called distributed computing.
There are various libraries that support parallel computing, such as Apache Spark and TensorFlow. A common characteristic you would find in most parallel computing libraries is the computational graph. A computational graph is essentially a directed acyclic graph, or dependency graph.
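To make the idea concrete, here is a minimal sketch of building such a graph with Dask's delayed API; nothing runs until compute() is called:

from dask import delayed

@delayed
def square(x):
    return x * x

@delayed
def total(values):
    return sum(values)

# This only builds the dependency (task) graph; no computation happens yet
lazy_result = total([square(n) for n in [1, 2, 3, 5, 10]])

# compute() walks the graph and executes the tasks, potentially in parallel
print(lazy_result.compute())  # 139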
Before we start with the main topic, let me explain a very important idea called serialization and its utility.
The data in RAM is accessed based on an address, which is why it is called Random Access Memory, whereas the data on the disk is stored sequentially. On the disk, the data is accessed using a file name, and the data inside a file is kept as a sequence of bits. So, there is an inherent mismatch between the format in which data is kept in memory and the format in which it is kept on disk. Converting the in-memory representation into a sequence of bytes that can be written to disk (or sent over a network) is called serialization, and the reverse is deserialization. You can watch this video to understand serialization further.
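As a quick illustration, Python's built-in pickle module does exactly this conversion between an in-memory object and a flat sequence of bytes (a minimal sketch):

import pickle

data = {"name": "CloudxLab", "scores": [1, 2, 3]}

# serialize: turn the in-memory object into a flat sequence of bytes
blob = pickle.dumps(data)

# deserialize: rebuild an equivalent object from those bytes
restored = pickle.loads(blob)
print(restored == data)  # True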
SQL is a very important skill. With it, you can access not only relational databases but also big data using Hive, Spark SQL, and so on. Learning SQL could help you excel in various roles such as Business Analyst, Web Developer, Mobile Developer, Data Engineer, Data Scientist, and Data Analyst. Therefore, having access to a SQL client via the browser is very important. In this blog, we are going to walk through examples of interacting with SQLite and MySQL using a Jupyter notebook.
A Jupyter notebook is a great tool for analytics and interactive computing. You can interact with various tools such as Python, Linux, the file system, Scala, Lua, Spark, R, and SQL from the comfort of the browser. For almost every interactive tool, there is a kernel in Jupyter. Let us walk through how you would use SQL to interact with various databases from the comfort of your browser.
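As a small taste of what follows, here is a minimal sketch of querying SQLite from a notebook cell using Python's built-in sqlite3 module (the file name demo.db and the table are just examples):

import sqlite3

# connect to a file-based database (it is created if it does not exist)
conn = sqlite3.connect("demo.db")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()

# iterate over the result set of a SELECT
for row in cur.execute("SELECT id, name FROM users"):
    print(row)

conn.close()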
When you are building a production system, whether it is a machine learning model deployment or simple data cleaning, you need to run multiple steps with multiple different tools, and you may want to trigger some processes periodically. It is not practical to do this manually more than once. Therefore, you need a workflow manager and a scheduler. In the workflow manager, you define which processes to run and their interdependencies, and with the scheduler, you execute them on a certain schedule.
When I started using Apache Hadoop in 2012, we used to clean the HDFS data using multiple streaming jobs written in Python, followed by shell scripts and so on. It was cumbersome to run these manually. So, we started using Azkaban for the purpose, and later on Oozie came along. Honestly, Oozie was less than impressive, but it stayed due to the lack of alternatives.
As of today, Apache Airflow seems to be the best solution for creating your workflow. Unlike Oozie, Airflow is not really specific to Hadoop. It is an independent tool – more like a combination of Apache Ant and Unix Cron jobs. It has many more integrations. Check out Apache Airflow’s website.
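To give a flavour of it, here is a minimal sketch of an Airflow DAG with two dependent tasks (this assumes Airflow 2.x; the exact import paths and arguments vary across versions, and the DAG and task names are illustrative only):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_cleanup",            # hypothetical name, for illustration only
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",        # run once a day, much like a cron entry
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    clean = BashOperator(task_id="clean", bash_command="echo cleaning")

    # 'clean' runs only after 'extract' succeeds
    extract >> clean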
Consider a situation where we have an email inbox that consists of emails, and these emails are to be processed. For example, we might classify each email as spam or non-spam, or index the emails so that searches can be performed over them. This email inbox is read-only.
We have an email-processor program, running on various machines distributed physically from each other.
Now these machines need to somehow coordinate such that:
No email is processed two times
No email is left unprocessed
Solution 1:
Use flags: we could mark the emails as read or unread depending on whether any machine has picked them up previously, and only consider those emails which are not yet marked as read.
Cons:
If processor 1 reads an email, marks it as read, and then dies, the email would never be touched by any other processor, because it was already marked as read by the first processor, and thus this email would be left unprocessed.
Solution 2:
There should be a manager that could handle the workload and distribute the work to workers.
Cons:
This manager could be a bottleneck, as it has to manage a large number of systems and thus would be overloaded. Also, what if the manager dies?
Solution 3:
We need a central storage which could note down who is doing what, like email id, timestamp it was taken up by a processor, status of completion of processing, etc.
Cons:
The central storage system can be a bottleneck. If the email-processor programs are running on a lot of machines, the central storage system would be in high demand; it would be overloaded, and it may also die.
Solution 4:
A distributed coordination service like Zookeeper could be an ideal solution to the problem.
Zookeeper:
provides simple primitives like set/get, so it is easy to program (see the sketch after this list)
has an easy data model, like a directory tree
is a resilient and highly available tool
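As a minimal sketch of those primitives, here is how the get/set style of API looks with the kazoo Python client (this assumes a Zookeeper server reachable at localhost:2181; the paths and payload are just examples):

from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# znodes form a directory-like tree; create a node with a small payload
zk.ensure_path("/email-demo")
zk.create("/email-demo/config", b'{"batch_size": 100}')

# read it back: get() returns the data and the znode's metadata
data, stat = zk.get("/email-demo/config")
print(data.decode(), stat.version)

zk.stop()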
How could it solve the problem?
Suppose the process on Machine 1 wants to read some data from the email inbox. Say it has successfully picked 100 emails to process, and it notes down this information with Zookeeper. This could be done by creating a sequential ephemeral znode, along with info such as the email_id, timestamp, status, etc. Since this process is creating a znode, it is obviously a write operation. When a process is carrying out a write operation on Zookeeper, it acquires a lock (with its session id identifying who is performing the write).

In the meanwhile, another process (maybe from another machine) may want to read emails and make a note of them in Zookeeper, which means it also wants to create a znode about the emails it wants to pick. This is not possible yet, as the first process has still not released the lock, so the second process cannot conduct a write operation in Zookeeper. Once the lock acquired by the first process is released, the second process checks whether the emails it has picked up are already being handled by some other process, to ensure that no email is processed more than once. The second process can also check the timestamps at which emails were taken up by other processes and what their status is (whether the email was processed successfully after being picked up). If the timestamp was recorded long ago and the status is still unsuccessful, the next process can pick up that email, so as to make sure that no email is left unprocessed. In this way, Zookeeper makes sure that no email is processed more than once and no email is left unprocessed.
As long as the first process holds the lock to perform its write operation, all the other processes that wish to acquire the lock and perform a write operation will have to wait, by creating sequential ephemeral znodes. The sequential znodes have suffixes with incremental numbers for each newly created znode. Once the current process releases its lock, its znode can be removed, and the process whose znode has the minimum number can acquire the lock. Thus, by creating sequential znodes, the order of operations is preserved.

Further, ephemeral znodes help in tracking whether clients are active or dead. If a client is active, it sends regular signals (called heartbeats) to Zookeeper to mark its presence. If it cannot send heartbeats due to a temporary network failure or the like, the session is still alive, but if the heartbeats cease for longer than the session timeout, Zookeeper considers the client dead: the session times out and the ephemeral znode disappears. Thus, the reason for creating sequential ephemeral znodes is that sequentiality preserves the order in which the operations should be performed, and ephemerality ensures that all the clients are alive. A watcher can be placed to detect when any of the processes gets disconnected; a notification can then be sent to the appropriate resources so that a new process can come up and continue the work previously handled by the dead process, making the whole system fault-tolerant and highly available.
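For illustration, here is a minimal sketch of the sequential ephemeral znode pattern with the kazoo Python client (it assumes a Zookeeper server at localhost:2181; the znode paths and payload are hypothetical):

from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Each worker announces the batch of emails it picked by creating a
# sequential + ephemeral znode: the numeric suffix gives a global order,
# and the node disappears automatically if the worker's session dies.
path = zk.create(
    "/email-workers/claim-",
    b'{"email_ids": [1, 2, 3], "status": "processing"}',
    ephemeral=True,
    sequence=True,
    makepath=True,
)
print("Created", path)  # e.g. /email-workers/claim-0000000007

# The worker holding the lowest-numbered claim goes first; the others wait.
claims = sorted(zk.get_children("/email-workers"))
print("Current claims:", claims)

zk.stop()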
If a Zookeeper server dies, a new server can come up, or the client can connect to some other server in the ensemble. Zookeeper is thus a distributed service, so even if one of its servers fails, it can still resiliently maintain coordination amongst the distributed systems.
Conclusions
Zookeeper is a distributed coordination service that provides the following mechanisms to promote coordination amongst distributed systems:
distributed key-value store to store small JSON data
various types of znodes suitable for different use cases
monotonically increasing unique ids to the znodes
Zookeeper ensemble
watches
notifications
The above mechanisms thus make Zookeeper:
resilient
highly available
fault tolerant
an efficient intermediary for coordination amongst distributed systems
If you are eager to know more about Zookeeper, feel free to visit here. To know more about CloudxLab courses, here you go!