Introduction to Pig & Pig Latin

What is Pig?

Apache Pig is a platform for analyzing large data sets. It consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating those programs.

Pig is an engine for executing data flows in parallel on Hadoop: it uses HDFS for storage and MapReduce for execution.

Pig Philosophy

  • Pigs eat anything
    • Data: relational, nested, or unstructured.
  • Pigs live anywhere
    • Pig is a language for parallel data processing, not tied to Hadoop alone.
  • Pigs are domestic animals
    • Controllable: supports custom functions in Java/Python.
    • Custom load and store methods.
  • Pigs fly
    • Designed for performance.

Pig Data Types

  • Scalar
    • int and float (4 bytes), long and double (8 bytes), chararray, bytearray
  • Complex
    • Map – chararray keys mapped to values of any scalar or complex type, e.g. ['name'#'bob','age'#55]
    • Tuple – a fixed-length, ordered collection of fields and their values, e.g. ('bob',55,12.3)
    • Bag – an unordered collection of tuples, e.g. {('ram',55,12.3),('sally',52,11.2)}
  • Schemas
    • Pig is not strict about schemas: you can declare one up front, or Pig will make its best type guesses from the data, as in the sketch below.
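To make these types and schemas concrete, here is a minimal Pig Latin sketch; the file name users.txt, the delimiter, and the field names are hypothetical:

    -- Declare a schema up front
    users = LOAD 'users.txt' USING PigStorage(',')
            AS (name:chararray, age:int, gpa:double);

    -- Or omit the schema and let Pig make its best type guesses
    raw = LOAD 'users.txt' USING PigStorage(',');

    -- GROUP produces, for each name, a tuple holding a bag of matching tuples,
    -- e.g. (bob,{(bob,55,12.3)})
    by_name = GROUP users BY name;
    DUMP by_name;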

Topics Covered

  • Introduction to Pig
  • Use Cases
  • Installation
  • Using Pig from CloudxLab and in local mode
  • Schema
  • Filter & Joins
  • Stream
  • Non-linear Execution
For the last 12 years, Sandeep has been building products and churning through large amounts of data for various product firms. He has all-around experience in software development and big data analysis.
Apart from digging into data and technologies, Sandeep enjoys conducting interviews and explaining difficult concepts in simple ways.
A few resources from Know Big Data
Introduction to Apache PIG & PIG Latin Presentation

Introduction to Apache PIG & PIG Latin Video

10 things to look for when choosing a Big Data course / Institute

Every now and then, I see a new company coming up with Hadoop classes and courses, and my friends keep asking me which of these courses is worth taking. Here are the few tips I give them for deciding which course to attend:

  • Does the instructor have domain expertise?

Know your instructor's background. Has (s)he done any big-data-related work? I have seen many instructors who simply attended a course somewhere and then became instructors.

If the instructor has never worked in the domain, do not take such classes. Also, avoid training institutes that do not share details about the instructor.

  • Is the instructor hands-on? When did he/she last write code?

In technology, there is a humongous difference between an instructor who is hands-on with code and one who teaches from theoretical knowledge alone. Find out when the instructor last wrote code. If the instructor has never coded, do not attend the class.

  • Does the instructor encourage & answer your questions?

There are many free recorded videos available across the internet. The only reason to attend live classes is to get your questions answered and your doubts cleared immediately.

If the instructor does not encourage and answer questions, such classes are fairly useless.

  • Do they provide a cloud-based lab with multiple computers?

A cloud is basically a computer setup at someone else's place. When I say my data is in the cloud, it means my data is on a computer that is remotely accessible.

In the old days, people used to have a physical laboratory of computers for learning basic computer skills. Today, while learning advanced technologies, we require a similar setup, but on the cloud, i.e. at a remote location. A cloud-based lab provides the following benefits:

  1. Instantaneously available – you do not have to wait for your computer to boot or install something.
  2. Accessible from everywhere – whether you want to work on problems from your office or from home, your lab should be accessible from anywhere.
  3. Easy to get your code debugged by the instructors – while working on assignments, you might get stuck and need to show your work to your instructor for review. If your environment, code, error log, and history of commands are immediately available to the instructor, he or she will be able to test and debug your program right away.

Why multiple computers in a cloud-based lab?

Big Data technologies are all about distributed computing, i.e. tools that run on multiple computers simultaneously.

If you go through the following list of tools related to big data, you would understand that Big Data is all about multiple computers working together to solve a problem:

  1. Hadoop Distributed File System – a file system that pools the disk space and disk I/O of multiple computers to provide very high performance and huge capacity.
  2. Hadoop YARN / MapReduce – a compute engine that utilizes the processors and disk I/O of multiple computers to solve computing problems while minimizing network transfer of data.
  3. NoSQL (HBase, Cassandra, MongoDB, etc.) – databases that run on multiple computers (nodes) simultaneously to deliver very high performance under a huge number of reads and writes per second, and huge capacity by pooling the storage space of multiple computers.
  4. Apache Spark – utilizes the memory (RAM) and CPUs of multiple computers to provide very high throughput.

So, it is very important to have a setup with multiple computers; a setup with only one computer does not make sense for learning distributed tools.

  • Do they avoid promising jobs?

If you find an institute promising jobs or offering job guarantees, stay away from it. An institute can at most try to connect you with the industry; it cannot guarantee you a job. If an institute you are considering promises you a job, please enquire thoroughly before joining the course.

  • What is the refund policy?

What if, after attending the first few classes, you find that the course does not meet your expectations and you want a refund? Check whether they have a proper refund policy in place.

  • Is it online?

Finding instructors in advanced technologies is difficult, and it is even harder to find good instructors locally. So the chances of getting a good instructor for classroom training are very low; getting a great instructor for online training is much easier.

So, always prefer online training over offline training in the case of Big Data. By online, I mean live online training, not recorded videos.

  • Are the founders of the institute from a technology background?

In a good institution or university, even an administrator or a PR person is a professor or a lecturer.

Therefore, an institute providing Big Data or Hadoop training, whether online or offline, big or small, cannot sustain itself if the founders are not from a technology background. Founders who are not technical may hire sub-par instructors and may not be able to address the real problems that students face.

So, always go for an institute whose founders have a good background in technology.

  • Has the institute published something useful in the big data world?

If the institute has a strong technology foundation, it will do some innovation and/or publish articles and research papers from time to time, whether as blog posts or in venues such as the ACM. Such institutes can be considered.

If the institute's blog is filled only with marketing material and no substantially useful information, the institute is not putting enough effort into having good instructors or good subject-matter experts. Such institutes are more focused on marketing themselves than on adding value in their domain.

  • Are they asking for a direct transfer?

An institute accepting payments through net banking should have signed up with a payment gateway such as PayPal, and payment gateways generally make sure there is a refund policy. If the institute asks you to pay directly rather than through a payment gateway, you should stay away from it.

Introduction to Apache Flume in 30 minutes

What is Apache Flume?

Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of data from many different sources to a centralized data store.

Flume supports a large variety of sources, including:

  • tail (like Unix tail -f)
  • syslog
  • log4j – allowing Java applications to write logs to HDFS via Flume

Flume Nodes

Flume nodes can be arranged in arbitrary topologies. Typically, a node runs on each source machine, with tiers of aggregating nodes that the data flows through on its way to HDFS.
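To make this concrete, here is a minimal, hypothetical single-agent configuration in Flume's properties-file format; the names agent1, src1, ch1, and sink1 and the file and HDFS paths are placeholders:

    # Name the source, channel, and sink of this agent
    agent1.sources = src1
    agent1.channels = ch1
    agent1.sinks = sink1

    # Source: follow a log file, like unix tail -F
    agent1.sources.src1.type = exec
    agent1.sources.src1.command = tail -F /var/log/app.log
    agent1.sources.src1.channels = ch1

    # Channel: buffer events in memory between source and sink
    agent1.channels.ch1.type = memory
    agent1.channels.ch1.capacity = 1000

    # Sink: deliver events to HDFS
    agent1.sinks.sink1.type = hdfs
    agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events
    agent1.sinks.sink1.channel = ch1

Started with flume-ng agent --name agent1 --conf-file flume.conf, such an agent would ship each new log line to HDFS.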

Topics Covered

  • What is Flume
  • Flume: Use Case
  • Flume: Agents
  • Flume: Use Case – Agents
  • Flume: Multiple Agents
  • Flume: Sources
  • Flume: Delivery Reliability
  • Flume: Hands-on

Introduction to Flume Presentation

Please feel free to leave your comments in the comment box so that we can improve the guide and serve you better. Also, follow CloudxLab on Twitter to get updates on new blogs and videos.

If you wish to learn Hadoop and Spark technologies such as MapReduce, Hive, HBase, Sqoop, Flume, Oozie, Spark RDD, Spark Streaming, Kafka, DataFrames, SparkSQL, SparkR, MLlib, and GraphX, and build a career in the Big Data and Spark domain, then check out our signature course on Big Data with Apache Spark and Hadoop, which comes with:

  • Online instructor-led training by professionals having years of experience in building world-class BigData products
  • High-quality learning content including videos and quizzes
  • Automated hands-on assessments
  • 90 days of lab access so that you can learn by doing
  • 24×7 support and forum access to answer all your queries throughout your learning journey
  • Real-world projects
  • A certificate which you can share on LinkedIn

6 Reasons Why Big Data Career is a Smart Choice

Confused about whether to take up a career in Big Data? Planning to invest your time in getting certified and acquiring expertise in related frameworks like Hadoop and Spark, and worried that you are making a huge mistake? Just spend a few minutes reading this blog and you will get six reasons why choosing a career in big data is a smart choice.

Why Big Data?

Many people believe that Big Data is the next big thing, one that will help companies rise above others and position themselves as best in class in their respective sectors.

Companies these days generate gigantic amounts of information, irrespective of the industry they belong to, and they need to store the data they generate so that it can be processed and no important information is missed that could lead to a breakthrough in their sector. Atul Butte, of the Stanford School of Medicine, has stressed the importance of data by saying, “Hiding within those mounds of data is the knowledge that could change the life of a patient, or change the world”. And this is where Big Data analytics plays a very crucial role.

With Big Data platforms, a gigantic amount of data can be brought together and processed to uncover patterns that help a company make better decisions, grow, increase its productivity, and add value to its products and services.

Continue reading “6 Reasons Why Big Data Career is a Smart Choice”

Streaming Twitter Data using Flume

In this blog post, we will learn how to stream Twitter data using Flume on CloudxLab.

To download tweets from Twitter, we first have to configure a Twitter app.

Create Twitter App

Step 1

Navigate to Twitter app URL and sign in with your Twitter account

Step 2

Click on “Create New App”


Continue reading “Streaming Twitter Data using Flume”

Python Setup Using Anaconda For Machine Learning and Data Science Tools

Python for Machine Learning

In this post, we will learn how to configure tools required for CloudxLab’s Python for Machine Learning course. We will use Python 3 and Jupyter notebooks for hands-on practicals in the course. Jupyter notebooks provide a really good user interface to write code, equations, and visualizations.

Please choose one of the options listed below for practicals during the course.

Continue reading “Python Setup Using Anaconda For Machine Learning and Data Science Tools”

Predicting Income Level, An Analytics Casestudy in R

Figure: Percentage of income above 50K, country-wise

1. Introduction

In this data analytics case study, we will use US census data to build a model that predicts whether the income of any individual in the US is greater than or less than USD 50,000, based on the information available about that individual in the census data.

The dataset used for the analysis is an extract from the 1994 census data, prepared by Barry Becker and donated to the public repository at http://archive.ics.uci.edu/ml/datasets/Census+Income. This dataset is popularly called the “Adult” dataset. We will go about this case study in the following order:

  1. Describe the data – specifically the predictor variables (also called independent variables or features) from the census data, and the dependent variable, which is the level of income (either “greater than USD 50000” or “less than USD 50000”).
  2. Acquire and read the data – downloading the data directly from the source and reading it.
  3. Clean the data – any data from the real world is messy and noisy. The data needs to be reshaped in order to aid exploration and the modeling used to predict the income level.
  4. Explore the independent variables of the data – a crucial step before modeling is the exploration of the independent variables. Exploration gives the analyst great insight into each variable's predictive power: its distribution, how well it separates the income levels, what skews it has, etc. In most analytics projects, the analyst goes back to either get more data or get better context or clarity on the findings.
  5. Build the prediction model with the training data – since data like census data can have many weak predictors, for this particular case study I have chosen the non-parametric Boosting algorithm. Boosting is a classification algorithm (here we classify whether an individual's income is “greater than USD 50000” or “less than USD 50000”) that gives the best prediction accuracy with weak predictors. Cross-validation, a mechanism to reduce overfitting while modeling, is also used with Boosting (see the R sketch after this list).
  6. Validate the prediction model with the testing data – here the built model is applied to test data that the model has never seen, to determine the accuracy the model would achieve in the field when deployed. Since this is a case study, only the crucial steps are retained to keep the content concise and readable.
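As a rough illustration of steps 2 and 5, here is a minimal R sketch. It assumes the raw-data URL in the UCI repository, the caret and gbm packages, and that the income label is the 15th column as described in the dataset documentation:

    # Acquire and read the data (assumed raw-data URL of the UCI repository)
    url <- "http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
    adult <- read.csv(url, header = FALSE, strip.white = TRUE)
    names(adult)[15] <- "income"          # the label: "<=50K" or ">50K"
    adult$income <- factor(adult$income)  # classification needs a factor outcome

    # Build a Boosting model with 5-fold cross-validation to reduce overfitting
    library(caret)                        # install.packages(c("caret", "gbm"))
    ctrl <- trainControl(method = "cv", number = 5)
    fit  <- train(income ~ ., data = adult, method = "gbm",
                  trControl = ctrl, verbose = FALSE)
    print(fit)                            # cross-validated accuracy of the model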

Continue reading “Predicting Income Level, An Analytics Casestudy in R”

CloudxLab Conducts Another Successful Webinar On “Big Data & AI”

Buoyed by the success of our previous webinar and excited by the unending curiosity of our audience, we at CloudxLab decided to conduct another webinar on “Big Data & AI” on 24th August. Mr Sandeep Giri, founder of CloudxLab, was the lead presenter. A graduate of IIT Roorkee with more than 15 years of experience at companies such as D. E. Shaw, InMobi, and Amazon, Sandeep conducted the webinar to the appreciation of all.

Continue reading “CloudxLab Conducts Another Successful Webinar On “Big Data & AI””

Future Of Mobility – Shaped By Big Data & AI

The advancements in the field of Big Data & Artificial Intelligence (AI) are occurring at an unprecedented pace, and everyone from researchers to engineers to common folk is wondering how their lives will be affected. While almost all industries expect significant disruption from advancements in Big Data & AI, I believe the industry that will experience the maximum impact is the automotive and transportation industry. Here is my perspective on how Big Data & AI will change the automotive and transportation landscape. It should appeal to engineers as well as to anyone interested in technological developments. I will discuss the challenges and existing solutions, and propose two alternative solutions.

Continue reading “Future Of Mobility – Shaped By Big Data & AI”

What, How & Why of Artificial Intelligence

Artificial Intelligence (AI) is the buzzword resounding and echoing all over the world. While large corporations, organizations & institutions publicly proclaim massive investments in developing and deploying AI capabilities, people in general feel perplexed about the meaning and nuances of AI. This blog is an attempt to demystify AI and provide a brief introduction to its various aspects for anyone, engineer, non-engineer, or beginner, seeking to understand AI. In the forthcoming discussion, we will explore the following questions:

  • What is AI & what does it seek to accomplish?
  • How will the goals of AI be accomplished, through which methodologies?
  • Why is AI gaining so much momentum now?

Continue reading “What, How & Why of Artificial Intelligence”