Predicting Income Level, An Analytics Case Study in R

Percentage of individuals with income above USD 50,000, by country

1. Introduction

In this data analytics case study, we will use US census data to build a model that predicts whether the income of an individual in the US is greater than or less than USD 50,000, based on the information available about that individual in the census data.

The dataset used for the analysis is an extract of the 1994 census data, prepared by Barry Becker and donated to the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/datasets/Census+Income). This dataset is popularly called the “Adult” dataset. We will go about this case study in the following order:

  1. Describe the data: specifically the predictor variables (also called independent variables or features) from the census data, and the dependent variable, which is the income level (either “greater than USD 50000” or “less than USD 50000”).
  2. Acquire and read the data: download the data directly from the source and read it in.
  3. Clean the data: any data from the real world is messy and noisy. The data needs to be reshaped in order to aid exploration and modeling to predict the income level.
  4. Explore the independent variables of the data: a crucial step before modeling is the exploration of the independent variables. Exploration gives the analyst great insight into the predictive power of each variable: its distribution, how strongly it relates to the income level, what skews it has, and so on. In most analytics projects, the analyst goes back to get more data, or better context or clarity on the findings.
  5. Build the prediction model with the training data: since data like the census data can have many weak predictors, for this particular case study I have chosen the non-parametric algorithm of boosting. Boosting is a classification algorithm (here we classify whether an individual’s income is “greater than USD 50000” or “less than USD 50000”) that achieves high accuracy by combining many weak predictors. Cross-validation, a mechanism to reduce overfitting while modeling, is also used with boosting. A short sketch of this step follows the list.
  6. Validate the prediction model with the testing data: here the built model is applied to test data that the model has never seen. This is done to estimate the accuracy the model would achieve in the field once deployed. Since this is a case study, only the crucial steps are retained to keep the content concise and readable.
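The post itself works through these steps in R. Purely as an illustration of steps 5 and 6, here is a minimal sketch of boosting with cross-validation in Python with scikit-learn; the file name and target label follow the UCI “Adult” documentation, and the rest is an assumption rather than the post's actual code.

```python
# Minimal sketch of steps 5 and 6 in Python/scikit-learn (the post uses R).
# Assumes adult.data has been downloaded from the UCI repository.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("adult.data", header=None, skipinitialspace=True)
X = pd.get_dummies(df.iloc[:, :-1])       # one-hot encode the predictors
y = df.iloc[:, -1] == ">50K"              # True if income > USD 50,000

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier()      # boosting over weak learners
# Step 5: cross-validation on the training split guards against overfitting
print(cross_val_score(model, X_train, y_train, cv=5).mean())

# Step 6: validate on data the model has never seen
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```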

Continue reading “Predicting Income Level, An Analytics Case Study in R”

How to install Python packages on CloudxLab?

In this blog post, we will learn how to install Python packages on CloudxLab.

Step 1-

Create a virtual environment for your project. A virtual environment is a tool that keeps the dependencies required by different projects in separate places by creating isolated Python environments for them. Log in to the CloudxLab web console and create a virtual environment for your project.
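On a typical Linux console, creating and using a virtual environment looks roughly like this; the environment name myenv is a placeholder, and the exact commands available on CloudxLab may differ:

```bash
# Create an isolated Python environment, activate it, and install into it.
# "myenv" and the "requests" package are placeholder examples.
virtualenv myenv              # on Python 3: python3 -m venv myenv
source myenv/bin/activate     # subsequent pip installs now go into myenv
pip install requests          # the package is installed only in myenv
```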

Continue reading “How to install Python packages on CloudxLab?”

CloudxLab Reviews

Jose Manual Ramirez Leon

It is really a great site. As a 37-year-old with a master's in mechanical engineering, I decided to switch careers and get another master's. One of my courses was Big Data and, at the beginning, I was completely lost and falling behind in my assignments. After searching the internet for a solution, I finally found CloudxLab.

Not only do they have every conceivable Big Data technology on their servers, they have superb customer support. Whenever I have had a doubt, even in debugging my own programs, they have answered me with the correct solution in a few hours.

I earnestly recommend it to everyone.

Continue reading “CloudxLab Reviews”

Building Real-Time Analytics Dashboard Using Apache Spark


In this blog post, we will learn how to build a real-time analytics dashboard using Apache Spark streaming, Kafka, Node.js, Socket.IO and Highcharts.

Complete the Spark Streaming topic on CloudxLab to refresh your Spark Streaming and Kafka concepts and get the most out of this guide.

Problem Statement

An e-commerce portal (http://www.aaaa.com) wants to build a real-time analytics dashboard to visualize the number of orders getting shipped every minute to improve the performance of their logistics.

Solution

Before working on the solution, let’s take a quick look at all the tools we will be using:

Apache Spark – A fast and general engine for large-scale data processing. It can run programs up to 100x faster than Hadoop MapReduce in memory, and 10x faster on disk. Learn more about Apache Spark here

Python – A widely used high-level, general-purpose, interpreted, dynamic programming language. Learn more about Python here

Kafka – A high-throughput, distributed, publish-subscribe messaging system. Learn more about Kafka here

Node.js – Event-driven I/O server-side JavaScript environment based on V8. Learn more about Node.js here

Socket.IO – A JavaScript library for real-time web applications. It enables real-time, bi-directional communication between web clients and servers. Read more about Socket.IO here

Highcharts – Interactive JavaScript charts for web pages. Read more about Highcharts here

CloudxLab – Provides a real cloud-based environment for practicing and learning various tools. You can start practicing right away by just signing up online.

How To Build A Data Pipeline?

Below is the high-level architecture of the data pipeline.

Data Pipeline
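As a rough sketch of the Spark Streaming stage of this pipeline, the job below reads order events from Kafka and counts them per minute; the topic name, ZooKeeper address, and message format are placeholders, not the post's actual configuration:

```python
# Sketch of the Spark Streaming stage: consume order events from Kafka and
# count how many orders reach each status (e.g. "shipped") every minute.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="OrderDashboard")
ssc = StreamingContext(sc, 60)  # 60-second batches give per-minute counts

# "localhost:2181" and "order-data" are placeholder ZooKeeper/topic values
stream = KafkaUtils.createStream(ssc, "localhost:2181",
                                 "dashboard-consumer", {"order-data": 1})

# Each message value is assumed to be an order-status string like "shipped"
counts = stream.map(lambda kv: (kv[1], 1)).reduceByKey(lambda a, b: a + b)
counts.pprint()  # the full pipeline would publish these counts onward

ssc.start()
ssc.awaitTermination()
```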

Our real-time analytics dashboard will look like this:

Real-Time Analytics Dashboard

Continue reading “Building Real-Time Analytics Dashboard Using Apache Spark”

Running PySpark in Jupyter / IPython notebook

You can run PySpark code in a Jupyter notebook on CloudxLab. The following instructions cover both version 1 and version 2 of Apache Spark.

What is Jupyter notebook?

The IPython Notebook is now known as the Jupyter Notebook. It is an interactive computational environment, in which you can combine code execution, rich text, mathematics, plots and rich media. For more details on the Jupyter Notebook, please see the Jupyter website.

Please follow the steps below to access the Jupyter notebook on CloudxLab.

Step 1 – Log in to the web console
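Once the notebook is running, a quick way to confirm that PySpark is wired up is a cell like the one below. This is a general-purpose sketch, not CloudxLab's exact setup; findspark is one common way to locate a Spark installation from a notebook and is an assumption here:

```python
# Smoke test for PySpark inside a Jupyter notebook cell.
import findspark
findspark.init()       # locates SPARK_HOME and puts pyspark on sys.path

from pyspark import SparkContext

sc = SparkContext(appName="JupyterPySpark")
print(sc.version)                        # which Spark version answered
print(sc.parallelize(range(10)).sum())   # tiny job; prints 45
sc.stop()
```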

Continue reading “Running PySpark in Jupyter / IPython notebook”

Access S3 Files in Spark

In this blog post, we will learn how to access S3 files using Spark on CloudxLab.
Please follow the steps below to access S3 files:
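The post walks through the exact steps; in general, reading an S3 object from PySpark comes down to supplying credentials through the Hadoop configuration, roughly as sketched below. The bucket, key, and credential values are placeholders:

```python
# Sketch of reading an S3 file from PySpark; all names below are placeholders.
from pyspark import SparkContext

sc = SparkContext(appName="S3Access")
hconf = sc._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "YOUR_AWS_ACCESS_KEY")
hconf.set("fs.s3a.secret.key", "YOUR_AWS_SECRET_KEY")

# The s3a:// scheme requires the hadoop-aws jar on the classpath.
rdd = sc.textFile("s3a://your-bucket/path/to/file.txt")
print(rdd.take(5))   # first five lines of the object
```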

Access Spark 1.2.1, Spark 1.6 and Spark 2.0 on CloudxLab

In this blog post, we will learn how to access various versions of Spark on CloudxLab. Spark 1.2.1 will be helpful if you are preparing for the CCA (Cloudera Certified Associate) certification. Spark 1.6 will be useful for practicing SparkR. Please note that Spark 1.2.1, Spark 1.6, and Spark 2.0.1 may not integrate tightly with Hadoop, but you will be able to run most of the commands.

How to access Spark 1.2.1?
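The full steps are in the post itself; in general, switching between Spark installs on a shared Linux host comes down to pointing SPARK_HOME and PATH at the version you want, roughly like this (the install path below is hypothetical, not CloudxLab's actual location):

```bash
# Hypothetical install path -- the real one is given in the full post.
export SPARK_HOME=/usr/spark-1.2.1
export PATH=$SPARK_HOME/bin:$PATH
spark-submit --version   # confirm which Spark version now answers
```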

Continue reading “Access Spark 1.2.1, Spark 1.6 and Spark 2.0 on CloudxLab”

CloudxLab Getting Started Guide

Please use the resources below to make the most of your CloudxLab subscription.

CloudxLab hands-on videos

Hadoop videos on CloudxLab

Spark videos on CloudxLab

Stream Processing Using Apache Spark and Kafka

Thank you all for your overwhelming response to our “Stream Processing Using Apache Spark and Apache Kafka” session in the “Apache Spark Hands-On” series, which took place on June 15, 2016, at 8:00 PM IST.

Key takeaways:

+ Introduction to Apache Spark
+ Introduction to stream processing
+ Understanding RDD (Resilient Distributed Datasets)
+ Understanding DStreams (discretized streams)
+ Kafka Introduction
+ Understanding Stream Processing flow
+ Real-time hands-on using CloudxLab
+ Questions and Answers

Continue reading “Stream Processing Using Apache Spark and Kafka”

Apache Spark Introduction

Thank you all for your overwhelming response to our “Apache Spark Introduction” session in the “Apache Spark Hands-On” series, which took place on April 28, 2016, at 8:00 PM IST.

Presented by
Sandeep Giri

Key takeaways for this webinar:

+ Introduction to Apache Spark
+ Introduction to RDD (Resilient Distributed Datasets)
+ Loading data into an RDD
+ RDD Operations – Transformations
+ RDD Operations – Actions (a short example of both appears after this list)
+ Hands-on demos using CloudxLab
+ Questions and Answers
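To illustrate the transformation/action distinction from the takeaways above, here is a tiny self-contained sketch (the data is made up for illustration):

```python
# Transformations build RDDs lazily; actions trigger the actual computation.
from pyspark import SparkContext

sc = SparkContext(appName="RDDIntro")
nums = sc.parallelize([1, 2, 3, 4])    # load data into an RDD
squares = nums.map(lambda x: x * x)    # transformation: lazy, returns an RDD
print(squares.collect())               # action: runs the job -> [1, 4, 9, 16]
sc.stop()
```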

Continue reading “Apache Spark Introduction”