1. Introduction

In this data analytics case study, we will use US census data to build a model that predicts whether the income of an individual in the US is greater than or less than USD 50000, based on the information available about that individual in the census data.

The dataset used for the analysis is an extract of the 1994 census data prepared by Barry Becker and donated to the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets/Census+Income. It is popularly known as the “Adult” data set. The case study proceeds in the following order:

1. Describe the data- Specifically the predictor variables (also called independent variables or features) from the census data, and the dependent variable, which is the income level (either “greater than USD 50000” or “less than or equal to USD 50000”).
2. Clean the data- Any data from the real world is messy and noisy, and needs to be reshaped in order to aid exploration and modeling.
3. Explore the independent variables- A crucial step before modeling. Exploration gives the analyst insight into each variable's predictive power: its distribution, its variance, any skew it has, and so on. In most analytics projects, the analyst goes back at this stage for more data, or for better context or clarity on the findings.
4. Build the prediction model with the training data- Since data like the census data can have many weak predictors, for this case study I have chosen the non-parametric boosting algorithm. Boosting is a classification technique (here we classify whether an individual's income is “greater than USD 50000” or not) that combines many weak predictors into an accurate ensemble. Cross validation, a mechanism that reduces overfitting during modeling, is also used with boosting.
5. Validate the prediction model with the testing data- The built model is applied to test data it has never seen, to estimate the accuracy the model would achieve in the field when deployed. Since this is a case study, only the crucial steps are retained to keep the content concise and readable.

2. Describing the Data

As mentioned earlier, the data set is from http://archive.ics.uci.edu/ml/datasets/Census+Income.

2.1 Dependent Variable

The dependent variable is “incomelevel”, representing the level of income. A value of “<=50K” indicates “less than or equal to USD 50000” and “>50K” indicates “greater than USD 50000”.

2.2 Independent Variables

Below are the independent variables (features or predictors) from the census data.

| Variable Name | Description | Type | Possible Values |
| --- | --- | --- | --- |
| Age | Age of the individual | Continuous | Numeric |
| Workclass | Class of work | Categorical | Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked |
| fnlwgt | Final weight determined by the Census org | Continuous | Numeric |
| Education | Education of the individual | Ordered Factor | Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool |
| Education-num | Number of years of education | Continuous | Numeric |
| Marital-status | Marital status of the individual | Categorical | Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse |
| Occupation | Occupation of the individual | Categorical | Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces |
| Relationship | Present relationship | Categorical | Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried |
| Race | Race of the individual | Categorical | White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black |
| Sex | Sex of the individual | Categorical | Female, Male |
| Capital-gain | Capital gain made by the individual | Continuous | Numeric |
| Capital-loss | Capital loss made by the individual | Continuous | Numeric |
| Hours-per-week | Average number of hours worked per week | Continuous | Numeric |
| Native-country | Native country of the individual | Categorical | United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands |

3. Reading the Data

Training data and test data are separately available at the UCI source. Both files are downloaded below; the test file is set aside until model validation.

Since the training data file does not contain variable names, the names are specified explicitly while reading the data set. Extra spaces are also stripped while reading.

The dataset is read and stored as the train data frame of 32561 rows and 15 columns. A high-level summary of the data is below; all the variables have been read in their expected classes.
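A minimal R sketch of the steps described above. The file names, variable names, and the use of "?" as the missing-value marker are assumptions based on the UCI "Adult" data files, since the article's own code is not shown:

```r
# Download the training and test files from the UCI repository
trainFile <- "adult.data"
testFile  <- "adult.test"
if (!file.exists(trainFile))
  download.file("http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
                destfile = trainFile)
if (!file.exists(testFile))
  download.file("http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
                destfile = testFile)

# The raw file has no header row, so variable names are supplied explicitly;
# strip.white trims the stray spaces around each field, and "?" marks NAs
colNames <- c("age", "workclass", "fnlwgt", "education", "educationnum",
              "maritalstatus", "occupation", "relationship", "race", "sex",
              "capitalgain", "capitalloss", "hoursperweek", "nativecountry",
              "incomelevel")
train <- read.table(trainFile, header = FALSE, sep = ",",
                    strip.white = TRUE, col.names = colNames,
                    na.strings = "?", stringsAsFactors = TRUE)
str(train)   # 32561 obs. of 15 variables
```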

4. Cleaning the Data

The training data set is cleaned for missing or invalid data.

About 7% of the rows (2399 of 32561) have NAs in them. It is observed that in most of these rows, the ‘workclass’ and ‘occupation’ values are missing together, and the remaining rows are missing ‘nativecountry’. We could handle the missing values by imputing the data. However, since ‘workclass’, ‘occupation’ and ‘nativecountry’ could potentially be very good predictors of income, imputing may simply skew the model.

Also, since most of the rows with missing data, 2066 of 2399 (~86%), pertain to the “<=50K” income level and the dataset is predominantly of the “<=50K” income level, there will not be much information loss for the predictive model building if we remove the rows with NAs.

Data sets with NAs are removed below:
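A sketch of the removal step, assuming the data frame is named train as in the reading step (the article's code is not shown):

```r
# Count rows with at least one missing value, and see which income level
# they mostly belong to, before dropping them
sum(!complete.cases(train))                        # 2399
table(train$incomelevel[!complete.cases(train)])   # predominantly "<=50K"

# Keep only the complete cases
myCleanTrain <- train[complete.cases(train), ]
nrow(myCleanTrain)                                 # 30162
```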

The ‘fnlwgt’ variable is a final sampling weight: it refers to population totals derived from the CPS by creating “weighted tallies” of specified socio-economic characteristics of the population. This variable is removed from the training data set due to its negligible impact on income level.

The cleaned data set is now myCleanTrain.
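Dropping the sampling-weight column can be done in one line, assuming myCleanTrain was built as above:

```r
# Remove the sampling-weight column; it carries no predictive signal here
myCleanTrain$fnlwgt <- NULL
str(myCleanTrain)   # 30162 obs. of 14 variables
```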

5. Explore the Data

Each of the variables is explored for quirks, distribution, variance, and predictability.

5.1 Explore the Continuous Variables

Since the model of choice here is boosting, which is non-parametric (it makes no assumption about the statistical distribution of the data), we will not be transforming variables to address skewness. We will, however, try to understand the data to determine each variable's predictive power.

5.1.1 Explore the Age variable

The Age variable has a wide range and variability. The distribution and mean are quite different for income level <=50K and >50K, implying that ‘age’ will be a good predictor of ‘incomelevel’.
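A sketch of this comparison, assuming myCleanTrain from the cleaning step and the ggplot2 package (the article's plotting code is not shown; qplot is used here as it is elsewhere in the article's discussion):

```r
library(ggplot2)

# Compare the age distribution across the two income levels
summary(myCleanTrain$age)
tapply(myCleanTrain$age, myCleanTrain$incomelevel, mean)

# Boxplot of age by income level
qplot(incomelevel, age, data = myCleanTrain, geom = "boxplot",
      main = "Age distribution by income level")
```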

5.1.2 Explore the Years of Education Variable

The Years of Education variable has good variability. The statistics are quite different for income level <=50K and >50K, implying that ‘educationnum’ will be a good predictor of ‘incomelevel’.

5.1.3 Explore the Capital Gain and Capital Loss variables

From the plots below, the capital gain and capital loss variables show little variance within each income level. However, their means differ between the income levels, so these variables can still be used for prediction.

5.1.4 Explore the Hours Per Week variable

The Hours Per Week variable also has good variability, implying that ‘hoursperweek’ will be a good predictor of ‘incomelevel’.
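The per-level comparisons above can all be produced with the same pattern; a sketch, assuming myCleanTrain and the variable names used earlier:

```r
# Per-income-level summary statistics for the remaining continuous variables
for (v in c("educationnum", "capitalgain", "capitalloss", "hoursperweek")) {
  cat("\n==", v, "==\n")
  print(tapply(myCleanTrain[[v]], myCleanTrain$incomelevel, summary))
}
```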

5.1.5 Explore the correlation between continuous variables

The correlation matrix below shows that the continuous variables are essentially uncorrelated, so each can contribute information to the model independently of the others.
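A sketch of the correlation check, assuming the cleaned frame and variable names used earlier:

```r
# Pairwise correlations between the continuous predictors
contVars <- c("age", "educationnum", "capitalgain", "capitalloss",
              "hoursperweek")
round(cor(myCleanTrain[, contVars]), 2)
```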

5.2 Explore Categorical Variables

5.2.1 Exploring the Sex variable

The sex variable is generally a weak predictor, and that holds for income level prediction too. This variable will not be used in the model.

5.2.2 Exploring the work class, occupation, marital status, relationship and education variables

The variables workclass, occupation, maritalstatus and relationship all show good predictability of the incomelevel variable.
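One way to see this, sketched here under the same assumed names, is to tabulate the share of each income level within each category; workclass is shown, and the same pattern applies to the other categorical variables:

```r
# Row-wise proportions: share of "<=50K" vs ">50K" within each work class
round(prop.table(table(myCleanTrain$workclass, myCleanTrain$incomelevel),
                 margin = 1), 2)
```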

The education variable, however, needs to be reordered and marked an ordinal variable (ordered factor variable). The new ordinal variable also shows good predictability of incomelevel.
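A sketch of the reordering, assuming myCleanTrain; the ordering below follows the usual schooling sequence and is an assumption, since the article's code is not shown:

```r
# Re-create 'education' as an ordered factor from Preschool up to Doctorate
edLevels <- c("Preschool", "1st-4th", "5th-6th", "7th-8th", "9th", "10th",
              "11th", "12th", "HS-grad", "Some-college", "Assoc-voc",
              "Assoc-acdm", "Bachelors", "Masters", "Prof-school",
              "Doctorate")
myCleanTrain$education <- factor(myCleanTrain$education,
                                 levels = edLevels, ordered = TRUE)
levels(myCleanTrain$education)
```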

5.2.3 Exploring the nativecountry variable

Plotting the percentage of individuals earning more than USD 50000 by native country shows that ‘nativecountry’ is a good predictor of incomelevel.

The names of the countries are cleaned for display on the world map. The code pertaining to this is not shown, to keep the article concise.

5.3 Building the Prediction Model

Finally, we come to building the prediction model. We will use all the independent variables except the sex variable to build a model that predicts, from the census data, whether an individual's income is greater than or less than USD 50000.

Since census data typically consists of many weak predictors, the boosting algorithm is used for this classification model.

I have also used cross validation (CV), where the training data is partitioned into a number of folds and a boosted model is fitted and evaluated on each split; the cross-validated results guide the choice of the final model. This helps avoid overfitting the model to the training data.
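A sketch of the fitting step using the caret package with gbm as the boosting engine; the fold count, seed, and the model object name boostFit are assumptions, since the article's code is not shown (caret and gbm must be installed):

```r
library(caret)

set.seed(32323)

# 10-fold cross-validation; sex is excluded as discussed above
trCtrl   <- trainControl(method = "cv", number = 10)
boostFit <- train(incomelevel ~ . - sex,
                  data = myCleanTrain, method = "gbm",
                  trControl = trCtrl, verbose = FALSE)

# In-sample performance: confusion matrix on the training data
confusionMatrix(predict(boostFit, myCleanTrain), myCleanTrain$incomelevel)
```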

The confusion matrix below shows an in-sample overall accuracy of ~86%, a sensitivity of ~88% and a specificity of ~79%.

This means that 86% of the time the model classifies the income level correctly; 88% of the time an income level of at most USD 50000 is classified correctly; and 79% of the time an income level above USD 50000 is classified correctly.

5.4 Validating the Prediction Model

The prediction model is now applied to the test data to estimate its true performance. The test data is cleaned in the same way as the training data before the model is applied.

The cleaning is not shown to keep the case study concise. The cleaned test dataset has 15060 rows and 14 columns with no missing data.

The prediction model is applied to the test data. From the confusion matrix below, the out-of-sample performance is an overall accuracy of ~86%, sensitivity of ~88% and specificity of ~78%, which is quite similar to the in-sample performance.
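A sketch of the validation step. The trained model object is assumed to be called boostFit; the skip of the test file's first line and the trailing “.” on its labels are quirks of the UCI adult.test file:

```r
# Read the test file with the same column names used for training
colNames <- c("age", "workclass", "fnlwgt", "education", "educationnum",
              "maritalstatus", "occupation", "relationship", "race", "sex",
              "capitalgain", "capitalloss", "hoursperweek", "nativecountry",
              "incomelevel")
test <- read.table("adult.test", header = FALSE, sep = ",", skip = 1,
                   strip.white = TRUE, col.names = colNames,
                   na.strings = "?", stringsAsFactors = TRUE)

# Same cleaning as the training data: drop NAs and the fnlwgt column
myCleanTest <- test[complete.cases(test), ]
myCleanTest$fnlwgt <- NULL

# The test file writes labels with a trailing "." (e.g. "<=50K."); strip it
# so the factor levels match the training data
myCleanTest$incomelevel <- factor(sub("\\.$", "", myCleanTest$incomelevel))

# Out-of-sample performance
confusionMatrix(predict(boostFit, myCleanTest), myCleanTest$incomelevel)
```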

6. Executive Summary

During a data analytics exercise, it is very important to understand how the built model has performed with respect to a baseline model. This helps the analyst understand if there is really any value that the new model adds.

The baseline accuracy (here, the accuracy of classification by chance, since there is no prior model) is 75% for income less than or equal to USD 50000 (sensitivity) and 25% for income more than USD 50000 (specificity), with an overall accuracy of 68% (see the skewed distribution of the two income levels in the cleaned test data).

The prediction model built using the boosting algorithm predicts an income level of at most USD 50000 with 88% accuracy (sensitivity) and an income level above USD 50000 with 78% accuracy (specificity), with an overall accuracy of 86%.

So the prediction model does perform better than the baseline model.

The maps below show the prediction performance (overall accuracy, sensitivity and specificity) by native country. (To keep the report concise, the computations for plotting the maps are not shown.)
