Every now and then, a new company comes up with Hadoop classes and courses, and my friends keep asking me which of these courses is worth taking. I gave them a few tips for choosing the course best suited to them. Here are a few tips to help you decide which course to attend:
1. Does the instructor have domain expertise?
Know your instructor's background. Has (s)he done any big-data-related work? I have seen many instructors who merely attended a course somewhere and then became instructors themselves.
If the instructor has never worked in the domain, do not take the class. Also, avoid training institutes that do not share details about their instructors.
2. Is the instructor hands on? When did she/he code last time?
In the domain of technology, there is a humongous difference between an instructor who is hands-on with code and one who delivers based on theoretical knowledge alone. Also, find out when the instructor last wrote code. If the instructor has never coded, do not attend the class.
3. Does the instructor encourage & answer your questions?
There are many recorded free videos available across the internet. The only reason you would go for live classes would be to get your questions answered and doubts cleared immediately.
If the instructor does not encourage and answer questions, such classes are fairly useless.
Continue reading “10 Things to Look for When Choosing a Big Data course / Institute”
What is Apache Flume?
Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store.
Flume supports a large variety of sources, including:
- tail (like unix tail -f),
- log4j – allowing Java applications to write logs to HDFS via flume
Flume nodes can be arranged in arbitrary topologies. Typically, a node runs on each source machine, with tiers of aggregating nodes that the data flows through on its way to HDFS.
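To make the agent idea concrete, here is a minimal sketch of a single-agent configuration that tails a local log file into HDFS. The agent name `a1` and both paths are hypothetical, not taken from the presentation:

```properties
# flume.conf -- hypothetical single-agent setup: tail a log file into HDFS
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# source: follow a local log file, like `tail -F`
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1

# channel: buffer events in memory between source and sink
a1.channels.c1.type = memory

# sink: write the events to HDFS
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events
a1.sinks.k1.channel = c1
```

Such an agent would then be started with something like `flume-ng agent --conf-file flume.conf --name a1`.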
- What is Flume
- Flume: Use Case
- Flume: Agents
- Flume: Use Case – Agents
- Flume: Multiple Agents
- Flume: Sources
- Flume: Delivery Reliability
- Flume: Hands-on
Introduction to Flume Presentation
Please feel free to leave your comments in the comment box so that we can improve the guide and serve you better. Also, follow CloudxLab on Twitter to get updates on new blogs and videos.
If you wish to learn Hadoop and Spark technologies such as MapReduce, Hive, HBase, Sqoop, Flume, Oozie, Spark RDD, Spark Streaming, Kafka, DataFrames, SparkSQL, SparkR, MLlib, and GraphX, and build a career in the Big Data and Spark domain, then check out our signature course on Big Data with Apache Spark and Hadoop, which comes with:
- Online instructor-led training by professionals having years of experience in building world-class BigData products
- High-quality learning content including videos and quizzes
- Automated hands-on assessments
- 90 days of lab access so that you can learn by doing
- 24×7 support and forum access to answer all your queries throughout your learning journey
- Real-world projects
- A certificate which you can share on LinkedIn
Confused about whether to take up a career in Big Data? Planning to invest your time in getting certified and acquiring expertise in related frameworks like Hadoop and Spark, but worried that you are making a huge mistake? Just spend a few minutes reading this blog and you will get six reasons why you are making a smart choice by selecting a career in big data.
Why Big Data?
Many people believe that Big Data is the next big thing, the thing that will help companies rise above their competitors and position themselves as best in class in their respective sectors.
Companies these days generate gigantic amounts of information, irrespective of the industry they belong to, and this data needs to be stored so that it can be processed and no important information is missed, information that could lead to a breakthrough in their sector. Atul Butte, of the Stanford School of Medicine, has stressed the importance of data by saying, "Hiding within those mounds of data is the knowledge that could change the life of a patient, or change the world." This is where Big Data analytics plays a very crucial role.
With the use of Big Data platforms, gigantic amounts of data can be brought together and processed to uncover patterns that help a company make better decisions, grow, increase its productivity, and create value in its products and services.
Continue reading “6 Reasons Why Big Data Career is a Smart Choice”
In this blog post, we will learn how to stream Twitter data using Flume on CloudxLab.
To download tweets from Twitter, we first have to configure a Twitter app.
Create Twitter App
Navigate to the Twitter app URL and sign in with your Twitter account.
Click on “Create New App”
Continue reading “Streaming Twitter Data using Flume”
In this post, we will learn how to configure the tools required for CloudxLab's Python for Machine Learning course. We will use Python 3 and Jupyter notebooks for hands-on practicals in the course. Jupyter notebooks provide a really good user interface for writing code, equations, and visualizations.
Please choose one of the options listed below for practicals during the course.
Continue reading “Python Setup Using Anaconda For Machine Learning and Data Science Tools”
In this data analytics case study, we will use the US census data to build a model to predict if the income of any individual in the US is greater than or less than USD 50000 based on the information available about that individual in the census data.
The dataset used for the analysis was extracted from the 1994 census data by Barry Becker and donated to the public repository at http://archive.ics.uci.edu/ml/datasets/Census+Income. This dataset is popularly called the "Adult" data set. We will go about this case study in the following order:
- Describe the data: specifically the predictor variables (also called independent variables or features) from the Census data, and the dependent variable, which is the level of income (either "greater than USD 50000" or "less than USD 50000").
- Acquire and read the data: download the data directly from the source and read it in.
- Clean the data: any data from the real world is messy and noisy, so the data needs to be reshaped in order to aid exploration and the modeling that predicts the income level.
- Explore the independent variables of the data: a very crucial step before modeling is the exploration of the independent variables. Exploration gives an analyst great insight into the predicting power of each variable. An analyst looks at the distribution of the variable, how well it predicts the income level, what skew it has, etc. In most analytics projects, the analyst goes back to either get more data or get better context or clarity on the findings.
- Build the prediction model with the training data: since data like the Census data can have many weak predictors, for this particular case study I have chosen the non-parametric predicting algorithm of Boosting. Boosting is a classification algorithm (here we classify whether an individual's income is "greater than USD 50000" or "less than USD 50000") that gives good prediction accuracy from weak predictors. Cross-validation, a mechanism to reduce overfitting while modeling, is also used with Boosting.
- Validate the prediction model with the testing data: here the built model is applied to test data that the model has never seen. This determines the accuracy the model would achieve in the field when deployed. Since this is a case study, only the crucial steps are retained to keep the content concise and readable.
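As a tiny illustration of the data layout and the dependent variable described above, the shell sketch below counts records per income label. The two sample rows are invented, but they follow the comma-separated layout of the "Adult" data set, where the last field is the income label:

```shell
# write a two-row sample in the "Adult" layout (fields abbreviated; last field is the label)
cat > sample.csv <<'EOF'
39, State-gov, 77516, Bachelors, <=50K
52, Self-emp, 209642, HS-grad, >50K
EOF

# count the records per income level (the dependent variable)
awk -F', ' '{ counts[$NF]++ } END { for (k in counts) print k, counts[k] }' sample.csv | sort
# prints:
#   <=50K 1
#   >50K 1
```

The same one-liner, run against the full data set, gives a quick feel for how imbalanced the two income classes are before any modeling starts.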
Continue reading “Predicting Income Level, An Analytics Casestudy in R”
Buoyed by the success of our previous webinar and excited by the unending curiosity of our audience, we at CloudxLab decided to conduct another webinar on "Big Data & AI" on 24th August. Mr Sandeep Giri, founder of CloudxLab, was the lead presenter. A graduate of IIT Roorkee with more than 15 years of experience at companies such as DE Shaw, InMobi, and Amazon, Sandeep conducted the webinar to the appreciation of all.
Continue reading “CloudxLab Conducts Another Successful Webinar On “Big Data & AI””
Artificial Intelligence (AI) is the buzzword resounding and echoing all over the world. While large corporations, organizations, and institutions publicly proclaim massive investments in developing and deploying AI capabilities, people in general feel perplexed about the meaning and nuances of AI. This blog is an attempt to demystify AI and provide a brief introduction to its various aspects for anyone, engineer, non-engineer, or beginner, who is seeking to understand AI. In the forthcoming discussion, we will explore the following questions:
- What is AI & what does it seek to accomplish?
- How will the goals of AI be accomplished, and through which methodologies?
- Why is AI gaining so much momentum now?
Continue reading “What, How & Why of Artificial Intelligence”
Understanding the tremendous current interest of students and professionals in "Big Data & AI", CloudxLab conducted a webinar on July 12, 2017, to introduce and explain the many nuances of this upcoming field to all the enthusiasts. Mr Sandeep Giri, founder of CloudxLab, with more than 15 years of industry experience at companies such as Amazon, InMobi, and DE Shaw, was the lead presenter in the webinar.
The webinar covered the following:
- Overview of Big Data
- Big Data Technology Stack
- Introduction to Artificial Intelligence
- Demo of Machine Learning Example
- Artificial Intelligence Technology Stack
Continue reading “CloudxLab Webinar on “Big Data & AI”: An Overwhelming Success”
In this blog post, we will learn how to install Python packages on CloudxLab.
First, create a virtual environment for your project. A virtual environment is a tool that keeps the dependencies required by different projects in separate places by creating isolated Python environments for them. Log in to the CloudxLab web console and create a virtual environment for your project:
$ mkdir my_project
$ cd my_project
$ virtualenv venv
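Once the environment is activated, pip installs packages into it rather than into the system Python. Here is a self-contained sketch of the whole flow; on systems where `virtualenv` is unavailable, `python3 -m venv` behaves the same way, and `requests` is only a hypothetical example package:

```shell
# create a project directory and an isolated Python environment inside it
mkdir -p my_project
cd my_project
python3 -m venv venv             # same effect as `virtualenv venv`
source venv/bin/activate         # later pip/python commands now use the venv
which python                     # points inside my_project/venv, not the system path
# install packages with pip as usual, e.g.:
#   pip install requests         # hypothetical example; downloads from PyPI
```

Running `deactivate` afterwards returns the shell to the system Python, leaving everything installed in the environment untouched for the next session.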
Continue reading “How to install Python packages on CloudxLab?”