Problem
Write a MapReduce code to count the frequency of characters in a file stored in HDFS.
Dataset
The file is located at
/data/mr/wordcount/big.txt
Sample Output
The output file will contain each character and its frequency in the file, for example:
a 48839
b 84930
c 84939
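For orientation, here is a minimal sketch of what a Hadoop-streaming mapper and reducer for this task might look like. This is an illustrative version only; the mapper.py and reducer.py in the CloudxLab repository (used in the steps below) may handle details such as case, whitespace, or punctuation differently.

mapper.py (illustrative sketch):

#!/usr/bin/env python
# Emit "character<TAB>1" for every non-whitespace character on stdin.
import sys

for line in sys.stdin:
    for char in line:
        if char.isspace():
            continue
        print("%s\t%d" % (char, 1))

reducer.py (illustrative sketch):

#!/usr/bin/env python
# Sum the counts per character. Hadoop streaming sorts the mapper output
# by key, so identical characters arrive on consecutive lines and a
# simple running total is enough.
import sys

current_char = None
current_count = 0

for line in sys.stdin:
    char, count = line.rstrip("\n").split("\t", 1)
    count = int(count)
    if char == current_char:
        current_count += count
    else:
        if current_char is not None:
            print("%s\t%d" % (current_char, current_count))
        current_char = char
        current_count = count

# Flush the last key.
if current_char is not None:
    print("%s\t%d" % (current_char, current_count))

The mapper emits one key-value pair per character, and the reducer relies on Hadoop's shuffle-and-sort phase to group identical keys together before summing.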
Steps
Check out mapper.py and reducer.py on GitHub
If you haven't cloned the CloudxLab GitHub repository yet, clone it into your home folder from the web console using the command below
git clone https://github.com/singhabhinav/cloudxlab.git ~/cloudxlab
Otherwise, update your local copy
cd ~/cloudxlab
git pull origin master
Go to the character_frequency directory
cd ~/cloudxlab/hdpexamples/python-streaming/character_frequency
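Optionally, you can sanity-check the scripts locally on a small input before submitting the Hadoop job, for example with a Unix pipeline like the one below (the sample text is arbitrary, and sort stands in for Hadoop's shuffle-and-sort phase):

echo "hello world" | python mapper.py | sort | python reducer.py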
Run the MapReduce code using Hadoop streaming. Make sure to save the output in the mapreduce-programming/character_frequency directory inside your home directory in HDFS. Run either of the commands below
hadoop jar /usr/hdp/2.6.2.0-205/hadoop-mapreduce/hadoop-streaming.jar -input /data/mr/wordcount/big.txt -output mapreduce-programming/character_frequency -mapper mapper.py -file mapper.py -reducer reducer.py -file reducer.py
OR
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar -input /data/mr/wordcount/big.txt -output mapreduce-programming/character_frequency -mapper mapper.py -file mapper.py -reducer reducer.py -file reducer.py
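Note that Hadoop will fail the job if the output directory already exists in HDFS. If you need to re-run the command, remove the old output first:

hadoop fs -rm -r mapreduce-programming/character_frequency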
Check the frequency of characters by typing the command below.
hadoop fs -cat mapreduce-programming/character_frequency/* | tail