Write a MapReduce code to count the frequency of characters in a file stored in HDFS.
The file is located at
The output will contain each character and its frequency in the file, for example:

a	48839
b	84930
c	84939
Check out mapper.py and reducer.py on GitHub.
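For reference, the core logic of the two scripts can be sketched as a single condensed file (this is an illustrative sketch; the actual mapper.py and reducer.py in the repository may differ):

```python
#!/usr/bin/env python
# A condensed sketch of the mapper/reducer logic for character frequency.
# The actual mapper.py and reducer.py in the CloudxLab repository may differ.
import sys
from itertools import groupby
from operator import itemgetter


def map_chars(lines):
    """Mapper: emit a (character, 1) pair for every non-whitespace character."""
    for line in lines:
        for ch in line.strip():
            yield (ch, 1)


def reduce_counts(pairs):
    """Reducer: sum the counts for each character. Hadoop's shuffle phase
    delivers records sorted by key, which sorted() emulates here."""
    for ch, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (ch, sum(count for _, count in group))


if __name__ == "__main__":
    # Run the whole map -> sort -> reduce pipeline over stdin.
    for ch, total in reduce_counts(map_chars(sys.stdin)):
        print("%s\t%d" % (ch, total))
```

Note that Hadoop streaming runs the mapper and reducer as separate processes connected by its shuffle phase; combining them in one file here is only for local illustration.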
If you haven't cloned the CloudxLab GitHub repository yet, clone it into your home folder from the web console using the command below:
git clone https://github.com/singhabhinav/cloudxlab.git ~/cloudxlab
Otherwise, update your local copy:

cd ~/cloudxlab
git pull origin master
Go to the count_character_frequency directory.
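The exercise does not say where count_character_frequency sits inside the clone, so one way to locate and enter it (assuming the repository was cloned to ~/cloudxlab as shown above) is:

```shell
# Locate the exercise directory inside the clone and change into it.
# Assumes the repository was cloned to ~/cloudxlab in the previous step.
dir=$(find ~/cloudxlab -type d -name count_character_frequency 2>/dev/null | head -n 1)
cd "$dir" || echo "count_character_frequency not found under ~/cloudxlab"
```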
Run the MapReduce job using Hadoop streaming. Make sure to save the output in the mapreduce-programming/character_frequency directory inside your home directory in HDFS. Run the command below:
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
  -input /data/mr/wordcount/big.txt \
  -output mapreduce-programming/character_frequency \
  -mapper mapper.py -file mapper.py \
  -reducer reducer.py -file reducer.py
Check the frequency of characters by typing the command below:
hadoop fs -cat mapreduce-programming/character_frequency/* | tail
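Each output line is a tab-separated character/count pair, so it is easy to consume programmatically. A small parsing helper (hypothetical, not part of the exercise files):

```python
# Parse one line of job output ("<char>\t<count>") into a (str, int) pair.
# Hypothetical helper, not part of the exercise files.
def parse_output_line(line):
    ch, count = line.rstrip("\n").split("\t")
    return ch, int(count)
```

For example, the line "a\t48839" parses to ("a", 48839).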