
Armel is the founder of ETAPIX Global, the Big Data company (founded in 2006). He is an experienced software developer and architect. Based in London, United Kingdom, Armel is known in the London startup scene and occasionally speaks at various events. His expertise is in Java, SOA, Business Intelligence, Enterprise Search and Data Warehousing. He has worked at various Fortune 100 companies, including Nokia Siemens Network, Tata, Barclays Plc and SMBC, among others. He is also an Open Source evangelist.

Hadoop Developer - WordCount tutorial using Maven and NetBeans 7.3RC2

02.13.2013

I have adapted the WordCount tutorial to Maven-based development, as this is probably the most common way to develop in companies. I am not going to re-explain how the WordCount example works; this tutorial simply aims to get you up and running with Hadoop development quickly.

I used NetBeans 7.3RC2 because of its Maven integration, but feel free to use an IDE of your choice. I am using Ubuntu 12.10 64-bit as the development environment, with Hadoop installed from the Debian distribution package.

Warning

When running your WordCount application, Hadoop might throw an out-of-memory exception because the default setting is -Xmx100m. The Apache website describes how to fix this, but the advice is not relevant if you installed Hadoop from the Debian distribution. Here is a quick solution: open /usr/bin/hadoop (changing /etc/hadoop/hadoop-env.sh has no effect and doesn't fix the problem) and:

  1. Set JAVA to the path of the JVM that you actually want to use.
  2. Set JAVA_HEAP_MAX to increase the memory available to the application, e.g. -Xmx3000m.
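For example, the edited lines in /usr/bin/hadoop might look like this (the JVM path below is only an illustration; point it at whichever JVM you have installed):

# in /usr/bin/hadoop
JAVA=/usr/lib/jvm/java-6-openjdk-amd64/bin/java   # the JVM to run Hadoop with
JAVA_HEAP_MAX=-Xmx3000m                           # raise the heap available to Hadoop applications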

Here are the steps to create the WordCount project in NetBeans:

  1. Create a new Maven-based Java project
    • NetBeans will create an App.java class; you can rename it to WordCount or leave it as is, since it doesn't affect the outcome of the tutorial. I will refer to the main class as App.java.
  2. Add the Hadoop dependencies; they are available in Maven Central. I used hadoop-core 1.1.1 for this tutorial.
  3. Important: Maven doesn't package dependencies when building an application, unless you are working with a "war" project, where it creates a lib folder. To make sure that our libraries are available to our program when packaged, we need to add the maven-assembly-plugin to our pom.xml. We also declare our "Main" class, which will be used to execute the program. View code here: http://pastebin.com/Z0xkyzis (a sketch of the relevant pom.xml sections follows after this list)
  4. Open App.java (or whatever you have renamed it to) and write the following. View code here: http://pastebin.com/cEiJByxi (a sketch is included after this list as well)
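In case the Pastebin link goes stale, here is a sketch of the relevant pom.xml sections. It assumes hadoop-core 1.1.1 from Maven Central and com.etapix.wordcount.App as the main class; adjust these to your own project's coordinates.

<!-- the Hadoop dependency from Maven Central -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.1.1</version>
</dependency>

<!-- bundle all dependencies into a single executable jar -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>com.etapix.wordcount.App</mainClass>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

With this in place, mvn package produces target/&lt;artifactId&gt;-&lt;version&gt;-jar-with-dependencies.jar next to the plain jar; that is the jar to hand to hadoop jar.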
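Likewise, here is a sketch of App.java along the lines of the classic Apache WordCount example, using the com.etapix.wordcount.App class name from the run command below:

package com.etapix.wordcount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class App {

    // Emits (word, 1) for every token in the input line
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Sums the counts for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(App.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory (must not exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}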

You can create your Hadoop "input" directory locally, copy it into HDFS, and then execute the following:
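If the sample input is not in HDFS yet, something along these lines should load it (assuming a local input directory containing text files such as file01, as in the Apache tutorial):

$ hadoop dfs -put input input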

$ hadoop dfs -ls input

$ hadoop dfs -cat input/file01 

$ hadoop jar WordCount.jar com.etapix.wordcount.App input output
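Once the job completes, you can inspect the word counts; with the new MapReduce API, the reducer output typically lands in files named part-r-00000 and so on:

$ hadoop dfs -cat output/part-r-00000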

This assumes that you are running from your project home directory and that you have installed Hadoop using the Debian distribution; otherwise, you can follow the rest of the tutorial on the Apache website.

Published at DZone with permission of its author, Armel Nene.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)