
Scott is a Senior Software Architect at Altamira Corporation. He has been developing enterprise and web applications professionally for over 15 years, using Java, Ruby/Rails, Groovy/Grails, and Python. His main areas of interest include object-oriented design, system architecture, testing, and frameworks of all types, including Spring, Hibernate, Ruby on Rails, Grails, and Django. In addition, Scott enjoys learning new languages to make himself a better and more well-rounded developer, a la The Pragmatic Programmers' advice to "learn one language per year."

Reading Hive Tables from MapReduce

01.11.2013

This article is by Stephen Mouring Jr, appearing courtesy of Scott Leberknight.

This is part two of a two-part blog series on how to read/write Apache Hive data from MapReduce jobs. Part one (Writing Hive Tables from MapReduce) is here.

Just as you sometimes need to write data to Hive with a custom MapReduce job, you sometimes need to read that data back out of Hive with a custom MapReduce job. As covered in part one, Hive is a layer that sits on top of HDFS and imposes a standard convention on the structure of its files so it can interpret them as rows and columns. Reading data out of Hive is just a matter of parsing the files correctly.

Recall that records processed by MapReduce (and by extension, Hive) are read and written as key-value pairs. Hive ignores the keys (read as a BytesWritable with a value of null) and reads/writes the values as Text objects. The value of the Text object for each row is the concatenation of all the column values, delimited by the table's field delimiter (which Hive defaults to the ASCII control character \001, i.e. Ctrl-A).
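To make the row format concrete, here is a minimal sketch in plain Java (no Hadoop dependencies; the column values are made up for illustration) of what a three-column Hive row looks like as a single delimited string:

```java
public class HiveRowFormat {
    // Hive's default field delimiter is the ASCII control character \001 (Ctrl-A).
    static final char FIELD_DELIMITER = '\001';

    // Join column values into the single delimited string Hive stores per row.
    static String toRow(String... columns) {
        return String.join(String.valueOf(FIELD_DELIMITER), columns);
    }

    public static void main(String[] args) {
        // A hypothetical three-column row: id, name, email.
        String row = toRow("42", "Alice", "alice@example.com");
        // Swap the invisible delimiter for '|' so it is visible when printed.
        System.out.println(row.replace(FIELD_DELIMITER, '|'));
    }
}
```

In a real job the whole string above would arrive in your map() method as one Text value.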

This seems like a simple problem, so my first thought was to just use String.split() in the map() method of the MapReduce job.

String SEPARATOR_FIELD = new String(new char[] {1});

// Note: Text.getBytes() returns the backing array, which may be longer than the
// actual content, so decode only the first getLength() bytes.
String[] rowColumns = new String(rowTextObject.getBytes(), 0, rowTextObject.getLength()).split(SEPARATOR_FIELD);

In theory this should have worked perfectly, but unfortunately String.split() (with its default limit of 0) discards trailing empty strings. This is a problem if the values at the end of a row are blank, since the resulting array will be shorter than the number of columns and you will be unable to match up which values belong with which columns. (Passing a limit of -1 to split() preserves the trailing empty strings, but you still pay for regex matching and the intermediate String.)

An alternative would be to create a String from the Text object and iterate through it using indexOf(). That approach, however, requires extra object creation and, depending on the scale of your MapReduce job and the size of your rows, may slow you down needlessly. A better option is to use the Text object's find() method, which searches the underlying bytes directly.

String SEPARATOR_FIELD = new String(new char[] {1});

String[] rowColumns = new String[NUMBER_OF_COLUMNS_IN_YOUR_HIVE_TABLE];

int start = 0;
int end = 0;

for (int i = 0; i < rowColumns.length; ++i) {
    end = rowTextObject.find(SEPARATOR_FIELD, start);
    if (end == -1) {
        end = rowTextObject.getLength();
    }

    rowColumns[i] = new String(rowTextObject.getBytes(), start, end - start);

    start = end + 1;
}

This will parse each value into the appropriate index of the rowColumns array. Blank values are also handled correctly, resulting in empty strings at the corresponding indexes of the rowColumns array.
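The same loop can be exercised outside Hadoop with String.indexOf() standing in for Text.find() (a sketch; the column count and sample row are made up, and in the real job the input is a Text object rather than a String):

```java
public class RowParser {
    static final String SEPARATOR_FIELD = new String(new char[] {1});

    // Mirrors the Text.find() loop above, using String.indexOf() instead.
    static String[] parseRow(String row, int numberOfColumns) {
        String[] rowColumns = new String[numberOfColumns];
        int start = 0;
        for (int i = 0; i < rowColumns.length; ++i) {
            int end = row.indexOf(SEPARATOR_FIELD, start);
            if (end == -1) {
                end = row.length();  // last column runs to the end of the row
            }
            rowColumns[i] = row.substring(start, end);
            start = end + 1;
        }
        return rowColumns;
    }

    public static void main(String[] args) {
        // A four-column row with a blank second column.
        String row = "42" + SEPARATOR_FIELD + SEPARATOR_FIELD
                + "Alice" + SEPARATOR_FIELD + "alice@example.com";
        String[] columns = parseRow(row, 4);
        System.out.println(columns[1].isEmpty()); // prints true -- the blank stays at index 1
    }
}
```

Unlike the split() approach, the blank value stays in position, so every value still lines up with its column.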

Published at DZone with permission of Scott Leberknight, author and DZone MVB. (source)
