
Big Data Chapter Excerpt: Implementing Schemas with Apache Thrift

01.12.2012
This is an excerpt from the upcoming Manning book about Big Data.

 

 Big Data

Principles and Best Practices of Scalable Realtime Data Systems

By Nathan Marz and Samuel E. Ritchie

Thrift is a widely used project that originated at Facebook. It can be used for making language-neutral RPC servers, but developers also use it for its schema-creation capabilities. In this article based on chapter 2, author Nathan Marz discusses the workhorses of Thrift, the struct and union type definitions, and Thrift's built-in mechanisms for evolving a schema over time.

 



Structs

The following code shows how to define a struct using the Thrift Interface Definition Language (IDL). Defining a struct is like defining a class in an object-oriented language: you specify all the data the object contains. The difference is that a Thrift struct only contains data and doesn't specify any extra behavior for the object. Fields in a struct can be:

  • Primitive types like strings, ints, longs, and doubles. In the Thrift IDL, these are referred to as string, i32, i64, and double, respectively.
  • Collections of other types. Thrift supports list, map, and set.
  • Another Thrift struct or union.

struct Person {
  1: string twitter_username;
  2: string full_name;
  3: list<string> interests;
}

The following code listing shows how to serialize a struct in Java. As you can see, we're using ArrayList, a native Java data structure, as part of the Person object.

List<String> interests = new ArrayList<String>() {{
    add("hadoop");
    add("nosql");
}};

Person person = new Person("joesmith", "Joe Smith", interests);
TSerializer serializer = new TSerializer();
byte[] serialized = serializer.serialize(person);

Here's how to deserialize a Person object in Python. When the object is deserialized, any collection fields will use native Python data structures.

from thrift.TSerialization import deserialize

person = Person()
deserialize(person, serialized_bytes)
Fields in structs can be defined as either required or optional. If a field is required, then a value for that field must be provided, or else Thrift will raise an error upon serialization or deserialization. If a field is optional, its value will be null when not provided. You should always declare fields as either required or optional. The following listing shows how to define a struct containing required and optional fields.

struct Tweet {
  1: required string text;
  2: required i64 id;
  3: required i64 timestamp;
  4: required Person person;
  5: optional i64 response_to_tweet_id;
}

Unions

You can also define unions in Thrift. A union is a struct that must have exactly one field set. Unions are useful for representing polymorphic data. The following listing shows how to define a "PersonID" using a Thrift union that can be one of many different kinds of identifiers.

union PersonID {
  1: string email;
  2: i64 facebook_id;
  3: i64 twitter_id;
}
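The code Thrift generates for unions varies by target language, so as a rough illustration of the "exactly one field set" invariant, here is a toy Python sketch. This is not generated Thrift code; the class and field names merely mirror the PersonID union above.

```python
class PersonID:
    """Toy illustration of Thrift union semantics: exactly one field is set."""
    FIELDS = ("email", "facebook_id", "twitter_id")

    def __init__(self, **kwargs):
        # Collect the fields the caller actually provided.
        provided = [name for name, value in kwargs.items() if value is not None]
        if len(provided) != 1:
            raise ValueError("a union must have exactly one field set")
        if provided[0] not in self.FIELDS:
            raise ValueError("unknown field: " + provided[0])
        self.set_field = provided[0]
        self.value = kwargs[self.set_field]

# A PersonID identified by email address:
pid = PersonID(email="joe@example.com")
```

Constructing a PersonID with zero fields, or with both an email and a twitter_id, would raise an error, which is exactly the polymorphic "one of these identifiers" behavior a union captures.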

 

Evolving a schema

Thrift is designed so that schemas can evolve over time. The key to evolving Thrift schemas is the numeric identifier assigned to each field: those IDs are what identify fields in their serialized form. To change a schema while remaining backward compatible with existing data, you must obey the following rules.

  • Fields may be renamed. This is because the serialized form of an object identifies fields by their IDs, not their names.
  • Fields may be removed, but you must be sure never to reuse that field ID. When deserializing, Thrift skips any field whose ID it isn't expecting, so the data for a removed field is simply ignored in existing records. If you were to reuse the ID, Thrift would try to deserialize that old data into your new field, leading to invalid or incorrect data.
  • Only optional fields can be added to existing structs. You can't add a required field, because existing data won't have that field and would no longer be deserializable. Note that this point does not apply to unions, since unions have no notion of required and optional fields.
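All three rules follow from the fact that serialized records are keyed by field ID rather than field name. The following toy Python sketch (plain dicts, not Thrift's actual wire format) shows why renames are safe and why unknown IDs are skipped; the schemas and field names are made up for illustration.

```python
# Toy encoding keyed by numeric field ID, loosely mimicking how Thrift
# identifies fields on the wire. This is NOT Thrift's real binary protocol.

def encode(schema, obj):
    """schema maps field ID -> field name; returns a record keyed by ID."""
    return {fid: obj[name] for fid, name in schema.items() if name in obj}

def decode(schema, record):
    """Field IDs in the record that the schema doesn't know are skipped."""
    return {name: record[fid] for fid, name in schema.items() if fid in record}

# Version 1 of a hypothetical Person schema.
v1 = {1: "twitter_username", 2: "full_name", 3: "interests"}
record = encode(v1, {"twitter_username": "joesmith",
                     "full_name": "Joe Smith",
                     "interests": ["hadoop", "nosql"]})

# Rule 1: renaming field 1 still decodes the same serialized data.
v2 = {1: "handle", 2: "full_name", 3: "interests"}
decoded = decode(v2, record)

# Rule 2: removing field 3 (its ID must never be reused) just ignores
# that field's data in old records rather than raising an error.
v3 = {1: "handle", 2: "full_name"}
```

Under this model, adding a required field to v3 would break decoding of old records, since they carry no data for the new ID, which is why only optional additions are safe.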


Summary

In a relational database, the schema language is part of the database system and is integrated with how the database stores and processes that data. In the Big Data world, you use your own serialization framework that's separate from the storage and processing pieces. You get the flexibility to fine-tune this component to work exactly as needed to fit your data model.

There are a few different open source serialization frameworks available, namely Thrift, Protocol Buffers, and Avro. We discussed our favorite, Apache Thrift, because it’s mature and supports most languages, but you could use any of these tools for defining a schema.

 


Last updated: January 11, 2012