

RavenDB Bulk Inserts

01.02.2013

With RavenDB Bulk Inserts, we significantly improved the time it takes to insert a boatload of documents into RavenDB. By over an order of magnitude, in fact.

How did we do that? By doing a whole bunch of things, but mostly by being smart about utilizing the resources on both the client and the server.

Here is what a standard request timeline looks like:

[Figure: standard request timeline]

As you can see, there are several factors that hold us up here. We need to prepare (in memory) all of the data to send. On the server side, we wait until we have the entire dataset before we can start processing the data. Just that aspect costs us a lot.

And because there is a finite amount of memory we can use, we have to split things into batches, and each batch is a separate request, requiring the same song and dance on the network layer.

That doesn’t count what we have to do on the server end once we actually have the data: get the data, process it, flush it to disk, register it for indexing, call fsync, etc. That isn’t too much, unless you are trying to shove as much data as you can, as fast as you can.
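To make the contrast concrete, here is a minimal Python sketch of that classic batched approach. The batch size, document shape, and the send_request stand-in are all hypothetical; this is not RavenDB's client code, just the general pattern:

```python
import gzip
import json

BATCH_SIZE = 512  # hypothetical limit, driven by how much memory the client can spare

def send_request(payload: bytes) -> None:
    # Stand-in for a full HTTP round trip: connect, send the whole body,
    # wait for the server to read it all, process it, and reply.
    print(f"request sent: {len(payload):,} compressed bytes")

def naive_insert(documents) -> None:
    # Classic approach: buffer documents in memory, and once the batch is
    # full, ship it as a complete, separate request. The server cannot start
    # working until a batch has fully arrived, and every batch repeats the
    # same network song and dance.
    batch = []
    for doc in documents:
        batch.append(doc)
        if len(batch) == BATCH_SIZE:
            send_request(gzip.compress(json.dumps(batch).encode("utf-8")))
            batch = []
    if batch:
        send_request(gzip.compress(json.dumps(batch).encode("utf-8")))

naive_insert({"id": f"users/{i}", "name": f"user {i}"} for i in range(2000))
```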

In contrast, this is what the bulk insert looks like on the network:

[Figure: bulk insert request timeline]

We stream the results to the server directly, so while the client is still sending results, we are already flushing them to disk.

To make things even more interesting, we aren’t using standard GZip compression over the whole request. Instead, each batch is compressed independently, which means we don’t have a dependency on the compression routine's internal buffering system, etc. It also means that we get each batch much faster.
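As a rough illustration of what that means (not RavenDB's actual wire format, and with made-up batch sizes and document shapes), a Python generator can yield independently compressed batches while everything still travels as one streamed request:

```python
import gzip
import json

def compressed_batches(documents, batch_size=512):
    # Each batch is gzip-compressed on its own, so the receiver can
    # decompress and flush a batch to disk the moment it arrives, with no
    # dependency on the buffering behaviour of one long compression stream.
    batch = []
    for doc in documents:
        batch.append(doc)
        if len(batch) == batch_size:
            yield gzip.compress(json.dumps(batch).encode("utf-8"))
            batch = []
    if batch:
        yield gzip.compress(json.dumps(batch).encode("utf-8"))

# A generator like this can be fed to an HTTP client that supports chunked
# transfer encoding, so the whole insert remains a single streamed request.
docs = ({"id": f"users/{i}", "name": f"user {i}"} for i in range(2000))
for chunk in compressed_batches(docs):
    print(f"streamed chunk: {len(chunk):,} bytes")
```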

There are, of course, rate limits built in to protect ourselves from flooding the buffers, but for the most part, you will have a hard time hitting them.

In addition to that, because we are in the context of a single request, we can apply additional optimizations. We can do lazy flushes (we don’t have to wait for a full fsync for each batch) because we do the final fsync at the end of the request.
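Here is a toy sketch of that idea on the server side, with placeholder batch contents and file layout rather than RavenDB's storage engine: each batch is written as it arrives, and the expensive fsync happens exactly once at the end.

```python
import os
import tempfile

def apply_bulk_insert(batches) -> None:
    # Write each batch as soon as it arrives (a lazy flush into the OS page
    # cache), and pay for the durable fsync only once, after the whole
    # stream has been consumed.
    path = os.path.join(tempfile.mkdtemp(), "data")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        for batch in batches:
            os.write(fd, batch)   # no per-batch fsync
        os.fsync(fd)              # single durable flush at the very end
    finally:
        os.close(fd)

apply_bulk_insert([b"batch-1", b"batch-2", b"batch-3"])
```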

Finally, we created an optimized code path that skips a lot of the things we do in the normal path. For example, by default we assume you are doing inserts only (which saves checking the disk, and will throw if that isn’t true), we don’t populate the indexing prefetching queue, and so on. All in all, it means that we got more than an order of magnitude improvement.
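As a very rough sketch of that insert-only fast path (a toy in-memory store, not RavenDB's internals): the normal path pays for loading the previous version of a document, while the bulk path skips it, and an existing key is simply an error. The cheap in-memory check below only stands in for the error the real fast path would raise when the assumption is violated.

```python
class TinyStore:
    # Toy in-memory document store, only to contrast the two code paths.

    def __init__(self):
        self._docs = {}

    def put(self, key, doc):
        # Normal path: load the previous version first, so updates, etags
        # and the indexing prefetching queue can all be handled.
        previous = self._docs.get(key)
        self._docs[key] = doc
        return previous

    def bulk_put(self, key, doc):
        # Bulk insert path: assume the document is new and skip loading the
        # previous version; fail loudly if the assumption is wrong.
        if key in self._docs:
            raise KeyError(f"{key} already exists; bulk insert is insert-only")
        self._docs[key] = doc

store = TinyStore()
store.bulk_put("users/1", {"name": "Ayende"})
```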

 

Published at DZone with permission of Ayende Rahien, author and DZone MVB.
