
Michael loves building software; he's been building search engines for more than a decade, and has worked on Lucene as a committer, PMC member, and Apache member for the past few years. He is co-author of the recently published Lucene in Action, 2nd edition. In his spare time Michael enjoys building his own computers, writing software to control his house (mostly in Python), encoding videos, and tinkering with all sorts of other things.

Screaming fast Lucene searches using C++ via JNI

06.24.2013

At the end of the day, when Lucene executes a query, the true hot spot (after the initial setup) is usually rather basic code: it decodes sequential blocks of integer docIDs, term frequencies, and positions; matches them (e.g., taking the union or intersection for a BooleanQuery); computes a score for each hit; and finally, during collection, saves the hit if it is competitive.

Even apparently complex queries like FuzzyQuery or WildcardQuery go through a rewrite process that reduces them to much simpler forms like BooleanQuery.

Lucene's hot-spots are so simple that optimizing them by porting them to native C++ (via JNI) was too tempting!

So I did just that, creating the lucene-c-boost github project, and the resulting speedups are exciting:

    
Task            QPS base (StdDev)    QPS opt (StdDev)    Speedup
AndHighLow          469.2 (0.9%)         316.0 (0.7%)      0.7 X
Fuzzy1               63.0 (3.3%)          62.9 (2.0%)      1.0 X
Fuzzy2               25.8 (3.1%)          37.9 (2.3%)      1.5 X
AndHighMed           50.4 (0.7%)         110.0 (0.9%)      2.2 X
OrHighNotLow         46.8 (5.6%)         106.3 (1.3%)      2.3 X
LowTerm             298.6 (1.8%)         691.4 (3.4%)      2.3 X
OrHighNotMed         34.0 (5.3%)          89.2 (1.3%)      2.6 X
OrHighNotHigh         5.0 (5.7%)          14.2 (0.8%)      2.8 X
Wildcard             17.2 (1.2%)          51.1 (9.5%)      3.0 X
OrHighMed            18.7 (5.7%)          59.6 (1.1%)      3.2 X
OrHighHigh            6.7 (5.7%)          21.5 (0.9%)      3.2 X
OrHighLow            15.7 (5.9%)          50.8 (1.2%)      3.2 X
AndHighHigh          21.9 (1.0%)          69.0 (1.0%)      3.5 X
MedTerm              69.8 (4.2%)         243.0 (2.2%)      3.5 X
OrNotHighHigh        13.3 (5.7%)          46.7 (1.4%)      3.5 X
OrNotHighMed         26.7 (5.8%)         115.8 (2.8%)      4.3 X
HighTerm             22.4 (4.2%)         109.2 (1.4%)      4.9 X
Prefix3              10.1 (1.1%)          55.5 (3.7%)      5.5 X
OrNotHighLow         62.9 (5.5%)         351.7 (9.3%)      5.6 X
IntNRQ                5.0 (1.4%)          38.7 (2.1%)      7.8 X


These results are on the full, multi-segment Wikipedia English index with 33.3 M documents. Besides the amazing speedups, it's also nice to see that the variance (StdDev columns) is generally lower with the optimized C++ version, because HotSpot has (mostly) been taken out of the equation.

The API is easy to use, and it works with the default codec, so you won't have to re-index just to try it out: instead of IndexSearcher.search, call NativeSearch.search. If the query can be optimized, it will be; otherwise it will seamlessly fall back to IndexSearcher.search. The code is fully decoupled from Lucene and works with the stock Lucene 4.3.0 JAR, using Java's reflection APIs to grab the necessary bits.
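The try-the-native-path-then-fall-back dispatch described above can be sketched in plain Java. This is a hypothetical stand-in, not the actual lucene-c-boost classes; the real NativeSearch.search inspects the concrete Query type rather than a string, and the two paths here just return labels for illustration:

```java
// Sketch of "optimize when possible, otherwise fall back" dispatch.
// All names here (FallbackSearch, canOptimize, etc.) are hypothetical
// stand-ins, not the real lucene-c-boost API.
import java.util.function.Function;
import java.util.function.Predicate;

public class FallbackSearch {
    private final Predicate<String> canOptimize;       // "can native code handle this query shape?"
    private final Function<String, String> nativePath; // stand-in for the C++ search path
    private final Function<String, String> javaPath;   // stand-in for IndexSearcher.search

    public FallbackSearch(Predicate<String> canOptimize,
                          Function<String, String> nativePath,
                          Function<String, String> javaPath) {
        this.canOptimize = canOptimize;
        this.nativePath = nativePath;
        this.javaPath = javaPath;
    }

    // Callers always get an answer; which path produced it is invisible to them.
    public String search(String query) {
        return canOptimize.test(query) ? nativePath.apply(query) : javaPath.apply(query);
    }
}
```

The point of the pattern is that the caller never has to know which path ran, so adopting the optimization is a one-line change at the call site.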

This is all very new code, and I'm sure there are plenty of exciting bugs, but (after some fun debugging!) all Lucene core tests now pass when using NativeSearch.search.

This is not a C++ port of Lucene

This code is definitely not a general C++ port of Lucene. Rather, it implements a very narrow set of classes, specifically the common query types. The implementations are not general-purpose: they hardwire (specialize) specific code, removing all abstractions like Scorer, DocsEnum, Collector, DocValuesProducer, etc.

There are some major restrictions on when the optimizations will apply:
  • Only tested on Linux and Intel CPU so far
  • Requires Lucene 4.3.x
  • Must use NativeMMapDirectory as your Directory implementation, which maps entire files into RAM (avoids the chunking that the Java-based MMapDirectory must do)
  • Must use the default codec
  • Only sort by score is supported
  • None of the optimized implementations use advance: first, that code is rather complex and would be quite a bit of work to port to C++, and second, queries that benefit from advance are generally quite fast already, so we may as well leave them in Java
  • BooleanQuery is optimized, but only when all clauses are TermQuery against the same field
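The last restriction is mechanical enough to sketch. The types and method below are hypothetical stand-ins (not Lucene's classes and not the actual lucene-c-boost check), but they show the shape of the test: every clause must be a TermQuery, and all clauses must target one field:

```java
// Sketch of "all clauses are TermQuery against the same field".
// TermQuery/WildcardQuery here are minimal stand-ins, not Lucene's classes.
import java.util.List;

public class OptimizableCheck {
    interface Query {}
    record TermQuery(String field, String term) implements Query {}
    record WildcardQuery(String field, String pattern) implements Query {}

    static boolean isOptimizable(List<Query> clauses) {
        if (clauses.isEmpty()) return false;
        String field = null;
        for (Query q : clauses) {
            if (!(q instanceof TermQuery tq)) return false;   // non-TermQuery clause: bail out
            if (field == null) field = tq.field();            // remember the first field seen
            else if (!field.equals(tq.field())) return false; // clauses span fields: bail out
        }
        return true;
    }
}
```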

C++ is not faster than Java!

Not necessarily, anyway: before anyone goes off screaming about how these results "prove" Java is so much slower than C++, remember that this is far from a "pure" C++ vs. Java test. At least these three separate changes are mixed in:
  • Algorithmic changes. For example, lucene-c-boost sometimes uses BooleanScorer where Lucene uses BooleanScorer2. Really, we need to fix Lucene to make similar algorithmic changes (when they are faster). In particular, all of the OrXX queries that include a Not clause, as well as IntNRQ in the above results, benefit from algorithmic changes.
  • Code specialization: lucene-c-boost implements the searches as big hardwired scary-looking functions, removing all the nice Lucene abstractions. While abstractions are obviously necessary in Lucene, they unfortunately add run-time cost overhead, so removing these abstractions gives some gains.
  • C++ vs. Java.
It's not at all clear how much of the gain is due to which part; really, I need to create "matching" specialized Java sources to do a purer test.
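The specialization point can be illustrated in miniature. This is not lucene-c-boost code, just a toy contrast: the first method walks postings through an iterator abstraction (a stand-in for DocsEnum, with per-doc virtual calls), the second hardwires a loop over the raw decoded block. Both compute the same answer, but the second leaves the compiler far less to see through:

```java
// Toy contrast of abstracted vs. specialized postings traversal.
// Not actual Lucene or lucene-c-boost code.
import java.util.Arrays;
import java.util.PrimitiveIterator;

public class Specialization {
    // Abstracted: a DocsEnum-like iterator, one virtual call per doc.
    static long sumAbstracted(PrimitiveIterator.OfInt docs) {
        long sum = 0;
        while (docs.hasNext()) sum += docs.nextInt();
        return sum;
    }

    // Specialized: hardwired loop over the raw decoded docID block.
    static long sumSpecialized(int[] docIDs) {
        long sum = 0;
        for (int d : docIDs) sum += d;
        return sum;
    }

    public static void main(String[] args) {
        int[] block = {3, 7, 12, 40, 41};
        System.out.println(sumAbstracted(Arrays.stream(block).iterator())); // 103
        System.out.println(sumSpecialized(block));                          // 103
    }
}
```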

This code is dangerous!

Specifically, whenever native C++ code is embedded in Java, there is always the risk of all those fun problems with C++ that we Java developers thought we had left behind. For example, if there are bugs (likely!), or even innocent API misuse by the application such as accidentally closing an IndexReader while other threads are still using it, the process will hit a segmentation fault and the OS will destroy the JVM. There may also be memory leaks! And, yes, the C++ sources even use the goto statement.

Work in progress...

This is a work in progress and there are still many ideas to explore. For example, Lucene 4.3.x's default PostingsFormat stores big-endian longs, which means the little-endian Intel CPU must do byte-swapping when decoding each postings block, so one thing to try is a PostingsFormat better optimized for the CPU at search time. Positional queries, Filters, and nested BooleanQuery are not yet optimized, nor are certain configurations (e.g., fields that omit norms). Patches welcome!
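For concreteness, the byte-swap cost mentioned above can be shown in plain Java (a sketch of the general issue, not the actual PostingsFormat decode code): bytes written big-endian, read back on a little-endian view, come out reversed, so each decoded long costs one extra reverseBytes:

```java
// Demonstrates the big-endian-on-disk vs. little-endian-CPU mismatch.
// A sketch of the general issue, not actual Lucene decode code.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianSwap {
    public static void main(String[] args) {
        long value = 0x0102030405060708L;

        // ByteBuffer defaults to BIG_ENDIAN, matching the on-disk format.
        ByteBuffer buf = ByteBuffer.allocate(8).putLong(value);
        buf.flip();

        // Reading the same 8 bytes as little-endian yields the byte-reversed value...
        long littleEndianView = buf.order(ByteOrder.LITTLE_ENDIAN).getLong();

        // ...so the decoder must undo the swap, one reverseBytes per long.
        long decoded = Long.reverseBytes(littleEndianView);

        System.out.println(decoded == value); // true
    }
}
```

A CPU-native (little-endian) on-disk layout would make that reverseBytes disappear entirely from the decode loop.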

Nevertheless, initial results are very promising, and if you are willing to risk the dangers in exchange for massive speedups please give it a whirl and report back.
Published at DZone with permission of Michael McCandless, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)