

Notes on Oracle Coherence

01.13.2010
Oracle Coherence is a distributed cache that is functionally comparable to Memcached. On top of the basic cache API, it offers additional capabilities that are attractive for building large-scale enterprise applications.

The API is based on the Java Map (Hashtable) interface, with key/value store semantics where the value can be any Java Serializable object. Coherence allows multiple caches, each identified by a unique name (which it calls a "named cache").

The code examples below are adapted from a presentation by Brian Oliver of Oracle.

The common usage pattern is to locate a cache by its name, and then act on the cache.

Basic cache function (Map, JCache)
  • Get data by key
  • Update data by key
  • Remove data by key
NamedCache nc = CacheFactory.getCache("mine");
Object previous = nc.put("key", "hello world"); // returns the old value, if any
Object current = nc.get("key");
int size = nc.size();
Object value = nc.remove("key");                // returns the removed value
Set keys = nc.keySet();
Set entries = nc.entrySet();
boolean exists = nc.containsKey("key");

Cache Modification Event Listener (ObservableMap)

You can register an event listener on a cache such that it will callback the listener code when certain changes happen within the cache.
  • New cache item is inserted
  • Existing cache item is deleted
  • Existing cache item is updated
NamedCache nc = CacheFactory.getCache("stocks");
nc.addMapListener(new MapListener() {
    public void entryInserted(MapEvent mapEvent) {
        ...
    }
    public void entryUpdated(MapEvent mapEvent) {
        ...
    }
    public void entryDeleted(MapEvent mapEvent) {
        ...
    }
});

View of Filtered Cache (QueryMap)

You can also define a "view" on a cache by providing a "filter", which is essentially a boolean function; only items for which the function evaluates to true are visible in the view.

NamedCache nc = CacheFactory.getCache("people");

Set keys =
nc.keySet(new LikeFilter("getLastName", "%Stone%"));

Set entries =
nc.entrySet( new EqualsFilter("getAge", 35));


Continuous Query Support (ContinuousQueryCache)

The view can also be used as a "continuous query": any new data that satisfies the filter criteria is automatically included in the view.

NamedCache nc = CacheFactory.getCache("stocks");

NamedCache expensiveItems =
new ContinuousQueryCache(nc,
new GreaterThan("getPrice", 1000));

Parallel Query Support (InvocableMap)

We can also perform a query and partial aggregation at every node within the cluster in parallel, followed by a final aggregation of the partial results.
NamedCache nc = CacheFactory.getCache("stocks");

Double total =
(Double)nc.aggregate(AlwaysFilter.INSTANCE,
new DoubleSum("getQuantity"));

Set symbols =
(Set)nc.aggregate(new EqualsFilter("getOwner", "Larry"),
new DistinctValue("getSymbol"));


Parallel Execution Processing Support (InvocableMap)

We can also execute processing logic at all nodes within the cluster in parallel.
NamedCache nc = CacheFactory.getCache("stocks");

nc.invokeAll(new EqualsFilter("getSymbol", "ORCL"),
new StockSplitProcessor());

class StockSplitProcessor extends AbstractProcessor {
    public Object process(InvocableMap.Entry entry) {
        Stock stock = (Stock) entry.getValue();
        stock.quantity *= 2;
        entry.setValue(stock);
        return null;
    }
}


Implementation Architecture

Oracle Coherence runs on a cluster of identical server machines connected via a network. Within each server, multiple layers of software provide a unified data storage and processing abstraction over the distributed environment.


Applications typically run inside the cluster as well. The cache is implemented as a set of smart data proxies that know the location of the master (primary) and slave (backup) copies of data, based on the key.

When the client "reads" data through the proxy, it first tries to find the data in a local cache (also called the "near cache") within the same machine. If it is not found there, the smart proxy locates the corresponding copy in the distributed cache (also called the L2 cache). Since this is a read, either the master or a slave copy will do. If the smart proxy does not find the data in the distributed cache either, it looks it up in the backend DB. The returned data then propagates back to the client, and the caches are populated along the way.
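The read path described above can be sketched in plain Java. This is an illustrative model only, not the Coherence API; every class and method name here (ReadThroughSketch, backendLoad, and so on) is made up for the sketch.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of the read path: check the local near cache, then the
// distributed (L2) cache, then fall back to the backend DB,
// populating both caches on the way back to the client.
public class ReadThroughSketch {
    private final Map<String, Object> nearCache = new HashMap<>();        // per-JVM
    private final Map<String, Object> distributedCache = new HashMap<>(); // cluster-wide copy
    private final Function<String, Object> backendLoad;                   // stands in for a DB query

    public ReadThroughSketch(Function<String, Object> backendLoad) {
        this.backendLoad = backendLoad;
    }

    public Object get(String key) {
        Object value = nearCache.get(key);
        if (value != null) return value;          // near-cache hit
        value = distributedCache.get(key);
        if (value == null) {                      // miss everywhere: go to the DB
            value = backendLoad.apply(key);
            distributedCache.put(key, value);     // populate the L2 cache
        }
        nearCache.put(key, value);                // populate the near cache on the way back
        return value;
    }

    public boolean inNearCache(String key) {
        return nearCache.containsKey(key);
    }
}
```

Subsequent reads of the same key are then served from the near cache without touching the distributed cache or the DB.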

Updating data (insert, update, delete) works in the reverse direction. Under the master/slave architecture, all updates go to the master node that owns that piece of data. Coherence supports two update modes: "write through" and "write behind". "Write through" updates the DB backend immediately after updating the master copy, but before updating the slave copy, and therefore keeps the DB always up to date. "Write behind" updates the slave copy first and then the DB asynchronously. Data loss is possible in "write behind" mode, but it has higher throughput because multiple writes can be merged into a single write, resulting in fewer DB writes.
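Why write-behind produces fewer DB writes can be shown with a small plain-Java sketch (again illustrative, not Coherence code): pending updates are buffered per key, so repeated writes to the same key collapse into a single DB write at flush time.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of write-behind coalescing: the buffer keeps only the latest
// value per key, and an asynchronous flush writes each distinct key
// to the DB once, however many updates were applied in the meantime.
public class WriteBehindSketch {
    private final Map<String, Object> pending = new LinkedHashMap<>();
    private int dbWrites = 0;

    public void put(String key, Object value) {
        pending.put(key, value); // replaces any earlier buffered value for this key
    }

    // In a real system a background thread would call this periodically.
    public void flush() {
        dbWrites += pending.size(); // one DB write per distinct key
        pending.clear();
    }

    public int dbWrites() {
        return dbWrites;
    }
}
```

Three updates to one key plus one update to another thus cost two DB writes instead of four, at the price of the buffered values being lost if the node fails before the flush.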

While extracting data from the cache into the application is the typical way of processing data, it does not scale well when a large volume of data has to be processed. Instead of shipping the data to the processing logic, a much more efficient approach is to ship the processing logic to where the data resides. This is exactly why Oracle Coherence provides the InvocableMap interface, where the client supplies a "processor" class that gets shipped to every node, so processing can be conducted against local data. Moving code towards data distributed across many nodes also enables parallel processing, because every node can process its local data in parallel.

The processor logic is shipped into the processing queue of the execution node, where an active worker dequeues the processor object and executes it. Note that this execution is performed serially; in other words, the worker completely finishes one processing job before proceeding to the next. There is no need to worry about multi-threading issues or to use locks, and therefore no deadlock issues.
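The single-threaded execution model can be sketched as a per-node queue drained by one worker. This is a plain-Java illustration of the idea, not Coherence internals; the class and its methods are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.UnaryOperator;

// Sketch: a storage node drains its processor queue with a single
// worker thread, so processors against that node's data never run
// concurrently; a plain HashMap needs no locks here.
public class SerialProcessorSketch {
    private final Map<String, Integer> localData = new HashMap<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public SerialProcessorSketch() {
        localData.put("ORCL", 100); // quantity held by this node
    }

    // Enqueue a "processor"; the single worker executes jobs one at a
    // time, in submission order, like the per-node processing queue.
    public void invoke(String key, UnaryOperator<Integer> processor) {
        worker.submit(() -> localData.compute(key, (k, v) -> processor.apply(v)));
    }

    public int shutdownAndGet(String key) {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return localData.get(key);
    }
}
```

Because all read-modify-write cycles happen on the one worker thread, even concurrent submissions from many clients cannot interleave, which is the property that makes locks (and deadlocks) unnecessary.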

From http://horicky.blogspot.com

Published at DZone with permission of Ricky Ho, author and DZone MVB.
