Tuesday, April 7, 2015

What's this Offheap thing anyway?

As you may have noticed already, there's been a lot of open-source activity around Ehcache and Terracotta in the past couple of weeks:
  1. Ehcache 3 Milestone 1 is out, and includes offheap storage. Check it out at http://ehcache.github.io/
  2. Terracotta 4.3 with offheap storage is also available as an open-source offering. Check it out at http://blog.terracotta.org/2015/04/02/terracotta-bolsters-in-memory-open-source-offerings/.
So that's all great and there's a lot to talk about on both these announcements...

But it turns out that one of the first questions I got while sharing the news with wider non-tech circles was:
"What's this offheap thing anyway? What's so special about it, and why should I care?"

Really fair questions indeed!

So as I wrote my reply and tried hard not to dive into "geeky" land while doing so (please refer to https://github.com/Terracotta-OSS/offheap-store for the technical aspects, such as detailed explanations and implementation code), I figured it could be useful to others as well...
So hopefully the following explanation will make sense to a wider non-developer audience (and to developers out there too, of course!).

So here it is...starting from, well, the start:

Traditionally in Java programming land, the memory space accessible to Java programs (called the "heap") is entirely managed by the Java Virtual Machine (JVM)...making it much easier for developers to NOT have to think about memory allocation and cleanup (like we used to with programming languages such as C, C++, etc.). And really, "not having to think about memory complexities" is a big part of Java's success over the years.

But the memory management that Java performs under the hood (referred to as Garbage Collection, or GC) can become costly performance-wise (lower throughput, higher latencies), especially as the used "heap" space grows (for example, the heap would grow if you started to cache lots of objects in memory).

So to reconcile these 2 contradictory concepts of:

(A) Being able to cache a lot more data (tens of GBs, possibly even TBs) within your Java application, and
(B) Not incurring a big cost on application performance due to underlying Java memory management operations,

--> Enter Offheap Memory.

Offheap memory, as the name implies, is a memory space that sits "outside the Java heap" (and hence outside the traditional Java memory management responsibilities), yet is still accessible from within the Java process through the java.nio API (direct ByteBuffers).

So when a product or framework refers to "Offheap" as a general concept, it really means that this product/framework can natively access the machine's RAM directly from the Java process (as opposed to doing it the "traditional" way, through Java's managed heap space). In other words, it's like poking a hole through Java's walls to access the RAM directly.
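To make that concrete, here is a minimal sketch of what "accessing RAM directly from Java" looks like with the java.nio API. The class name, buffer size, and values are just illustrative:

```java
import java.nio.ByteBuffer;

public class OffheapDemo {
    public static void main(String[] args) {
        // Ask the JVM for 64 MB outside the Java heap: the garbage
        // collector never scans this memory, so growing it does not
        // add GC overhead.
        ByteBuffer offheap = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        // Read and write raw bytes at chosen offsets, like plain RAM.
        offheap.putLong(0, 42L);
        System.out.println(offheap.getLong(0));  // prints 42
        System.out.println(offheap.isDirect());  // prints true
    }
}
```

The bytes behind that buffer live outside the heap, so the garbage collector never has to walk them, no matter how large the buffer grows.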

To the question of why should you care:
  1. With offheap, your Java program can put as much data as it needs in-memory, and access it all in-process (there's no memory limitation aside from the amount of RAM the machine has to offer), even TBs of data (check out this Intel white paper [PDF] showing offheap usage and benchmarks on a single 6TB Intel server)
  2. Your Java program will demonstrate very predictable latencies even when you're storing large amounts of data in-memory (even at the TB scale)...
    1. This is because the offheap memory space is not managed by Java in the first place, and as such, storing data in that offheap space simply does not add any extra Java memory management overhead to the picture.
So overall, it’s really the best of both worlds: storing lots of data in memory but not incurring performance unpredictability in the process.

The next question you might have is: if it is such a great concept, why doesn’t everybody do it in their own Java programs?

And the simple answer is that it's not a straightforward thing to do, because when you use offheap you have to implement all that low-level memory management yourself (allocating space, tracking where each entry lives, freeing and reusing it).

And that's really the "secret sauce" of libraries implementing offheap storage, such as the Ehcache/Terracotta libraries (not so secret anymore for Ehcache/Terracotta, since it's officially open-sourced now - refer to offheap-store on GitHub): all these low-level memory mechanisms are done for you, and especially hidden from you, so you don't have to care about them as a Java developer. All you have to know is that you can cache as much as you want/need on a single machine (GBs, even TBs) and that it will not slow down your app unpredictably while doing so (as it would if you were putting all that data in the traditional Java heap).
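To give a flavor of the bookkeeping those libraries hide, here is a deliberately naive, single-slot sketch (not the actual offheap-store code; the class name, buffer size, and method names are made up for illustration) of storing one serializable value in a direct buffer by hand:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

// A deliberately naive, single-slot sketch of the bookkeeping an offheap
// store must do by hand: serialize the value, copy the bytes into a
// direct (offheap) buffer, remember the length, and deserialize on read.
public class NaiveOffheapSlot {
    private final ByteBuffer memory = ByteBuffer.allocateDirect(4096);
    private int length; // we must track the payload size ourselves

    public void put(Serializable value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value);
            }
            byte[] payload = bytes.toByteArray();
            memory.position(0);
            memory.put(payload);   // the copy that moves the data offheap
            length = payload.length;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public Object get() {
        byte[] payload = new byte[length];
        memory.position(0);
        memory.get(payload);       // copy the bytes back from offheap
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(payload))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Even this toy version has to track the payload length itself; a real store additionally handles many entries, fragmentation, resizing, eviction, and thread safety, which is exactly the hard part the libraries take care of for you.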

To explore further, find Ehcache Offheap store implementation at https://github.com/Terracotta-OSS/offheap-store

Please leave comments if you have any questions, or better yet, post your question on the Ehcache-users google group!

3 comments:

Brian Ghidinelli said...

Fabien - I see you're presenting on this at CF Summit, unfortunately the same time I'm presenting on SaaS apps Monday morning. If I understand you correctly, you can create a cluster-wide session scope cache to externalize sessions (and therefore remove the need for sticky sessions)? Are there limitations as to what you can put in the session? For example, if you have a CFC and that CFC has any references to other CFCs like a service layer, I imagine it doesn't work or could lead to unpredictable results if you access those values on different servers? We are currently working to move our sessions to Redis and so are eliminating CFCs or creating serialize() methods to get them down to JSON. I'm curious if this Terracotta option would cut out any of the work we're going to have to do or if ultimately we would just be choosing between a redis-based store and an ehcache-based store? (Side question: how does cflock work on session values stored in ehcache? Is access synchronized to prevent race conditions?)

Fabien Sanglier said...

We definitely should catch up at the conference! I think using ehcache + terracotta would allow you to not have to code custom JSON serialization etc... because it's all done automatically with ehcache + terracotta setup.

The only data "limitation" for the objects stored in ehcache+terracotta is that your objects must be "Serializable" (implement java.io.Serializable). Then it's all done and works behind the scenes...Terracotta does not care what object or group of objects you put in there as long as it's serializable overall (if you have a large tree of interdependent CFCs, not a problem...it will become a big serialized blob in Terracotta...).
Same for your mention of locking and race conditions, etc...Ehcache+Terracotta offers different consistency models that use locks internally to make sure there are no inconsistencies between cache entries on various servers...(so you don't need to do this using cflock in the first place...)

Let's catch up for sure for further discussion!

Brian Ghidinelli said...

Thanks Fabien - I'll be arriving in Vegas tomorrow mid-day and then leaving Monday around noon. I'm @ghidinelli on twitter if you will be at the Aria tomorrow - there's a drink on me for a longer chat. :)