Hibernate has two caching layers: the first-level cache (the Persistence Context) and the second-level cache.
While the first-level cache is short-lived, being cleared when the underlying
EntityManager is closed, the second-level cache is tied to an EntityManagerFactory, so it outlives individual persistence contexts and can be shared among them.
Some second-level caching providers offer support for clusters, in which case each node needs to store only a subset of the whole cached data.
Although the second-level cache can reduce transaction response time, since entities are retrieved from the cache rather than from the database,
there are other ways to achieve the same goal,
and you should consider these alternatives before jumping to a second-level cache layer:
- Tuning the underlying database cache so that the working set fits into memory reduces disk I/O traffic.
- Optimizing database statements through JDBC batching, statement caching, and indexing can reduce the average response time, therefore increasing throughput as well.
- Database replication is also a very valuable option for increasing read-only transaction throughput.
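Of these, JDBC batching can often be enabled through configuration alone. A minimal sketch, assuming the settings are picked up from hibernate.properties or persistence.xml (the batch size value is illustrative, not a recommendation):

```properties
# Group consecutive INSERT/UPDATE statements into JDBC batches
hibernate.jdbc.batch_size=50
# Reorder statements so rows for the same table batch together
hibernate.order_inserts=true
hibernate.order_updates=true
```

Ordering inserts and updates matters because JDBC batching only coalesces consecutive statements that target the same table.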
Even after properly tuning the database, application-level caching becomes inevitable if you need to further reduce the average response time and increase system throughput.
Typically, a key-value application-level cache such as Memcached or Redis is a common choice for storing data aggregates.
If you can duplicate all data in the key-value store, you have the option of taking the database system down for maintenance without completely losing availability, since read-only traffic can still be served from the cache.
One of the main challenges of using an application-level cache is ensuring data consistency across entity aggregates.
That’s where the second-level cache comes to the rescue.
Being tightly integrated with Hibernate, the second-level cache can provide better data consistency since entries are cached in a normalized fashion, just like in a relational database.
Changing a parent entity only requires a single entry cache update, as opposed to cache entry invalidation cascading in key-value stores.
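This trade-off between normalized entries and data aggregates can be sketched with plain Java maps (the Post/Comment names and cache keys below are hypothetical illustrations, not Hibernate API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CacheShapeDemo {

    // Returns the number of aggregate entries that must be invalidated
    // when the parent changes; the normalized cache needs just one write.
    public static int run() {
        // Normalized cache: one entry per entity; children reference the
        // parent by id, just like rows in a relational database.
        Map<String, String> entityCache = new HashMap<>();
        entityCache.put("Post#1", "title=Hibernate");
        entityCache.put("Comment#1", "postId=1, body=Nice read");
        entityCache.put("Comment#2", "postId=1, body=Thanks");

        // Aggregate cache (key-value store style): each aggregate embeds
        // a copy of the parent data.
        Map<String, String> aggregateCache = new HashMap<>();
        aggregateCache.put("PostWithComments#1", "title=Hibernate, comments=[1, 2]");
        aggregateCache.put("PostSummary#1", "title=Hibernate, commentCount=2");

        // Renaming the post touches a single normalized entry...
        entityCache.put("Post#1", "title=High-Performance Java Persistence");

        // ...but every aggregate duplicating the title must be dropped
        // (or rewritten) to stay consistent.
        List<String> stale = new ArrayList<>(aggregateCache.keySet());
        for (String key : stale) {
            aggregateCache.remove(key);
        }
        return stale.size();
    }

    public static void main(String[] args) {
        System.out.println("aggregates invalidated: " + run());
    }
}
```

The more aggregates duplicate the same parent data, the more invalidation work a key-value store requires for a single logical change.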
The second-level cache provides four cache concurrency strategies: READ_ONLY, NONSTRICT_READ_WRITE, READ_WRITE, and TRANSACTIONAL.
READ_WRITE is a very good default concurrency strategy since it provides strong consistency guarantees without compromising throughput.
The TRANSACTIONAL concurrency strategy uses JTA; hence, it's more suitable when entities are frequently modified.
Both READ_WRITE and TRANSACTIONAL use write-through caching, while
NONSTRICT_READ_WRITE is a read-through caching strategy.
For this reason,
NONSTRICT_READ_WRITE is not very suitable when entities are changed frequently.
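A concurrency strategy is selected per entity with the @Cache annotation. A minimal mapping sketch (the Post entity and its fields are illustrative; @Cache and CacheConcurrencyStrategy are the actual Hibernate annotations, and older Hibernate versions use javax.persistence instead of jakarta.persistence):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Illustrative entity: cached in the second-level cache with the
// READ_WRITE concurrency strategy.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Post {

    @Id
    private Long id;

    private String title;
}
```

The mapping fragment above requires hibernate-core on the classpath and a second-level cache provider configured, so it is shown as configuration rather than a runnable program.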
When using clustering, the second-level cache entries are spread across multiple nodes.
When using the Infinispan distributed cache, only
NONSTRICT_READ_WRITE is available for read-write caches.
Bear in mind that
NONSTRICT_READ_WRITE offers a weaker consistency guarantee since stale updates are possible.