Version 1.0-first_public

lecture: Java Caching with JSR107 and tCache

Scalable data-aware Java Caching



Caching data is an essential part of many high-load scenarios. A local 1st-level cache can augment a shared 2nd-level cache like Redis or Memcached to further boost performance.
JCache (JSR107) is the Java standard for caching, and tCache is a production-proven open-source JCache provider. This talk presents advanced JCache features like EntryProcessors and Listeners, as well as tCache-specific cutting-edge features such as data-aware evictions and built-in load spreading.

JCache (JSR107) is the Java standard for caching. tCache is a production-proven in-process cache for the JVM and part of trivago's open-source Java library [triava](https://github.com/trivago/triava) (Apache v2 license). This talk explains how to create a fast cache and shows leading-edge features like data-aware evictions, which tCache implements with a creative, nearly lock-free approach.
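One of the advanced JCache features covered in the talk, the EntryProcessor, performs an atomic read-modify-write on a single cache entry via `Cache.invoke`. Its effect can be sketched without any JCache provider on the classpath, using `ConcurrentHashMap.compute` as a stand-in:

```java
import java.util.concurrent.ConcurrentHashMap;

// JCache's Cache.invoke(key, EntryProcessor) performs an atomic
// read-modify-write on one entry. The same effect, sketched with
// ConcurrentHashMap.compute for a dependency-free example:
public class AtomicUpdate {
    static final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();

    static int increment(String key) {
        // Runs atomically per key: no other thread observes an
        // intermediate state between the read and the write.
        return cache.compute(key, (k, v) -> (v == null) ? 1 : v + 1);
    }
}
```

The class and method names here are illustrative, not part of the JCache or tCache API; the point is the per-key atomicity that an EntryProcessor gives you without external locking.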

# Part one: Goals and key features of JCache and tCache
We will hear about the goals, key features, and non-goals of JSR107 and tCache. This includes standard operations, CAS operations, CacheLoaders, Write-Through, throughput goals, cache topologies, eviction strategies, MBean support, and statistics.
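The CAS operations that JSR107 standardizes (`putIfAbsent`, `replace` with an expected value, conditional `remove`) mirror the semantics of `java.util.concurrent`. A minimal, self-contained sketch of those semantics on a plain `ConcurrentMap`, rather than the JCache API itself:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrates the compare-and-swap semantics that JSR107 standardizes
// (Cache.putIfAbsent, Cache.replace, Cache.remove(key, oldValue)),
// shown on a ConcurrentMap to keep the example dependency-free.
public class CasSemantics {
    static final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // putIfAbsent: only the first writer wins
        cache.putIfAbsent("hits", 1);   // inserts 1
        cache.putIfAbsent("hits", 99);  // no-op, key already present

        // replace(key, old, new): succeeds only if the current value matches
        boolean swapped = cache.replace("hits", 1, 2);  // true, value becomes 2
        boolean stale   = cache.replace("hits", 1, 3);  // false, value is now 2

        System.out.println(cache.get("hits") + " " + swapped + " " + stale);
    }
}
```

These conditional operations let concurrent writers coordinate without external locks, which is exactly why the standard includes them.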

# Part two: Saving your ass - the problems of mass insertions and mass deletions

Loading or refreshing many entries at once can massively stress the database and thus do more harm than good. Clever expiration policies like tCache's "expireUntil" can help. We will look at the lifetime of cache entries: expiration, idle time, explicit removal, and eviction are the available options. Another way to optimize cache usage is data-aware eviction: it can be used instead of LFU or LRU to keep more important data in the cache, for example customers that have at least one item in their shopping bag. Code examples show how to implement this with a single line of code.
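The idea behind an interval-based expiration policy like tCache's "expireUntil" can be sketched without the library: instead of giving every entry the same fixed TTL (which makes a mass insertion expire, and thus refresh, in one burst), each entry receives a random expiry between a lower and an upper bound, so refreshes spread out over time. The helper name below is hypothetical, not the tCache API:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of interval-based expiration: entries inserted together do not
// all expire together. Each entry gets a random expiry timestamp in
// [now + minTtl, now + maxTtl], spreading the database load of refreshes.
// spreadExpiry() is a hypothetical helper for illustration only.
public class SpreadExpiry {
    static long spreadExpiry(long nowMillis, long minTtlMillis, long maxTtlMillis) {
        long ttl = ThreadLocalRandom.current().nextLong(minTtlMillis, maxTtlMillis + 1);
        return nowMillis + ttl;
    }
}
```

With a one-hour lower bound and a two-hour upper bound, a bulk load of one million entries would trickle back to the database over a full hour instead of hitting it all at once.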

# Part three: Speed, Design and Benchmarks

How can one reach high throughput? Insights into the design will show the critical operations and how they are solved. Evictions are one of the crucial parts that can affect cache throughput. tCache takes a fresh approach to eviction that inspects all cache entries (no sampling), yet still performs better in benchmarks than other caches. It will be shown why the performance is independent of the eviction strategy (LFU, LRU, data-aware eviction).
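Why the eviction strategy barely matters for throughput can be made plausible with a sketch: if eviction is a full scan ranked by a pluggable priority function, then swapping LRU timestamps, LFU counters, or a data-derived weight changes only the function that is called per entry, not the scan itself. This is an illustration under that assumption, not the tCache implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.ToLongFunction;

// Sketch of full-scan eviction with a pluggable priority function.
// All entries are inspected (no sampling) and the lowest-priority ones
// are dropped. The scan cost is identical whether the priority is an
// LRU timestamp, an LFU counter, or a data-derived weight.
public class FullScanEviction<K, V> {
    final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
    final ToLongFunction<V> priority;  // higher = more valuable, evicted last

    FullScanEviction(ToLongFunction<V> priority) {
        this.priority = priority;
    }

    void put(K key, V value) {
        store.put(key, value);
    }

    // Evict the n lowest-priority entries.
    void evict(int n) {
        List<Map.Entry<K, V>> entries = new ArrayList<>(store.entrySet());
        entries.sort(Comparator.comparingLong(
                (Map.Entry<K, V> e) -> priority.applyAsLong(e.getValue())));
        for (int i = 0; i < n && i < entries.size(); i++) {
            store.remove(entries.get(i).getKey());
        }
    }
}
```

A data-aware variant is then just a different priority function, e.g. one that returns a large value for customers with a non-empty shopping bag, without touching the eviction machinery.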