Database Caching Background
For years, placing an external caching layer in front of an RDBMS was the conventional wisdom for attaining performance and reliability. The cache kept hot data in memory to reduce the time spent querying the underlying data store. As data volumes grew, NoSQL databases were substituted for the RDBMS to provide horizontal scaling across clusters and to keep latency down.
Digital Transformation Happened
Simple web applications have transformed into edge-based systems of engagement (SoE) that now serve billions of objects and millions of contextual data points, enabling rich interactions and engagement, all within a few milliseconds. Data has grown in both velocity and volume: the more data, delivered faster, the richer the engagement and the better the decision. Datasets of 40 TB to 100 TB are now common, according to Forrester Research, and there is no end in sight to data growth or transaction velocity.
Fundamental Issues with a Cache Database
The only way to support modern velocities while database cache volumes keep growing is to commit to a never-ending cycle of adding cache nodes and deploying increasingly complex cache management systems and strategies.
This approach also ignores the high cost of an external DRAM caching layer. As data volumes expand, external caching solutions become unaffordable over time and demand significant investment in managing complex data lifecycle issues such as cache consistency and correctness. Translation: at high volumes, a cache is unaffordable, untrustworthy, and unstable.