When to Use Aerospike vs. Redis
George Csaba, Director of Technical Product Marketing / August 5, 2019
If you’ve had Redis for a while and are starting to run into issues with growing workloads, you’re not alone. Many of our customers have migrated to Aerospike from Redis, and that number is growing.
Our customers tell us that when they first deploy Redis, it’s easy to use. But things quickly change as data volumes and workloads grow. When this happens, they face greater challenges delivering new applications quickly, applying analytical technologies such as machine learning to data sets above 5 terabytes, providing a reliable and engaging user experience, and deploying successful digital transformation projects.
This is a common situation for many companies we work with. They discover that the high ownership costs, poor performance at scale, and increased operational complexity of Redis are worse than they’d imagined. Faced with IT budget overruns, service-level agreement (SLA) violations, and delayed application rollouts, they end up looking for an alternative.
Here’s where Aerospike can help. We’re a NoSQL key-value database that delivers ultra-fast runtime performance for read/write workloads, high availability, near-linear scalability, and strong data consistency – all at a fraction of the cost of other alternatives. We power some of the world’s most innovative and industry-leading companies’ hyperscale data platforms, often replacing Redis and other legacy NoSQL databases that can’t reliably scale.
So what are the signs that our customers had outgrown Redis? Here is what we’ve learned from those that had issues with Redis:
Total cost of ownership (TCO) worries
Soaring data volumes and competitive pressures are forcing companies to deliver new applications faster and process large data sets in real time. Such demands can stress Redis clusters, prompting companies to deploy more nodes, memory, and manpower, all of which drive up TCO.
Need for scalability and elasticity
To scale Redis, companies often add more nodes and DRAM because it’s a single-threaded system designed for in-memory processing. But DRAM is expensive, and managing increasingly large clusters isn’t easy. Redis on Flash (ROF) doesn’t solve these problems because it keeps metadata and indexes in memory, caches “hot” data for performance, and relies on memory-hungry RocksDB processes behind the scenes.
Redis configuration requirements inhibit elasticity as well. Companies can only scale out a cluster by a multiple of the current number of shards, and they can’t remove shards from a cluster once they are created. So scaling up before peak periods or down afterward can be painful and expensive.
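A rough sketch of what that constraint means in practice (this is a hypothetical helper modeling the rule described above, not part of any Redis tooling): a cluster’s shard count can only move upward, and only to multiples of its current value.

```python
def valid_scale_out_targets(current_shards: int, max_shards: int) -> list[int]:
    """Shard counts reachable under the constraint that a cluster can
    only grow to a multiple of its current shard count (a simplified
    model of the scaling rule described above)."""
    return [current_shards * k
            for k in range(2, max_shards // current_shards + 1)]

# A 6-shard cluster can only grow to 12, 18, 24, ... shards,
# and once resharded it cannot be scaled back down.
print(valid_scale_out_targets(6, 24))  # [12, 18, 24]
```

So a cluster sized for a seasonal peak stays at that size (and cost) afterward, which is the elasticity problem described above.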
Need for persistence with high performance
Redis publishes benchmarks that use full DRAM instances and only one copy of user data – a configuration that differs greatly from what companies need in their production environments and doesn’t match any customer environment we’ve seen when dealing with hyperscale use cases. As many have discovered, persistence in Redis – via snapshots and append-only files – can reduce performance and even lead to data loss.
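The trade-off shows up directly in `redis.conf`, where the two persistence mechanisms are configured (an illustrative fragment; the values shown are common defaults, not a recommendation):

```
# RDB snapshots: dump the whole dataset to disk if at least
# 1 key changed in 900 s, or 10 keys in 300 s. Anything written
# after the last snapshot is lost on a crash.
save 900 1
save 300 10

# Append-only file: log every write. The fsync policy trades
# durability against throughput.
appendonly yes
appendfsync everysec   # up to ~1 s of acknowledged writes can be lost
```

Turning `appendfsync` up to `always` narrows the data-loss window but syncs on every write, which is where the performance cost described above comes from.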
Need for strong data consistency
If companies are building mission-critical applications where data consistency is a must, then Redis is not likely the right choice. Redis has not passed the Jepsen test for strong consistency (whereas Aerospike has). Redis supports eventual consistency, which can result in stale reads and even data loss under certain circumstances. Redis does offer a WAIT command, its closest approximation to synchronous replication, yet the Redis documentation acknowledges that WAIT does not make Redis a strongly consistent store. For financial services or e-commerce companies where payments are involved, this best-effort approach is not appropriate.
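The limitation is easy to see in a redis-cli session (illustrative; the replica count returned depends on the deployment):

```
127.0.0.1:6379> SET balance 100
OK
127.0.0.1:6379> WAIT 1 200
(integer) 1
```

WAIT blocks until the given number of replicas acknowledge the preceding writes (or the timeout, here 200 ms, expires) and returns how many actually acknowledged. But acknowledgment is not a consistency guarantee: a subsequent failover can still promote a replica that never received the write, silently discarding it.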
Need for manageability and operational ease at scale
Scaling Redis requires substantial memory and leads to large clusters, which means more complexity and more frequent node failures. Our customers that had used Redis as an in-memory cache in front of a persistent SQL or NoSQL data store worked hard to keep both environments synchronized so their applications didn’t read stale data or suffer slow performance from cache misses. Covering the operational costs of scaling two systems can be daunting, particularly as data volumes and workloads grow. The staff resources required to manage this type of environment could be spent building and deploying new applications, not maintaining existing ones.
If you’ve experienced one or more of these signs, it’s likely your Redis database isn’t cutting it.