Can you think of an application that can’t benefit from speed?

Sheryl Sage, Sr. Director, Cloud Product Management Blog, Developer

Aerospike CacheDB Delivers Fast, Economical Scaling

You would be hard-pressed to think of a use case where you would want to build a slow system. This is why there is such demand for in-memory databases. But it’s not a binary decision, and it doesn’t need to be a trade-off between speed and persistence. It’s really about understanding your use case and data requirements so you can augment your existing architecture and bring new features to market.

Gone are the days of a single monolithic database serving all of your data needs (long live the monolith). With the diversity of usage and data access patterns, you may need a database that is optimized as a single source of truth, a temporary store, or somewhere in between. To apply the right strategy, it’s important to understand and classify your data requirements. For example, do you require high velocity and availability for data that is temporary, or do you require a single source of truth for transactional and operational data?

In this post we’ll review the challenges of traditional data architectures and walk through several caching strategies and use cases you can implement with Aerospike:

  • Why traditional data architectures struggle to deliver speed
  • How to connect data with action using several cache design patterns
  • Aerospike CacheDB strategies and use cases

Too long for interactive users to wait

Remember when you went to an office, grocery store, restaurant or even a sporting event? Even for those of us in the tech industry, COVID-19 has forever changed how we work, learn, play, socialize and buy products. We expect our online interactions to be instant and personalized.

The challenge, and the expectations, are growing. In a recent real-time interaction management report, Forrester advises online B2C firms that effective personalization must align technology to deliver customer value.

 “Successful real-time interaction management deployments must keep pace with ever-increasing demands for real-time data from disparate sources, high-volume decisioning in 100 milliseconds (or less), and experience delivery across digital and offline channels.”  

The Forrester Wave™: Real-Time Interaction Management, October 20, 2020

At Aerospike, we see firsthand how data is at the heart of our customers’ success in bringing new personal, interactive experiences to market. For millions of customer interactions, they must integrate user profiles, sessions, and social and web history data with real-time decisioning engines in about 100 milliseconds or less.

Why do traditional data architectures make this hard?

It’s not easy to bring new interactive services to market quickly, and legacy disk-based databases and data silos can complicate real-time access. 

Traditional data architectures typically have an application powered by app servers that maintain sessions and manage data. The app server connects to one or more databases via a data access layer, which provides a layer of abstraction between the app and the database. This presents several challenges:

  • Response times are critical for modern applications: Internet and network latencies consume more than 100 milliseconds, placing the burden on the application and databases to respond with sub-millisecond latency.
  • Concurrent connections: Modern apps require a high number of connections, mainly in the form of API calls to the app server. They also need more data with extremely low latency, which generates a higher volume of calls. Traditional databases don’t support a large number of concurrent connections because of the memory cost of the buffers associated with each connection.
  • Time to develop new applications: Traditional data architectures provide an abstraction layer for data access, but introducing new tables or modifying an existing schema is often complex. One change may impact many programs that access the data. To get to market faster, you need to be able to work in parallel with flexible data structures.
  • Service availability: When a disk-based database receives more queries than it can handle, the database becomes unavailable system-wide. Many enterprises end up shutting down apps to limit the load on the database.
  • Economics of scaling: Traditional databases are severely limited in the read and write operations they can perform per second. You can distribute disk-based databases across more than one server, but maintaining data consistency across instances adds overhead. At scale, hardware costs quickly become prohibitive and difficult to manage.

Connecting data to action

When modernizing for instant customer interactions, you have a certain set of requirements to consider:

  • Deliver an instant response to your customers 
  • Meet rapidly increasing data volumes while lowering latency and cost
  • Eliminate data silos and complexity while building modern applications
  • Leverage valuable existing investments 
  • Keep your apps responsive and fault tolerant
  • Support multi-region, multi-cloud and hybrid deployments

Aerospike CacheDB can do all of this for globally distributed applications without the need to replace your legacy backend databases.  

Aerospike CacheDB delivers the speed of a cache with the persistence of a database.

Why is this important?

Aerospike CacheDB allows you to store fast data in memory, flash, persistent memory, or a combination thereof. It was designed with a patented Hybrid Memory Architecture™ (HMA) for intelligent data access, instant response times, and better economics at scale. With HMA, modern apps are able to manage dozens of terabytes to petabytes of data on a single machine with sub-millisecond latency.

Developers can build a variety of transient, ephemeral, operational and transactional use cases such as a geo-distributed cache; session store; real-time analytics; job and queue management; search; recommendation engines; fast API responses; rate limiting; high-speed transactions; and more.

Aerospike Caching design patterns 

By serving frequently accessed data from Aerospike CacheDB, you can improve performance without disrupting a traditional data architecture. To accommodate a wide range of demanding real-time applications, Aerospike supports several caching design patterns, such as cache-aside and write-around, that minimize data access without taxing backend data services.

A cache-aside design pattern is the fastest way to implement a cache with limited programming overhead. It is typically used when reads are frequent, writes are infrequent, and data is shared between user sessions. This pattern is ideal for:

  • Caching simple content such as media or thumbnails, which can dramatically reduce requests to storage and network bandwidth usage
  • Speeding up access to backend databases, e.g., an RDBMS such as MySQL
  • Caching responses from a REST service to adhere to quotas
  • Caching search results

Diagram - Look aside, Cache-aside

Design Steps: 

  • Identify the data or the objects that are repeatedly read by the application. 
  • Determine the key format. 
  • Determine the format or the data structure.
  • Agree on a time interval after which the cached data goes stale (time-to-live, expiration). 
  • Decide the eviction policy. 
  • Implement the logic.
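The steps above can be sketched in a few lines of Python. This is a minimal illustration of the cache-aside logic only: the dictionary stands in for Aerospike CacheDB, and `load_from_backend` is a hypothetical loader for your system of record, not part of any real API.

```python
import time

class CacheAside:
    """Minimal cache-aside sketch; the dict stands in for Aerospike CacheDB."""
    def __init__(self, loader, ttl_seconds=300):
        self.loader = loader      # fetches a value from the backend database
        self.ttl = ttl_seconds    # time-to-live before cached data goes stale
        self.store = {}           # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None:
            value, expiry = entry
            if time.time() < expiry:   # cache hit, still fresh
                return value
            del self.store[key]        # expired: evict and fall through
        value = self.loader(key)       # cache miss: read from the backend
        self.store[key] = (value, time.time() + self.ttl)
        return value

# Usage: the loader below is a stand-in for a backend query.
calls = []
def load_from_backend(key):
    calls.append(key)
    return f"row-for-{key}"

cache = CacheAside(load_from_backend, ttl_seconds=60)
cache.get("user:42")   # miss: hits the backend once
cache.get("user:42")   # hit: served from the cache
```

In a real deployment, the time-to-live and eviction policy would be set on the Aerospike record rather than tracked by the application.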

Write-around is used for data that is not read frequently. By bypassing the cache and writing directly to the backend database, the cache is more likely to hold the most frequently accessed data. In the design pattern below, the data is written directly to the backend data store. Data is pushed into the cache with Aerospike Connect for Kafka or another event change stream technology. Only frequently used data is read from Aerospike CacheDB.

Diagram - Write-around
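The flow above can be sketched as follows. This is an illustration of the pattern, not the real pipeline: the dicts stand in for the backend database and Aerospike CacheDB, and `emit_change_event` and `hot_keys` are hypothetical stand-ins for the Kafka change stream and its filtering logic.

```python
backend = {}             # stands in for the backend database
cache = {}               # stands in for Aerospike CacheDB
hot_keys = {"user:42"}   # keys the change-stream consumer treats as frequently read

def write_around(key, value):
    """Writes bypass the cache and go directly to the backend store."""
    backend[key] = value
    emit_change_event(key, value)   # e.g., via a CDC/Kafka pipeline

def emit_change_event(key, value):
    # Stand-in for Aerospike Connect for Kafka: push only hot data into the cache.
    if key in hot_keys:
        cache[key] = value

def read(key):
    # Reads prefer the cache; cold keys fall back to the backend.
    return cache.get(key, backend.get(key))

write_around("user:42", "profile-a")   # hot key: lands in backend and cache
write_around("user:99", "profile-b")   # cold key: backend only
```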

Distributed caching is frequently used by Aerospike CacheDB customers for application resiliency. In this case the cache is replicated among multiple servers and availability zones. A distributed cache can be configured with active-passive or active-active replication. Distributed caching ensures that content remains available during outages, delivers local-latency reads for faster performance, and reduces network costs.

Diagram - Distributed Cache

Stored sessions are often used for functions like saving a user’s place in a large file, shopping cart data, leaderboard personalization, and real-time recommendations. Session data may include user profile information, messages, personalized data and themes, recommendations, targeted promotions and discounts. In a session store the data is not shared between the sessions of different users, so the design must ensure that the data remains isolated between users.

Aerospike can take user session data to the next level with extremely low latency. A single cluster on decently sized servers can manage millions of sessions. Many Aerospike CacheDB customers have been able to offload the user profile from a relational backend database such as MySQL or Oracle to a session store, limiting writes to the backend database. In this case, unique session data is loaded into Aerospike CacheDB, and upon session exit the data is written back to the backend data store.

Diagram - Session Store

How it works:

  1. Each session must acquire a unique, random session id that is not shared with other sessions. 
  2. The session must append the session id to its keys. 

In the above diagram, user session data is stored in an Aerospike key-ordered map. User recommendations are stored in a key-value-ordered map. 
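The two steps above can be sketched in Python. The dict stands in for the Aerospike CacheDB session namespace, and the helper names (`new_session`, `session_key`, and so on) are illustrative, not part of any client API.

```python
import uuid

store = {}   # stands in for the Aerospike CacheDB session namespace

def new_session():
    # Step 1: each session acquires a random id not shared with other sessions.
    return uuid.uuid4().hex

def session_key(session_id, name):
    # Step 2: the session id is appended to every key, isolating users' data.
    return f"{name}:{session_id}"

def put_session_data(session_id, name, value):
    store[session_key(session_id, name)] = value

def get_session_data(session_id, name):
    return store.get(session_key(session_id, name))

alice, bob = new_session(), new_session()
put_session_data(alice, "cart", ["book"])
put_session_data(bob, "cart", ["laptop"])
# Each user sees only their own cart:
get_session_data(alice, "cart")   # -> ["book"]
```

Because every key embeds the session id, one user’s reads can never return another user’s data, which is the isolation property the design calls for.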

Why Aerospike CacheDB?

Successful applications and services tend to become more complex over time, and your architecture must adapt to more users, more apps, and more data. Aerospike CacheDB can help you develop new high-performance features faster and perform millions of operations a second with sub-millisecond latency, at the lowest TCO.

Develop with agility

Wayfair, The Trade Desk, Snap, Dream11 and four of the top five financial services companies use Aerospike for distributed caching to boost performance, speed innovation and scale with fewer resources. In a future blog, we will go into several customer use cases in depth. For now, here is a general look at what they build with Aerospike CacheDB:

  • User session store – user profile and web history data used in a shopping cart, leaderboard personalization, and real-time recommendation engines.
  • Manage user spikes – in seasonal cases or user spikes, caching can prevent the application from being overrun and can help avoid adding additional resources.
  • Speed up access to backend databases – relational systems (e.g., an RDBMS such as MySQL) were not designed to operate at internet scale and can be overwhelmed by the volume of requests as usage grows.
  • User authentication – authentication tokens are cached to deliver high-performance user authentication.
  • Leaderboards – a scoreboard showing the ranked names and current scores (or other data points) of the leading competitors.
  • API responses – modern apps communicate via APIs; API responses can be stored in the cache, even for short durations, to reduce load.
  • Queue management – schedule and queue asynchronous events. This can be used for rate limiting or any tasks that control the flow of traffic to an endpoint.
  • Configuration settings – by keeping cached copies of runtime configuration data, an application can access this information with minimal latency.
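Several of the bullets above boil down to keeping a short-lived counter or record in the cache. As one illustration, here is a minimal fixed-window rate limiter of the kind mentioned under queue management. The dict stands in for cached counters, and the key format and function names are assumptions for the sketch.

```python
import time

counters = {}   # stands in for cached counters: window key -> request count

def allow_request(client_id, limit=5, window_seconds=60, now=None):
    """Fixed-window rate limiter: count requests per client per time window."""
    now = time.time() if now is None else now
    window = int(now // window_seconds)
    key = f"rate:{client_id}:{window}"   # old windows simply stop being read
    count = counters.get(key, 0)
    if count >= limit:
        return False
    counters[key] = count + 1
    return True

# The first five calls in a window succeed; the sixth is rejected.
results = [allow_request("api-key-1", limit=5, window_seconds=60, now=1000.0)
           for _ in range(6)]
```

In a cache-backed implementation, each window’s counter would be a record with a time-to-live, so stale windows expire on their own instead of accumulating.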

Low Latency with the Lowest TCO

Unlike many other caching solutions, Aerospike CacheDB is not limited to costly DRAM to achieve high throughput rates and rapid response times.  Although a DRAM-only configuration is fully supported, Aerospike’s patented Hybrid Memory Architecture™ delivers exceptional speed at scale by exploiting Flash and PMem technologies. Such configurations often yield substantially lower server footprints — and substantially lower operational costs.

CacheDB Intel PMEM Architecture Diagram

Many Aerospike customers tell us stories about starting with another open source or commercial in-memory or memory-first caching solution. It starts well, but as data volumes grow, they hit a wall: the cost of DRAM becomes prohibitively expensive. As cluster sizes and expenses grow, they turn to Aerospike to take advantage of a system that is cost-effective for very large datasets.

Conclusion

In this post you have learned that instant, interactive customer experiences don’t require a trade-off between speed and persistence. Aerospike CacheDB provides excellent caching capabilities to implement modern use cases such as user session stores, recommendations, leaderboards, API responses, and application resiliency with local latency. But you shouldn’t stop here.

Get Started with a 30 Day Trial 

Aerospike CacheDB is available for a 30-day cloud trial and as a Docker container.

Learn More 

Check out our customer case studies and caching whitepaper to learn more about how Aerospike CacheDB can provide the speed of a cache with the persistence of a database.

www.aerospike.com/cachedb/ 


About Author


    Sheryl Sage, Sr. Director, Cloud Product Management

    Sheryl is the Sr. Director of Cloud Product Management at Aerospike, where she works on the cloud platform, leading customer-centered strategies that unlock growth. She has held marketing and product management positions in startup and large global technology companies including Redis Labs, MapR, Informatica and VMware.