Cross Datacenter Replication (XDR)

Aerospike Database XDR manages and controls the replication of data across geographically separate clusters. It can be used to build a global data hub, routing and augmenting data captured at the edge to wherever it is needed, whether in Aerospike clusters or in other repositories.

Running an active/active system with XDR provides applications with low-latency reads and writes, but applications might need to compromise on consistency guarantees during site failures. Furthermore, the application’s data access patterns need to be designed carefully to ensure that concurrent write conflicts do not result in application-level inconsistencies.

The Aerospike XDR feature allows data to be asynchronously replicated between two or more clusters that are located at different geographically distributed sites. A site can be a physical rack in a datacenter, an entire datacenter, an availability zone in a cloud region, or a cloud region. Here are some examples.

Example configurations and topology types:

  1. One cluster in an availability zone in the Amazon US West region is configured to ship all of its data updates to a second cluster in a different availability zone in the same region. One-way shipping is configured between the clusters. This is a typical active-passive setup, where the second cluster is sometimes described as a “hot standby”.

  2. A two-cluster system in which each cluster is configured to ship all of its data updates to the other, the first cluster in the Amazon US West region and the second in the Amazon US East region. In this case, applications can use either cluster for both reads and updates. This two-way shipping configuration can be characterized as active/active.

  3. A three-cluster cross-cloud system: one cluster in an Azure datacenter in the US West region, a second cluster in the Amazon US Central region, and a third cluster in a Google Cloud US East region. This three-way shipping configuration can also be characterized as active/active.
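As a rough illustration of how the first (active-passive) topology might be expressed, here is a sketch of an XDR stanza in the Aerospike 5.0+ configuration syntax. The DC name, address, port, and namespace below are placeholders, and real deployments carry additional options:

```
# Sketch of a one-way (active-passive) XDR stanza on the source cluster.
# "dc-standby", the address, and the namespace name are placeholders.
xdr {
    dc dc-standby {
        node-address-port 10.0.2.15 3000
        namespace test {
            # per-namespace shipping options go here
        }
    }
}
```

For two-way (active/active) shipping, the second cluster would carry a matching stanza pointing back at the first.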

XDR (active/active or active/passive) can extend the data infrastructure to any number of clusters with control, flexibility, ease of administration, faster writes, and regional autonomy.

Dealing with conflicting updates

Even in an active-passive setup with the source cluster configured with strong consistency, there is still work to be done in terms of metadata exchange to ensure that the data in the source (active) cluster is perfectly synchronized with the data in the destination (passive) cluster.

In an active/active setup (examples 2 and 3), the asynchronous nature of the replication mechanism can cause a consistency issue if concurrent updates are made to the same data record at multiple sites at the same time, a situation known as conflicting writes.

Assuming the entire record is shipped on every write, inconsistencies can occur. The changed record at one site could be in transit via XDR to the other site while the changed record at the other site is simultaneously shipped by XDR to the first site. The result is either that the record ends up with different values on the two clusters or that one of the two updates is lost (assuming some kind of timestamp-based conflict resolution).

There are ways to mitigate this situation. For example, the XDR shipping mechanism might work at the bin level, tracking bin update times in addition to record update times. This technique, known as bin projection, ensures that different bins of the same record can be changed concurrently at different sites without violating data consistency: the records can be merged properly at both sites using the bin-level update times.

Even with bin-level shipping, concurrent changes to the same bin at multiple sites can result in inconsistencies analogous to those described above at the record level: either the bin ends up with two different values at the two sites, or one of the two values is lost.
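The difference between record-level and bin-level conflict resolution can be sketched with a small last-write-wins merge. The timestamp fields and record layout below are illustrative assumptions, not Aerospike's actual on-wire format:

```python
# Hypothetical sketch of timestamp-based (last-write-wins) merging during
# replication. "lut" (last update time) fields are illustrative only.

def merge_record_level(local, remote):
    """LWW on the whole record: the newer record replaces the older one,
    so a concurrent update to a *different* bin is silently lost."""
    return local if local["lut"] >= remote["lut"] else remote

def merge_bin_level(local, remote):
    """LWW per bin: concurrent updates to different bins of the same
    record both survive the merge."""
    merged = {}
    for name in local["bins"].keys() | remote["bins"].keys():
        l = local["bins"].get(name)
        r = remote["bins"].get(name)
        if l is None:
            merged[name] = r
        elif r is None:
            merged[name] = l
        else:
            merged[name] = l if l["lut"] >= r["lut"] else r
    return {"bins": merged}

# Site A updates "balance" at t=10; site B updates "email" at t=11.
a = {"lut": 10, "bins": {"balance": {"lut": 10, "val": 50},
                         "email": {"lut": 1, "val": "old@x.com"}}}
b = {"lut": 11, "bins": {"balance": {"lut": 1, "val": 40},
                         "email": {"lut": 11, "val": "new@x.com"}}}

print(merge_record_level(a, b)["bins"]["balance"]["val"])  # 40: A's update lost
print(merge_bin_level(a, b)["bins"]["balance"]["val"])     # 50: both survive
```

Note that if both sites had updated the same bin, the per-bin merge would still discard one of the two values, which is exactly the limitation described above.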

An often-suggested alternative is the conflict-free replicated data type (CRDT). However, consistency approaches such as CRDTs apply only to a small subset of data types, typically counters and sets, and do not help with most other data types. Network interruptions, which are amplified over distance, as well as nodes, zones, or regions going down, all ensure that data inconsistencies will happen and that writes will be lost.
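To make the CRDT point concrete, here is a minimal grow-only counter (G-counter), one of the simple types for which conflict-free merging does work. The site names and functions are illustrative, not an Aerospike API:

```python
# Minimal grow-only counter (G-counter) CRDT sketch: one entry per site.

def increment(counter, site, n=1):
    """Each site only ever increments its own entry."""
    counter = dict(counter)
    counter[site] = counter.get(site, 0) + n
    return counter

def merge(a, b):
    """Per-site max: commutative, associative, and idempotent, so
    concurrent increments at different sites never conflict."""
    return {s: max(a.get(s, 0), b.get(s, 0)) for s in a.keys() | b.keys()}

def value(counter):
    return sum(counter.values())

west = increment({}, "us-west", 3)
east = increment({}, "us-east", 2)
print(value(merge(west, east)))  # 5: both concurrent increments survive
```

The catch is that this merge rule exists only because "increment" commutes; for an arbitrary bin value (a string, a blob) there is no such rule, which is why CRDTs do not generalize.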

One possible way to guarantee consistency in most cases for an active/active XDR deployment is to architect the application’s data access in such a way that conflicts on specific data items (records or bins) are completely avoided.
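One common way to avoid conflicts is to give every record a single "home" site that is allowed to write it, so concurrent cross-site writes to the same record simply cannot occur. The sketch below is an application-level illustration under assumed site names and a hash-based ownership rule (ownership could equally be geographic):

```python
# Hypothetical sketch of conflict avoidance via data ownership: each record
# has exactly one writing site. Site names and the rule are assumptions.
import zlib

SITES = ["us-west", "us-east"]

def home_site(user_id: str) -> str:
    """Deterministically assign each record to one writing site."""
    return SITES[zlib.crc32(user_id.encode()) % len(SITES)]

def write(user_id: str, local_site: str, record: dict, store: dict) -> None:
    if home_site(user_id) != local_site:
        # Forward (or reject) writes for records this site does not own;
        # XDR then ships the owner's updates here asynchronously for reads.
        raise PermissionError(f"{user_id} is owned by {home_site(user_id)}")
    store[user_id] = record
```

With this rule, every site can still serve low-latency reads of all data via XDR, while writes to any given record are serialized at one site.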

However, if write conflicts are allowed, the application needs to tolerate some level of consistency violations. The asynchronous nature of the shipping algorithms means that when a site failure occurs, the most recent updates from the failed site will not be available at the other sites. This means applications need to be designed and prepared to handle possible loss of data.

For an in-depth discussion, watch Strong Consistency in Databases. What does it actually guarantee? by Andrew Gooding, VP of Engineering at Aerospike.