Across Data Centers and Around the World
|Some customers configure one cluster as a standby, while others choose to keep data local to every site where an instance of the application runs. By configuring and combining these capabilities, Aerospike gives customers great flexibility in designing complex topologies (ring, star, or a mix of the two).|
Associated with each cluster node is an XDR process consisting of a digest logger module, a data shipper module, and a failure handler module:
Digest logger module
The digest logger listens to all changes happening on the cluster node and records them in a log file called the digest log. The digest log stores just enough information to ship the actual records at a later stage – typically only the key’s hash value (digest) and some metadata. The digest log file is a ring buffer, so if the buffer fills, the most recent data overwrites the oldest. It is therefore important to size the digest log so that it can absorb a sustained link failure between data centers without losing unshipped entries. Because each entry is compact, this is rarely a problem in practice: a few GB of digest log can hold days’ worth of entries.
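A minimal sketch of the idea in Go may help make this concrete. The entry layout and names below are illustrative assumptions, not Aerospike’s actual on-disk format; the point is a fixed-capacity ring buffer of compact entries, where a full buffer overwrites the oldest entry first:

```go
package main

// DigestEntry is an illustrative stand-in for what a digest log entry
// holds: only the record key's hash (Aerospike uses a 20-byte RIPEMD-160
// digest) plus minimal metadata -- never the record data itself, which is
// re-read from the local cluster at ship time.
type DigestEntry struct {
	Digest     [20]byte
	Generation uint32
	LastUpdate uint64
}

// DigestLog models the ring-buffer behavior: when the buffer is full, the
// newest entry overwrites the oldest one. This is why the log must be
// sized to cover the longest expected inter-datacenter link outage.
type DigestLog struct {
	entries []DigestEntry
	head    int  // next slot to write
	wrapped bool // true once the buffer has filled at least once
}

func NewDigestLog(capacity int) *DigestLog {
	return &DigestLog{entries: make([]DigestEntry, capacity)}
}

// Append logs one change, overwriting the oldest entry if necessary.
func (l *DigestLog) Append(e DigestEntry) {
	l.entries[l.head] = e
	l.head = (l.head + 1) % len(l.entries)
	if l.head == 0 {
		l.wrapped = true
	}
}
```

Since an entry like this is only a few tens of bytes, each gigabyte of digest log holds tens of millions of changes.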
Data shipper module
The data shipper is responsible for reading the digest log and shipping the data from one cluster to another. This module acts like a client application to both the local cluster and the remote cluster, albeit with some special privileges and settings. The data is read from the local cluster as normal reads and shipped to the remote cluster as normal writes. A major benefit of this approach is that there is no dependency between the configurations of the local and remote clusters. For example, the local cluster can have m nodes while the remote cluster has n nodes, and the two clusters can differ in replication factor, number of disks per namespace, disk and memory sizing, and so on.
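Conceptually, the shipper behaves like an ordinary client of both clusters. The sketch below uses the open-source Aerospike Go client (github.com/aerospike/aerospike-client-go) to illustrate the read-locally, write-remotely loop; the helper and its error handling are simplifying assumptions, and the real shipper runs inside the server process, not in a client program:

```go
package main

import (
	as "github.com/aerospike/aerospike-client-go"
)

// shipOne re-reads one logged record from the local cluster and writes it
// to the remote cluster. Both sides are driven through the ordinary
// client API, which is why the two clusters need not agree on node
// count, replication factor, or storage layout.
func shipOne(local, remote *as.Client, ns, set string, digest []byte) error {
	// Rebuild the key from the digest stored in the digest log; the
	// original user key is not needed for shipping.
	key, err := as.NewKeyWithDigest(ns, set, nil, digest)
	if err != nil {
		return err
	}

	// A normal read against the local cluster...
	rec, err := local.Get(nil, key)
	if err != nil {
		return err
	}
	if rec == nil {
		return nil // record vanished since it was logged; nothing to ship
	}

	// ...followed by a normal write against the remote cluster.
	return remote.Put(nil, key, rec.Bins)
}
```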
The speed at which XDR ships data to the remote site depends predominantly on two factors: the write rate on each node and the available network bandwidth. Production environments typically cycle between peak loads and quieter periods. The available bandwidth should be at least enough to catch up on the pending, yet-to-be-shipped data during the low periods.
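As a purely illustrative example: if peak traffic generates 50 MB/s of shippable data for 8 hours a day while the link sustains 40 MB/s, a backlog of 10 MB/s × 8 h ≈ 288 GB accumulates in the digest log. If off-peak writes drop to 10 MB/s, the spare 30 MB/s of link capacity drains that backlog in roughly 2.7 hours, comfortably within the 16-hour low period.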
It is important to adjust the XDR batch settings appropriately to ensure that XDR can ship data to one or more remote clusters at a higher rate than writes occur in the local cluster. To handle such situations in a running production system, XDR exposes hooks that can be changed dynamically, without restarting the XDR service.
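As a hedged illustration of what “changed dynamically” means here (the tunable names below are invented for the sketch and are not Aerospike’s actual configuration parameters), the shipping loop can read its settings through atomics so an operator command retunes a live process:

```go
package main

import "sync/atomic"

// shipperTunables models dynamically changeable hooks: the shipping loop
// reads the current values on every batch, so an operator command can
// retune a live system without restarting XDR. The parameter names are
// invented for this sketch.
type shipperTunables struct {
	batchSize   atomic.Int64 // records per shipped batch
	maxShipRate atomic.Int64 // records/sec ceiling per remote DC
}

// apply is what an admin command would call at runtime.
func (t *shipperTunables) apply(batch, rate int64) {
	t.batchSize.Store(batch)
	t.maxShipRate.Store(rate)
}

// current is read by the shipping loop before each batch.
func (t *shipperTunables) current() (batch, rate int64) {
	return t.batchSize.Load(), t.maxShipRate.Load()
}
```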
Failure handler module
The failure handler ensures that the system continues shipping data when a local node fails. It does this by monitoring cluster state and taking whatever corrective action is required. Node failures in the local data center are handled by keeping a copy of the digest log (the shadow log) on the replica node. Normally, XDR does not ship from the shadow logs. Once a node has failed, XDR on the replica nodes (for the partitions that were mastered on the failed node) takes over responsibility for shipping the data, as sketched below. This ensures that there are no data shipping glitches during single or multiple node failures. For node failures in a remote data center, XDR piggybacks on the standard Aerospike intra-cluster replication scheme: XDR accesses the remote data center purely as a standard Aerospike client, with the normal failure handling between Aerospike clients and servers.
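A simplified sketch of the takeover logic, reusing the DigestLog type from the earlier sketch (the partition bookkeeping, names, and the shipFrom helper are assumptions for illustration, not the server’s internals):

```go
package main

type NodeID int
type PartitionID int

// partitionState records, per partition, who the master is and how far
// shipping has progressed (an offset into the digest/shadow log).
type partitionState struct {
	master      NodeID
	lastShipped int64
}

// Node holds per-partition shadow logs alongside replica bookkeeping.
type Node struct {
	replicaPartitions map[PartitionID]*partitionState
	shadowLogs        map[PartitionID]*DigestLog
}

// onNodeFailure: when cluster membership changes and a node is gone, this
// replica takes over shipping for every partition whose master was the
// failed node, reading from its local shadow log so no logged change is
// dropped.
func (n *Node) onNodeFailure(failed NodeID) {
	for pid, p := range n.replicaPartitions {
		if p.master == failed {
			go n.shipFrom(n.shadowLogs[pid], p.lastShipped)
		}
	}
}

// shipFrom would replay shadow-log entries from the given offset through
// the normal shipping path (see shipOne above); stubbed here.
func (n *Node) shipFrom(shadow *DigestLog, from int64) {}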
In a star topology, a data center can ship to multiple data centers simultaneously, and one or more of those links can go down. XDR continues to ship to the data centers that remain available, and it remembers, for each configured data center, the period during which its link was down. When a link becomes available again, XDR ships the current writes as well as the writes from the period when the link was down. In more complex scenarios, local node failures can coincide with remote link failures; XDR handles these combinations as well, as the sketch below illustrates.
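The per-destination bookkeeping might look like the following conceptual model (again with invented names, not the server’s code): each configured data center carries its own last-shipped position in the digest log, so a link that comes back is caught up independently of the healthy ones:

```go
package main

import "time"

// dcLink tracks shipping state for one remote data center in a star
// topology. Each destination keeps its own position in the digest log,
// so a dead link stalls only its own destination.
type dcLink struct {
	name        string
	up          bool
	lastShipped int64     // digest-log offset this DC has acknowledged
	downSince   time.Time // recorded when the link went down
}

// shipAll pushes digest-log entries up to logHead to every reachable DC.
// A DC whose link failed resumes from its own lastShipped offset once the
// link is marked up again, receiving both the current writes and the
// backlog accumulated during the outage.
func shipAll(links []*dcLink, logHead int64, ship func(dc *dcLink, from, to int64) error) {
	for _, dc := range links {
		if !dc.up {
			continue // link still down; downSince remembers since when
		}
		if err := ship(dc, dc.lastShipped, logHead); err != nil {
			dc.up, dc.downSince = false, time.Now()
			continue
		}
		dc.lastShipped = logHead
	}
}
```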
Read about how eXelate uses cross data center replication for eXtreme High Availability.