Aerospike on Amazon EC2 - IP Addressing

Overview

This page has configuration details for private IP addresses, public IP addresses, and elastic IP addresses, each of which has its own use in an Aerospike deployment on AWS EC2. This page also describes how to manage multiple network cards, configure a mesh heartbeat, and address IPs with XDR.

Some of the AWS EC2-related network setup is done automatically when you use CloudFormation templates that are already configured with recommended settings. Refer to the CloudFormation page for details on how to quickly get a cluster up and running.

IP addressing on EC2-VPC platform

On the EC2-VPC platform, you can use private IP addresses, public IP addresses, and elastic IP addresses. Use the following guidelines to configure your choice of IP address for Aerospike on AWS.

  • Private IP address: Aerospike clients can access a cluster using the AWS private IP address. A private IP address cannot be reached from the internet.

  • Public IP address: Can be reached from the internet and is assigned to default-VPC instances by default. Non-default-VPC instances must have public IP address assignment enabled explicitly. A public IP address is disassociated from an instance when the instance is stopped, or when an ENI or EIP is added to the instance.

  • Elastic IP address: A static public IP address that remains associated with an instance even when the instance is stopped and restarted.
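The private/public distinction above can be checked mechanically. The following sketch classifies an IPv4 address as private (RFC 1918) or public; it is an illustrative helper, not part of any AWS or Aerospike tooling:

```shell
#!/bin/sh
# Return success (0) if the IPv4 address falls in one of the RFC 1918
# private ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
is_private_ip() {
  case "$1" in
    10.*)                                    return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
    192.168.*)                               return 0 ;;
    *)                                       return 1 ;;
  esac
}

is_private_ip 172.18.10.76 && echo "172.18.10.76 is private"   # typical VPC address
is_private_ip 54.208.32.99 || echo "54.208.32.99 is public"    # typical elastic IP
```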

Configure a private IP address

Configure the Aerospike server so that Aerospike clients can access a cluster using the AWS private IP address. EC2-VPC instances receive a static private IP address from the range of your VPC addresses by default.

  1. In aerospike.conf, verify the following network stanza.

    network {
        service {
            address any
            port 3000
        }
    }
  2. Start the Aerospike server.

    sudo service aerospike restart

Configure an elastic or public IP address

Configure elastic and public IP addresses when Aerospike clients access the cluster from a public network. An elastic IP address is a static public IP address that remains associated with an instance even when the instance is stopped and restarted. Use the following steps on each node in the cluster.

  1. Edit /etc/aerospike/aerospike.conf.

  2. Add the access-address line in the network stanza with the elastic IP address of the node. The address 54.208.32.99 is the example elastic IP in the following configuration.

    network {
        service {
            address any
            port 3000
            access-address 54.208.32.99 virtual
        }
    }
  3. Restart the Aerospike server.

    sudo service aerospike restart
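When rolling this out across many nodes, the edit in step 2 can be scripted. A minimal sketch using GNU sed against an inline sample config (on a real node you would target /etc/aerospike/aerospike.conf, and each node would substitute its own elastic IP):

```shell
#!/bin/sh
# Append an access-address line after the service port in a sample config.
EIP=54.208.32.99                     # placeholder: this node's elastic IP
CONF=/tmp/aerospike.conf.sample      # sample file; real path is /etc/aerospike/aerospike.conf

cat > "$CONF" <<'EOF'
network {
    service {
        address any
        port 3000
    }
}
EOF

# GNU sed: insert "access-address <EIP> virtual" after the "port 3000" line,
# reusing the line's leading whitespace so indentation stays consistent.
sed -i "s/^\( *\)port 3000\$/&\n\1access-address ${EIP} virtual/" "$CONF"
grep 'access-address' "$CONF"
```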

Seed node as a proxy

Because an AWS instance can have both a public and private IP address for an ENI, database operations may be forwarded through the seed node as a proxy. For example, assume we have a cluster of three servers. The servers are all part of the same AWS VPC. The addresses for each of the servers are as follows:

Node       Internal IP Address   Public IP Address
Server_1   172.18.10.76          52.91.243.125
Server_2   172.18.10.82          52.105.13.44
Server_3   172.18.10.26          54.91.34.242

Running ifconfig on the EC2 instance displays only the private IP addresses. The public IP (elastic IP) is mapped to the private IP address through network address translation (NAT) and is not part of the server configuration.

ifconfig 

Sample output:

eth0      Link encap:Ethernet  HWaddr 12:5A:18:B8:AD:15 
inet addr:172.18.10.76 Bcast:172.18.10.255 Mask:255.255.255.0
inet6 addr: fe80::105a:18ff:feb8:ad15/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:809 errors:0 dropped:0 overruns:0 frame:0
TX packets:515 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:534945 (522.4 KiB) TX bytes:52512 (51.2 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:140 (140.0 b) TX bytes:140 (140.0 b)

You can find the public IP of your instance on the EC2 Dashboard, or by running the aws command on your instance.

aws ec2 describe-instances

Sample output:

"NetworkInterfaceId": "eni-b51f0394", 
"PrivateIpAddresses": [
{
"PrivateDnsName": "ip-172-18-10-76.ec2.internal",
"Association": {
"PublicIp": "52.91.243.125",
"PublicDnsName": "ec2-52-91-243-125.compute.amazonaws.com",
"IpOwnerId": "amazon"
},
"Primary": true,
"PrivateIpAddress": "172.18.10.76"
}
]
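The public IP can also be extracted from the describe-instances output programmatically. The sketch below parses the sample JSON from above with sed; in practice a JSON-aware tool such as jq is a safer choice:

```shell
#!/bin/sh
# Pull the "PublicIp" value out of a describe-instances JSON fragment.
JSON='{
  "PrivateDnsName": "ip-172-18-10-76.ec2.internal",
  "Association": {
    "PublicIp": "52.91.243.125",
    "PublicDnsName": "ec2-52-91-243-125.compute.amazonaws.com"
  },
  "PrivateIpAddress": "172.18.10.76"
}'

PUBLIC_IP=$(printf '%s\n' "$JSON" | sed -n 's/.*"PublicIp": *"\([^"]*\)".*/\1/p')
echo "$PUBLIC_IP"     # 52.91.243.125
```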

Client connection to seed node

When the Aerospike client connects to a seed node in the cluster through its public IP address, it receives the internal IP addresses for all the other nodes in the cluster. The client attempts to add each of the nodes with the internal IP addresses. The client receives a timeout for the nodes other than the seed node. The following sample output shows a client request timeout.

Sample output:

2015-11-03 17:55:53.055 INFO Thread 1 Add node BB9A1541F839E12 54.172.74.76:3000
2015-11-03 17:55:54.322 WARN Thread 1 Add node 172.18.10.82:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out
2015-11-03 17:55:55.324 WARN Thread 1 Add node 172.18.10.26:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out

All database operations succeed at this point because the client uses the seed node as a proxy for the other nodes. However, performance is poor because every operation is routed through the seed node.

Timeout errors

The following timeout errors appear when the client attempts to connect to one of the private IP addresses.

Sample output:

2015-11-03 17:55:56.417 WARN Thread 8 Add node 172.18.10.82:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out
2015-11-03 17:55:56.442 write(tps=12 timeouts=0 errors=0) read(tps=62 timeouts=0 errors=0) total(tps=74 timeouts=0 errors=0)
2015-11-03 17:55:57.418 WARN Thread 8 Add node 172.18.10.26:3000 failed: Error Code 11: java.net.SocketTimeoutException: connect timed out

The client continues trying to add the nodes with internal IP addresses and continues to receive timeouts. Even though operations succeed, the timeouts are a sign of a possible network problem.

Search the log file

You can search the Aerospike log file (/var/log/aerospike/aerospike.log) to confirm that operations are being proxied to the nodes with internal IP addresses.

grep proxy /var/log/aerospike/aerospike.log
Nov 06 2015 01:07:26 GMT: INFO (info): (hist.c::137) histogram dump: proxy (3251 total) msec

Under normal conditions the number of proxy operations should be 0. Here the seed node has proxied 3251 transactions. If you are using AMC with a public IP, you may also notice that only one node is listed. However, when you run the info command with the asadm tool on one node, all nodes are listed.
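The proxy count in the histogram line lends itself to simple monitoring. A sketch that extracts the count from the sample log line above and flags the node when it is non-zero (the line is inlined here; on a live node you would grep the log file instead):

```shell
#!/bin/sh
# Extract the transaction count from a "histogram dump: proxy" log line.
# A healthy node should report 0 proxied transactions.
LINE='Nov 06 2015 01:07:26 GMT: INFO (info): (hist.c::137) histogram dump: proxy (3251 total) msec'

COUNT=$(printf '%s\n' "$LINE" | sed -n 's/.*proxy (\([0-9]*\) total).*/\1/p')
if [ "$COUNT" -gt 0 ]; then
  echo "WARNING: $COUNT proxied transactions"
fi
```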

Correct client/IP problems

Add the access-address parameter to the network stanza of the aerospike.conf file to correct the problem of an external Aerospike client using internal IP addresses.

network {
    service {
        address any
        port 3000
        access-address 54.208.32.99 virtual
    }
}

When you use the access-address parameter, the database verifies that the address is valid for one of its network interfaces. If it is not, the database will not start.

Example log output:

Nov 04 2015 01:15:31 GMT: INFO (cf:misc): (id.c::119) Node ip: 172.18.10.26
Nov 04 2015 01:15:31 GMT: INFO (cf:misc): (id.c::327) Heartbeat address for mesh: 172.18.10.26
Nov 04 2015 01:15:31 GMT: INFO (config): (cfg.c::3231) Rack Aware mode not enabled
Nov 04 2015 01:15:31 GMT: INFO (config): (cfg.c::3234) Node id bb96f7dbef24f12
Nov 04 2015 01:15:31 GMT: CRITICAL (config): (cfg.c:3265) external address '54.208.32.99' does not match service addresses '172.18.10.26:3000'
Nov 04 2015 01:15:31 GMT: WARNING (as): (signal.c::135) SIGINT received, shutting down
Nov 04 2015 01:15:31 GMT: WARNING (as): (signal.c::138) startup was not complete, exiting immediately

Add the keyword virtual to the end of the access-address line in the configuration file so that the database starts properly. With access-address set and the keyword virtual present, Aerospike clients using public IPs can communicate with all of the nodes in the cluster. Because operations are no longer proxied through the seed node, performance is significantly better.

Multiple virtual network interfaces

Each network interface on an Amazon Linux Hardware Virtual Machine (HVM) instance can handle about 250K packets per second, which can bottleneck on cores processing interrupts. Additional interfaces can process more packets per second on the same instance. If you need higher performance per instance, you can add multiple virtual NICs to an instance with Elastic Network Interfaces (ENI).

note

Using ENIs with private IPs is free of charge in AWS.

Multiple ENIs with one interface

Configure the access-address value in the network stanza when an instance has multiple network interfaces (ENIs) and only one interface provides access to the node.

  1. For each node in the cluster, use ifconfig to list the interfaces and IP addresses.

    ifconfig -a

    Sample output:

    eth0      Link encap:Ethernet  HWaddr 12:CC:47:86:8F:AF  
    inet addr:172.18.10.189 Bcast:172.18.10.255 Mask:255.255.255.0
    inet6 addr: fe80::10cc:47ff:fe86:8faf/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:832 errors:0 dropped:0 overruns:0 frame:0
    TX packets:694 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:98191 (95.8 KiB) TX bytes:81771 (79.8 KiB)

    eth1 Link encap:Ethernet HWaddr 12:DF:E3:5B:FB:69
    inet addr:172.18.224.190 Bcast:172.18.10.255 Mask:255.255.255.0
    inet6 addr: fe80::10df:e3ff:fe5b:fb69/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:2 errors:0 dropped:0 overruns:0 frame:0
    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:716 (716.0 b) TX bytes:1206 (1.1 KiB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:2 errors:0 dropped:0 overruns:0 frame:0
    TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:140 (140.0 b) TX bytes:140 (140.0 b)

  2. Edit /etc/aerospike/aerospike.conf. Add the access-address line in the network stanza with the private IP address of the selected interface. In this example the IP address for eth1 is used.

    network {
        service {
            address any
            port 3000
            access-address 172.18.224.190
        }
    }
  3. Restart the Aerospike server.

    sudo service aerospike restart
  4. Confirm the access-address after the server restarts.

    asinfo -v service
    172.18.224.190:3000

  5. Verify the access-address of the other nodes in the cluster.

    asinfo -v services
    172.18.224.189:3000,172.18.224.194:3000,172.18.224.195:3000
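The services response is a comma-separated list of host:port pairs, so a quick way to confirm that the whole cluster is visible is to count the entries. A sketch against the sample response above:

```shell
#!/bin/sh
# Split an "asinfo -v services" response and count the advertised peers.
SERVICES='172.18.224.189:3000,172.18.224.194:3000,172.18.224.195:3000'

PEERS=$(printf '%s\n' "$SERVICES" | tr ',' '\n')
PEER_COUNT=$(printf '%s\n' "$PEERS" | wc -l | tr -d ' ')

printf '%s\n' "$PEERS"
echo "peer count: $PEER_COUNT"
```

A shorter list than expected suggests that some node is advertising an unreachable or incorrect access-address.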

Multiple network cards

The following sections describe how to add and modify multiple network cards to improve Aerospike network performance by separating different types of traffic by their respective ports, and binding that traffic to a specific network card. This is supported in heartbeat protocol v3. The goal is to isolate client traffic from XDR traffic using different network cards over port 3000, and also separate mesh heartbeat traffic over port 3002 and fabric network traffic over port 3001. Each of these types of network traffic would use a specific network card of the instance.

Network traffic isolation

The following table assumes four network cards per node, and two clusters A and B.

  • Network cards in nodes on cluster A labeled A_eth{0-3}
  • Network cards in nodes on cluster B labeled B_eth{0-3}

Traffic Isolation Table

NIC      Setting                    Traffic Type   Port
A_eth0   access-address             Client         3000
A_eth1   alternate-access-address   XDR            3000
A_eth2   heartbeat > address        Heartbeat      3002
A_eth3   fabric > address           Fabric         3001
B_eth0   access-address             Client         3000
B_eth1   alternate-access-address   XDR            3000
B_eth2   heartbeat > address        Heartbeat      3002
B_eth3   fabric > address           Fabric         3001

Add a NIC on a running cluster

  1. Check current network card stats on the EC2 instance. From the command line or the AWS Console:

    ifconfig

    Sample output:

    eth0      Link encap:Ethernet  HWaddr 0E:F9:00:58:88:62  
    inet addr:10.0.0.182 Bcast:10.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::cf9:ff:fe58:8862/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:56729 errors:0 dropped:0 overruns:0 frame:0
    TX packets:44089 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:25110351 (23.9 MiB) TX bytes:5487096 (5.2 MiB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:217 errors:0 dropped:0 overruns:0 frame:0
    TX packets:217 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1
    RX bytes:44332 (43.2 KiB) TX bytes:44332 (43.2 KiB)

  2. Retrieve the instance metadata.

    curl http://169.254.169.254/latest/dynamic/instance-identity/document
    {
    "devpayProductCodes" : null,
    "privateIp" : "10.0.0.182",
    "availabilityZone" : "us-east-1a",
    "version" : "2010-08-31",
    "region" : "us-east-1",
    "instanceId" : "i-074831c705d8eb129",
    "billingProducts" : null,
    "instanceType" : "c4.xlarge",
    "accountId" : "268841430234",
    "architecture" : "x86_64",
    "kernelId" : null,
    "ramdiskId" : null,
    "imageId" : "ami-b239daa4",
    "pendingTime" : "2017-01-23T23:33:05Z"
    }

  3. Retrieve the security group.

    curl http://169.254.169.254/latest/meta-data/security-groups
    AWSMPMyVPCCluster-71H99AZ60CK6
  4. Create a new network card.

    a. On the AWS console, click NETWORK & SECURITY → Network Interfaces.

    b. Click Create Network Interface.

    c. Select the availability-zone that matches the one your instance is using.

    d. Select the security group for your network.

    e. Click Yes, Create.

    f. Select the newly created Network Interface and click Attach.

    g. Select InstanceID for your running instance and click Attach.

    Create other network cards as needed, to match the traffic isolation table above.

  5. Create an XDR-dedicated network card.

    a. On the AWS console, click NETWORK & SECURITY → Elastic IPs.

    b. Click Allocate new address.

    c. Click Allocate and note the address of the new Elastic IP.

    d. Select the new Elastic IP, and then click Associate Address.

    e. For Resource Type, select Network interface, choose the network interface of the instance's eth1 NIC, and select its private IP address.

    Repeat with the other nodes of the cluster.
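The console clicks above can also be driven from the AWS CLI. The sketch below only prints the commands it would run; the subnet, security group, and ENI/EIP IDs are placeholders, and you would replace the dry-run wrapper with real execution (capturing the IDs each call returns) before use:

```shell
#!/bin/sh
# Print (dry-run) the AWS CLI equivalents of creating and attaching an ENI
# and associating an elastic IP with it.
SUBNET_ID=subnet-0123456789abcdef0    # placeholder
SG_ID=sg-0123456789abcdef0            # placeholder
INSTANCE_ID=i-074831c705d8eb129       # instance ID from the metadata document

run() { echo "+ $*"; }                # dry-run wrapper: swap for "$@" to execute

run aws ec2 create-network-interface --subnet-id "$SUBNET_ID" --groups "$SG_ID"
run aws ec2 attach-network-interface --network-interface-id eni-EXAMPLE \
    --instance-id "$INSTANCE_ID" --device-index 1
run aws ec2 allocate-address --domain vpc
run aws ec2 associate-address --allocation-id eipalloc-EXAMPLE \
    --network-interface-id eni-EXAMPLE
```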

  6. Verify the new network cards.

    ifconfig

    Sample output:

    eth0      Link encap:Ethernet  HWaddr 0E:F9:00:58:88:62  
    inet addr:10.0.0.182 Bcast:10.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::cf9:ff:fe58:8862/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:57474 errors:0 dropped:0 overruns:0 frame:0
    TX packets:44785 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:25312145 (24.1 MiB) TX bytes:5579362 (5.3 MiB)

    eth1 Link encap:Ethernet HWaddr 0E:0A:CC:FB:AD:8A
    inet addr:10.0.0.29 Bcast:10.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::c0a:ccff:fefb:ad8a/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:9 errors:0 dropped:0 overruns:0 frame:0
    TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1128 (1.1 KiB) TX bytes:2766 (2.7 KiB)

    eth2 Link encap:Ethernet HWaddr 0E:53:3F:CD:BB:02
    inet addr:10.0.0.210 Bcast:10.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::c53:3fff:fecd:bb02/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:4 errors:0 dropped:0 overruns:0 frame:0
    TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:830 (830.0 b) TX bytes:2214 (2.1 KiB)

    eth3 Link encap:Ethernet HWaddr 0E:82:D3:7D:D4:BE
    inet addr:10.0.0.87 Bcast:10.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::c82:d3ff:fe7d:d4be/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
    RX packets:3 errors:0 dropped:0 overruns:0 frame:0
    TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:768 (768.0 b) TX bytes:2058 (2.0 KiB)

    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:65536 Metric:1
    RX packets:217 errors:0 dropped:0 overruns:0 frame:0
    TX packets:217 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1
    RX bytes:44332 (43.2 KiB) TX bytes:44332 (43.2 KiB)

  7. Verify IP addresses.

    netstat -anpt

    Sample output:

    (No info could be read for "-p": geteuid()=500 but you should be root.)
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:3002 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:3003 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:36300 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN -
    tcp 0 0 0.0.0.0:8082 0.0.0.0:* LISTEN -
    tcp 0 0 10.0.0.182:44996 10.0.0.112:3001 ESTABLISHED -
    tcp 0 1016 10.0.0.182:22 204.153.194.234:32501 ESTABLISHED -
    tcp 0 0 10.0.0.182:3002 10.0.0.112:46318 ESTABLISHED -
    tcp 0 0 10.0.0.182:3001 10.0.0.112:57388 ESTABLISHED -
    tcp 0 0 10.0.0.182:3001 10.0.0.112:57390 ESTABLISHED -
    tcp 0 0 :::22 :::* LISTEN -
    tcp 0 0 :::34438 :::* LISTEN -
    tcp 0 0 :::111 :::* LISTEN -
    tcp 0 0 :::8081 :::*
  8. Verify IP address broadcast to Aerospike clients.

    asinfo -v service
    10.0.0.182:3000
    asinfo -v services
    10.0.0.112:3000
    asinfo -v services-alternate
    34.197.64.209:3000
  9. Modify aerospike.conf for multiple cards.

    In aerospike.conf, add or modify the access-address, alternate-access-address, heartbeat address, and fabric address configuration parameters:

    network {
        service {
            address any                # listen on any available interface for service traffic
            port 3000
            access-address 10.0.0.182  # eth0 for service calls
            alternate-access-address 34.197.253.11 # eth1 Elastic IP value for XDR
        }

        heartbeat {
            mode mesh
            protocol v3                # required for using separate NICs for mesh heartbeat and fabric
            address 10.0.0.210         # eth2 for mesh heartbeat
            port 3002

            mesh-seed-address-port 10.0.0.210 3002 # eth2 of this node
            mesh-seed-address-port 10.0.0.246 3002 # eth2 on node B

            interval 150               # number of milliseconds between heartbeats
            timeout 15                 # number of heartbeats failing in a row before timing out
        }

        fabric {
            address 10.0.0.87          # eth3 for fabric communication
            port 3001
        }

        info {
            address 127.0.0.1
            port 3003
        }
    }
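A quick sanity check on this configuration is that no address is reused across the service, alternate, heartbeat, and fabric settings. The sketch below checks the four example addresses inline; on a live node you would additionally confirm each one appears in `ip -o -4 addr` output:

```shell
#!/bin/sh
# Traffic isolation check: the four configured addresses must be distinct,
# one per NIC (eth0 service, eth1 XDR, eth2 heartbeat, eth3 fabric).
ADDRS='10.0.0.182
34.197.253.11
10.0.0.210
10.0.0.87'

TOTAL=$(printf '%s\n' "$ADDRS" | wc -l | tr -d ' ')
UNIQUE=$(printf '%s\n' "$ADDRS" | sort -u | wc -l | tr -d ' ')

if [ "$TOTAL" = "$UNIQUE" ]; then
  echo "OK: all $TOTAL addresses are distinct"
else
  echo "ERROR: duplicate addresses configured"
fi
```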

Aerospike heartbeat on AWS

AWS does not allow multicast addressing, so the default heartbeat configuration must be changed from multicast mode to mesh mode. Then set the port and mesh-seed-address-port settings: port should be the port used for the heartbeat, and mesh-seed-address-port should be the IP address and heartbeat port of a node in the cluster. It is a good idea to have more than one mesh seed node, because if a seed node goes offline, a client may not be able to contact the cluster.

In the current versions of Aerospike (heartbeat-protocol=v3), the heartbeat cannot be configured to use the elastic or public IP address because the node does not have any interface that binds that IP; the public IP is made available via NAT in the EC2 infrastructure, so the node does not see it directly.

For information about how to configure the network heartbeat, see Network Heartbeat Configuration.
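Putting this together, a minimal mesh heartbeat stanza with two seed nodes might look like the following sketch (the addresses are illustrative, reusing the example cluster from earlier on this page):

```
heartbeat {
    mode mesh                                # AWS does not support multicast
    address 172.18.10.76                     # this node's private IP
    port 3002
    mesh-seed-address-port 172.18.10.82 3002 # first seed node
    mesh-seed-address-port 172.18.10.26 3002 # second seed, in case the first is offline
    interval 150
    timeout 15
}
```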

Aerospike XDR IP addresses on AWS

This section describes how to configure Aerospike XDR networking with locally accessible IP addresses on AWS, or with public/elastic IP addresses on AWS using the access-address configuration setting.

XDR with locally-accessible IP Addresses on AWS

  1. For each node, edit /etc/aerospike/aerospike.conf.

  2. Set dc-node-address-port in the datacenter sub-stanza of the XDR stanza to the local IP address of one node in the remote cluster.

    xdr {
        # http://www.aerospike.com/docs/operations/configure/cross-datacenter
        enable-xdr true                  # Globally enable/disable XDR on local node.
        namedpipe-path /tmp/xdr_pipe     # XDR to/from Aerospike communications channel.
        digestlog-path /opt/aerospike/digestlog 100G # Track digests to be shipped.
        errorlog-path /var/log/aerospike/asxdr.log   # Log XDR errors.
        xdr-pidfile /var/run/aerospike/asxdr.pid     # XDR PID file location.
        local-node-port 3000             # Port on local node used to read records etc.
        info-port 3004                   # Port used by tools to monitor XDR health, current config, etc.
        xdr-compression-threshold 1000

        # http://www.aerospike.com/docs/operations/configure/cross-datacenter/network
        # Canonical name of the remote datacenter.
        datacenter REMOTE_DC_1 {
            dc-node-address-port 172.18.224.189 3000
        }
    }
  3. Restart the Aerospike server.

    sudo service aerospike restart

Configure access-address

The following example demonstrates how to configure access-address to set up communications for XDR. This configuration is most appropriate when the clusters cannot communicate over a private network and the Aerospike clients access the clusters over the public network. This configuration introduces a second VPC, which places the clusters on separate networks. Each remote node's local address should have a dc-node-address-port entry in aerospike.conf. After all the nodes have been added, each local address is mapped with a dc-int-ext-ipmap line in the XDR stanza of aerospike.conf. With the IP addresses mapped on the local cluster and access-address set to the external addresses on the remote cluster, XDR can begin replication between the two clusters.

The following table shows the public and private IP addresses for the example.

Cluster   AZ           VPC    Internal IP     External IP
Local     us-east-1a   VPC1   172.18.10.18    54.175.209.97
Local     us-east-1a   VPC1   172.18.10.190   54.164.210.61
Remote    us-east-1b   VPC2   10.2.0.154      52.90.223.53
Remote    us-east-1b   VPC2   10.2.0.201      52.23.170.121
  1. For each node, edit /etc/aerospike/aerospike.conf.

  2. Configure access-address on each node in the remote cluster using the elastic IP and the keyword virtual. Then, in the XDR stanza on each node of the local cluster, list the remote nodes and map their internal addresses to their external addresses:

    xdr {
        # http://www.aerospike.com/docs/operations/configure/cross-datacenter

        enable-xdr true                  # Globally enable/disable XDR on local node.
        namedpipe-path /tmp/xdr_pipe     # XDR to/from Aerospike communications channel.
        digestlog-path /opt/aerospike/digestlog 100G # Track digests to be shipped.
        errorlog-path /var/log/aerospike/asxdr.log   # Log XDR errors.
        xdr-pidfile /var/run/aerospike/asxdr.pid     # XDR PID file location.
        local-node-port 3000             # Port on local node used to read records etc.
        info-port 3004                   # Port used by tools to monitor XDR health, current config, etc.
        xdr-compression-threshold 1000

        # http://www.aerospike.com/docs/operations/configure/cross-datacenter/network

        # Canonical name of the remote datacenter.
        datacenter REMOTE_DC_1 {
            dc-node-address-port 10.2.0.154 3000
            dc-node-address-port 10.2.0.201 3000

            # Remote nodes' internal-to-external IP map - include all remote nodes.
            # These are needed only when there are multiple NICs.
            dc-int-ext-ipmap 10.2.0.154 52.90.223.53
            dc-int-ext-ipmap 10.2.0.201 52.23.170.121
        }
    }
  3. Restart the Aerospike server.

    sudo service aerospike restart
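For larger remote clusters, the dc-node-address-port and dc-int-ext-ipmap lines can be generated from a list of internal/external address pairs. A minimal sketch; the pairs mirror the example table above:

```shell
#!/bin/sh
# Emit the XDR datacenter sub-stanza lines for each remote node's
# internal/external address pair.
PAIRS='10.2.0.154 52.90.223.53
10.2.0.201 52.23.170.121'

printf '%s\n' "$PAIRS" | while read -r INT EXT; do
  echo "dc-node-address-port $INT 3000"
  echo "dc-int-ext-ipmap $INT $EXT"
done
```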