Smart Client

Aerospike’s Smart Client™ automatically distributes both data and traffic to all the nodes in a cluster. Automatic load balancing of the client improves both performance and correctness.

The client needs to know about all the nodes of the cluster and their roles. In Aerospike, each node maintains a list of its neighboring nodes, and this list is used to discover the cluster's nodes. The client starts with one or more seed nodes and from them discovers the entire set of cluster nodes. Once all nodes are discovered, the client needs to know the role of each node: each node is the master or a replica for a subset of the total set of partitions. This mapping from partition to node (the partition map) is exchanged with and cached by the clients.

Sharing the partition map with the client is critical to making client-server interactions extremely efficient; it is why, in Aerospike, the client reaches data in a single hop. In steady state, the scale-out ability of the Aerospike cluster is purely a function of the number of clients or server nodes. This guarantees the linear scalability of the system as long as other parts of the system, such as the network interconnect, can absorb the load.
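To make single-hop routing concrete, the following is a minimal sketch of a client-side partition lookup. It assumes 4096 partitions and uses SHA-1 purely as a stand-in for the key digest; the names `PartitionMap` and `partition_id_for_key` are illustrative and not the actual Aerospike client API.

```python
import hashlib

N_PARTITIONS = 4096  # fixed number of partitions assumed for this sketch

def partition_id_for_key(key: bytes) -> int:
    """Hash the key and map it to a partition ID.
    SHA-1 is only a stand-in digest so the sketch runs anywhere."""
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:2], "little") % N_PARTITIONS

class PartitionMap:
    """Cached partition -> master-node mapping, refreshed from the cluster."""
    def __init__(self, master_for_partition: list[str]):
        assert len(master_for_partition) == N_PARTITIONS
        self.master_for_partition = master_for_partition

    def node_for_key(self, key: bytes) -> str:
        # Single-hop routing: the client resolves the owning node locally,
        # so no proxy or coordinator sits in the request path.
        return self.master_for_partition[partition_id_for_key(key)]

# Usage: a toy two-node map with alternating partition ownership.
pmap = PartitionMap(["node-A" if p % 2 == 0 else "node-B" for p in range(N_PARTITIONS)])
print(pmap.node_for_key(b"user:42"))  # the request goes directly to the owning node
```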

Each client process stores the partition map in its memory. To keep this information up to date, the client process periodically consults the server nodes to check for updates, comparing the version it has stored locally against the latest version on the server. If there is an update, it requests the full partition map.
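A rough illustration of this refresh cycle is shown below: the client polls each node for its partition-map version and pulls the full map only when that version has changed. The function names `fetch_partition_version` and `fetch_full_partition_map`, the stubbed return values, and the polling interval are assumptions for the example, not the real wire protocol.

```python
import time

# Stand-ins for the client <-> server exchange (assumed names, stubbed bodies).

def fetch_partition_version(node: str) -> int:
    """Ask a node for the current version of its partition map (stubbed here)."""
    return 1

def fetch_full_partition_map(node: str) -> list[str]:
    """Fetch the complete partition -> master-node table (stubbed here)."""
    return [node] * 4096

class PartitionMapCache:
    """Client-side cached partition map, kept fresh by a periodic check."""
    def __init__(self, nodes: list[str]):
        self.nodes = nodes
        self.version = -1                         # version of the locally cached map
        self.master_for_partition: list[str] = []

    def refresh_once(self) -> None:
        for node in self.nodes:
            latest = fetch_partition_version(node)
            if latest != self.version:
                # Only a version change triggers the full (and larger) map transfer.
                self.master_for_partition = fetch_full_partition_map(node)
                self.version = latest

    def run(self, interval_s: float = 1.0) -> None:
        # Periodic maintenance loop: a cheap version check every tick,
        # a full refresh only when a newer map is reported.
        while True:
            self.refresh_once()
            time.sleep(interval_s)
```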

At initialization time, for each cluster node, the client creates an in-memory structure on behalf of that node and stores its partition map. It also maintains a connection pool for that node. All of this is torn down when the node is declared down, and the setup and teardown are costly operations. In addition, when an operation fails, the client needs a fallback plan: retry the database operation on the same node or on a different node in the cluster. If the underlying network is flaky and this happens repeatedly, it can degrade the performance of the overall system. This calls for a balanced approach to identifying cluster node health. Aerospike uses the following strategies to achieve this balance.
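The per-node bookkeeping and the retry fallback might look roughly like the sketch below. The `Node` structure, the pool size, and the one-retry policy are illustrative assumptions, not the actual client implementation.

```python
import socket
from collections import deque

class Node:
    """In-memory state the client keeps for one cluster node."""
    def __init__(self, host: str, port: int, pool_size: int = 8):
        self.host, self.port = host, port
        self.partitions: set[int] = set()          # partitions this node masters
        self.pool: deque[socket.socket] = deque()  # reusable connections
        self.pool_size = pool_size

    def get_connection(self) -> socket.socket:
        # Reuse a pooled connection when available; otherwise open a new one.
        if self.pool:
            return self.pool.popleft()
        return socket.create_connection((self.host, self.port), timeout=1.0)

    def release(self, conn: socket.socket) -> None:
        if len(self.pool) < self.pool_size:
            self.pool.append(conn)
        else:
            conn.close()

    def close(self) -> None:
        # Teardown when the node is declared down: drop every pooled connection.
        while self.pool:
            self.pool.popleft().close()

def execute_with_fallback(primary: Node, fallback: Node, op):
    """Try the operation on the primary node, then retry once on a fallback node."""
    for node in (primary, fallback):
        try:
            conn = node.get_connection()
        except OSError:
            continue                  # could not even connect; try the next node
        try:
            result = op(conn)
        except OSError:
            conn.close()              # don't pool a possibly broken connection
            continue                  # retry the operation on the fallback node
        node.release(conn)
        return result
    raise RuntimeError("operation failed on both primary and fallback nodes")
```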

Using the transaction response status code alone as a measure of the state of the DBMS cluster is a suboptimal scheme for the client. The contacted server node may temporarily fail to accept the transaction request, or there may be a transient network issue while the server node itself is up and healthy. To discount such scenarios, the client tracks the number of failures it encounters on database operations at a specific cluster node. The client drops a cluster node only when this failure count (a.k.a. the “happiness factor”) crosses a particular threshold. Any successful operation to that node resets the failure count to 0.
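A minimal sketch of this bookkeeping follows; the `NodeHealth` name and the threshold value are assumptions for illustration only.

```python
class NodeHealth:
    """Per-node failure counter: drop a node only after sustained failures."""
    def __init__(self, drop_threshold: int = 5):
        self.drop_threshold = drop_threshold
        self.consecutive_failures = 0

    def record_success(self) -> None:
        # Any successful operation resets the count, forgiving transient errors.
        self.consecutive_failures = 0

    def record_failure(self) -> bool:
        """Return True if the node should now be dropped from the client's view."""
        self.consecutive_failures += 1
        return self.consecutive_failures >= self.drop_threshold

# Usage: one timeout does not evict the node; a sustained run of failures does.
health = NodeHealth(drop_threshold=3)
assert health.record_failure() is False
health.record_success()                  # node recovered; counter resets
assert health.record_failure() is False
assert health.record_failure() is False
assert health.record_failure() is True   # threshold reached: drop the node
```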

Inconsistent networks are often tough to handle, and one-way network failures (A sees B, but B does not see A) are even tougher. There can be situations where the cluster nodes can see each other but the client is unable to see some cluster node directly (say, X). In these cases, the client consults all the cluster nodes visible to it and checks whether any of them has X in its neighbor list. If a client-visible node in the cluster reports that X is in its neighbor list, the client does nothing. If no client-visible cluster node reports X in its neighbor list, the client waits for a threshold time and then permanently removes the node, tearing down the data structures that reference it. Over several years of deployments, we have found that this scheme greatly improved the stability of the overall system.
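This decision procedure can be sketched as follows; `neighbors_of`, the grace period, and the function names are hypothetical stand-ins for the example, not the client's actual internals.

```python
import time

def neighbors_of(node: str) -> set[str]:
    """Placeholder for asking a client-visible node for its neighbor list."""
    return set()

def should_remove(suspect: str, visible_nodes: list[str],
                  first_unseen_at: float, grace_period_s: float = 30.0) -> bool:
    """Decide whether to tear down client-side state for a node (X) that the
    client cannot reach directly."""
    # If any node the client *can* see still lists X as a neighbor, the cluster
    # considers X alive, so the client does nothing.
    for node in visible_nodes:
        if suspect in neighbors_of(node):
            return False
    # No visible node vouches for X: remove it only after a grace period,
    # so a brief network hiccup does not trigger a costly teardown and re-setup.
    return (time.monotonic() - first_unseen_at) >= grace_period_s
```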
