Aerospike is an open-source, real-time NoSQL database and key-value store that delivers in-memory performance for big data and context-driven applications that must sense and respond immediately, at a fraction of the cost of pure-RAM deployments. Aerospike operates at in-memory speed and global scale with enterprise-grade reliability.

Aerospike’s distributed, shared-nothing architecture is designed and built to store data reliably, with automatic failover, immediately consistent replication within the cluster, and cross-data-center synchronization. Because data is distributed evenly and randomly across all nodes in the cluster, performance is predictable: access to any piece of data by its primary key has the same latency.

Identical Aerospike servers scale out linearly to form a shared-nothing cluster that transparently partitions data and parallelizes processing across nodes. Systematic automation of all cluster management functions eliminates manual operations. You can start with two nodes, add hardware, and take down nodes as needed; the cluster scales linearly.
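The transparent partitioning described above can be illustrated with a short sketch. This is not Aerospike's actual algorithm (the real client computes a RIPEMD-160 digest of the key, and node assignment is not round-robin); only the fixed count of 4096 partitions is taken from Aerospike, and the hash and node names here are illustrative:

```python
import hashlib

N_PARTITIONS = 4096  # Aerospike's fixed partition count

def partition_id(key: str) -> int:
    """Map a primary key to one of 4096 partitions.
    Illustrative: SHA-256 stands in for the real RIPEMD-160 digest."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:2], "little") & 0x0FFF

def partition_map(nodes):
    """Assign each partition a master node (round-robin, for illustration only)."""
    return {pid: nodes[pid % len(nodes)] for pid in range(N_PARTITIONS)}

nodes = ["node-a", "node-b", "node-c"]
pmap = partition_map(nodes)
pid = partition_id("user:1234")
print(f"key 'user:1234' -> partition {pid} -> master {pmap[pid]}")
```

Because every key hashes uniformly into the same fixed partition space, each node owns roughly the same share of data, which is what makes per-key latency uniform across the cluster.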


Applications link to client libraries: the Aerospike Smart Client™

The Aerospike Smart Client™ is designed for speed and is implemented as open-source libraries and packages for Java, C#, Node.js, PHP, Go, Python, Ruby, C, Perl, Erlang, libevent-based C, and more.

The Aerospike Smart Client:
  • As a first-class observer of the cluster, it tracks nodes and knows where data is stored, instantly learning of cluster configuration changes and of nodes being added or removed, so you do not have to restart your applications during cluster reconfiguration.
  • Eliminates the need to set up and manage additional cluster management servers or proxies.
  • Implements the API for storing and retrieving data as well as the client-server protocol. This architecture reduces transaction latency and makes data available in a single hop from the client.
  • Detects transaction failures in the cluster and re-routes those transactions to nodes holding copies of the data.
  • Maintains its own TCP/IP connection pool for efficiency.
  • Responses are predictably fast; data is not cached.
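The single-hop behavior in the list above can be sketched as a partition-aware client that caches the cluster's partition map and routes each request directly to the owning node. This is a hedged illustration, not the Smart Client's actual implementation: the class, the SHA-256 stand-in hash (the real client uses RIPEMD-160), and the node addresses are all invented for the example.

```python
import hashlib

N_PARTITIONS = 4096

class SmartClientSketch:
    """Illustrative partition-aware client: it caches the cluster's
    partition-to-node map and sends each key straight to its master."""

    def __init__(self, partition_map):
        self.partition_map = partition_map  # pid -> node address

    @staticmethod
    def _partition_id(key: str) -> int:
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:2], "little") & 0x0FFF

    def node_for(self, key: str) -> str:
        # One lookup, no proxy hop: the request goes to the owner directly.
        return self.partition_map[self._partition_id(key)]

    def on_cluster_change(self, new_map):
        # The client learns of reconfiguration and swaps its map in place,
        # so the application never needs a restart.
        self.partition_map = new_map

pmap = {pid: f"10.0.0.{pid % 3 + 1}:3000" for pid in range(N_PARTITIONS)}
client = SmartClientSketch(pmap)
print(client.node_for("user:1234"))
```

The design point is that routing knowledge lives in the client library itself, which is why no separate proxy or cluster-management tier is needed.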

The Aerospike Smart Cluster™ replicates data synchronously (immediate consistency) within the cluster.

  • A typical cluster has a replication factor of 2: the master copy plus one replica. There is no chatter; both clients and servers use a partitioning algorithm to calculate which node is the master or the replica for a given partition. The cluster is rack-aware, so replicas are distributed across racks.
  • Aerospike uses the Paxos algorithm for consensus during cluster formation to determine which nodes are part of a cluster. Clusters are “tightly coupled” i.e., nodes are typically within a data center or the same availability zone. This proximity between nodes means it takes very little time to detect any nodes entering or leaving the cluster. In practice, we have observed that this enables the Paxos algorithm to terminate in a very short amount of time. Therefore, Aerospike clusters enable applications to continue working with minimal disruption during node addition and deletion events.
  • When cluster state changes (e.g. a node fails or a new node is added) and consensus is reached, nodes use the partitioning algorithm to calculate the new partition map and automatically rebalance the data.
  • If during re-balancing a node receives a request for a piece of data that it does not have locally, it creates an internal proxy for this request, fetches the data and replies to the client directly.
  • Writes with immediate consistency are propagated to all replicas before the data is committed and the result is returned to the client.
  • When a cluster is recovering from being partitioned, the system can be configured to automatically resolve conflicts between different copies of data using timestamps. Alternatively, both copies of the data can be returned to the application for resolution at that higher level.
  • In some cases, the replication factor can’t be satisfied. The cluster can be configured to either decrease the replication factor and retain all data, or begin evicting the oldest data that is marked as disposable. If the cluster can’t accept any more data, it will operate in read-only mode until new capacity becomes available, at which point it will automatically begin accepting application writes.
  • Adding capacity is easy – just install and configure the new server and the cluster auto-discovers the new node and re-balances.
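The rebalance step above can be sketched by diffing the partition maps computed before and after a membership change. The round-robin assignment and node names are illustrative placeholders, not Aerospike's real algorithm; the sketch only shows how a deterministic map lets every node independently work out which partitions must migrate once consensus on membership is reached.

```python
N_PARTITIONS = 4096

def partition_map(nodes):
    """Deterministic partition assignment (round-robin for illustration;
    every node computes the same map from the same membership list)."""
    nodes = sorted(nodes)
    return {pid: nodes[pid % len(nodes)] for pid in range(N_PARTITIONS)}

def migrations(old_map, new_map):
    """Partitions whose master changed: exactly these must be rebalanced."""
    return [pid for pid in old_map if old_map[pid] != new_map[pid]]

before = partition_map(["node-a", "node-b", "node-c"])
after = partition_map(["node-a", "node-b", "node-c", "node-d"])
moved = migrations(before, after)
print(f"{len(moved)} of {N_PARTITIONS} partitions migrate after adding node-d")
```

While a partition is in flight, the proxy behavior described above covers the gap: a node that no longer holds the data fetches it internally and still answers the client.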

Aerospike Cross Data Center Replication™ (XDR) manages replication asynchronously across clusters in different data centers.

  • Data centers can be located closer to consumers, for low latency in different geographies.
  • Data replicated in multiple data centers offers redundancy and disaster recovery.
  • Clusters in different data centers can be of different sizes, giving operators more flexibility.
  • Each namespace can be configured to replicate asynchronously to one or more data centers at the same time, in any combination of star (master/slave or active/passive) or ring (master/master or active/active) topology.
  • In the event of a data center failure, the remote cluster can take on the load of serving database requests. When the original cluster becomes available again, the two clusters sync up to ensure that no data is lost.
  • Conflicts can be resolved in the database using timestamps, or in the application by comparing versions.
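The timestamp-based resolution mentioned above amounts to last-write-wins. A minimal sketch, with invented record shapes (each copy as a `(timestamp, value)` pair) standing in for real XDR record metadata:

```python
def resolve(copies):
    """Last-write-wins: keep the copy with the newest timestamp.
    Each copy is a (timestamp, value) pair; on a tie, the first seen wins."""
    return max(copies, key=lambda c: c[0])

copies = [
    (1700000100, {"score": 10}),  # written in one data center
    (1700000200, {"score": 12}),  # written later in another
]
winner = resolve(copies)
print(winner)
```

The alternative the document describes is to skip `resolve` entirely and hand both copies to the application, which can compare versions with domain knowledge the database lacks.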

The Aerospike Hybrid Memory System™ gives you the best of both worlds: near-RAM speed and the economics of flash.

You can run Aerospike in pure RAM with spinning disks for persistence, or as a hybrid-memory database with RAM and flash.

  • Indexes (primary and secondary) are always stored in DRAM for fast access and are never stored on Solid State Drives (SSDs) to ensure low wear.
  • Unlike other databases that use the Linux file system, which was built for rotational drives, Aerospike implements a log-structured file system to access flash (raw blocks on SSDs) directly. Access is optimized for how flash works, with small-block reads and large-block writes, and is parallelized across multiple SSDs for better throughput.
  • Per namespace storage configuration – each namespace can be configured to store data on DRAM or on SSDs.
  • Expiration / eviction. Automatic procedures handle data overflow. When the system nears capacity, the database continues to serve queries but evicts expired data. Built-in Defragmenter and Evictor processes work together to ensure that there is space in DRAM, that data is never lost, and that it is safely written to disk.
  • Fast restart. If a server is temporarily taken down, this capability restores the index from a saved copy, eliminating delays due to index rebuilding. A node with over 1 billion records can now restart in about 10 seconds, which makes cluster upgrades and various other operations go much faster.
  • Aerospike supports SSDs from Intel, Micron, Fusion-io, Violin Memory, Samsung, and others, but some work better than others. The Aerospike Certification Tool (ACT), an industry standard, is an open-source tool used by vendors and customers to validate SSD performance. Read our benchmarks and contribute yours.
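The per-namespace storage choices above are made in the server's configuration file. A hedged sketch of two namespace stanzas in aerospike.conf, one in-memory and one on raw flash; the namespace names, device path, and size values are placeholders, and exact directive names and defaults should be checked against the configuration reference for your server version:

```
namespace cache {
    replication-factor 2
    memory-size 4G
    storage-engine memory            # pure in-memory namespace
}

namespace userdata {
    replication-factor 2
    memory-size 8G                   # indexes stay in DRAM
    storage-engine device {          # data on raw flash blocks
        device /dev/nvme0n1
        write-block-size 128K        # large-block writes, as described above
    }
}
```

Keeping the choice per namespace lets hot, disposable data live in RAM while larger persistent data sits on SSD in the same cluster.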

Learn more about our products and services.

See what works best for you!