Aerospike was purpose-built for speed at scale, with an extensible design that pushes the limits of modern computer hardware, parallel processing and distributed systems. It scales both up and out: Aerospike is written in C for faster processing (compared to databases written in Java or Erlang), and processing is parallelized across threads, cores and direct-attached flash memory or solid-state drives (SSDs). Identical Aerospike servers scale out to form a shared-nothing cluster that transparently partitions data and parallelizes processing across nodes. Because the nodes are identical, you can start with two and simply add more hardware – the cluster scales linearly.

Aerospike 3 System Architecture

Applications use the Aerospike Smart Client™ to access the database. This client library tracks the state of the cluster and the location of data. It takes care of managing connections to the cluster and handling database transactions. When a read or write request is made, it automatically ensures that data can be stored/retrieved in a single hop from the correct node in the cluster. Responses from the database are predictably fast. Data is not pre-loaded or cached.
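To make the single-hop behavior concrete, here is a minimal Python sketch of the idea, assuming a hypothetical three-node cluster: the client hashes the key to a partition, then looks that partition up in a map it keeps synchronized with the cluster. The hash and node names below are illustrative stand-ins, not the real client's algorithm.

```python
import hashlib

N_PARTITIONS = 4096  # Aerospike divides each namespace's data into 4096 partitions

def partition_id(set_name: str, user_key: str) -> int:
    # Illustrative only: the real client derives the partition from a
    # RIPEMD-160 digest of the key; SHA-256 stands in so this runs anywhere.
    digest = hashlib.sha256(f"{set_name}:{user_key}".encode()).digest()
    return int.from_bytes(digest[:2], "little") % N_PARTITIONS

# The partition map the client keeps in sync with the cluster:
# partition id -> the node holding that partition (hypothetical node names).
partition_map = {pid: f"node-{pid % 3}" for pid in range(N_PARTITIONS)}

def node_for(set_name: str, user_key: str) -> str:
    # One hash, one lookup, one network hop - no proxying between nodes.
    return partition_map[partition_id(set_name, user_key)]

print(node_for("users", "alice"))  # one of node-0, node-1, node-2
```

Because the lookup is a local computation plus a table read, there is no extra round trip to a coordinator before reaching the node that owns the data.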

Scaling is transparent and seamless. Applications do not have to be restarted when new nodes are brought up (or servers fail) and there are no additional cluster management servers or proxies to worry about.

We’re the only NoSQL database that provides ACID guarantees. Data is divided into partitions that are distributed evenly across the nodes in the cluster. Aerospike Smart Cluster™ replicates data synchronously within the cluster. When the cluster state changes (e.g. a node fails or a new node is added), the nodes re-synchronize to rebalance the data. There’s no manual sharding or node management.

A typical cluster has a replication factor of 2 – that is, two copies of all of the data: the master copy plus a replica. Every node in the cluster and every client uses the Aerospike Smart Partitions™ hashing algorithm to determine automatically which node is the data master and which holds the replica for a given partition.
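The division of labor can be sketched as follows. The ordering hash below is only a stand-in for the Smart Partitions™ algorithm, and the node names are hypothetical; what matters is that the function is deterministic, so every node and client computes the same answer without any coordination.

```python
import hashlib

REPLICATION_FACTOR = 2  # master copy plus one replica

def succession(nodes, pid):
    # Per-partition node ordering; this hash is only a stand-in for the
    # Smart Partitions algorithm. First node is master, next is replica.
    return sorted(nodes, key=lambda n: hashlib.sha256(f"{pid}:{n}".encode()).hexdigest())

def owners(nodes, pid):
    return succession(nodes, pid)[:REPLICATION_FACTOR]  # [master, replica]

nodes = ["node-A", "node-B", "node-C"]
master, replica = owners(nodes, 7)

# If the master fails, re-running the same function over the surviving
# nodes promotes the former replica - no coordinator required.
new_master = owners([n for n in nodes if n != master], 7)[0]
```

Since removing a node leaves the relative order of the survivors unchanged, the former replica becomes the new master for that partition, which is the essence of automatic rebalancing.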

Aerospike Cross Data Center Replication™ (XDR) manages replication asynchronously across clusters. In the event of a data center failure, the remote cluster can take on the load of serving database requests. When the original cluster becomes available again, the two clusters sync up to ensure that no data is lost.

The Aerospike Hybrid (DRAM & Flash) Memory System™ gives Aerospike the speed of DRAM with the persistence of disk. With Aerospike 3, the Aerospike Alchemy Framework™ pushes processing into the database with User Defined Functions (UDFs). UDFs process one or more records in-database and are used to implement Large Data Types and distributed aggregations.
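Aerospike UDFs are written in Lua and run on the server, close to the data. The map/reduce shape of a distributed aggregation can be illustrated with a plain Python sketch (the records and field names here are invented for the example):

```python
from functools import reduce

# Invented records for illustration; in a real stream aggregation the
# map and reduce functions run on each node, against that node's data.
records = [
    {"name": "a", "score": 10},
    {"name": "b", "score": 32},
    {"name": "c", "score": 5},
]

def map_fn(rec):
    # Project out just the value being aggregated.
    return rec["score"]

def reduce_fn(acc, value):
    # Combine partial results; this must be associative so partial sums
    # computed on different nodes can be merged into a final answer.
    return acc + value

total = reduce(reduce_fn, map(map_fn, records), 0)
print(total)  # 47
```

The key design point is that only the small mapped values and partial results cross the network, not the records themselves.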

The Aerospike Monitoring Console (AMC) provides a dashboard for monitoring cluster performance. Command-line tools let you change most configuration parameters dynamically (without a restart) and configure and monitor a single node or all of the nodes in your cluster. Latency monitoring tools let you review histogram data for different database operations in near-real time.

Extensible Data Model

Aerospike Data Model
Aerospike is a row store – data is stored in records (rows). Records are grouped into sets, and records and sets can be grouped into namespaces – policy containers that specify how data is stored (in DRAM or on flash) and how it is replicated.

Records consist of key-value pairs called bins. The value of a bin may be an integer, string or blob.

Aerospike 3 adds support for list and map types for bin values:

  • A list is a sequence of values of any bin type – integer, string, list or map.
  • A map is a mapping of keys to values. The keys can be integers or strings and each value can be of any bin type – integer, string, list or map.
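As a concrete (hypothetical) example, a record and its bins might look like this in Python terms; the (namespace, set, key) tuple mirrors how client APIs address a record, and the bin values mix the scalar types with the new list and map types:

```python
# Hypothetical namespace, set and key names, for illustration only.
key = ("test", "users", "alice")  # (namespace, set, user key)

bins = {
    "age": 34,                                  # integer bin
    "city": "Oslo",                             # string bin
    "tags": ["admin", "beta"],                  # list bin
    "prefs": {"theme": "dark", "retries": 3},   # map bin: string keys, any values
}
```

Lists and maps can nest, so a list bin may itself contain maps and vice versa.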

Complex Data Types can sometimes get large and unwieldy, so Aerospike 3 also adds Large Data Types (LDTs), which store a complex structure of smaller records called “sub-records”, bundled together as one large related object. LDTs are a way of extending the database with new bin types built from base record types and UDFs. Each LDT can contain a large number of elements, and each element value can be a string, integer, list or map. These new bin types can be viewed or manipulated only by Record UDFs on the server.

  • Large Stack – efficiently stores values for stack-based (LIFO) access, with the ability to push and peek values from the top of the stack.
  • Large List – efficiently stores values in an ordered sequence, with the ability to access values by value, position or range.
  • Large Set – efficiently stores unique values, with the ability to test for the existence of a value within the set.
  • Large Map – efficiently stores key-value pairs, with the ability to access entries by key.
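The contract of one of these types can be modeled in-process to show its semantics. The class below is only an illustration of Large Stack push/peek behavior; real LDTs store their elements as sub-records and are manipulated through server-side Record UDFs.

```python
class LargeStackSketch:
    """In-process model of Large Stack semantics (LIFO push/peek).

    Illustration only: the real LDT keeps elements as sub-records
    on the server and exposes them via Record UDFs.
    """

    def __init__(self):
        self._items = []

    def push(self, value):
        # Add a value to the top of the stack.
        self._items.append(value)

    def peek(self, count=1):
        # Return the newest `count` values, newest first, without removal.
        return self._items[-count:][::-1]

stack = LargeStackSketch()
stack.push(1)
stack.push(2)
stack.push(3)
print(stack.peek(2))  # [3, 2]
```

Note that peek is non-destructive, which is what distinguishes it from a pop in the LIFO access pattern described above.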

To learn more about the Aerospike architecture, download the whitepaper.