
Get started with AKO in the cloud

This tutorial describes how to create an Aerospike Database Enterprise Edition deployment using the Aerospike Kubernetes Operator (AKO) on Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS).

Most steps on this page are performed in the same way on GKE and EKS. Where the two differ, use the command that applies to your deployment.

Prerequisites

  • A running GKE or EKS Kubernetes cluster
  • The gcloud or aws command-line tool for your cloud provider
  • Helm
  • Git
  • kubectl
  • An Aerospike Enterprise Edition feature-key file

This tutorial assumes basic knowledge of Kubernetes and that your terminal is set up to communicate with your cloud Kubernetes instance.

Pre-install

Your cloud Kubernetes instance should be set up to the point that you can run commands with your local terminal to create Secrets and install Helm charts. You can test this with the following commands. Copy and paste the commands into your terminal. If all commands complete without errors, your environment is ready to install AKO.

# Point kubectl at the correct GKE cluster
gcloud container clusters get-credentials CLUSTER_NAME --region REGION
# Or, for an EKS cluster:
aws eks update-kubeconfig --name CLUSTER_NAME --region REGION
# Check connectivity and versions
kubectl cluster-info # API-server reachable?
kubectl get nodes -o wide # Nodes visible?
helm version --short # Helm installed?
git --version # Git installed?

Install AKO

In this section, you use Helm to install AKO on your Kubernetes cluster and configure the Kubernetes namespace to watch for your Aerospike Database deployment.

  1. Add the JetStack Helm repository so you can install cert-manager, a utility that AKO relies on to manage certificates.

    helm repo add jetstack https://charts.jetstack.io --force-update
  2. Install the cert-manager Helm chart.

    helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.17.0 --set crds.enabled=true
  3. Add the AKO Helm repository.

    helm repo add aerospike https://aerospike.github.io/aerospike-kubernetes-enterprise
  4. Install AKO to your cluster. The watchNamespaces parameter tells AKO which Kubernetes namespace to monitor for AerospikeCluster custom resources.

    helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=4.3.0 --set watchNamespaces="aerospike"

AKO is now running on your cluster and is ready to create a new Aerospike Database deployment.
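If you prefer a values file over --set flags, the same watchNamespaces setting can be supplied as a Helm values override. The following is an illustrative sketch, not the chart's full values file; the key name matches the flag used above, but check the default values shipped with your chart version:

```yaml
# values-override.yaml (illustrative)
# Namespaces AKO should watch for AerospikeCluster custom resources.
# Multiple namespaces can be given as a comma-separated string,
# for example "aerospike,aerospike-dev".
watchNamespaces: "aerospike"
```

Pass the file to Helm with the -f flag, for example: helm install aerospike-kubernetes-operator aerospike/aerospike-kubernetes-operator --version=4.3.0 -f values-override.yaml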

Deploy an Aerospike database

In this section, you use kubectl to deploy Aerospike Database from a pre-built sample configuration.

  1. Create a dedicated Kubernetes namespace for your Aerospike Database deployment. This must match the watchNamespaces value you set when installing AKO.

    kubectl create namespace aerospike
  2. Download the Aerospike Kubernetes Operator repository from GitHub to your local machine. This repository contains sample configuration files for an Aerospike Database deployment. You can modify these files on your local machine before using kubectl to apply the changes to the cluster.

    git clone https://github.com/aerospike/aerospike-kubernetes-operator.git

    The only directory you need to interact with during this tutorial is config/samples/ as shown in the following diagram:

    • aerospike-kubernetes-operator/
      • api/
      • cmd/
      • config/
        • samples/
          • (sample configuration files)
      • (other directories)
  3. Navigate to the repository and check out the release tag that matches your AKO version.

    cd aerospike-kubernetes-operator
    git checkout v4.3.0

    Your working directory should remain aerospike-kubernetes-operator/ for the rest of this tutorial. All config/samples/... paths are relative to this directory.

  4. Copy your feature-key file, typically named features.conf, into the existing config/samples/secrets/ directory. Aerospike Enterprise Edition requires this file to start.

  5. Create the Kubernetes Secrets that the Aerospike cluster needs at runtime. The first secret loads everything in the secrets directory, including your feature-key file. The second sets a placeholder initial password for the Aerospike admin user.

    kubectl -n aerospike create secret generic aerospike-secret --from-file=config/samples/secrets
    kubectl -n aerospike create secret generic auth-secret --from-literal=password='admin123'
  6. Create a ServiceAccount for AKO in the aerospike Kubernetes namespace. AKO uses this identity when it manages pods, services, and other resources for your database.

    kubectl -n aerospike create serviceaccount aerospike-operator-controller-manager
  7. Bind the aerospike-cluster ClusterRole to the ServiceAccount you just created. This grants AKO the permissions it needs to manage Aerospike cluster resources.

    kubectl create clusterrolebinding aerospike-cluster --clusterrole=aerospike-cluster --serviceaccount=aerospike:aerospike-operator-controller-manager
  8. Prepare the cluster to use SSD storage with a sample storage class file from the GitHub repository. The following command applies the GKE sample; on EKS, apply the corresponding EKS storage class sample from the same config/samples/storage/ directory instead.

    kubectl apply -f config/samples/storage/gce_ssd_storage_class.yaml
  9. Create the Aerospike cluster by applying a sample Custom Resource (CR). AKO watches for this resource and automatically provisions the Aerospike Database pods. This sample uses SSD-backed storage, which is the recommended configuration for cloud deployments.

    kubectl apply -f config/samples/ssd_storage_cluster_cr.yaml

    After you apply the CR, the cluster starts initializing or updating.

  10. Watch the cluster status. The -w flag streams updates so you can see the phase change in real time.

    kubectl get aerospikeclusters aerocluster -n aerospike -w

    Wait until the PHASE column shows Completed, then press Ctrl+C to stop watching. This can take a few minutes, depending on your cluster size and the resources available.

    NAME SIZE IMAGE MULTIPODPERHOST HOSTNETWORK AGE PHASE
    aerocluster 2 aerospike/aerospike-server-enterprise:8.1.1.0 true 2s InProgress
    aerocluster 2 aerospike/aerospike-server-enterprise:8.1.1.0 true 21s InProgress
    aerocluster 2 aerospike/aerospike-server-enterprise:8.1.1.0 true 21s InProgress
    aerocluster 2 aerospike/aerospike-server-enterprise:8.1.1.0 true 27s InProgress
    aerocluster 2 aerospike/aerospike-server-enterprise:8.1.1.0 true 28s Completed
  11. Find the access endpoints for your cluster. These are the host:port pairs that client applications use to connect to your Aerospike database.

    kubectl -n aerospike describe aerospikeclusters aerocluster | grep -E 'Access Endpoints|Alternate Access Endpoints' -A1

    You can use any of the Aerospike client libraries to write tests for reading and writing to this database backend.
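To illustrate what the grep in the previous step extracts, here is a self-contained sketch run against hypothetical describe output. The endpoint addresses below are made up for the example; your cluster reports its own values:

```shell
# Hypothetical excerpt of `kubectl describe aerospikeclusters` output
cat <<'EOF' > /tmp/aerocluster-describe.txt
  Access Endpoints:
    10.0.0.12:3000
  Alternate Access Endpoints:
    34.82.10.7:3000
EOF

# Same filter as in the step above: print each matching heading plus the line after it
grep -E 'Access Endpoints|Alternate Access Endpoints' -A1 /tmp/aerocluster-describe.txt
```

The -A1 flag prints one line of trailing context after each match, which is where kubectl places the host:port value under each heading.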

You now have a running Aerospike Database deployment on the cloud using AKO!
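For orientation, an AerospikeCluster CR of the kind applied in step 9 has roughly the following shape. This is an illustrative, abridged sketch, not the exact contents of ssd_storage_cluster_cr.yaml; values such as size, image, and the volume layout are assumptions, so rely on the sample file in the repository for a working configuration:

```yaml
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: aerocluster        # matches the name used in the kubectl get/describe steps
  namespace: aerospike     # must be a namespace that AKO watches
spec:
  size: 2                  # number of Aerospike pods (illustrative)
  image: aerospike/aerospike-server-enterprise:8.1.1.0
  storage:
    volumes:
      - name: workdir
        aerospike:
          path: /opt/aerospike
        source:
          persistentVolume:
            storageClass: ssd   # storage class created in step 8 (name assumed)
            volumeMode: Filesystem
            size: 1Gi
  aerospikeConfig:
    service:
      feature-key-file: /etc/aerospike/secret/features.conf
    network:
      service:
        port: 3000
```

Editing a copy of the sample CR and re-running kubectl apply is how you change the deployment later, for example to scale size up or down.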

Cleanup

When you have finished this tutorial, remove the AKO and Aerospike resources you created before deleting cloud infrastructure. This cleanup order helps avoid leaving persistent volumes and cloud disks behind.

kubectl delete -f config/samples/ssd_storage_cluster_cr.yaml
helm uninstall aerospike-kubernetes-operator
helm uninstall cert-manager -n cert-manager
kubectl delete clusterrolebinding aerospike-cluster
kubectl delete ns aerospike

For complete cleanup guidance, including uninstalling CRDs and related resources, see Uninstall Aerospike Kubernetes Operator.

Next steps

The biggest difference between the cluster created in this tutorial and a production cluster is in the storage class and CR file configurations. See the AKO Configuration section to learn how to configure your deployment for your own application needs.

Become familiar with the Aerospike Backup Service (ABS) and the monitoring stack. They are separate services that run alongside the database in your cluster. ABS listens for REST requests to perform backups and restores of the database, while the monitoring stack lets you visualize cluster statistics on Grafana dashboards.
