Version: Operator 3.0.0

Create an Aerospike Cluster on Kubernetes with a non-root user

To use the Operator to deploy a non-root Aerospike cluster, create an Aerospike custom resource (CR) file which describes the cluster (including its number of nodes, the Aerospike configuration, system resources, etc.). Then use kubectl to apply that configuration file to your Kubernetes cluster(s).
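For orientation, an AerospikeCluster CR has roughly the shape sketched below. This is an abridged outline, not a deployable manifest: the apiVersion, image tag, size, and namespace values are illustrative placeholders, and required sections such as storage and the full aerospikeConfig are omitted. Start from the sample files referenced later on this page.

apiVersion: asdb.aerospike.com/v1        # may differ (e.g. v1beta1) depending on the Operator release
kind: AerospikeCluster
metadata:
  name: aerocluster
  namespace: aerospike
spec:
  size: 2                                # number of Aerospike server pods
  image: aerospike/aerospike-server-enterprise:<tag>   # placeholder image tag
  storage: {}                            # persistent volumes for data and work directories (abridged)
  podSpec:
    multiPodPerHost: true
  aerospikeConfig: {}                    # Aerospike server configuration (abridged)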

Requirements

Configure CRI container runtimes (containerd, CRI-O)

Using devices from non-root containers requires cluster administrators to opt in to the functionality by setting device_ownership_from_security_context = true in the container runtime configuration on each worker node. The flag is available in the CRI-O v1.22 release and containerd v1.6.6 and above. For more details, see Non-root Containers And Devices.

If your cluster runs containerd, set the following in the containerd configuration:

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = true

If your cluster runs CRI-O, set the equivalent option in the CRI-O configuration:

[crio.runtime]
  device_ownership_from_security_context = true

Restart the container runtime service:

sudo systemctl restart containerd
or
sudo systemctl restart crio

Verify that device_ownership_from_security_context has been set to true:

sudo crictl info
...
"disableHugetlbController": true,
"device_ownership_from_security_context": true,
"ignoreImageDefinedVolumes": false,
"netnsMountsUnderStateDir": false,
...
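To check just this flag instead of reading the full JSON dump, a simple filter works (assuming crictl is pointed at your runtime's socket):

sudo crictl info | grep device_ownership_from_security_context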

Install Aerospike Kubernetes Operator

Before deploying your Aerospike cluster, install the Aerospike Kubernetes Operator on your Kubernetes cluster(s) using either the Operator Lifecycle Manager (OLM) or Helm; see the Operator installation documentation for the full procedure.
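As one hedged example, an OLM-based installation typically looks like the sketch below; the OLM release version and the OperatorHub manifest URL are assumptions to verify against the installation guide for your Operator version.

# Install the Operator Lifecycle Manager if it is not already present (release version is a placeholder)
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/install.sh | bash -s v0.25.0

# Create a Subscription for the Aerospike Kubernetes Operator from OperatorHub (URL is an assumption)
kubectl create -f https://operatorhub.io/install/aerospike-kubernetes-operator.yaml

# Wait for the operator's ClusterServiceVersion to reach the Succeeded phase
kubectl get csv -n operators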

Prepare the namespace, storage and secrets

Before creating your Aerospike cluster CR, create the required namespace, storage, and secrets as described in the cluster preparation documentation.
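The commands below are a rough sketch of what this preparation usually involves; the namespace name, secret name, and feature-key file path are illustrative assumptions, so follow the preparation guide for the exact resources your configuration needs.

# Namespace that the cluster CR in this guide deploys into
kubectl create namespace aerospike

# Secret holding the Aerospike feature-key file (names and path are placeholders)
kubectl create secret generic aerospike-secret \
  --from-file=features.conf=/path/to/features.conf -n aerospike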

Create Aerospike Cluster Custom Resource (CR)

Refer to the cluster configuration settings for details on the Aerospike cluster custom resource (CR) file. You can find sample Aerospike cluster CR files for different configurations in the main Aerospike Kubernetes Operator repository.

Edit the CR file to add securityContext under podSpec.

vi config/samples/ssd_storage_cluster_cr.yaml
...
podSpec:
  multiPodPerHost: true
  securityContext:
    runAsUser: 1001
    runAsGroup: 1001
    fsGroup: 1001
...

Deploy the Aerospike Cluster

Use the custom resource YAML file you created to deploy an Aerospike cluster.

kubectl apply -f config/samples/ssd_storage_cluster_cr.yaml

Verify Cluster Status

Use kubectl get statefulset to confirm that the Aerospike Kubernetes Operator has created the StatefulSets for the custom resource.

Output:

$ kubectl get statefulset -n aerospike
NAME            READY   AGE
aerocluster-0   2/2     24s

Use kubectl get pods to check the pod status. This step may take some time as the pods provision resources, initialize, and become ready. Wait for all pods to reach the Running state before you continue.

Output:

$ kubectl get pods -n aerospike
NAME              READY   STATUS    RESTARTS   AGE
aerocluster-0-0   1/1     Running   0          48s
aerocluster-0-1   1/1     Running   0          48s

To verify the results, check the user and group ID that the container runs as:

kubectl exec -it aerocluster-0-0 -c aerospike-server -n aerospike -- id

The user and group IDs are set to the non-root values specified in the CR:

uid=1001 gid=1001 groups=1001
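You can also confirm that the pod-level security context rendered by the Operator matches the CR; the jsonpath query below is just one way to inspect it:

kubectl get pod aerocluster-0-0 -n aerospike -o jsonpath='{.spec.securityContext}{"\n"}'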

Next, check that the block device node is owned by, and therefore accessible to, the configured runAsUser/runAsGroup:

Output:

$ kubectl exec -it aerocluster-0-0 -c aerospike-server -n aerospike -- ls -la /test/dev           # Block device path /test/dev/xvdf
total 8
drwxr-xr-x 2 root root 4096 Sep 29 18:30 .
drwxr-xr-x 3 root root 4096 Sep 29 18:30 ..
brw-rw---- 1 1001 1001 8, 64 Sep 29 18:30 xvdf

If the Aerospike cluster pods do not switch to Running status in a few minutes, refer to the Troubleshooting Guide.
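Before consulting the guide, the standard kubectl inspection commands often reveal the cause, for example:

# Events, scheduling failures, and volume errors for a pod that is stuck
kubectl describe pod aerocluster-0-0 -n aerospike

# Aerospike server logs from the main container
kubectl logs aerocluster-0-0 -c aerospike-server -n aerospike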