In this article, we create the infrastructure resources required to run a Valkey cluster on Azure Kubernetes Service (AKS).
Prerequisites
- If you haven't already, review the Overview for deploying a Valkey cluster on Azure Kubernetes Service (AKS).
- An Azure subscription. If you don't have one, create a free account.
- Azure CLI version 2.61.0 or later. To install or upgrade, see Install Azure CLI.
- Helm version 3 or later. To install, see Installing Helm.
- kubectl, which the Azure Cloud Shell installs by default.
- Docker installed on your local machine. To install, see Get Docker.
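Optionally, you can confirm the required tools are available and check their versions before you continue. This is a quick sanity check, not a required step:

az version
kubectl version --client
helm version
docker --version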
Set environment variables
Set the required environment variables for use throughout this guide:
random=$(echo $RANDOM | tr '[0-9]' '[a-z]')
export MY_RESOURCE_GROUP_NAME=myResourceGroup-rg
export MY_LOCATION=eastus
export MY_ACR_REGISTRY=mydnsrandomname$(echo $random)
export MY_KEYVAULT_NAME=vault-$(echo $random)-kv
export MY_CLUSTER_NAME=cluster-aks
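If you want to confirm the values before creating any resources, you can optionally print the variables set above:

echo "Resource group: $MY_RESOURCE_GROUP_NAME"
echo "Location:       $MY_LOCATION"
echo "ACR name:       $MY_ACR_REGISTRY"
echo "Key vault name: $MY_KEYVAULT_NAME"
echo "Cluster name:   $MY_CLUSTER_NAME"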
Create a resource group
Create a resource group using the az group create command.

az group create --name $MY_RESOURCE_GROUP_NAME --location $MY_LOCATION --output table
Example output:
Location    Name
----------  ------------------
eastus      myResourceGroup-rg
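If you want to double-check that the resource group exists before moving on, you can optionally query it with the az group show command:

az group show --name $MY_RESOURCE_GROUP_NAME --output table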
Create an Azure Key Vault instance
Create an Azure Key Vault instance using the az keyvault create command.

az keyvault create --name $MY_KEYVAULT_NAME --resource-group $MY_RESOURCE_GROUP_NAME --location $MY_LOCATION --enable-rbac-authorization false --output table
Example output:
Location    Name            ResourceGroup
----------  --------------  ------------------
eastus      vault-bbbhe-kv  myResourceGroup-rg
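If you want to verify that the vault was created and is reachable, you can optionally query its URI with the az keyvault show command:

az keyvault show --name $MY_KEYVAULT_NAME --resource-group $MY_RESOURCE_GROUP_NAME --query properties.vaultUri --output tsv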
Create an Azure Container Registry
Create an Azure Container Registry to store and manage your container images using the az acr create command.

az acr create \
    --name ${MY_ACR_REGISTRY} \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --sku Premium \
    --location $MY_LOCATION \
    --admin-enabled true \
    --output table

export MY_ACR_REGISTRY_ID=$(az acr show --name $MY_ACR_REGISTRY --resource-group $MY_RESOURCE_GROUP_NAME --query id --output tsv)
Example output:
NAME                  RESOURCE GROUP      LOCATION    SKU      LOGIN SERVER                     CREATION DATE         ADMIN ENABLED
--------------------  ------------------  ----------  -------  -------------------------------  --------------------  ---------------
mydnsrandomnamebbbhe  myResourceGroup-rg  eastus      Premium  mydnsrandomnamebbbhe.azurecr.io  2024-06-11T09:36:43Z  True
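You can optionally confirm the registry's login server name, which later image pulls rely on:

az acr show --name $MY_ACR_REGISTRY --resource-group $MY_RESOURCE_GROUP_NAME --query loginServer --output tsv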
Create an AKS cluster
In this step, we create an AKS cluster with the Azure Key Vault Secrets Provider add-on enabled, which allows the cluster to read secrets stored in Azure Key Vault. We also enable workload identity, which lets workloads running on the cluster access other Azure resources securely.
Create an AKS cluster using the az aks create command.

az aks create \
    --location $MY_LOCATION \
    --name $MY_CLUSTER_NAME \
    --tier standard \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --network-plugin azure \
    --node-vm-size Standard_DS4_v2 \
    --node-count 3 \
    --auto-upgrade-channel stable \
    --node-os-upgrade-channel NodeImage \
    --attach-acr ${MY_ACR_REGISTRY} \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --enable-addons azure-keyvault-secrets-provider \
    --zones 1 2 3 \
    --generate-ssh-keys \
    --output table
Example output (truncated for readability):

Name         Location    ResourceGroup       KubernetesVersion    CurrentKubernetesVersion    ProvisioningState    NodeResourceGroup                         SupportPlan
-----------  ----------  ------------------  -------------------  --------------------------  -------------------  ----------------------------------------  ------------------
cluster-aks  eastus      myResourceGroup-rg  1.28                 1.28.9                      Succeeded            MC_myResourceGroup-rg_cluster-aks_eastus  KubernetesOfficial
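If you want to confirm the cluster is reachable before continuing, you can optionally merge its credentials into your kubeconfig and list the nodes:

az aks get-credentials --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --overwrite-existing
kubectl get nodes -o wide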
Get the client ID and the object ID of the user-assigned identity created by the Azure Key Vault Secrets Provider add-on using the az aks show command.

export userAssignedIdentityID=$(az aks show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId --output tsv)
export userAssignedObjectID=$(az aks show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --query addonProfiles.azureKeyvaultSecretsProvider.identity.objectId --output tsv)
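As a quick sanity check, you can optionally print both values; empty output usually means the add-on isn't fully provisioned yet:

echo "Secrets provider client ID: $userAssignedIdentityID"
echo "Secrets provider object ID: $userAssignedObjectID"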
Assign the AcrPull role to the kubelet identity using the az role assignment create command.

export KUBELET_IDENTITY=$(az aks show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --output tsv --query identityProfile.kubeletidentity.objectId)

az role assignment create \
    --assignee ${KUBELET_IDENTITY} \
    --role "AcrPull" \
    --scope ${MY_ACR_REGISTRY_ID} \
    --output table
Example output (truncated for readability):

PrincipalId                           PrincipalType     ResourceGroup       RoleDefinitionName    Scope
------------------------------------  ----------------  ------------------  --------------------  -----------------------------------------------------------------------------------------------------------------------------------------------------
6a9a8328-7257-4db2-8c4f-169687f36556  ServicePrincipal  myResourceGroup-rg  AcrPull               /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup-rg/providers/Microsoft.ContainerRegistry/registries/mydnsrandomnamebbbhe
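To double-check that the kubelet identity can pull from the registry, you can optionally list the role assignments on the registry scope:

az role assignment list --assignee ${KUBELET_IDENTITY} --scope ${MY_ACR_REGISTRY_ID} --output table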
Create a node pool for the Valkey workload
In this section, we create a node pool dedicated to running the Valkey workload. This node pool has autoscaling disabled and is created with six nodes spread across availability zones, because we want each primary to have its secondary in a different zone.
Create a new node pool using the az aks nodepool add command.

while [ "$(az aks show --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --output tsv --query provisioningState)" != "Succeeded" ]; do echo "waiting for cluster to be ready"; sleep 10; done

az aks nodepool add \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --cluster-name $MY_CLUSTER_NAME \
    --name valkey \
    --node-vm-size Standard_D4s_v3 \
    --node-count 6 \
    --zones 1 2 3 \
    --output table
Example output (truncated for readability):

Count    Name    VmSize           OsType    OsSku     Mode    OrchestratorVersion    ProvisioningState    ResourceGroup
-------  ------  ---------------  --------  --------  ------  ---------------------  -------------------  ------------------
6        valkey  Standard_D4s_v3  Linux     Ubuntu    User    1.28                   Succeeded            myResourceGroup-rg
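Assuming you already merged the cluster credentials into your kubeconfig (for example with az aks get-credentials, as sketched earlier), you can optionally confirm that the six Valkey nodes joined the cluster; AKS labels each node with its agent pool name:

kubectl get nodes -l agentpool=valkey -o wide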
Upload Valkey images to your Azure Container Registry
In this section, we download the Valkey image from Docker Hub and upload it to Azure Container Registry. This step ensures that the image is available in your private registry and can be used in your AKS cluster. We don't recommend consuming the public image in a production environment.
Import the Valkey image from Docker Hub into your Azure Container Registry using the az acr import command.

az acr import \
    --name $MY_ACR_REGISTRY \
    --source docker.io/valkey/valkey:latest \
    --image valkey:latest \
    --output table
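To verify the import succeeded, you can optionally list the tags in the valkey repository of your registry:

az acr repository show-tags --name $MY_ACR_REGISTRY --repository valkey --output table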
Deploy the infrastructure with Terraform
To deploy the infrastructure using Terraform, we use the Azure Verified Module for AKS.
Note
If you're planning to run this deployment in production, we recommend looking at the AKS production pattern module for Azure Verified Modules. This module comes with built-in best practice recommendations.
Clone the Git repository that contains the Terraform module.
# The example configuration is assumed to live on the stateful-workloads branch of the module repository.
git clone -b stateful-workloads https://github.com/Azure/terraform-azurerm-avm-res-containerservice-managedcluster.git
cd terraform-azurerm-avm-res-containerservice-managedcluster/examples/stateful-workloads-valkey
Set the Valkey variables by creating a valkey.tfvars file with the following contents. You can also provide your own variable values at this step:

acr_task_content = <<-EOF
  version: v1.1.0
  steps:
    - cmd: bash echo Waiting 10 seconds for the propagation of the Container Registry Data Importer and Data Reader role
    - cmd: bash sleep 10
    - cmd: az login --identity
    - cmd: az acr import --name $RegistryName --source docker.io/valkey/valkey:latest --image valkey:latest
EOF

valkey_enabled = true

node_pools = {
  valkey = {
    name       = "valkey"
    vm_size    = "Standard_DS4_v2"
    node_count = 3
    zones      = [1, 2, 3]
    os_type    = "Linux"
  }
}
To deploy the infrastructure, run the following Terraform commands. In this step, we also set the required variables that will be used when deploying Valkey in the next step.

terraform init

export MY_RESOURCE_GROUP_NAME=myResourceGroup-rg
export MY_LOCATION=centralus
SECRET=$(openssl rand -base64 32)
export TF_VAR_valkey_password=${SECRET}
export TF_VAR_location=${MY_LOCATION}
export TF_VAR_resource_group_name=${MY_RESOURCE_GROUP_NAME}

terraform apply -var-file="valkey.tfvars"
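If you prefer to review the proposed changes before applying them, you can optionally run a plan with the same variable file first:

terraform plan -var-file="valkey.tfvars"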
Note
In some cases, the container registry tasks that import the Valkey images might fail. For more information, see container-registry-task. In most cases, retrying resolves the problem.
Run the following commands to export the Terraform output values as environment variables so you can use them in the next steps:

export MY_ACR_REGISTRY=$(terraform output -raw acr_registry_name)
export MY_CLUSTER_NAME=$(terraform output -raw aks_cluster_name)
Next steps
Contributors
Microsoft maintains this article. The following contributors originally wrote it:
- Nelly Kiboi | Service Engineer
- Saverio Proto | Principal Customer Experience Engineer
- Don High | Principal Customer Engineer
- LaBrina Loving | Principal Service Engineer
- Ken Kilty | Principal TPM
- Russell de Pina | Principal TPM
- Colin Mixon | Product Manager
- Ketan Chawda | Senior Customer Engineer
- Naveed Kharadi | Customer Experience Engineer
- Erin Schaffer | Content Developer 2