This article explains the setup process, dual-stack networking configuration, and an example workload deployment for Azure CNI Overlay in Azure Kubernetes Service (AKS) clusters. For an overview of Azure CNI Overlay networking, see Overview of Azure CNI Overlay networking in Azure Kubernetes Service (AKS).
Important
As of November 30, 2025, Azure Kubernetes Service (AKS) no longer supports or provides security updates for Azure Linux 2.0. The Azure Linux 2.0 node image is frozen at the 202512.06.0 release. Beginning March 31, 2026, node images will be removed, and you'll be unable to scale your node pools. Migrate to a supported Azure Linux version by upgrading your node pools to a supported Kubernetes version or migrating to osSku AzureLinux3. For more information, see [Retirement] Azure Linux 2.0 node pools on AKS.
Prerequisites
- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
- Azure CLI version 2.48.0 or later. To install or upgrade the Azure CLI, see Install the Azure CLI.
- An existing Azure resource group. If you need to create one, see Create resource groups.
- For dual-stack networking, Kubernetes version 1.26.3 or later.
Key parameters for Azure CNI Overlay AKS clusters
The following table describes the key parameters for configuring Azure CNI Overlay networking in AKS clusters:
| Parameter | Description |
|---|---|
| `--network-plugin` | Set to `azure` to use Azure Container Networking Interface (CNI) networking. |
| `--network-plugin-mode` | Set to `overlay` to enable Azure CNI Overlay networking. This setting applies only when `--network-plugin=azure`. |
| `--pod-cidr` | Specify a custom pod Classless Inter-Domain Routing (CIDR) block for the cluster. The default is `10.244.0.0/16`. |
The default behavior for network plugins depends on whether you explicitly set `--network-plugin`:

- If you don't specify `--network-plugin`, AKS defaults to Azure CNI Overlay.
- If you specify `--network-plugin=azure` and omit `--network-plugin-mode`, AKS intentionally uses virtual network (node subnet) mode for backward compatibility.
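If you're unsure which mode an existing cluster uses, you can query its network profile. This is a quick check, assuming the same `$RESOURCE_GROUP` and `$CLUSTER_NAME` environment variables used elsewhere in this article:

```azurecli
# Show the network plugin, plugin mode, and pod CIDR for an existing cluster.
az aks show \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --query "networkProfile.{plugin:networkPlugin, mode:networkPluginMode, podCidr:podCidr}" \
    --output table
```

A cluster running Azure CNI Overlay reports `azure` for the plugin and `overlay` for the mode.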
Create an Azure CNI Overlay AKS cluster
Create an Azure CNI Overlay AKS cluster by using the az aks create command with --network-plugin=azure and --network-plugin-mode=overlay. If you don't specify a value for --pod-cidr, AKS assigns the default value of 10.244.0.0/16. The following example specifies a custom pod CIDR of 192.168.0.0/16:
```azurecli
az aks create \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP \
    --location $REGION \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --generate-ssh-keys
```
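To confirm that pods receive addresses from the overlay pod CIDR rather than from the virtual network, you can connect to the cluster and inspect pod IPs. A sketch, using the same environment variables as the preceding command:

```azurecli
# Merge the cluster credentials into your local kubeconfig.
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME

# Pod IPs should fall inside the pod CIDR (192.168.0.0/16 in the example above).
kubectl get pods --all-namespaces -o wide
```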
Add a new node pool to a dedicated subnet
Add a node pool to a different subnet within the same virtual network to control virtual machine (VM) node IP addresses for network traffic to virtual network or peered virtual network resources.
Add a new node pool to the cluster by using the az aks nodepool add command and specify the subnet resource ID with the --vnet-subnet-id parameter. For example:
```azurecli
az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name $NODE_POOL_NAME \
    --node-count 1 \
    --mode system \
    --vnet-subnet-id $SUBNET_RESOURCE_ID
```
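If you don't already have the subnet resource ID on hand, you can look it up with `az network vnet subnet show`. A sketch, where `$VNET_NAME` and `$SUBNET_NAME` are hypothetical variables for your virtual network and subnet names:

```azurecli
# Look up the full resource ID of the subnet (hypothetical $VNET_NAME/$SUBNET_NAME).
SUBNET_RESOURCE_ID=$(az network vnet subnet show \
    --resource-group $RESOURCE_GROUP \
    --vnet-name $VNET_NAME \
    --name $SUBNET_NAME \
    --query id \
    --output tsv)
```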
About Azure CNI Overlay AKS clusters with dual-stack networking
You can deploy your Azure CNI Overlay AKS clusters in a dual-stack mode with an Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive an IPv4 and IPv6 address from a different address space than the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
Note
You can also deploy dual-stack networking clusters by using Azure CNI Powered by Cilium. For more information, see Dual-stack networking with Azure CNI Powered by Cilium.
Dual-stack networking limitations
The following features aren't supported with dual-stack networking:
Key parameters for dual-stack networking
The following table describes the key parameters for configuring dual-stack networking in Azure CNI Overlay AKS clusters:
| Parameter | Description |
|---|---|
| `--ip-families` | Takes a comma-separated list of IP families to enable on the cluster. Only `ipv4` and `ipv4,ipv6` are supported. |
| `--pod-cidrs` | Takes a comma-separated list of Classless Inter-Domain Routing (CIDR) notation IP ranges to assign pod IPs from. The count and order of ranges in this list must match the value provided to `--ip-families`. If you don't supply any values, the parameter uses the default value of `10.244.0.0/16,fd12:3456:789a::/64`. |
| `--service-cidrs` | Takes a comma-separated list of CIDR notation IP ranges to assign service IPs from. The count and order of ranges in this list must match the value provided to `--ip-families`. If you don't supply any values, the parameter uses the default value of `10.0.0.0/16,fd12:3456:789a:1::/108`. The IPv6 subnet assigned to `--service-cidrs` can be no larger than /108. |
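To make the relationship between these parameters concrete, the following sketch passes the documented default ranges explicitly. The CIDR values are just the defaults from the table, shown for illustration; note that the order of each list matches the order of `--ip-families`:

```azurecli
# Dual-stack cluster with the pod and service CIDRs spelled out explicitly.
# Each comma-separated list is ordered ipv4,ipv6 to match --ip-families.
az aks create \
    --location $REGION \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --ip-families ipv4,ipv6 \
    --pod-cidrs 10.244.0.0/16,fd12:3456:789a::/64 \
    --service-cidrs 10.0.0.0/16,fd12:3456:789a:1::/108 \
    --generate-ssh-keys
```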
Create an Azure CNI Overlay AKS cluster with dual-stack networking (Linux)
1. Create an Azure resource group for the cluster by using the `az group create` command:

   ```azurecli
   az group create --location $REGION --name $RESOURCE_GROUP
   ```

1. Create a dual-stack AKS cluster by using the `az aks create` command with the `--ip-families` parameter set to `ipv4,ipv6`:

   ```azurecli
   az aks create \
       --location $REGION \
       --resource-group $RESOURCE_GROUP \
       --name $CLUSTER_NAME \
       --network-plugin azure \
       --network-plugin-mode overlay \
       --ip-families ipv4,ipv6 \
       --generate-ssh-keys
   ```
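After the cluster is running, you can confirm that nodes received both address families. A sketch using `kubectl` (this assumes you've already run `az aks get-credentials` for this cluster):

```azurecli
# Print each node's name and its internal IPs.
# Dual-stack nodes show both an IPv4 and an IPv6 InternalIP address.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```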
Create an Azure CNI Overlay AKS cluster with dual-stack networking (Windows)
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
Before you create an Azure CNI Overlay AKS cluster with dual-stack networking with Windows node pools, you need to install the aks-preview Azure CLI extension and register the AzureOverlayDualStackPreview feature flag in your subscription.
Install the aks-preview Azure CLI extension
1. Install the `aks-preview` extension by using the `az extension add` command:

   ```azurecli
   az extension add --name aks-preview
   ```

1. Update to the latest version of the extension by using the `az extension update` command:

   ```azurecli
   az extension update --name aks-preview
   ```
Register the AzureOverlayDualStackPreview feature flag
1. Register the `AzureOverlayDualStackPreview` feature flag by using the `az feature register` command:

   ```azurecli
   az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
   ```

   It takes a few minutes for the status to show *Registered*.

1. Verify the registration status by using the `az feature show` command:

   ```azurecli
   az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayDualStackPreview"
   ```

1. When the status reflects *Registered*, refresh the registration of the `Microsoft.ContainerService` resource provider by using the `az provider register` command:

   ```azurecli
   az provider register --namespace Microsoft.ContainerService
   ```
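Registration is asynchronous, so if you're scripting this setup you may prefer to wait in a loop instead of rechecking manually. A minimal sketch (the 30-second interval is arbitrary):

```azurecli
# Poll until the feature flag reports Registered, then refresh the provider.
while [ "$(az feature show --namespace "Microsoft.ContainerService" \
    --name "AzureOverlayDualStackPreview" \
    --query properties.state --output tsv)" != "Registered" ]; do
    sleep 30
done
az provider register --namespace Microsoft.ContainerService
```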
Create a dual-stack Azure CNI Overlay AKS cluster and add a Windows node pool
1. Create a cluster with Azure CNI Overlay by using the `az aks create` command:

   ```azurecli
   az aks create \
       --name $CLUSTER_NAME \
       --resource-group $RESOURCE_GROUP \
       --location $REGION \
       --network-plugin azure \
       --network-plugin-mode overlay \
       --ip-families ipv4,ipv6 \
       --generate-ssh-keys
   ```

1. Add a Windows node pool to the cluster by using the `az aks nodepool add` command:

   ```azurecli
   az aks nodepool add \
       --resource-group $RESOURCE_GROUP \
       --cluster-name $CLUSTER_NAME \
       --os-type Windows \
       --name $WINDOWS_NODE_POOL_NAME \
       --node-count 2
   ```
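Once the node pool finishes provisioning, you can verify that the Windows nodes joined the cluster. A sketch (this assumes you've already run `az aks get-credentials` for this cluster):

```azurecli
# Windows nodes appear with a "Windows Server" OS-IMAGE once the pool is ready.
kubectl get nodes -o wide
```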
Deploy an example workload to the Azure CNI Overlay AKS cluster
After you deploy a dual-stack Azure CNI Overlay AKS cluster, you can run workloads that are reachable over both IPv4 and IPv6. This example deploys an NGINX web server and exposes it by using LoadBalancer services with both IPv4 and IPv6 addresses.
Note
We recommend using the application routing add-on for ingress in AKS clusters. However, for demonstration purposes, this example deploys an NGINX web server without the application routing add-on. For more information about the add-on, see Managed NGINX ingress with the application routing add-on.
Expose the workload by using a LoadBalancer service
Expose the NGINX deployment by using either kubectl commands or YAML manifests.
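The commands in this section assume an NGINX deployment named `nginx` already exists in the cluster. If it doesn't, a minimal way to create one (the image tag and replica count here are illustrative):

```azurecli
# Create a simple NGINX deployment to expose in the steps below.
kubectl create deployment nginx --image=nginx:latest --replicas=3
```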
Important
There are currently two limitations that pertain to IPv6 services in AKS:

- Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, you can't route this traffic to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fails.
- You must deploy IPv6 services with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
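Given the limitations above, an IPv6 service defined in YAML would set `externalTrafficPolicy: Local` explicitly. A sketch applied via a heredoc; the service name `nginx-ipv6-local` is hypothetical, and the selector assumes a deployment labeled `app: nginx`:

```azurecli
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv6-local
spec:
  type: LoadBalancer
  # Required for IPv6 services so kube-proxy answers the load balancer probe.
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv6
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```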
1. Expose the NGINX deployment by using the `kubectl expose deployment nginx` command:

   ```bash
   kubectl expose deployment nginx --name=nginx-ipv4 --port=80 --type=LoadBalancer
   kubectl expose deployment nginx --name=nginx-ipv6 --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilies": ["IPv6"]}}'
   ```

   Your output should show the exposed services. For example:

   ```output
   service/nginx-ipv4 exposed
   service/nginx-ipv6 exposed
   ```

1. After the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services by using the `kubectl get services` command:

   ```bash
   kubectl get services
   ```

   Your output should show the services with their assigned IP addresses. For example:

   ```output
   NAME         TYPE           CLUSTER-IP               EXTERNAL-IP         PORT(S)        AGE
   nginx-ipv4   LoadBalancer   10.0.88.78               20.46.24.24         80:30652/TCP   97s
   nginx-ipv6   LoadBalancer   fd12:3456:789a:1::981a   2603:1030:8:5::2d   80:32002/TCP   63s
   ```

1. Get the IPv6 service IP by using the `kubectl get services` command and set it to an environment variable:

   ```bash
   SERVICE_IP=$(kubectl get services nginx-ipv6 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
   ```

1. Verify functionality by using a `curl` request from an IPv6-capable host. (Azure Cloud Shell isn't IPv6 capable.)

   ```bash
   curl -s "http://[${SERVICE_IP}]" | head -n5
   ```

   Your output should show the HTML for the NGINX welcome page. For example:

   ```html
   <!DOCTYPE html>
   <html>
   <head>
   <title>Welcome to nginx!</title>
   <style>
   ```
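You can verify the IPv4 service the same way, from any host. A sketch; the variable name `SERVICE_IP_V4` is just illustrative (note that IPv4 addresses aren't wrapped in brackets in the URL):

```azurecli
# Fetch the IPv4 external IP and request the welcome page over IPv4.
SERVICE_IP_V4=$(kubectl get services nginx-ipv4 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${SERVICE_IP_V4}" | head -n5
```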
Related content
To learn more about Azure CNI Overlay networking on AKS, see the following articles: