AKS enabled by Azure Arc network requirements

Applies to: Azure Stack HCI, version 23H2

This article introduces core networking concepts for your VMs and applications in AKS enabled by Azure Arc. The article also describes the required networking prerequisites for creating Kubernetes clusters. We recommend that you work with a network administrator to provide and set up the networking parameters required to deploy AKS enabled by Arc.

In this conceptual article, the following key components are introduced. Each of these components requires a static IP address so that the AKS Arc cluster and its applications can be created and operate successfully:

  • AKS cluster VMs
  • AKS control plane IP
  • Load balancer for containerized applications

Networking for AKS cluster VMs

Kubernetes nodes are deployed as specialized virtual machines in AKS enabled by Arc. These VMs are allocated IP addresses to enable communication between Kubernetes nodes. AKS Arc uses Azure Stack HCI logical networks to provide IP addresses and networking for the underlying VMs of the Kubernetes clusters. For more information about logical networks, see Logical networks for Azure Stack HCI. You must plan to reserve one IP address per AKS cluster node VM in your Azure Stack HCI environment.

Note

Static IP is the only supported mode for assigning IP addresses to AKS Arc VMs, because Kubernetes requires the IP address assigned to a node to remain constant throughout the lifecycle of the Kubernetes cluster. Software-defined virtual networks and SDN-related features are currently not supported on AKS on Azure Stack HCI 23H2.

The following logical network parameters are required for the AKS Arc cluster create operation:

| Logical network parameter | Description | Required parameter for AKS Arc cluster |
|---|---|---|
| --address-prefixes | Address prefix for the network. Currently only one address prefix is supported. Usage: --address-prefixes "10.220.32.16/24". | Supported |
| --dns-servers | Space-separated list of DNS server IP addresses. Usage: --dns-servers 10.220.32.16 10.220.32.17. | Supported |
| --gateway | Gateway. The gateway IP address must be within the scope of the address prefix. Usage: --gateway 10.220.32.16. | Supported |
| --ip-allocation-method | The IP address allocation method. The only supported value is "Static". Usage: --ip-allocation-method "Static". | Supported |
| --ip-pool-start | The start IP address of your IP pool. The address must be within the range of the address prefix. Usage: --ip-pool-start "10.220.32.18". | Supported |
| --ip-pool-end | The end IP address of your IP pool. The address must be within the range of the address prefix. Usage: --ip-pool-end "10.220.32.38". | Supported |
| --vm-switch-name | The name of the VM switch. Usage: --vm-switch-name "vm-switch-01". | Supported |
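
For reference, the following sketch shows how these parameters might be passed when you create a logical network with the Azure CLI. The az stack-hci-vm network lognet create command name and the --resource-group, --custom-location, and --name parameters are assumptions shown for illustration; only the networking parameters in the preceding table come from this article, and the values match its usage examples.

```azurecli
# Sketch: create a static logical network for AKS Arc node VMs.
# The resource group, custom location, and network name are placeholder values.
az stack-hci-vm network lognet create \
  --resource-group "my-resource-group" \
  --custom-location "my-custom-location" \
  --name "aks-arc-lnet" \
  --vm-switch-name "vm-switch-01" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.16/24" \
  --gateway "10.220.32.16" \
  --dns-servers 10.220.32.16 10.220.32.17 \
  --ip-pool-start "10.220.32.18" \
  --ip-pool-end "10.220.32.38"
```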

Control plane IP

Kubernetes uses a control plane to ensure every component in the Kubernetes cluster is kept in the desired state. The control plane also manages and maintains the worker nodes that hold the containerized applications. AKS enabled by Arc deploys the KubeVIP load balancer to ensure that the API server IP address of the Kubernetes control plane is available at all times. This KubeVIP instance requires a single immutable "control plane IP address" to function correctly.

Note

The control plane IP is a required parameter for creating a Kubernetes cluster. Ensure that the control plane IP address of a Kubernetes cluster doesn't overlap with any other IP address in use, including Arc VM logical networks, infrastructure network IPs, and load balancer IPs. The control plane IP must also be within the scope of the logical network's address prefix, but outside its IP pool. The IP pool is used only for VMs, so choosing a control plane IP from the pool can cause an IP address conflict. Overlapping IP addresses can lead to unexpected failures for both the AKS cluster and anything else that uses the address. Plan to reserve one control plane IP address per Kubernetes cluster in your environment.
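
For reference, the following sketch shows how a control plane IP might be supplied when you create a Kubernetes cluster with the Azure CLI. The az aksarc create command and its --vnet-ids and --control-plane-ip-address parameters are assumptions for illustration and might differ from the current extension syntax. The important point is that the chosen address (10.220.32.50 here) falls inside the logical network's address prefix but outside its IP pool.

```azurecli
# Sketch: create an AKS Arc cluster with an explicit control plane IP.
# 10.220.32.50 is inside the 10.220.32.16/24 address prefix but outside the
# 10.220.32.18-10.220.32.38 IP pool reserved for node VMs, so it can't collide with them.
az aksarc create \
  --resource-group "my-resource-group" \
  --custom-location "my-custom-location" \
  --name "my-aks-arc-cluster" \
  --vnet-ids "aks-arc-lnet" \
  --control-plane-ip-address "10.220.32.50"
```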

Load balancer IPs for containerized applications

The main purpose of a load balancer is to distribute traffic across multiple nodes in a Kubernetes cluster. Load balancing can help prevent downtime and improve the overall performance of applications. AKS supports the following options to deploy a load balancer for your Kubernetes cluster:

  • MetalLB, deployed as an Azure Arc extension.
  • Bring your own third-party load balancer.

Whether you choose the Arc extension for MetalLB or bring your own load balancer, you must provide a set of IP addresses to the load balancer service. You have the following options:

  • Provide IP addresses for your services from the same subnet as the AKS Arc VMs.
  • Use a different network and list of IP addresses if your application needs external load balancing.

Regardless of the option you choose, you must ensure that the IP addresses allocated to the load balancer don't conflict with the IP addresses in the logical network or control plane IPs for your Kubernetes clusters. Conflicting IP addresses can lead to unforeseen failures in your AKS deployment and applications.
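
As an illustration, the following sketch shows one way an IP address range might be handed to the MetalLB Arc extension with the Azure CLI. The az k8s-runtime load-balancer create command and its parameters are assumptions based on the MetalLB Arc extension and might differ from the current syntax; the address range is a placeholder chosen outside the node VM IP pool and away from the control plane IP so that it doesn't conflict with either.

```azurecli
# Sketch: give the MetalLB Arc extension a pool of service IP addresses.
# The command shape is an assumption; check the extension documentation for exact syntax.
# 10.220.32.200-10.220.32.220 is a placeholder range outside the node VM IP pool
# (10.220.32.18-10.220.32.38) and away from the control plane IP.
az k8s-runtime load-balancer create \
  --load-balancer-name "my-service-ip-pool" \
  --resource-uri "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Kubernetes/connectedClusters/my-aks-arc-cluster" \
  --addresses "10.220.32.200-10.220.32.220" \
  --advertise-mode "ARP"
```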

Proxy settings

Proxy settings in AKS are inherited from the underlying infrastructure system. Setting individual proxy settings for Kubernetes clusters, or changing proxy settings after deployment, isn't supported yet. For more information about how to set the proxy correctly, see proxy requirements for Azure Stack HCI.

Firewall URL exceptions

Firewall requirements for AKS are consolidated with the Azure Stack HCI firewall requirements. See Azure Stack HCI firewall requirements for the list of URLs that must be allowed to successfully deploy AKS.

DNS server settings

You need to ensure that the DNS server of the logical network can resolve the FQDN of the Azure Stack HCI cluster. DNS name resolution is required for all Azure Stack HCI nodes to be able to communicate with the AKS VM nodes.
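
One quick way to verify this is to resolve the Azure Stack HCI cluster FQDN against the DNS server configured on the logical network, for example from one of the physical nodes. The cluster FQDN and DNS server address below are placeholders.

```bash
# Placeholders: replace with your Azure Stack HCI cluster FQDN and the DNS server
# configured on the AKS Arc logical network (10.220.32.16 in the earlier examples).
nslookup my-hci-cluster.contoso.com 10.220.32.16
```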

Network port and cross-VLAN requirements

When you deploy Azure Stack HCI, you allocate a contiguous block of at least six static IP addresses on your management network's subnet, omitting addresses already used by the physical servers. These IPs are used by Azure Stack HCI and internal infrastructure (Arc Resource Bridge) for Arc VM management and AKS Arc. If the management network that provides IP addresses to Arc Resource Bridge and related Azure Stack HCI services is on a different VLAN than the logical network you use to create AKS clusters, you must ensure that the following ports are open to successfully create and operate an AKS cluster.

| Destination port | Destination | Source | Description | Cross-VLAN networking notes |
|---|---|---|---|---|
| 22 | Logical network used for AKS Arc VMs | IP addresses in management network | Required to collect logs for troubleshooting. | If you use separate VLANs, IP addresses in the management network used for Azure Stack HCI and Arc Resource Bridge need to access the AKS Arc cluster VMs on this port. |
| 6443 | Logical network used for AKS Arc VMs | IP addresses in management network | Required to communicate with Kubernetes APIs. | If you use separate VLANs, IP addresses in the management network used for Azure Stack HCI and Arc Resource Bridge need to access the AKS Arc cluster VMs on this port. |
| 55000 | IP addresses in management network | Logical network used for AKS Arc VMs | Cloud Agent gRPC server. | If you use separate VLANs, the AKS Arc VMs need to access the IP addresses in the management network used for the cloud agent IP and cluster IP on this port. |
| 65000 | IP addresses in management network | Logical network used for AKS Arc VMs | Cloud Agent gRPC authentication. | If you use separate VLANs, the AKS Arc VMs need to access the IP addresses in the management network used for the cloud agent IP and cluster IP on this port. |
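
When the management network and the AKS Arc logical network are on separate VLANs, you can spot-check ports 22 and 6443 after the cluster VMs come up, for example with a simple TCP test from a machine on the management network. The node VM IP address below is a placeholder. Ports 55000 and 65000 flow in the opposite direction, so they must be tested from an AKS Arc node VM toward the management network.

```bash
# Placeholder: 10.220.32.20 stands in for an AKS Arc node VM IP from the logical network.
# Run from a host on the management network to confirm ports 22 and 6443 are reachable.
nc -vz 10.220.32.20 22
nc -vz 10.220.32.20 6443
```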

Next steps

IP address planning and considerations for Kubernetes clusters and applications