Using the Azure portal, you can enable all of the features at the same time. You can also enable them individually by using the Azure CLI, Azure Resource Manager template, Terraform, or Azure Policy. Each of these methods is described in this article.
Important
Kubernetes clusters generate a lot of log data, which can result in significant costs if you aren't selective about the logs that you collect. Before you enable monitoring for your cluster, see the following articles to ensure that your environment is optimized for cost and that you limit your log collection to only the data that you require:
Cost optimization in Azure Monitor - Best practices for configuring all features of Azure Monitor to optimize your costs and limit the amount of data that you collect.
Supported clusters
This article provides onboarding guidance for the following types of clusters. Any differences in the process for each type are noted in the relevant sections.
If you previously installed monitoring for AKS, ensure that you have disabled monitoring before proceeding to avoid issues during the extension install.
If you previously installed monitoring on a cluster using a script without cluster extensions, follow the instructions at Disable monitoring of your Kubernetes cluster to delete this Helm chart.
Note
The Managed Prometheus Arc-Enabled Kubernetes extension does not support the following configurations:
Red Hat Openshift distributions, including Azure Red Hat OpenShift (ARO)
Windows nodes
Workspaces
The following table describes the workspaces that are required to support Managed Prometheus and Container insights. You can create each workspace as part of the onboarding process or use an existing workspace. See Design a Log Analytics workspace architecture for guidance on how many workspaces to create and where they should be placed.
Contributor permission is sufficient for enabling the add-on to send data to the Azure Monitor workspace. You need Owner-level permission to link your Azure Monitor workspace so that you can view metrics in Azure Managed Grafana. This is required because the user performing the onboarding step must be able to grant the Azure Managed Grafana system identity the Monitoring Reader role on the Azure Monitor workspace so that it can query the metrics.
You can attach a cluster to a Log Analytics workspace in a different Azure subscription in the same Microsoft Entra tenant, but you must use the Azure CLI or an Azure Resource Manager template. You can't currently perform this configuration with the Azure portal.
If you're connecting an existing cluster to a Log Analytics workspace in another subscription, the Microsoft.ContainerService resource provider must be registered in the subscription with the Log Analytics workspace. For more information, see Register resource provider.
Use one of the following methods to enable scraping of Prometheus metrics from your cluster and enable Managed Grafana to visualize the metrics. See Link a Grafana workspace for options to connect your Azure Monitor workspace and Azure Managed Grafana workspace.
Note
If the deployment is done using an ARM, Bicep, or Terraform template or Azure Policy, and you aren't using the provided examples, ensure that the data collection endpoint (DCE), data collection rule (DCR), and data collection rule association (DCRA) are named MSProm-<Location of Azure Monitor Workspace>-<Name of cluster resource>. If they're named differently, the onboarding process won't complete successfully.
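The naming convention can be sketched as follows (illustrative shell; the location and cluster name are hypothetical example values):

```shell
# Hypothetical example values
amw_location="eastus"            # location of the Azure Monitor workspace
cluster_name="my-aks-cluster"    # name of the cluster resource

# The DCE, DCR, and DCRA created by a custom template must all use this name
resource_name="MSProm-${amw_location}-${cluster_name}"
echo "$resource_name"
```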
Note
If you have a single Azure Monitor Resource that is private-linked, then Prometheus enablement won't work if the AKS cluster and Azure Monitor Workspace are in different regions.
The configuration needed for the Prometheus add-on isn't available cross region because of the private link constraint.
To resolve this, create a new DCE in the cluster location and a new DCRA (association) in the same cluster region. Associate the new DCE with the cluster and name the new association (DCRA) as configurationAccessEndpoint.
For full instructions on how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion, see Enable private link for Kubernetes monitoring in Azure Monitor.
If you don't specify an existing Azure Monitor workspace in the following commands, the default workspace for the resource group will be used. If a default workspace doesn't already exist in the cluster's region, one with a name in the format DefaultAzureMonitorWorkspace-<mapped_region> will be created in a resource group with the name DefaultRG-<cluster_region>.
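As a sketch of how the default names are formed (illustrative shell with hypothetical values; the mapped region may differ from the cluster region for some regions):

```shell
# Hypothetical example values
cluster_region="westeurope"
mapped_region="westeurope"   # region the default Azure Monitor workspace maps to

echo "DefaultAzureMonitorWorkspace-${mapped_region}"   # default workspace name
echo "DefaultRG-${cluster_region}"                     # default resource group name
```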
Prerequisites
Azure CLI version 2.49.0 or higher is required.
The aks-preview extension must be uninstalled from AKS clusters by using the command az extension remove --name aks-preview.
The k8s-extension extension must be installed using the command az extension add --name k8s-extension.
The k8s-extension version 1.4.1 or higher is required.
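A quick way to confirm an installed version meets a minimum requirement is a `sort -V` comparison (illustrative shell; `current` is a hypothetical value that you would read from `az version` or `az extension list`):

```shell
# Minimum Azure CLI version required for this scenario
required="2.49.0"
current="2.53.0"   # hypothetical installed version

# The older of the two versions sorts first with version sort
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "Azure CLI version OK"
else
  echo "Azure CLI ${current} is older than required ${required}" >&2
fi
```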
Optional parameters
Each of the commands for AKS and Arc-enabled Kubernetes accepts the following optional parameters. The parameter names differ between the two, but their use is the same.
Comma-separated list of Kubernetes annotations keys used in the resource's kube_resource_annotations metric. For example, kube_pod_annotations is the annotations metric for the pods resource. By default, this metric contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and Kubernetes annotation keys that you want to allow for them. A single * can be provided for each resource to allow any annotations, but this has severe performance implications. For example, pods=[kubernetes.io/team,...],namespaces=[kubernetes.io/team],....
Comma-separated list of additional Kubernetes label keys used in the resource's kube_resource_labels metric. For example, kube_pod_labels is the labels metric for the pods resource. By default, this metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and the Kubernetes label keys that you want to allow for them. A single * can be provided for each resource to allow any labels, but this has severe performance implications. For example, pods=[app],namespaces=[k8s-label-1,k8s-label-n,...],....
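The allow-list format for both parameters can be sketched as follows (illustrative shell; the label and annotation keys are hypothetical):

```shell
# Entries use the form <plural-resource-name>=[<key-1>,<key-2>,...],
# with multiple resource entries separated by commas.
labels_allowlist='pods=[app],namespaces=[k8s-label-1,k8s-label-n]'
annotations_allowlist='pods=[kubernetes.io/team],namespaces=[kubernetes.io/team]'

# The values are passed verbatim, quoted, to the CLI, for example:
#   --ksm-metric-labels-allow-list "$labels_allowlist"
#   --ksm-metric-annotations-allow-list "$annotations_allowlist"
echo "$labels_allowlist"
echo "$annotations_allowlist"
```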
Recording rules
AKS: --enable-windows-recording-rules
Lets you enable the recording rule groups required for proper functioning of the Windows dashboards.
AKS cluster
Use the --enable-azure-monitor-metrics option with az aks create or az aks update (depending on whether you're creating a new cluster or updating an existing cluster) to install the metrics add-on that scrapes Prometheus metrics.
### Use default Azure Monitor workspace
az aks create/update --enable-azure-monitor-metrics --name <cluster-name> --resource-group <cluster-resource-group>
### Use existing Azure Monitor workspace
az aks create/update --enable-azure-monitor-metrics --name <cluster-name> --resource-group <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
### Use an existing Azure Monitor workspace and link with an existing Grafana workspace
az aks create/update --enable-azure-monitor-metrics --name <cluster-name> --resource-group <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
### Use optional parameters
az aks create/update --enable-azure-monitor-metrics --name <cluster-name> --resource-group <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
Arc-enabled cluster
### Use default Azure Monitor workspace
az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics
### Use existing Azure Monitor workspace
az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id>
### Use an existing Azure Monitor workspace and link with an existing Grafana workspace
az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> grafana-resource-id=<grafana-workspace-name-resource-id>
### Use optional parameters
az k8s-extension create --name azuremonitor-metrics --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers.Metrics --configuration-settings azure-monitor-workspace-resource-id=<workspace-name-resource-id> grafana-resource-id=<grafana-workspace-name-resource-id> AzureMonitorMetrics.KubeStateMetrics.MetricAnnotationsAllowList="pods=[k8s-annotation-1,k8s-annotation-n]" AzureMonitorMetrics.KubeStateMetrics.MetricLabelsAllowlist="namespaces=[k8s-label-1,k8s-label-n]"
The following additional optional parameters are available for Azure Arc-enabled clusters:
| Parameter | Description | Default | Upstream Arc cluster setting |
|---|---|---|---|
| ClusterDistribution | The distribution of the cluster. | Azure.Cluster.Distribution | Yes |
| CloudEnvironment | The cloud environment for the cluster. | Azure.Cluster.Cloud | Yes |
| MountCATrustAnchorsDirectory | Whether to mount the CA trust anchors directory. | true | No |
| MountUbuntuCACertDirectory | Whether to mount the Ubuntu CA certificate directory. | true, unless an aks_edge distro | No |
Enable with ARM templates
Prerequisites
The Azure Monitor workspace and Azure Managed Grafana instance must already be created.
The template must be deployed in the same resource group as the Azure Managed Grafana instance.
If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace subscription, register the Azure Monitor workspace subscription with the Microsoft.Dashboard resource provider using the guidance at Register resource provider.
Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template.
Note
Currently in Bicep, there's no way to explicitly scope the Monitoring Reader role assignment on a string parameter "resource ID" for an Azure Monitor workspace like in an ARM template. Bicep expects a value of type resource | tenant. There is also no REST API spec for an Azure Monitor workspace.
Therefore, the default scoping for the Monitoring Reader role is on the resource group. The role is applied on the same Azure Monitor workspace by inheritance, which is the expected behavior. After you deploy this Bicep template, the Grafana instance is given Monitoring Reader permissions for all the Azure Monitor workspaces in that resource group.
Retrieve required values for Grafana resource
If the Azure Managed Grafana instance is already linked to an Azure Monitor workspace, you must include this list in the template. On the Overview page for the Azure Managed Grafana instance in the Azure portal, select JSON view, and copy the value of azureMonitorWorkspaceIntegrations, which will look similar to the sample below. If the field doesn't exist, the instance hasn't been linked with any Azure Monitor workspace.
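The copied value looks similar to the following sketch (illustrative sample; the placeholder segments stand in for your own subscription, resource group, and workspace names):

```json
"azureMonitorWorkspaceIntegrations": [
    {
        "azureMonitorWorkspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>"
    }
]
```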
Edit the following values in the parameter file. The same set of values are used for both the ARM and Bicep templates. Retrieve the resource ID of the resources from the JSON View of their Overview page.
| Parameter | Value |
|---|---|
| azureMonitorWorkspaceResourceId | Resource ID for the Azure Monitor workspace. Retrieve from the JSON view on the Overview page for the Azure Monitor workspace. |
| azureMonitorWorkspaceLocation | Location of the Azure Monitor workspace. Retrieve from the JSON view on the Overview page for the Azure Monitor workspace. |
| clusterResourceId | Resource ID for the AKS cluster. Retrieve from the JSON view on the Overview page for the cluster. |
| clusterLocation | Location of the AKS cluster. Retrieve from the JSON view on the Overview page for the cluster. |
| metricLabelsAllowlist | Comma-separated list of Kubernetes label keys to be used in the resource's labels metric. |
| metricAnnotationsAllowList | Comma-separated list of additional Kubernetes annotation keys to be used in the resource's annotations metric. |
| grafanaResourceId | Resource ID for the managed Grafana instance. Retrieve from the JSON view on the Overview page for the Grafana instance. |
| grafanaLocation | Location for the managed Grafana instance. Retrieve from the JSON view on the Overview page for the Grafana instance. |
| grafanaSku | SKU for the managed Grafana instance. Retrieve from the JSON view on the Overview page for the Grafana instance. Use the sku.name. |
Open the template file and update the grafanaIntegrations property at the end of the file with the values that you retrieved from the Grafana instance. This will look similar to the following samples. In these samples, full_resource_id_1 and full_resource_id_2 were already in the Azure Managed Grafana resource JSON. The final azureMonitorWorkspaceResourceId entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
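The updated property looks similar to the following sketch (ARM-template form; full_resource_id_1 and full_resource_id_2 stand for integrations already present on the Grafana resource, and the final entry links the workspace from the parameters file):

```json
"grafanaIntegrations": {
    "azureMonitorWorkspaceIntegrations": [
        {
            "azureMonitorWorkspaceResourceId": "full_resource_id_1"
        },
        {
            "azureMonitorWorkspaceResourceId": "full_resource_id_2"
        },
        {
            "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
        }
    ]
}
```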
Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see Deploy the sample templates.
Enable with Terraform
Prerequisites
The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template.
If the Azure Managed Grafana instance is in a subscription other than the Azure Monitor workspace's subscription, register the Azure Monitor workspace subscription with the Microsoft.Dashboard resource provider by following this documentation.
Retrieve required values for a Grafana resource
On the Overview page for the Azure Managed Grafana instance in the Azure portal, select JSON view.
If you're using an existing Azure Managed Grafana instance that's already linked to an Azure Monitor workspace, you need the list of Grafana integrations. Copy the value of the azureMonitorWorkspaceIntegrations field. If it doesn't exist, the instance hasn't been linked with any Azure Monitor workspace. Update the azure_monitor_workspace_integrations block in main.tf with the list of grafana integrations.
Edit the variables in variables.tf file with the correct parameter values.
Run terraform init -upgrade to initialize the Terraform deployment.
Run terraform plan -out main.tfplan to create a Terraform execution plan.
Run terraform apply main.tfplan to apply the execution plan to your cloud infrastructure.
Note: Pass the variables for annotations_allowed and labels_allowed keys in main.tf only when those values exist. These are optional blocks.
Note
Edit the main.tf file appropriately before running the Terraform template. Add any existing azure_monitor_workspace_integrations values to the Grafana resource before running the template; otherwise, the older values are deleted and replaced with what's in the template during deployment. Users with the User Access Administrator role in the subscription of the AKS cluster can enable the Monitoring Reader role directly by deploying the template. Edit the grafanaSku parameter if you're using a nonstandard SKU, and run this template in the Grafana resource's resource group.
Enable with Azure Policy
Download Azure Policy template and parameter files.
After you create the policy definition, in the Azure portal, select Policy and then Definitions. Select the policy definition you created.
Select Assign and fill in the details on the Parameters tab. Select Review + Create.
If you want to apply the policy to an existing cluster, create a Remediation task for that cluster resource from Policy Assignment.
After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy will run and deploy to enable Prometheus monitoring.
Enable Container insights
Use one of the following methods to enable Container insights on your cluster. Once this is complete, see Configure agent data collection for Container insights to customize your configuration to ensure that you aren't collecting more data than you require.
Note
If you have a single Azure Monitor Resource that is private-linked, then Container insights enablement will not work through the Azure portal.
For full instructions on how to configure Container insights with Private Link, see Enable private link for Kubernetes monitoring in Azure Monitor.
Use one of the following commands to enable monitoring of your AKS and Arc-enabled clusters. If you don't specify an existing Log Analytics workspace, the default workspace for the resource group will be used. If a default workspace doesn't already exist in the cluster's region, one will be created with a name in the format DefaultWorkspace-<GUID>-<Region>.
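The default Log Analytics workspace name can be sketched as follows (illustrative shell; the GUID is typically your subscription ID and the region code is a short code for the cluster's region, both hypothetical here):

```shell
# Hypothetical example values
subscription_guid="aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"
region_code="EUS"   # short code for the cluster's region, for example EUS for East US

echo "DefaultWorkspace-${subscription_guid}-${region_code}"
```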
Prerequisites
Azure CLI version 2.43.0 or higher
Managed identity authentication is the default in CLI version 2.49.0 or higher.
Azure k8s-extension version 1.3.7 or higher
Managed identity authentication is the default in k8s-extension version 1.43.0 or higher.
Managed identity authentication is not supported for Arc-enabled Kubernetes clusters with ARO (Azure Red Hat Openshift) or Windows nodes. Use legacy authentication.
For CLI version 2.54.0 or higher, the logging schema will be configured to ContainerLogV2 using ConfigMap.
Note
You can enable the ContainerLogV2 schema for a cluster either using the cluster's Data Collection Rule (DCR) or ConfigMap. If both settings are enabled, the ConfigMap will take precedence. Stdout and stderr logs will only be ingested to the ContainerLog table when both the DCR and ConfigMap are explicitly set to off.
AKS cluster
### Use default Log Analytics workspace
az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name>
### Use existing Log Analytics workspace
az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id>
### Use legacy authentication
az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --enable-msi-auth-for-monitoring false
Example
az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
Arc-enabled cluster
### Use default Log Analytics workspace
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
### Use existing Log Analytics workspace
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
### Use managed identity authentication (default as of k8s-extension version 1.43.0)
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
### Use advanced configuration settings
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.resources.daemonset.limits.cpu=150m amalogs.resources.daemonset.limits.memory=600Mi amalogs.resources.deployment.limits.cpu=1 amalogs.resources.deployment.limits.memory=750Mi
### With custom mount path for container stdout & stderr logs
### Custom mount path not required for Azure Stack Edge version > 2318. Custom mount path must be /home/data/docker for Azure Stack Edge cluster with version <= 2318
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=<customMountPath>
If the cluster is configured with a forward proxy, the proxy settings are automatically applied to the extension. For a cluster with Azure Monitor Private Link Scope (AMPLS) plus a proxy, the proxy configuration should be ignored. In that case, onboard the extension with the configuration setting amalogs.ignoreExtensionProxySettings=true.
Arc-enabled cluster with ARO or OpenShift or Windows nodes
Managed identity authentication isn't supported for Arc-enabled Kubernetes clusters with ARO (Azure Red Hat OpenShift), OpenShift, or Windows nodes. Use legacy authentication by specifying amalogs.useAADAuth=false as in the following example.
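A sketch of that command, following the same pattern as the earlier Arc-enabled cluster examples with the legacy-authentication setting added:

```shell
### Use legacy authentication
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=false
```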
The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data in the Log Analytics resource is left intact.
Edit the following values in the parameter file. The same set of values are used for both the ARM and Bicep templates. Retrieve the resource ID of the resources from the JSON View of their Overview page.
| Parameter | Description |
|---|---|
| AKS: aksResourceId<br>Arc: clusterResourceId | Resource ID of the cluster. |
| AKS: aksResourceLocation<br>Arc: clusterRegion | Location of the cluster. |
| AKS: workspaceResourceId<br>Arc: workspaceResourceId | Resource ID of the Log Analytics workspace. |
| Arc: workspaceRegion | Region of the Log Analytics workspace. |
| Arc: workspaceDomain | Domain of the Log Analytics workspace: opinsights.azure.com for Azure public cloud; opinsights.azure.us for Azure US Government. |
| AKS: resourceTagValues | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster. The DCR name will be MSCI-<clusterName>-<clusterRegion>, and the resource is created in the AKS cluster's resource group. For first-time onboarding, you can set arbitrary tag values. |
Deploy the template with the parameter file by using any valid method for deploying Resource Manager templates. For examples of different methods, see Deploy the sample templates.
New AKS cluster
Download Terraform template file depending on whether you want to enable Syslog collection.
Adjust the azurerm_kubernetes_cluster resource in main.tf based on your cluster settings.
Update parameters in variables.tf to replace values in "<>"
| Parameter | Description |
|---|---|
| aks_resource_group_name | Use the values on the AKS Overview page for the resource group. |
| resource_group_location | Use the values on the AKS Overview page for the resource group. |
| cluster_name | Define the cluster name that you would like to create. |
| workspace_resource_id | Use the resource ID of your Log Analytics workspace. |
| workspace_region | Use the location of your Log Analytics workspace. |
| resource_tag_values | Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster. The DCR name will match MSCI-<clusterName>-<clusterRegion>, and the resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values. |
| enabledContainerLogV2 | Set to true to use the default recommended ContainerLogV2 schema. |
Copy the DCR and DCRA resources from the Terraform templates
Run terraform plan -out main.tfplan and make sure the change adds the oms_agent property. Note: If the azurerm_kubernetes_cluster resource defined in the template differs from the existing cluster, the existing cluster will be destroyed and recreated.
Run terraform apply main.tfplan to apply the execution plan to your cloud infrastructure.
Tip
Edit the main.tf file appropriately before running the Terraform template.
Data starts flowing after about 10 minutes because the cluster needs to be ready first.
WorkspaceID needs to match the format /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-resource-group/providers/Microsoft.OperationalInsights/workspaces/workspaceValue
If the resource group already exists, run terraform import azurerm_resource_group.rg /subscriptions/<Subscription_ID>/resourceGroups/<Resource_Group_Name> before terraform plan.
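A quick format check for the workspace ID can be sketched as follows (illustrative shell; the sample ID is the placeholder value from the tip above):

```shell
workspace_id="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-resource-group/providers/Microsoft.OperationalInsights/workspaces/workspaceValue"

# Validate the expected ARM resource ID shape for a Log Analytics workspace
if echo "$workspace_id" | grep -Eq '^/subscriptions/[^/]+/resourceGroups/[^/]+/providers/Microsoft.OperationalInsights/workspaces/[^/]+$'; then
  echo "workspace ID format OK"
else
  echo "workspace ID format is invalid" >&2
fi
```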
Azure portal
From the Definitions tab of the Policy menu in the Azure portal, create a policy definition with the following details.
Definition location: Azure subscription where the policy definition should be stored.
Name: AKS-Monitoring-Addon
Description: Azure custom policy to enable the Monitoring Add-on onto Azure Kubernetes clusters.
Category: Select Use existing and then Kubernetes from the dropdown list.
After the policy is assigned to the subscription, whenever you create a new cluster without Prometheus enabled, the policy will run and deploy to enable Prometheus monitoring.
Enable full monitoring with Azure portal
New AKS cluster (Prometheus, Container insights, and Grafana)
When you create a new AKS cluster in the Azure portal, Enable Container Logs, Enable Prometheus metrics, Enable Grafana, and Enable Recommended Alerts checkboxes are checked by default in the Monitoring tab.
Existing cluster (Prometheus, Container insights, and Grafana)
Navigate to your cluster in the Azure portal.
In the service menu, select Monitor > Monitor Settings.
Prometheus metrics, Grafana, and Container Logs and events are selected for you. If you have an existing Azure Monitor workspace, Grafana workspace, and Log Analytics workspace, they're selected for you.
Select Advanced settings if you want to select alternate workspaces or create new ones. The Logging profiles and Classic profiles setting allows you to modify the default collection details to reduce your monitoring costs. See Enable cost optimization settings in Container insights for details.
Select Configure.
Enable Windows metrics collection (preview)
Note
There is no CPU or memory limit in windows-exporter-daemonset.yaml, so it may over-provision the Windows nodes.
For more details, see Resource reservation.
As you deploy workloads, set resource memory and CPU limits on containers. This also subtracts from NodeAllocatable and helps the cluster-wide scheduler in determining which pods to place on which nodes.
Scheduling pods without limits may over-provision the Windows nodes and in extreme cases can cause the nodes to become unhealthy.
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon container (prometheus_collector), Windows metric collection has been enabled for AKS clusters. Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. Both Windows Server 2019 and Windows Server 2022 are supported. Follow these steps to enable the pods to collect metrics from your Windows node pools.
Manually install windows-exporter on the AKS nodes to access Windows metrics by deploying the windows-exporter-daemonset YAML file, and enable the collectors that it configures. If any taints are applied on the nodes, you'll need to apply the appropriate tolerations.
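Assuming the manifest has been downloaded locally as windows-exporter-daemonset.yaml, the deployment step is a standard kubectl apply:

```shell
# Deploy the windows-exporter DaemonSet to the cluster
kubectl apply -f windows-exporter-daemonset.yaml
```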
Enable the recording rules that are required for the out-of-the-box dashboards:
If onboarding using the CLI, include the option --enable-windows-recording-rules.
If onboarding using an ARM template, Bicep, or Azure Policy, set enableWindowsRecordingRules to true in the parameters file.
If the cluster is already onboarded, use this ARM template and this parameter file to create the rule groups. This adds the required recording rules; it isn't an ARM operation on the cluster and doesn't affect the cluster's current monitoring state.
Verify that the DaemonSet was deployed properly on the Linux node pools
kubectl get ds ama-metrics-node --namespace=kube-system
The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ama-metrics-node 1 1 1 1 1 <none> 10h
Verify that Windows nodes were deployed properly
kubectl get ds ama-metrics-win-node --namespace=kube-system
The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
User@aksuser:~$ kubectl get ds ama-metrics-win-node --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ama-metrics-win-node 3 3 3 3 3 <none> 10h
Verify that the two ReplicaSets were deployed for Prometheus
kubectl get rs --namespace=kube-system
The output should resemble the following example:
User@aksuser:~$ kubectl get rs --namespace=kube-system
NAME DESIRED CURRENT READY AGE
ama-metrics-5c974985b8 1 1 1 11h
ama-metrics-ksm-5fcf8dffcd 1 1 1 11h
Container insights
Verify that the DaemonSets were deployed properly on the Linux node pools
kubectl get ds ama-logs --namespace=kube-system
The number of pods should be equal to the number of Linux nodes on the cluster. The output should resemble the following example:
User@aksuser:~$ kubectl get ds ama-logs --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ama-logs 2 2 2 2 2 <none> 1d
Verify that Windows nodes were deployed properly
kubectl get ds ama-logs-windows --namespace=kube-system
The number of pods should be equal to the number of Windows nodes on the cluster. The output should resemble the following example:
User@aksuser:~$ kubectl get ds ama-logs-windows --namespace=kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ama-logs-windows 2 2 2 2 2 <none> 1d
Verify deployment of the Container insights solution
kubectl get deployment ama-logs-rs --namespace=kube-system
The output should resemble the following example:
User@aksuser:~$ kubectl get deployment ama-logs-rs --namespace=kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
ama-logs-rs 1/1 1 1 24d
View configuration with CLI
Use the az aks show command to find out whether the solution is enabled, the Log Analytics workspace resource ID, and summary information about the cluster.
az aks show --resource-group <resourceGroupofAKSCluster> --name <nameofAksCluster>
The command returns JSON-formatted information about the solution. The addonProfiles section should include information on the omsagent add-on, as in the following example:
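The relevant fragment looks similar to the following sketch (illustrative sample; the placeholder segments stand in for your own subscription, resource group, and workspace):

```json
"addonProfiles": {
    "omsagent": {
        "config": {
            "logAnalyticsWorkspaceResourceID": "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace-name>",
            "useAADAuth": "true"
        },
        "enabled": true
    }
}
```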
When you enable monitoring, the following resources are created in your subscription:
| Resource Name | Resource Type | Resource Group | Region/Location | Description |
|---|---|---|---|---|
| MSCI-<aksclusterregion>-<clustername> | Data collection rule | Same as cluster | Same as Log Analytics workspace | This data collection rule is for log collection by Azure Monitor Agent, which uses the Log Analytics workspace as its destination, and is associated with the AKS cluster resource. |
| MSPROM-<aksclusterregion>-<clustername> | Data collection rule | Same as cluster | Same as Azure Monitor workspace | This data collection rule is for Prometheus metrics collection by the metrics add-on, which uses the chosen Azure Monitor workspace as its destination, and is associated with the AKS cluster resource. |
| MSPROM-<aksclusterregion>-<clustername> | Data collection endpoint | Same as cluster | Same as Azure Monitor workspace | This data collection endpoint is used by the preceding data collection rule for ingesting Prometheus metrics from the metrics add-on. |
When you create a new Azure Monitor workspace, the following additional resource is created as part of it:
A DCE, which is used when you use an OSS Prometheus server to remote write to the Azure Monitor workspace.
Differences between Windows and Linux clusters
The main differences in monitoring a Windows Server cluster compared to a Linux cluster include:
Windows doesn't have a Memory RSS metric. As a result, it isn't available for Windows nodes and containers. The Working Set metric is available.
Disk storage capacity information isn't available for Windows nodes.
Only pod environments are monitored, not Docker environments.
With the preview release, a maximum of 30 Windows Server containers are supported. This limitation doesn't apply to Linux containers.
Note
Container insights support for the Windows Server 2022 operating system is in preview.
The containerized Linux agent (replicaset pod) makes API calls to all the Windows nodes on Kubelet secure port (10250) within the cluster to collect node and container performance-related metrics. Kubelet secure port (:10250) should be opened in the cluster's virtual network for both inbound and outbound for Windows node and container performance-related metrics collection to work.
If you have a Kubernetes cluster with Windows nodes, review and configure the network security group and network policies to make sure the Kubelet secure port (:10250) is open for both inbound and outbound in the cluster's virtual network.
Next steps
If you experience issues while you attempt to onboard the solution, review the Troubleshooting guide.
With monitoring enabled to collect health and resource utilization of your AKS cluster and the workloads running on it, learn how to use Container insights.