
Connecting an External Storage Array to Azure Local

Overview

Azure Local supports attaching external Fibre Channel (FC) storage area network (SAN) storage as an alternative to local storage (Storage Spaces Direct). This capability enables customers with existing SAN investments to reuse that infrastructure while running Azure Local workloads.

Prerequisites

The following prerequisites must be met before you use this document:

  • An Azure Local cluster deployed with the 2604 release or later.
  • Fibre Channel HBAs (Windows Server 2025 certified HBA and driver) installed in all cluster nodes and zoned on the FC fabric.
  • A SAN array accessible on the FC fabric with management access configured.

Important

Don't zone the FC HBA World Wide Names (WWNs) until after the Azure Local deployment completes. This keeps FC LUNs from being visible during deployment and confusing the deployment process.

Step 1: Enable Multipath IO

  1. Enable the MPIO feature on each node:
    Add-WindowsFeature -Name 'Multipath-IO' -IncludeManagementTools
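
If you manage several nodes from one session, a minimal sketch, assuming PowerShell remoting is enabled and using placeholder node names, that enables the feature on every node at once:

    # Placeholder node names; replace with your actual Azure Local node names
    $nodes = 'Node1', 'Node2', 'Node3', 'Node4'

    # Enable the MPIO feature on each node remotely
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Add-WindowsFeature -Name 'Multipath-IO' -IncludeManagementTools
    }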
    

Step 2: Configure MPIO and Set the MPIO Policy

Note

Run these configuration commands on each Azure Local node. MPIO policy changes don't take effect until after a reboot.

Note: The required HBA for this configuration is the Emulex LPe36002-M64, with firmware 03.09.19 (DUP: D815X A00-00) and driver 14.4.393.20 (DUP: VKNP1 A00-00). Install the firmware DUP through the iDRAC System Update menu; install the driver DUP within the Windows OS.
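
To confirm what is actually installed, a hedged check, assuming the HBA's device name contains "LPe36002" (adjust the filter for your adapter), that lists the installed driver version on a node:

    # List signed drivers whose device name matches the Emulex HBA model (filter is an assumption)
    Get-CimInstance -ClassName Win32_PnPSignedDriver |
        Where-Object DeviceName -like '*LPe36002*' |
        Select-Object DeviceName, DriverVersion, Manufacturer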

  1. Register Dell PowerStore with MSDSM:

    New-MSDSMSupportedHW -VendorId "DellEMC" -ProductId "PowerStore" 
    
  2. Set round robin load-balancing policy:

    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR 
    
  3. Configure MPIO path recovery settings:

    Set-MPIOSetting -NewPathVerificationState Enabled -NewPathVerificationPeriod 30 `
    -NewPDORemovePeriod 20 -NewRetryCount 3 -NewRetryInterval 3 `
    -CustomPathRecovery Enabled -NewPathRecoveryInterval 10 -NewDiskTimeout 30
    
  4. Verify that paths are using the correct policy:

    mpclaim -s -d 
    
  5. Restart each node after completing MPIO configuration. Perform reboots in a rolling manner before proceeding with SAN configuration and WWN registration.

  6. Launch the PowerStore WebUI. For example, https://PowerStoreManagementIPorFQDN.

  7. Compute → Host Information → Hosts & Host Groups → '+Add Host.'

  8. Enter host name; select Operating System = Windows; select Next.

  9. Select Fibre Channel as protocol; select Next.

  10. Run Get-InitiatorPort on the node to list its World Wide Port Names (WWPNs), as shown in the example after this list. Select the appropriate initiators; select Next. Leave the default Local Connectivity; select Next. Verify the summary and select Add Host. Repeat for all nodes.

  11. Compute → Host Information → '+Add Host Group'; enter name; select all cluster hosts; select Create.

  12. Storage → Volume → '+CREATE'; enter name, size, and Volume Performance Policy; select Next.

  13. Select SCSI protocol; map to the host group; select 'Generate Automatically' for LUN ID; complete the wizard.

  14. On each cluster node, rescan for the new volume:

    Update-HostStorageCache 
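
For step 10, a minimal sketch that lists the Fibre Channel WWPNs on a node so you can match them against the initiators shown in PowerStore (the property selection is an assumption about what you need to copy):

    # PortAddress is the WWPN; NodeAddress is the WWNN
    Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel' |
        Select-Object InstanceName, PortAddress, NodeAddress, ConnectionType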
    

Step 3: Verify SAN Disks Before Initialization

  1. On each Azure Local node, run:

    Get-Disk  
    
  2. Verify that new disks are visible on all nodes, that disk sizes match the expected LUN sizes, and that the number of disks is consistent across nodes. If disks are missing:

    • Rerun Update-HostStorageCache.
    • Verify zoning and LUN masking.
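
A hedged sketch, assuming placeholder node names and that the SAN LUNs arrive with a BusType of Fibre Channel, that compares the disk view across all nodes so mismatches stand out:

    # Placeholder node names; replace with your actual Azure Local node names
    $nodes = 'Node1', 'Node2', 'Node3', 'Node4'

    # Collect Fibre Channel disks from every node and group them for comparison
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Get-Disk | Where-Object BusType -eq 'Fibre Channel' |
            Select-Object Number, FriendlyName, SerialNumber,
                @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } }
    } | Sort-Object SerialNumber, PSComputerName |
        Format-Table PSComputerName, Number, FriendlyName, SerialNumber, SizeGB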

Step 4: Initialize and Format Disks

Note: Run on one cluster node only.

  1. Get the new disk (typically the one whose partition style is still RAW):

    $disk = Get-Disk | Where-Object PartitionStyle -Eq 'RAW' 
    
  2. Initialize as GPT:

    Initialize-Disk -Number $disk.Number -PartitionStyle GPT 
    
  3. Create a partition and format it as NTFS:

    New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter | 
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk1" -Confirm:$false 
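
Step 1 above assumes a single new LUN. If the array presents several new LUNs at once, a hedged variant, using placeholder volume labels, that initializes and formats each RAW disk in turn:

    # Initialize and format every RAW (uninitialized) disk; labels are placeholders
    $i = 1
    foreach ($disk in (Get-Disk | Where-Object PartitionStyle -eq 'RAW')) {
        Initialize-Disk -Number $disk.Number -PartitionStyle GPT
        New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk$i" -Confirm:$false
        $i++
    }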
    

Step 5: Validate Cluster Configuration and Add CSV

  1. Validate the cluster storage configuration and inspect the generated report for issues:
    Test-Cluster 
    
  2. Add SAN volumes as Cluster Shared Volumes:
    Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume 
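
After the volumes are added, a quick hedged check that each CSV is online and shows its mount point under C:\ClusterStorage (the calculated Path property is an assumption about what you want to see):

    # Show each CSV with its state and friendly mount path
    Get-ClusterSharedVolume |
        Select-Object Name, State,
            @{ Name = 'Path'; Expression = { $_.SharedVolumeInfo.FriendlyVolumeName } }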
    

Step 6: Add Storage Path in Azure portal

  1. In the Azure portal (portal.azure.com), select the Azure Local Cluster.

  2. Under Resources, select Storage Paths, then Create Storage Path.

  3. Enter a friendly Name and the actual File System Path (for example, C:\ClusterStorage\{VolumeName}), then select Create.

Note: For more information, see Create Storage Path on Azure Local.

Troubleshooting

Use the following guidance to identify and resolve common issues when attaching SAN storage to Azure Local.

Disks aren't visible on cluster nodes

If SAN disks don't appear on one or more cluster nodes, first verify that Fibre Channel zoning is configured correctly between the host HBAs and the SAN array target ports. Ensure that all LUNs are masked and presented to every node in the cluster. On each node, run the following command to refresh the storage view:

Update-HostStorageCache 

If the disks are still not visible, confirm that the HBA drivers and firmware are correctly installed and that all FC ports are online.
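
A hedged follow-up check on an affected node, assuming the LUNs should arrive with a BusType of Fibre Channel, to see whether the HBA ports enumerate and whether any FC disks are visible at all:

# Confirm the FC initiator ports are present and show their WWPNs
Get-InitiatorPort | Select-Object InstanceName, PortAddress, ConnectionType

# List any disks arriving over Fibre Channel on this node
Get-Disk | Where-Object BusType -eq 'Fibre Channel' |
    Select-Object Number, FriendlyName, OperationalStatus, Size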

MPIO isn't claiming disks correctly

If disks are visible but MPIO isn't managing them, verify that the correct storage vendor is registered with MSDSM and that the MPIO feature is enabled on all nodes.

Run the following command to check disk claim status:

mpclaim -s -d 

Ensure that disks are listed under MSDSM and that multiple active paths are present. If disks aren't claimed, recheck vendor registration and reboot the node to apply MPIO settings.
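
A minimal sketch to confirm the MSDSM registration, the default load-balancing policy, and the MPIO settings applied on a node:

# Should list the DellEMC/PowerStore vendor and product IDs registered earlier
Get-MSDSMSupportedHW

# Should return RR (round robin)
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Shows the path verification, PDO removal, and retry settings currently in effect
Get-MPIOSetting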

Cluster validation fails during Test-Cluster

If the Test-Cluster command reports errors, review the generated validation report carefully and don't proceed with configuration until all critical issues are resolved. Common causes include inconsistent disk visibility across nodes, incorrect zoning, or LUNs that aren't presented uniformly to all cluster members. Ensure that all nodes can see the same set of disks with identical characteristics.
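
If you only need to re-run the storage checks while you iterate on zoning or masking fixes, a hedged option is to scope validation to the storage category:

# Run only the storage validation tests (run a full validation again before production use)
Test-Cluster -Include 'Storage'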

Unable to add disks as Cluster Shared Volumes

If disks can't be added as Cluster Shared Volumes, verify that the disks are visible to all nodes and aren't already in use or reserved. Confirm that the disks are initialized and formatted correctly using NTFS. You can also check disk state using:

Get-ClusterAvailableDisk 

Only disks listed as available can be added to the cluster.
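
A hedged way to see why a disk might not be offered, assuming it should appear as an initialized disk that isn't yet owned by the cluster:

# Review partition style and whether each disk is already clustered or offline
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle, IsClustered, IsOffline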

Cluster Shared Volumes aren't accessible

If CSV paths aren't accessible or don't appear under C:\ClusterStorage, verify that the disks were successfully added to the cluster and promoted to CSVs.

Use the following command to confirm CSV status:

Get-ClusterSharedVolume 

If volumes are missing or inaccessible, check Failover Cluster Manager for disk ownership and status, and ensure that there are no underlying storage connectivity issues.
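
A hedged, deeper check of per-node CSV state, useful for spotting redirected I/O or a volume that is unhealthy on only one node:

# Shows the CSV state as seen from each node in the cluster
Get-ClusterSharedVolumeState | Select-Object Name, Node, StateInfo, VolumeFriendlyName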

Storage Path creation fails in Azure portal

If creating a Storage Path fails in the Azure portal, verify that the specified file system path exists and is accessible on the cluster. The path must point to a valid CSV location under C:\ClusterStorage. Also confirm that the Azure Arc connection for the cluster is healthy and that the cluster resource is in a ready state. If the issue persists, retry the operation after confirming that all previous steps completed successfully.
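
Before retrying, a minimal local check, using the volume label from the earlier steps as a placeholder, that the path you entered actually resolves on the cluster:

# Replace ClusterDisk1 with your CSV folder name under C:\ClusterStorage
Test-Path -Path 'C:\ClusterStorage\ClusterDisk1'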

Next Articles: