Overview
Azure Local supports attaching external Fibre Channel (FC) storage area network (SAN) storage as an alternative to local storage (Storage Spaces Direct). This capability enables customers with existing SAN investments to reuse that infrastructure while running Azure Local workloads.
Prerequisites
Before you begin, make sure you meet the following prerequisites:
- An Azure Local cluster deployed with the 2604 release or later.
- Fibre Channel HBAs (Windows Server 2025 certified HBA and driver) installed in all cluster nodes and zoned on the FC fabric.
- The SAN array accessible on the FC fabric, with management access configured.
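Before deployment, you can confirm that the FC HBAs are visible to Windows on each node. The following is a quick sketch; output fields vary by HBA vendor and driver.

```powershell
# List the FC initiator ports (WWPNs) that Windows can see on this node.
Get-InitiatorPort -ConnectionType FibreChannel |
    Select-Object NodeAddress, PortAddress, ConnectionType
```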
Important
Don't zone the FC HBA World Wide Names (WWNs) on the fabric until AFTER the Azure Local deployment completes, so that FC LUNs aren't visible during deployment and can't confuse it.
Step 1: Enable Multipath IO
- Enable MPIO feature on each node:
Add-WindowsFeature -Name 'Multipath-IO' -IncludeManagementTools
Step 2: Configure MPIO and Set MPIO policy
Note
Run these configuration commands on each Azure Local node. MPIO policy changes don't take effect until after a reboot.
Note
Required HBA: Emulex LPe36002-M64 | Firmware: 03.09.19 (DUP: D815X A00-00) | Driver: 14.4.393.20 (DUP: VKNP1 A00-00). Install the firmware via DUP through the iDRAC System Update menu; install the driver via DUP in the Windows OS.
Register Dell PowerStore with MSDSM:
New-MSDSMSupportedHW -VendorId "DellEMC" -ProductId "PowerStore"
Set the round robin load-balancing policy:
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Configure MPIO path recovery settings:
Set-MPIOSetting -NewPathVerificationState Enabled -NewPathVerificationPeriod 30 `
    -NewPDORemovePeriod 20 -NewRetryCount 3 -NewRetryInterval 3 `
    -CustomPathRecovery Enabled -NewPathRecoveryInterval 10 -NewDiskTimeout 30
Verify that paths are using the correct policy:
mpclaim -s -d
Restart each node after completing the MPIO configuration. Perform reboots in a rolling manner before proceeding with SAN configuration and WWN registration.
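Rather than signing in to each node, you can fan the same configuration out from one session with Invoke-Command. The following is a sketch; the node names are placeholders for your cluster's actual node names.

```powershell
# Apply the MPIO configuration to every node from a single session.
# 'Node1'..'Node4' are placeholder names; substitute your node names.
$nodes = 'Node1','Node2','Node3','Node4'
Invoke-Command -ComputerName $nodes -ScriptBlock {
    New-MSDSMSupportedHW -VendorId "DellEMC" -ProductId "PowerStore"
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    Set-MPIOSetting -NewPathVerificationState Enabled -NewPathVerificationPeriod 30 `
        -NewPDORemovePeriod 20 -NewRetryCount 3 -NewRetryInterval 3 `
        -CustomPathRecovery Enabled -NewPathRecoveryInterval 10 -NewDiskTimeout 30
}
# The settings still take effect only after each node reboots (rolling).
```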
Launch the PowerStore WebUI, for example https://PowerStoreManagementIPorFQDN.
Go to Compute → Host Information → Hosts & Host Groups → '+Add Host.'
Enter host name; select Operating System = Windows; select Next.
Select Fibre Channel as protocol; select Next.
Run Get-InitiatorPort on the node to obtain its World Wide Port Names (WWPNs); select the appropriate initiators; select Next. Leave the default Local Connectivity; select Next. Verify the summary and select Add Host. Repeat for all nodes.
Compute → Host Information → '+Add Host Group'; enter name; select all cluster hosts; select Create.
Storage → Volume → '+CREATE'; enter name, size, and Volume Performance Policy; select Next.
Select SCSI protocol; map to the host group; select 'Generate Automatically' for LUN ID; complete the wizard.
On each cluster node, rescan for the new volume:
Update-HostStorageCache
Step 3: Verify SAN Disks Before Initialization
On each Azure Local node, run:
Get-Disk
Verify that new disks are visible on all nodes, that disk sizes match the expected LUN sizes, and that the number of disks is consistent across nodes. If disks are missing:
- Rerun Update-HostStorageCache.
- Verify zoning and LUN masking.
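To compare disk visibility across all nodes in one pass, you can query them remotely. A sketch, with placeholder node names:

```powershell
# Show FC disks per node so counts, sizes, and serial numbers can be compared.
$nodes = 'Node1','Node2','Node3','Node4'   # placeholders; use your node names
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-Disk | Where-Object BusType -Eq 'Fibre Channel' |
        Select-Object Number, SerialNumber,
            @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB) } }
} | Sort-Object PSComputerName |
    Format-Table PSComputerName, Number, SerialNumber, SizeGB -AutoSize
```

Each LUN should appear on every node with the same serial number and size.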
Step 4: Initialize and Format Disks
Note: Run on one cluster node only.
Get the new disk (usually the one with no partition):
$disk = Get-Disk | Where-Object PartitionStyle -Eq 'RAW'
Initialize as GPT:
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
Create a partition and format it as NTFS:
New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk1" -Confirm:$false
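If you presented several LUNs at once, the same pattern can be looped over every RAW disk. A sketch; the volume labels are illustrative:

```powershell
# Initialize and format each uninitialized (RAW) SAN disk in turn.
$i = 1
Get-Disk | Where-Object PartitionStyle -Eq 'RAW' | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterDisk$i" -Confirm:$false
    $i++
}
```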
Step 5: Validate Cluster Configuration and Add CSV
- Validate cluster storage configuration and inspect the report for issues.
Test-Cluster
- Add SAN volumes as Cluster Shared Volumes:
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume
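A full Test-Cluster run validates every category; if only storage changed, you can optionally scope the run to the storage tests. A sketch:

```powershell
# Run only the storage validation tests, then review the generated HTML report.
Test-Cluster -Include "Storage"
```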
Step 6: Add Storage Path in Azure portal
In the Azure portal (portal.azure.com), select the Azure Local Cluster.
Under Resources, select Storage Paths, then Create Storage Path.
Enter a friendly Name and the actual File System Path (for example, C:\ClusterStorage\{VolumeName}), then select Create.
Note: For more information, see Create Storage Path on Azure Local.
Troubleshooting
Use the following guidance to identify and resolve common issues when attaching SAN storage to Azure Local.
Disks aren't visible on cluster nodes
If SAN disks don't appear on one or more cluster nodes, first verify that Fibre Channel zoning is configured correctly between the host HBAs and the SAN array target ports. Ensure that all LUNs are masked and presented to every node in the cluster. On each node, run the following command to refresh the storage view:
Update-HostStorageCache
If the disks are still not visible, confirm that the HBA drivers and firmware are correctly installed and that all FC ports are online.
MPIO isn't claiming disks correctly
If disks are visible but MPIO isn't managing them, verify that the correct storage vendor is registered with MSDSM and that the MPIO feature is enabled on all nodes.
Run the following command to check disk claim status:
mpclaim -s -d
Ensure that disks are listed under MSDSM and that multiple active paths are present. If disks aren't claimed, recheck vendor registration and reboot the node to apply MPIO settings.
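To recheck the vendor registration, list the hardware IDs that MSDSM knows about. A sketch:

```powershell
# The output should include the entry registered earlier
# (for example, VendorId DellEMC / ProductId PowerStore).
Get-MSDSMSupportedHW
```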
Cluster validation fails during Test-Cluster
If the Test-Cluster command reports errors, review the generated validation report carefully and don't proceed with configuration until all critical issues are resolved. Common causes include inconsistent disk visibility across nodes, incorrect zoning, or LUNs that aren't presented uniformly to all cluster members. Ensure that all nodes can see the same set of disks with identical characteristics.
Unable to add disks as Cluster Shared Volumes
If disks can't be added as Cluster Shared Volumes, verify that the disks are visible to all nodes and aren't already in use or reserved. Confirm that the disks are initialized and formatted correctly using NTFS. You can also check disk state using:
Get-ClusterAvailableDisk
Only disks listed as available can be added to the cluster.
Cluster Shared Volumes aren't accessible
If CSV paths aren't accessible or don't appear under C:\ClusterStorage, verify that the disks were successfully added to the cluster and promoted to CSVs.
Use the following command to confirm CSV status:
Get-ClusterSharedVolume
If volumes are missing or inaccessible, check Failover Cluster Manager for disk ownership and status, and ensure that there are no underlying storage connectivity issues.
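You can also check CSV state and ownership from PowerShell instead of Failover Cluster Manager. A sketch:

```powershell
# State should be Online; OwnerNode shows which node currently owns each CSV.
Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode
```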
Storage Path creation fails in Azure portal
If creating a Storage Path fails in the Azure portal, verify that the specified file system path exists and is accessible on the cluster. The path must point to a valid CSV location under C:\ClusterStorage. Also confirm that the Azure Arc connection for the cluster is healthy and that the cluster resource is in a ready state. If the issue persists, retry the operation after confirming that all previous steps completed successfully.