Hello MichaelB-2136,

You are experiencing a partial success in the Azure Backup workflow:
- The "Snapshot" phase works: the Backup service has the correct permissions to talk to the Azure Files API and trigger a point-in-time copy.
- The "Transfer to Vault" phase fails: this is the data-heavy stage in which the snapshot is cataloged and moved into Recovery Services Vault (RSV) storage.
Error 850000: This is a "Data Mover" timeout or connectivity interruption. Because the job fails after varying intervals, it suggests throttling or a dropped network handshake rather than a hard permission denial.
The most likely reason is a Metadata Timeout or Storage Throughput Exhaustion.
Even though your file count is low, FSLogix .vhdx files are "sparse" and dynamic. When Azure Backup transfers data to a "Vault-Standard" tier, it doesn't just copy blocks; it performs a consistency check and metadata synchronization.
- IOPS/Throughput: Premium shares have fixed ingress/egress limits based on provisioned size. If FSLogix "compaction" or background disk activity occurs during the backup transfer, you may be hitting the egress limit of the Premium share, causing the Backup Data Mover to lose its connection.
- The "Large File" Penalty: Because the 500 GB share is 5x larger than your working 100 GB share, the "Transfer to Vault" stage stays open much longer, making it more susceptible to transient network blips or "Lease" conflicts on the VHDX files.
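To put numbers on the "Large File" penalty, you can estimate the provisioned limits of a premium file share. The formulas below follow the published provisioned v1 model (baseline IOPS = 3,000 + 1 per GiB; combined throughput ≈ 100 + 0.04 × GiB + 0.06 × GiB MiB/s); these numbers change over time, so treat this as an illustrative sketch and verify against the current Azure Files scale-target documentation.

```python
import math

def premium_share_limits(provisioned_gib: int) -> dict:
    """Rough provisioned-v1 limits for an Azure premium file share.

    Formulas follow the published provisioned v1 scale targets --
    double-check current documentation before relying on them.
    """
    baseline_iops = min(3000 + provisioned_gib, 100_000)
    burst_iops = min(max(10_000, 3 * baseline_iops), 100_000)
    # Combined ingress + egress throughput in MiB/s.
    throughput_mibps = (100
                        + math.ceil(0.04 * provisioned_gib)
                        + math.ceil(0.06 * provisioned_gib))
    return {
        "baseline_iops": baseline_iops,
        "burst_iops": burst_iops,
        "throughput_mibps": throughput_mibps,
    }

for size_gib in (100, 500):
    limits = premium_share_limits(size_gib)
    # Best-case hours to move the whole share at the provisioned throughput,
    # ignoring FSLogix/user traffic competing for the same limit.
    hours = size_gib * 1024 / limits["throughput_mibps"] / 3600
    print(f"{size_gib} GiB: {limits}, full transfer >= {hours:.2f} h")
```

Under these formulas the 100 GiB share gets roughly 110 MiB/s while the 500 GiB share gets only about 150 MiB/s: 5x the data but only ~1.4x the throughput, so the "Transfer to Vault" window grows roughly 3.7x, widening the exposure to transient drops.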
The first backup was a "Full Seed." Subsequent backups are Incremental.
- The Delta Calculation: For FSLogix profiles (.vhdx), even if users aren't logged in, the metadata within the VHDX can change significantly due to OS background tasks.
- The "Data Mover" Timeout: Error 850000 is often a generic "Data Mover" failure. If the service takes too long to enumerate the changes in a large 500 GB file, the session between the storage account and the vault times out.
- Throttling: Premium shares have a fixed egress rate. The 500 GB share has a larger "metadata footprint" than the 100 GB share. If the backup service hits the storage account's egress limit while trying to read the VHDX blocks, it will drop the connection.
Pull Full Job Details and Error Text
- The first step is to go to Recovery Services Vault > Backup Jobs > Failed Job > Error Details.
- The “DataTransfer” or “DataMover” section can reveal specific causes (throttling, network timeouts, authentication issues, file locks, etc.).
- This is crucial because Error 850000 is generic; the subtask details provide the real root cause.
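If you prefer scripting the triage over clicking through the portal, you can pull the failed job as JSON (for example with `az backup job show -o json`) and extract the subtask that actually failed. The field names below (`properties.extendedInfo.tasksList`, `properties.errorDetails`) reflect the shape the CLI commonly returns, but treat them as assumptions and adjust to your actual output; the sample payload here is hypothetical.

```python
import json

# Hypothetical excerpt of `az backup job show -o json` output for a failed
# Azure Files backup job; field names are assumptions -- adjust to your output.
sample_job = json.loads("""
{
  "properties": {
    "status": "Failed",
    "extendedInfo": {
      "tasksList": [
        {"taskId": "Take Snapshot", "status": "Completed"},
        {"taskId": "Transfer data to vault", "status": "Failed"}
      ]
    },
    "errorDetails": [
      {
        "errorString": "Transfer of data to the vault did not complete (850000)",
        "recommendations": ["Retry the backup operation"]
      }
    ]
  }
}
""")

def failed_subtasks(job: dict) -> list[str]:
    """Return the names of subtasks that did not complete."""
    info = job.get("properties", {}).get("extendedInfo", {}) or {}
    return [t["taskId"] for t in info.get("tasksList", [])
            if t.get("status") == "Failed"]

def error_strings(job: dict) -> list[str]:
    """Pull the human-readable error text out of the job details."""
    details = job.get("properties", {}).get("errorDetails", []) or []
    return [e.get("errorString", "") for e in details]

print("Failed subtasks:", failed_subtasks(sample_job))
print("Errors:", error_strings(sample_job))
```

Seeing "Transfer data to vault" (rather than "Take Snapshot") as the failed subtask confirms the problem is in the data-movement stage, which is what the error text and recommendations should then explain.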
Check Storage Account Networking & SAS Token Lifetime
- “Allow trusted Microsoft services” is necessary but not sufficient if private endpoints or firewall rules exist and the vault’s AzureBackup service tag is not allowed.
- SAS tokens used by Backup are autogenerated and short-lived. If the storage account rotates keys or has an SAS policy that invalidates tokens mid-transfer, the backup can fail at the transfer stage.
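A quick way to sanity-check the firewall side is to pull the storage account's network rule set (e.g. `az storage account show --query networkRuleSet -o json`) and confirm that trusted Microsoft services are bypassed whenever the default action is Deny. The helper below operates on that JSON; the field names match what the CLI typically returns, but treat them as assumptions and verify against your own output.

```python
def backup_can_reach_share(network_rule_set: dict) -> tuple[bool, str]:
    """Heuristic check of a storage account's networkRuleSet for AFS backup.

    Expects the dict returned by
    `az storage account show --query networkRuleSet -o json`
    (field names assumed; verify against your CLI output).
    """
    default_action = (network_rule_set.get("defaultAction") or "").lower()
    bypass = (network_rule_set.get("bypass") or "").lower()
    if default_action == "allow":
        return True, "default action is Allow; no firewall restriction"
    if "azureservices" in bypass:
        return True, "default Deny, but trusted Microsoft services are bypassed"
    return False, ("default Deny without the AzureServices bypass; "
                   "the Backup data mover is likely being blocked")

# Example: a locked-down account missing the trusted-services bypass.
locked = {"defaultAction": "Deny", "bypass": "None", "ipRules": []}
ok, why = backup_can_reach_share(locked)
print(ok, "-", why)
```

Note this only covers the service firewall: if the storage account is reachable solely through private endpoints, the vault needs its own private endpoint and matching DNS, which this check cannot see.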
Run Built-in AFS Backup Diagnostics
In Recovery Services Vault > Backup Instances > Run Diagnostics.
This runs AzureBackupAFSBackupScenarioHelper, which validates:
- Configuration
- Network connectivity
- Authentication / SAS token issues
This can pinpoint subtle misconfigurations before retrying backup.
If you still encounter the error after these checks, you will need help from the backend team to verify the file share backup failure, and a backend cleanup may be required before retrying the next backup.
References:
Troubleshoot Azure Files backup - Azure Backup | Microsoft Learn
Restore Azure Files - Azure Backup | Microsoft Learn
Hope this helps. Please feel free to reach out if you have any further questions. Thanks.