Migrating away from Azure File Sync with tiering

Hunter Thompson 25 Reputation points
2025-11-24T22:22:53.5566667+00:00

I am currently using Azure File Sync with on-premises Windows file servers backed by storage appliances, acting as a cache that holds a portion of the primary data set housed in Azure. We plan to migrate away from this setup to a purely on-premises NAS solution. I need to copy all the data (around 300 TB) to the new NAS, and I do not have enough local storage to perform a full recall to my current Windows file servers before copying to the new NAS. The new NAS is not Windows based and cannot run Azure File Sync to perform a recall. What is the best way to go about this? I am thinking I will need to order an Azure Data Box export to seed the initial copy, and then use Robocopy from one of the current on-premises Windows servers to the new NAS leading up to a cutover.

I have never used Robocopy to copy tiered files. How does it handle them? Will it copy directly from Azure without trying to recall every file locally to the file server, which does not have enough space? Or will I need to use AzCopy directly against the storage account? AzCopy might be problematic because there is some lag between locked files syncing on-premises (where users are accessing them) up to the cloud storage account, and I need as short a cutover window as possible.

Azure Files: an Azure service that offers file shares in the cloud.

Answer accepted by question author
  1. Adam Zachary 2,025 Reputation points
    2025-11-25T02:07:20.1433333+00:00

I’ve worked with this exact setup before, and the key thing to understand is that Robocopy cannot pull tiered files directly from Azure. If you point Robocopy at a tiered path, it will try to recall each file to the local server first. Since you don’t have enough space, that won’t work.

    Here’s the practical way to do this with the least pain:

    Use Azure Data Box for the bulk export. That gets you the 300 TB without needing local recall. This is the right first step.

    After the Data Box seed, switch to AzCopy for the delta. AzCopy reads directly from the Azure Files share, not from the on-prem cache. That avoids tiering recalls and avoids running out of space.
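    As a rough sketch of that delta step (the storage account, share name, SAS token, and NAS mount path below are all placeholders, not details from the original post), the command is printed with `echo` so it can be reviewed before actually running it:

    ```shell
    # Placeholders throughout -- substitute your own storage account, share,
    # SAS token, and NAS mount point before running.
    SRC="https://mystorageacct.file.core.windows.net/myshare?sv=...&sig=..."
    DST="/mnt/nas/share"

    # 'azcopy sync' copies only files that are new or changed at the source;
    # --delete-destination=false preserves everything already seeded by Data Box.
    CMD="azcopy sync \"$SRC\" \"$DST\" --delete-destination=false"
    echo "$CMD"   # printed for review; remove the echo to actually run it
    ```

    Because this reads straight from the Azure file share, it puts no recall pressure on the cached Windows servers.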

    Keep Robocopy only for the final small cutover window. Run it from an on-prem Windows box to capture the last few locked files and permissions right before users are moved. By that point, the remaining change set is small enough that recall is manageable.
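    A minimal sketch of that last Robocopy pass (paths, share name, and log location are illustrative; note that touching tiered files this way does trigger recall, so only do it once the remaining delta is small). The Windows command is built as a string and printed so it can be reviewed and then run from cmd.exe or PowerShell on the intermediate server:

    ```shell
    # Illustrative paths only. /MIR mirrors the tree (and deletes extras, which
    # is what you want at cutover), /COPYALL keeps NTFS security if the NAS
    # supports NTFS-style ACLs over SMB, /B uses backup mode for locked files,
    # /R and /W cap retries so locked files don't stall the run.
    ROBO='robocopy D:\Shares\Data \\nas01\data /MIR /COPYALL /B /R:2 /W:5 /MT:32 /LOG:C:\robocopy-cutover.log'
    printf '%s\n' "$ROBO"
    ```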

    In short: Data Box for the bulk copy, AzCopy for the live delta, and Robocopy only for the final small change set. Robocopy cannot bypass tiering; AzCopy can.


2 additional answers

Sort by: Most helpful
  1. Venkatesan S 925 Reputation points Microsoft External Staff Moderator
    2025-11-24T23:10:52.7066667+00:00

    Hi Hunter Thompson,

    Thanks for reaching out in Microsoft Q&A,

    The new NAS is not Windows based and cannot run Azure File Sync to perform a recall. What is the best way to go about this?

    To migrate around 300 TB of data from Azure File Sync with tiered files on on-premises Windows servers to a new, purely on-premises NAS without enough local storage for a full recall, the best approach is as follows:

    • Use Azure Data Box export to seed the bulk of the data set. This exports your Azure file share directly from the cloud to a physical device, which you then restore onto the new NAS, avoiding any hydration or recall on your current Windows servers and requiring no additional local disk space.

    I have never used RoboCopy to copy off tiered files, how does it handle tiered files? Will it copy directly from Azure without trying to tier all files locally to the file server, which does not have enough space? Or will I need to use AzCopy directly from the storage account? AzCopy might be problematic as there is some lag time between locked files syncing on-premise (where users are accessing them) up to the cloud storage account and I will need as short of a cutover window as possible.

    • After restoring the initial dataset on the NAS from the Data Box, use AzCopy for incremental synchronization of changes that occur during the migration window. AzCopy reads directly from Azure Files in the cloud, preventing additional recall activity on your on-prem Windows servers and avoiding local storage constraints.
    • Schedule multiple AzCopy sync runs before cutover to shrink the remaining delta, then perform a final AzCopy sync during the cutover window, after freezing or stopping access to the old file shares, for minimal downtime and a consistent copy.
    • Avoid using Robocopy to copy tiered files directly from the current file servers to the NAS. Robocopy triggers file recalls from Azure File Sync to locally hydrate files, which your servers cannot support due to insufficient disk space, and this will cause failures or disk exhaustion.
    • Take backups before starting and monitor Azure File Sync recall progress, local storage availability, and file lock states during the delta sync phase.
    • Adjust your Cloud Tiering Recall Policy and local volume free space as needed if you plan any phased recalls.
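    The repeated-sync schedule in the bullets above could be scripted roughly like this (the account, share, SAS token, mount point, and pass count are all placeholders; commands are echoed rather than executed so they can be reviewed first):

    ```shell
    # Placeholders throughout; drop the echoes to actually run the syncs.
    SRC="https://mystorageacct.file.core.windows.net/myshare?<SAS>"
    DST="/mnt/nas/share"

    # Several pre-cutover passes: each run copies only new/changed files,
    # so every pass shrinks the delta left for the cutover window.
    for pass in 1 2 3; do
      echo "pass $pass: azcopy sync \"$SRC\" \"$DST\" --delete-destination=false"
    done

    # Final pass during the cutover window, after access to the old shares is
    # frozen; --delete-destination=true also removes files deleted at source.
    echo "final: azcopy sync \"$SRC\" \"$DST\" --delete-destination=true"
    ```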

    This combined approach, an Azure Data Box seed plus AzCopy incremental sync, is the most efficient and storage-safe way to migrate a large, tiered data set from Azure File Sync to a non-Windows NAS while minimizing network usage, cutover downtime, and the risk of data loss or access disruption.

    Please let us know if you have any further queries. I’m happy to assist you further.



  2. Ritika sharma 0 Reputation points
    2025-11-25T04:05:45.8766667+00:00

    Cloud tiering, an optional feature of Azure File Sync, decreases the amount of local storage required while keeping the performance of an on-premises file server.

    When enabled, this feature stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content. The namespace is stored locally, and the file content is stored in an Azure file share in the cloud.

    When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from the Azure file share.

