Sunday, December 17, 2017 5:38 PM
Hi,
I'm in the process of migrating from an ESXi host to a Hyper-V host with Storage Spaces.
The Hyper-V server is 90% for testing purposes, and has the following hardware:
1x AMD Ryzen 1700X
1x ASRock x370 Taichi motherboard
4x 16GB DDR4 memory
2x SSD
6x 6TB HGST 7K RPM disks (128MB cache)
The storage pool contains the six 6TB disks, and all volumes on the virtual disks are formatted with ReFS using 64KB allocation units.
The server will primarily store files between 6-10GB in size, which aren't IOPS-intensive.
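For context, the pool and virtual disk were created roughly like this (a sketch from memory; the friendly names are placeholders):

# Pool the six 6TB disks, create a parity virtual disk, format ReFS with 64KB allocation units
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityVD" -ResiliencySettingName Parity -UseMaximumSize
Get-VirtualDisk -FriendlyName "ParityVD" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem ReFS -AllocationUnitSize 65536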
I've recently run a series of performance tests, and realized that the storage pool was performing extremely poorly with parity.
If I'm copying a 10GB file from a server within the same VLAN over SMB, the transfer rate is between 90-100MB/s.
When copying the same file to one of the SSDs, or to one of the 6TB disks as a standalone disk (evicted from the pool) on the Hyper-V server, I'm able to fully max out the 1Gbit link between the servers.
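To take SMB and the network out of the equation, raw write throughput to the pool can also be measured locally, e.g. with Microsoft's DiskSpd (a sketch; the target path and sizes are just examples):

# 60 seconds of 512KB sequential writes against a 10GB test file,
# 4 outstanding I/Os, software and hardware write caches disabled
diskspd.exe -c10G -d60 -b512K -t1 -o4 -w100 -Sh G:\testfile.dat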
I believe I have tested all the settings I'm aware of that could be changed to tune this, but I can't seem to find a solution.
Is Storage Spaces supposed to run this slowly when using parity?
The host is practically running idle, since no VMs are running, so it should be able to calculate the parity almost instantly.
I've attached two pictures from Perfmon: one with the performance of a single disk (D:), and one with the performance of the Storage Spaces virtual disk (G:).
Perfmon - Storage Spaces:
Perfmon - Single Disk:
Any help regarding this issue would be very much appreciated.
Sunday, December 17, 2017 5:45 PM
Hello,
Which operating system are you using?
In my opinion, nothing really strange. Look at this:
https://social.technet.microsoft.com/wiki/contents/articles/15200.storage-spaces-designing-for-performance.aspx
Sunday, December 17, 2017 6:16 PM
Hi,
I'm running WS2016 Core edition.
I've just tried installing StarWind Virtual SAN, creating a .vhdx on each of the disks, and then combining them within StarWind. Using StarWind I'm able to use the full gigabit link, and the disk performance looks much better.
Sunday, December 17, 2017 6:22 PM
I think you should use StarWind then.
I quote : "The caveat of a parity space is low write performance compared to that of a simple or mirrored storage space, since existing data and parity information must be read and processed before a new write can occur. Parity spaces are an excellent choice for workloads that are almost exclusively read-based, highly sequential, and require resiliency, or workloads that write data in large sequential append blocks (such as bulk backups)."
Sunday, December 17, 2017 8:15 PM
Hi,
Thanks for the answer.
I think it's a huge shame that Microsoft designed it this way.
A RAID 5 or 6 (parity) should be able to perform better, even though it's designed for read-intensive operations.
Monday, December 18, 2017 9:49 AM
Hi,
A parity space may also have slow performance because existing data and parity information must be read and processed before a new write can occur: each small write becomes a read-modify-write cycle (read old data, read old parity, write new data, write new parity), so roughly four disk operations per incoming write. You could try dedicated SSD drives to improve parity space performance.
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Tuesday, December 19, 2017 7:40 AM
Hi Mary,
I've already considered that, but ran into an issue.
I want a parity 7K storage pool with a mirrored SSD pool in front.
This seems to be impossible to achieve, and I can somewhat understand why both the SSD pool and the 7K pool must be the same layout type.
Looking at consumer products like Synology, it's possible to have different layouts working together; this is also an option in enterprise-grade storage units like IBM's Storwize.
Isn't it possible to make the host write to a mirror (RAID 10) first, and then move the data to a parity volume, calculating the parity at that point?
Tuesday, December 19, 2017 8:14 AM
Hi,
> Isn't it possible to make the host write to a mirror (RAID 10) first, and then move the data to a parity volume, calculating the parity at that point?
That's like the configuration in S2D. Writes land first in the mirrored portion and are gradually moved into the parity portion later:
https://blogs.technet.microsoft.com/filecab/2016/08/29/deep-dive-volumes-in-spaces-direct/
https://docs.microsoft.com/en-us/windows-server/storage/refs/mirror-accelerated-parity
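For reference, on S2D such a mirror-accelerated parity volume is created by specifying both tiers at once (a sketch based on the deep-dive above; the pool pattern and tier names are the S2D defaults, not from this thread):

# Mirror-accelerated parity: writes land in the mirror (Performance) tier,
# and are rotated into the parity (Capacity) tier in the background
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 1TB, 9TB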
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Tuesday, December 19, 2017 8:35 AM
Hi Mary,
Is this feature available in the "standalone" version of Storage Spaces?
If so, I can't seem to find any guides on how to enable it.
Tuesday, December 19, 2017 9:11 AM
Hi,
Based on my knowledge, this configuration is only available with S2D, and S2D requires a cluster environment.
For a standalone storage space, there's only tiering. When new data comes in, it is written directly to the SSD tier until the SSD tier is full.
Here is a blog for Server 2012R2. Tiering in Storage Spaces on standalone Server 2016 works the same way.
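For what it's worth, the 2012R2-style tiering looks roughly like this in PowerShell (a sketch; the pool name, tier sizes, and cache size are assumptions for this hardware):

# Define an SSD tier and an HDD tier inside an existing pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD
# One resiliency setting applies to the whole tiered virtual disk,
# which is exactly the limitation discussed in this thread
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredVD" -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 20TB -ResiliencySettingName Mirror -WriteCacheSize 1GB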
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Tuesday, December 19, 2017 9:38 AM
Hi Mary,
That is fine, as long as the SSD cache automatically moves the data to the 7K pool.
I still can't figure out if it's possible to create a mirrored volume on the SSD disks and a parity pool on the 7K disks, and then use the SSD disks as the cache. It seems that you can't create tiering using different pool layouts (mirror, simple, parity) - that's all I need to confirm to proceed :)
Tuesday, December 19, 2017 9:49 AM
Hi Kristia Leth,
> It seems that you can't create tiering using different pool layouts (mirror, simple, parity) - that's all I need to confirm to proceed
As far as I know, in the general tier configuration you can only choose one specific layout and then set a faster tier and a standard tier.
Here is the configuration for tiering; you may also check it for reference.
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Tuesday, December 19, 2017 10:43 AM
Hi Mary,
So it's not possible to create the above scenario on a standalone Storage Spaces platform (if I'm right).
That is a major showstopper...
Using Storage Spaces for an offsite replica of the backup storage would be amazing due to the high read performance.
But when writing 200-500GB of data to the backup replica every day, we need to be able to perform this operation fast = 1TB SSD write cache.
Wednesday, December 20, 2017 5:25 AM
Hi Kristian Leth,
Based on my knowledge, for a storage space on standalone Server 2016, you may refer to the tier configuration in Server 2012R2 to improve performance.
Appreciate your support and understanding.
It would also be appreciated if other members of our forum could share their experience with this scenario.
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Wednesday, December 20, 2017 6:27 AM
Hi Mary,
I believe I've read the tier configuration for WS2012 that you provided.
It doesn't seem to be possible to have a parity pool and a mirrored SSD cache pool.
But if I'm wrong, please do tell - I would very much like to have the SSD cache in front.
Thursday, December 21, 2017 1:32 AM
Hi Kristian Leth,
> It doesn't seem to be possible to have a parity pool and a mirrored SSD cache pool.
I'm afraid that on a standalone server we can only set up tiering as in Server 2012R2. And as you said, you can't have a parity pool and a mirrored SSD cache pool.
Appreciate your support and understanding.
Best Regards,
Mary
Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
Thursday, December 21, 2017 4:42 PM
Multi-resilient volumes are a feature of Storage Spaces Direct (S2D) and are not available in standalone Storage Spaces. For simple two-node configurations I prefer StarWind VSAN; IMHO S2D really starts to shine from 5 nodes up.
Thursday, December 21, 2017 7:11 PM
Am I correct to infer that this is one machine set up as a Hyper-V host?
If that's the case, then use a hardware RAID controller with flash-backed cache and set up a RAID 6 array across all disks.
Storage Spaces requires a 2-way or 3-way mirror to have moderately decent performance. The catch is that a 2-way mirror spends 50% of raw storage on redundancy, while a 3-way mirror spends 67%. With the six 6TB disks here (36TB raw), that's 18TB or 12TB usable, versus 24TB usable for a RAID 6 array that only gives up two disks' worth of capacity.
ReFS is for Storage Spaces Direct (S2D) _only_, except in very specific cases.
Philip Elder Microsoft High Availability MVP Blog: http://blog.mpecsinc.ca Twitter: @MPECSInc
Thursday, December 21, 2017 7:16 PM
> Thanks for the answer.
> I think it's a huge shame that Microsoft designed it this way.
> A RAID 5 or 6 (parity) should be able to perform better, even though it's designed for read-intensive operations.
Parity in Storage Spaces is for archival purposes only. We set up 60-bay shared JBODs in a 2-node or 3-node Storage Spaces cluster for archival, with a few SAS SSDs and the rest NearLine SAS HDDs. 400+TB/JBOD is the goal. The background image on my blog is one such setup.
Storage Spaces Parity is not, and never was, designed for use in a high-IOPS environment like one that is hosting VMs.
Even after the initial push by Microsoft that the Storage Spaces Direct (S2D) hyper-converged platform could use Mirror-Accelerated Parity (MAP) with VMs, they've backed off somewhat, with 3-way mirror now the recommended direction for specific workloads. When set up correctly, S2D and MAP work well in the hyper-converged setting with VMs. The catch is getting the "set up correctly" part down. ;)
Philip Elder Microsoft High Availability MVP Blog: http://blog.mpecsinc.ca Twitter: @MPECSInc
Friday, December 22, 2017 6:40 AM
Hi Philip,
Thx for the detailed answer.
I do agree that Storage Spaces virtual disks in parity aren't designed for high IOPS - but...
I have a large file server (10TB+) that only stores large files (6-10GB each).
When copying a large file to the server on Storage Spaces, I still get 25-35MB/s - and I get the point that whether it's a large or a small file doesn't matter when calculating the parity.
The server isn't being used for high IOPS - it's being used for large files.
So to wrap it up - I get why a parity volume is inefficient for high-IOPS writes, but I would have expected better write performance than what Storage Spaces is delivering.
I did a test with the same disks against an iSCSI LUN on a Synology SOHO box.
When testing write and read performance, I'm able to max out the gigabit link - and looking at Performance Monitor, I can see that the CPU, disks, and memory are not having a hard time at all - so the Ethernet link is the bottleneck (as expected).
The point being that I would expect much higher write performance from a Storage Spaces parity virtual disk.
Friday, December 22, 2017 4:38 PM
> The point being that I would expect much higher write performance from a Storage Spaces parity virtual disk.
In order to get decent write performance to parity, one needs to set up SSDs in a Journal tier. It's hidden, but the Journal tier takes all incoming bits and caches them while waiting for the parity calculations and the flush to the Parity tier.
We generally set up at least three SSDs, since it will be a 3-way mirror, with enough space to cache incoming files.
The other setting is the write-back cache size. As a rule, 1GB should be enough, but one can play a bit with those settings to get an idea of what's best for the particular setup being worked with.
And finally, yes, if incoming data is being written straight through to the Parity tier, one can expect dismal performance. Period.
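As a rough sketch of those two steps (the pool name and disk selection here are assumptions, not from this thread):

# Mark the pool's SSDs as journal disks so they absorb incoming writes
Get-StoragePool -FriendlyName "Pool01" | Get-PhysicalDisk | Where-Object MediaType -eq SSD | Set-PhysicalDisk -Usage Journal
# Create the parity virtual disk with an explicit write-back cache size
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityVD" -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 1GB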
Philip Elder Microsoft High Availability MVP Blog: http://blog.mpecsinc.ca Twitter: @MPECSInc
Friday, December 22, 2017 8:12 PM
Hi Philip,
Yet again - thanks for the answer.
This isn't possible with a standalone Storage Spaces setup, is it?
I haven't been able to find any guides on how to put an SSD cache / write-back mirror with 2 SSDs in front of a parity virtual disk.
Saturday, December 23, 2017 12:41 AM
It is, but Storage Spaces' cost versus the benefit, relative to a plain old RAID controller with 1GB or more of flash-backed cache, doesn't yield a return.
These are the two closest things to a guide that I've found:
The second link is the standard guide, with the PowerShell step -Usage Journal in it.
Between the two and our own testing, we were able to get our first clustered Storage Spaces archival setup going with just shy of 400TB in one JBOD.
Philip Elder Microsoft High Availability MVP Blog: http://blog.mpecsinc.ca Twitter: @MPECSInc
Friday, December 29, 2017 4:52 PM
Hi,
I've managed to add the SSD drives to the pool and set them to Journal mode.
This didn't speed up the pool / virtual disk at all, even though all the settings look correct.
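For anyone comparing, the settings can be read back like this (a sketch; standard Storage Spaces cmdlets):

# Confirm the SSDs are actually in Journal usage
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Usage
# Confirm the resiliency and write-back cache size on the virtual disk
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize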
Since the physical disks are part of a pool that is defined as mirror, simple, or parity - should I then expect the same write performance if I need to rebuild the pool in case of a dead disk or such?