NIC Teaming and iSCSI

Question

Monday, January 9, 2017 8:46 PM | 1 vote

Hi,

From the following article: https://blogs.technet.microsoft.com/askpfeplat/2013/03/18/is-nic-teaming-in-windows-server-2012-supported-for-iscsi-or-not-supported-for-iscsi-that-is-the-question/ we know that we can use iSCSI over LBFO, that it has some advantages over MPIO in SOME cases, and that iSCSI over LBFO is supported for targets.

Now, I have some storage servers that only have 2 NICs, some that have 3, others 4. On the other side, my Hyper-V servers have 4, 5, 6, or 8 NICs. So I essentially have two options for implementation.

If I use MPIO, I would either need to create virtual adapters on the storage server so the paths run on alternate VLANs, up to 6 of them (as some Hyper-V servers would have 6 NICs for iSCSI traffic), or I would need to create teams on the Hyper-V servers to access the storage (over 2 VLANs) and run MPIO on top of the teams.

In the other case, I would just team on both sides and run the iSCSI traffic over the teams (initiator and target), as sketched below. The second option seems to involve far fewer moving parts than the first (straight-up teaming instead of teaming + VLANs + MPIO), and I generally feel that simpler is better.
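For reference, the "team on both sides" option would boil down to roughly the following on each box (a minimal sketch; the team name, member NICs, and addressing are placeholders):

```powershell
# Minimal sketch of the "just team it" option (placeholder names and addresses).
# Create a switch-independent LBFO team from two physical NICs (Windows Server 2012 R2+).
New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Give the team interface a single IP for the iSCSI traffic.
New-NetIPAddress -InterfaceAlias "iSCSI-Team" -IPAddress 10.0.10.21 -PrefixLength 24
```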

So would that work while still offering good performance? Would the iSCSI traffic be able to use the bandwidth of the aggregated NICs properly?

Thanks for any comments

All replies (6)

Monday, January 9, 2017 10:18 PM ✅Answered

Never run MPIO on top of NIC teaming.  Use one or the other.

Traffic will conform to the lowest common denominator, otherwise known as the skinniest pipe in the communication link.  If an iSCSI target has only two NICs, the theoretical max you could ever hope to get out of it is the throughput of the slower of the two NICs times two (two 1 GE ports give you at most about 2 Gb/s in aggregate, no matter how many NICs the initiators have).

Have you performed benchmarks that have proven to you that storage is the bottleneck in your environment?  Just because you increase the available bandwidth to storage does not mean all applications can benefit from it.

Are you looking at this solution because you have several 1 GE links?  A better solution would be to go to two 10 GE connections instead of trying to work things out with multiple 1 GE links.

And, though iSCSI is now supported over teamed NICs, it is still recommended to use MPIO.
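For illustration, enabling MPIO on the initiator and letting the Microsoft DSM claim iSCSI devices is roughly the following (a sketch; adjust to your environment):

```powershell
# Rough sketch: install the MPIO feature on the initiator host.
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached disks (takes effect after a reboot).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optional: set the default load-balance policy, e.g. Round Robin.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```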

. : | : . : | : . tim


Tuesday, January 10, 2017 2:37 PM | 2 votes

Hello O.Ragain, 

I agree with Tim. I would highly recommend using MPIO over NIC teaming for iSCSI traffic. Storage traffic requires minimal latency, and teaming could add latency to your storage path.
This comparison would be helpful for you: 

LACP vs MPIO


Tuesday, January 10, 2017 5:06 PM

Hi, ultimately no: the LBFO throughput will always be the speed of one adapter for a given flow, as the load-balancing algorithm still uses MAC addresses, ports, etc. to calculate how to distribute traffic across the NICs in the team.

MPIO will always be better for iSCSI.

However, you could create virtual switches on the LBFO teams and then create vNICs for the iSCSI traffic, at least as many as the number of physical NICs in the server, so that the algorithm does what it is supposed to do (see the sketch below). In that configuration, since you do not know on which physical switch the traffic will arrive (I suppose you have two switches), you will need some sort of interconnection between them to make sure the traffic can actually reach the specified port of the target.
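As a rough sketch of that converged layout (the switch name, vNIC names, and VLAN IDs below are just examples):

```powershell
# Put a Hyper-V virtual switch on top of the LBFO team.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "iSCSI-Team" -AllowManagementOS $false

# Create one host (ManagementOS) vNIC for iSCSI per physical NIC in the team.
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName "ConvergedSwitch"

# Tag each vNIC onto its own iSCSI VLAN.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 20
```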

On the target side I would not use LBFO since it is not really needed.

Basically, this is a scenario for CNAs (converged network adapters).


Tuesday, January 10, 2017 5:50 PM

Thanks for the replies, helps a lot.

I was looking at Teaming because my storage and my hosts have completely random numbers of NICs (dealing with history here).

So pretty much, building a subnet range (or VLAN) for each Ethernet card carrying the iSCSI traffic and using MPIO is the best option performance-wise, something like the sketch below. Darn, that is gonna require a lot of reconfiguring :)
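As a sketch of what that looks like per path (all addresses and the IQN are placeholders):

```powershell
# One initiator NIC and one target portal per iSCSI subnet, each registered separately.
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.50 -InitiatorPortalAddress 10.0.10.21
New-IscsiTargetPortal -TargetPortalAddress 10.0.20.50 -InitiatorPortalAddress 10.0.20.21

# Connect the same target once per path and let MPIO aggregate the sessions.
Connect-IscsiTarget -NodeAddress "iqn.2017-01.local.storage:target1" -IsPersistent $true `
    -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.10.21 -TargetPortalAddress 10.0.10.50
Connect-IscsiTarget -NodeAddress "iqn.2017-01.local.storage:target1" -IsPersistent $true `
    -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.20.21 -TargetPortalAddress 10.0.20.50
```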

Oh well, on to it.

Again, thanks for the input


Wednesday, January 11, 2017 1:54 AM

I would work towards a common, 2-NIC configuration for MPIO across all servers, no matter how many NICs you have.  The only reason I might, and I really mean might, consider more than two would be if my benchmarking on a particular application said that it was needed.  Then, rather than trying to manage a one-off network configuration, I would seriously consider moving iSCSI to 10 GE instead of 1GE.
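Once a common two-path layout is in place, a quick sanity check of what the initiator and MPIO actually see might look like this (assuming the iSCSI and MPIO setup sketched earlier; expect two sessions per target, one per subnet/NIC):

```powershell
# List iSCSI sessions: one connected session per path is the goal.
Get-IscsiSession | Select-Object InitiatorPortalAddress, TargetNodeAddress, IsConnected

# Built-in MPIO CLI: show claimed disks and the number of paths each one has.
mpclaim -s -d
```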

. : | : . : | : . tim


Monday, January 16, 2017 7:53 AM | 3 votes

I would work towards a common, 2-NIC configuration for MPIO across all servers, no matter how many NICs you have.  The only reason I might, and I really mean might, consider more than two would be if my benchmarking on a particular application said that it was needed.  Then, rather than trying to manage a one-off network configuration, **I would seriously consider moving iSCSI to 10 GE instead of 1GE.**

+100500

That's probably the cheapest and most efficient update that could be done these days! The only remark I personally have is: go with RDMA-capable hardware if you can (Mellanox CX4xxx?). TCP vs. RoCEv2 is night and day!
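If RDMA-capable cards are on the table, a quick way to see what the current adapters support (generic cmdlets, nothing environment-specific):

```powershell
# Check which installed NICs expose RDMA and whether it is enabled.
Get-NetAdapterRdma | Format-Table Name, Enabled, InterfaceDescription

# Enable RDMA on a specific adapter if the hardware/driver supports it (name is a placeholder).
# Enable-NetAdapterRdma -Name "SLOT 2 Port 1"
```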

Cheers,

Anton Kolomyeytsev [MVP]

StarWind Software Chief Architect


Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.