Question
Saturday, May 3, 2014 4:54 PM
I have been playing around with NIC teaming on a Windows Server 2012 server (not R2). When transferring from two clients at the same time I do not get 2 Gb/s; instead it's just 1 Gb/s split between the two. How can I increase the throughput?
Here are my server offloading settings. All offloading is enabled on both physical NICs and the team NIC. Should I enable the other offloading settings?
Receive-Side Scaling State : enabled
Chimney Offload State : disabled
NetDMA State : disabled
Direct Cache Access (DCA) : disabled
Receive Window Auto-Tuning Level : disabled
Add-On Congestion Control Provider : none
ECN Capability : disabled
RFC 1323 Timestamps : disabled
Initial RTO : 3000
Receive Segment Coalescing State : disabled
EnableMultiChannel : True
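For reference, a minimal sketch of how these settings can be read back, assuming the values above came from the global TCP parameters and the SMB server configuration; the adapter names "NIC1", "NIC2" and "TeamNIC" are placeholders:

netsh interface tcp show global                        # RSS, chimney, RSC and the other values listed above
Get-NetAdapterRss -Name "NIC1","NIC2"                  # per-adapter RSS state
Get-SmbServerConfiguration | Select-Object EnableMultiChannel
Get-NetAdapterAdvancedProperty -Name "TeamNIC" | Where-Object DisplayName -like "*Offload*"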
Below is a test I did with a simple robocopy to test SMB.
With RSS enabled and offloading enabled on NICs, after reboot
Single client - 66.84 MB/s
Two clients simultaneously:
Client1 - 28.48 MB/s
Client2 - 17.03 MB/s
Here is an iperf test.
Single client - 86.1 MBytes/sec
Two clients simultaneously:
Client1 - 40.7 MBytes/sec
Client2 - 42.0 MBytes/sec
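For context, a rough sketch of how a test like this might be run; the server name, share and paths are placeholders, and iperf 2.x syntax is assumed:

# On the server:
iperf -s
# On each client, started at the same time:
iperf -c fileserver01 -t 60 -f M
# SMB test with robocopy from each client:
robocopy C:\TestData \\fileserver01\TestShare /E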
All replies (12)
Sunday, May 4, 2014 3:07 PM
Do you need RSS-capable NICs for this to work?
Monday, May 5, 2014 9:58 AM
Hi,
What are the teaming mode and load balancing mode you have configured? Did you have multiple simultaneous conversations with the server or just a single conversation? It seems that it doesn't increase the bandwidth for a single conversation.
In addition, Windows Server 2008 R2 depended on the NIC manufacturers to develop the teaming software that supported NIC teaming. In Windows Server 2012 the NIC teaming capability is native to the operating system.
Best regards,
Susie
Monday, May 5, 2014 2:31 PM
I was sending from two clients at the same time, two different physical computers, both with Gb NICs. I tested each individually to make sure I could get close to 1 Gb/s throughput, then both together. Instead of getting 2 Gb/s with both sending at the same time, they just split 1 Gb/s between them.
Teaming mode: Switch Independent
Load balancing mode: address hash
Standby adapter: none (all active)
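As a side note, a configuration like the one above can be read back with the inbox NetLbfo cmdlets; the team name "Team1" is a placeholder:

Get-NetLbfoTeam -Name "Team1"            # TeamingMode and LoadBalancingAlgorithm
Get-NetLbfoTeamMember -Team "Team1"      # the physical NICs and their state
Get-NetLbfoTeamNic -Team "Team1"         # the team interface itself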
Ultimately I want to use this for DPM, but I'm testing it first to see if it is even worth it.
Monday, May 5, 2014 3:06 PM
Your switches need to be LACP-compatible and the ports configured properly. Without this, your server will only talk through one of the cards, because it can't split traffic across the team members, only load-balance it between them.
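A sketch of what a switch-dependent (LACP) team could look like, assuming the switch ports are configured in a matching port channel; the team and NIC names are placeholders:

# Requires a matching LACP port channel on the switch side.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

Note that even in LACP mode a single TCP stream is still hashed to one team member, so one client copying to the server still tops out at roughly 1 Gb/s.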
Monday, May 5, 2014 3:24 PM
But shouldn't it load-balance traffic between the two NICs? So when two clients send data to the server, one client would send over NIC1 on the server and the other client would send over NIC2? I thought Windows NIC teaming removed the switch configuration and simplified things?
What about data going out of the server? Will that balance over the two NICs, so maybe there is 2 Gb/s out?
Friday, May 30, 2014 2:36 AM | 1 vote
Hi,
Based on my knowledge of NIC teaming, since you have configured the load balancing mode to Address Hash without a standby adapter, the traffic should be load-balanced between the NICs in the team. The hash algorithm creates a hash based on components of the packet and then assigns packets that have that hash value to one of the available network adapters, which keeps all packets from the same TCP stream on the same network adapter.
Best regards,
Susie
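To illustrate the address-hash behaviour described above, here is a rough sketch (not the actual Windows hash, just an illustrative stand-in): the 4-tuple of a TCP stream is hashed once, so every packet of that stream lands on the same team member, while a second client with a different tuple may land on the other one. The IPs and ports below are made up.

# Illustrative only - not the real teaming hash.
function Get-TeamMemberIndex {
    param(
        [string]$SrcIP,
        [string]$DstIP,
        [int]$SrcPort,
        [int]$DstPort,
        [int]$MemberCount = 2
    )
    # Any stable hash over the 4-tuple works for the illustration.
    $bytes = [System.Text.Encoding]::UTF8.GetBytes("$SrcIP|$DstIP|$SrcPort|$DstPort")
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $hash  = [System.BitConverter]::ToUInt32($md5.ComputeHash($bytes), 0)
    return $hash % $MemberCount
}

# The same stream always maps to the same NIC; a second client (different tuple) may map to the other one.
Get-TeamMemberIndex -SrcIP "10.0.0.21" -DstIP "10.0.0.10" -SrcPort 49200 -DstPort 445
Get-TeamMemberIndex -SrcIP "10.0.0.22" -DstIP "10.0.0.10" -SrcPort 49311 -DstPort 445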
Monday, May 4, 2015 11:31 AM | 1 vote
I had exactly the same results in my lab and exactly the same question. This question (why doesn't NIC teaming increase bandwidth?) stems, for example, from the following assertion by Microsoft:
https://technet.microsoft.com/en-us/library/dn550728.aspx
"
NIC Teaming (LBFO) recommendations
We recommend that you team physical network adapters in the management operating system. This provides bandwidth aggregation and network traffic failover if a network hardware failure or outage occurs.
The NIC Teaming feature, also known as load balancing and failover (LBFO), provides two basic sets of algorithms for teaming.
- Switch-dependent modes. Requires the switch to participate in the teaming process. Typically requires all the network adapters in the team to be connected to the same switch.
- Switch-independent modes. Does not require the switch to participate in the teaming process. Although not required, team network adapters can be connected to different switches.
Both modes provide for bandwidth aggregation and traffic failover if a network adapter failure or network disconnection occurs. However, in most cases only switch-independent teaming provides traffic failover for a switch failure."
This assertion is NOT correct because it says nothing about teaming limitations for the inbound traffic:
http://blog.pluralsight.com/nic-teaming-windows-server-2012
"
3. Adding an NIC increases available bandwidth
If one NIC is good does that mean two is better? Well, in the case of NIC Teaming, network traffic is balanced across all active NICs, providing the ability to double your available bandwidth or more depending on the number of NICs in your server. There are some catches we need to be aware of though. There are two modes you can configure: Switch Independent and Switch Dependent.
With Switch Independent, the teaming configuration will work with any network switch. This means you can use non-intelligent switches in your network and still use NIC Teaming because all of the intelligence of how outbound traffic is distributed is managed by Windows Server 2012. The downside is that all inbound traffic is sent to only one NIC and is not distributed between all active NICs. This works great for web or FTP servers with heavy outbound traffic."
That's the reason why there's no bandwidth aggregation when "transferring from two clients at the same time" (there should be bandwidth aggregation if you were transferring from the SERVER to two clients at the same time).
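For completeness, a hedged sketch of moving an existing team away from switch-independent address hash; the team name is a placeholder, matching LACP configuration is still required on the switch, and inbound aggregation additionally depends on how the switch distributes traffic across the port channel:

Set-NetLbfoTeam -Name "Team1" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts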
The question of why MS publishes incorrect information on TechNet still remains.
Regards,
Michael
Tuesday, February 16, 2016 2:21 PM
You really don't even need to look at TechNet for this info. It's all in the help file in PowerShell.
--Dynamic: Uses the source and destination TCP ports and the IP addresses to create a hash for outbound traffic. Moves outbound streams from team member to team member as needed to balance team member utilization. When you specify this algorithm with the TeamingMode parameter and the SwitchIndependent value, inbound traffic is routed to a particular team member.
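The quoted text can be pulled up directly; note that the Dynamic algorithm is only available from Windows Server 2012 R2 onward, and "Team1" below is a placeholder:

Get-Help New-NetLbfoTeam -Parameter LoadBalancingAlgorithm
# On 2012 R2 or later:
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic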
Wednesday, February 17, 2016 9:15 AM
"Inbound traffic is routed to a particular team member." - Yes, it is. But I see the same results for the OUTbound traffic...
Thursday, February 18, 2016 12:29 PM
Hello
We spent/lost quite some time here to find out why our four 10 Gbit/s adapters did not allow us to reach an amazing theoretical bandwidth of 40 Gbit/s.
Even using LACP settings and high-end Nexus switches, we could never reach a bandwidth higher than 10 Gbit/s.
Our datacom team found the explanation after sniffing network packets:
From the "source" server network data was always sent to only ONE NIC of the "40 GBits" team.
Always.
It was a random one, each time.
But the source server is somehow aware of the MAC address destination NIC.
Florent
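One way to confirm this kind of behaviour without a full packet capture is to watch the per-member byte counters while a transfer runs; the adapter names are placeholders:

Get-NetAdapterStatistics -Name "NIC1","NIC2" | Select-Object Name, SentBytes, ReceivedBytes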
Thursday, February 18, 2016 12:49 PM
"From the "source" server network data was always sent to only ONE NIC of the "40 GBits" team.
Always." - what about downloading FROM the server with 4x10GB (which should "Both modes provide for bandwidth aggregation ")?
Thursday, February 18, 2016 3:20 PM
Hello
Not sure of the result of that download test, and unfortunately we do not have this setup available for testing anymore.
Florent