Question
Friday, March 22, 2019 1:16 PM | 3 votes
Short version of what follows: we can't seem to successfully send UDP packets larger than 1472 bytes when running build 1809 of Windows, though it worked fine in previous versions.
We have an existing C# application (really a set of applications) that runs on a local wired network that periodically sends out UDP packets to communicate status to other applications on different computers on the network. Some of these packets are small, but some are fairly large. This application is written in C# using .NET 4.5.1 and uses the UdpClient class to broadcast and receive broadcasts. Prior to testing with Windows 10 build 1809 / Windows Server 2019 build 1809, everything worked fine - we sent and received the larger packets without a problem. However, starting with build 1809, it appears that we can no longer successfully send the large packets. Here is some of the testing that we did:
System 1: Windows 10 Build 1803 (MTU is 1500)
System 2: Windows 10 Build 1809 (MTU is 1500)
I wrote a test program that sends out a small (200 bytes or so) and large (8000 bytes or so) UDP packet using UdpClient, while also listening for those packets. Here is what happens when I send from each system:
- Send from System 1:
  - System 1 sees both packets
  - System 2 sees both packets
- Send from System 2:
  - System 1 sees only the small packet
  - System 2 sees both packets
This happens every single time - the large packets never arrive successfully. Further testing revealed that the magic number was 1472 bytes: payloads of 1472 bytes or less worked, and anything larger failed. This is why I suspect that something with fragmentation/MTU is not working correctly. We had not seen this problem before, so I fired up Wireshark to take a look at what might be going on. However, and this is where it gets weird, starting Wireshark on the build 1809 system suddenly makes it able to send the large packet, even after I exit Wireshark. Rebooting the system, though, reverts it to its original state of being unable to send successfully. I should note that I always see "Fragmented IP protocol" for these packets in Wireshark on the receiving end, regardless of whether they are arriving correctly or not.
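For reference, here is a minimal sketch of the kind of test program described above. The port number and the "listen" argument are placeholders of mine, not details from the original code:

using System;
using System.Net;
using System.Net.Sockets;

class UdpBroadcastTest
{
    const int Port = 11000;   // placeholder port, not from the original application

    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "listen")
        {
            // Receiver: print the size of every datagram that actually arrives.
            using (var listener = new UdpClient(Port))
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = listener.Receive(ref remote);
                    Console.WriteLine("Received " + data.Length + " bytes from " + remote);
                }
            }
        }
        else
        {
            // Sender: broadcast one small and one large datagram.
            using (var sender = new UdpClient())
            {
                sender.EnableBroadcast = true;
                var broadcast = new IPEndPoint(IPAddress.Broadcast, Port);

                byte[] small = new byte[200];    // always arrives
                byte[] large = new byte[8000];   // never arrives when sent from a 1809 machine

                sender.Send(small, small.Length, broadcast);
                sender.Send(large, large.Length, broadcast);
                Console.WriteLine("Sent 200-byte and 8000-byte datagrams");
            }
        }
    }
}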
I did some reading online and found that RDP over UDP had a major overhaul in Build 1809, but I did not see anything about UDP packets larger than the MTU having a problem, nor was I able to find anyone else reporting this particular issue.
We have made no changes to this part of our code in a long time - it worked on Windows 7, Server 2012 R2, Server 2016, and Windows 10 prior to build 1809. Is there something new in build 1809 that requires us to set a flag somewhere or configure something on the network adapter? I don't know everything Wireshark does when it starts up, but it seems to "fix" things somehow, so if anyone knows specifically what that might be, that could help as well.
As an additional note - I did make sure my network drivers were up to date, and one of the systems was a Windows 10 VM running 1803 that I upgraded to 1809. It worked OK as 1803 and immediately stopped working correctly after the update to 1809.
All replies (14)
Tuesday, June 18, 2019 8:06 PM ✅ Answered | 6 votes
I encountered this problem recently at my company when we updated a number of our systems to 1809 and 1903. I did a bunch of debugging and analysis and I think I know what the problem is and how to work around it.
The problem is that UDP packets larger than the MTU (more than 1472 bytes of payload) are corrupted and dropped by the network stack. If you analyze the difference between the data sent and the data received, you will find that the first IP fragment contains correct data. Subsequent fragments, however, have two bytes of data corrupted, and this generally repeats once for each subsequent set of 1472 bytes of data in the reassembled datagram. I did see a couple of other corruption patterns, depending on the data I was sending, but this was generally the most common one.
Moreover, the two corrupted bytes correlate directly with the data being sent, so the values are always the same for the same set of data. My theory was that the corruption originates in the UDP checksum calculation going wrong. Normally this checksum computation is done in the protocol stack; many NICs, however, support UDP/TCP checksum offloading.
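As an illustration of the kind of comparison described above (not the actual tooling used), a hypothetical helper that diffs the sent buffer against the received buffer and reports the offsets of the corrupted bytes might look like this:

using System;
using System.Collections.Generic;

static class PayloadDiff
{
    // Report every byte offset at which the received datagram differs from what was sent.
    // In the failures described above, the differences show up in pairs, repeating
    // roughly once per fragment of the reassembled datagram.
    public static IEnumerable<int> CorruptedOffsets(byte[] sent, byte[] received)
    {
        int length = Math.Min(sent.Length, received.Length);
        for (int i = 0; i < length; i++)
        {
            if (sent[i] != received[i])
                yield return i;
        }
    }
}

// Usage, with buffers captured on the sending and receiving side:
//   foreach (int offset in PayloadDiff.CorruptedOffsets(sent, received))
//       Console.WriteLine("Byte " + offset + ": sent 0x" + sent[offset].ToString("X2")
//                         + ", received 0x" + received[offset].ToString("X2"));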
I realized this optimization was the cause of the problem when I ran a packet sniffer and saw that the data was correct before it was sent out on the wire and was clearly received corrupted on the other end.
My conclusion is that this optimization (UDP Checksum Offload) was broken in the 1809 release and remains broken in 1903. To work around it for now, the optimization needs to be disabled (it seems to be enabled by default). This can be done in Device Manager: find your network adapter in the list, go to the Advanced tab, and disable the UDP checksum offload setting (typically listed as "UDP Checksum Offload (IPv4)"). We only deal with IPv4, but IPv6 might also be affected, so you might need to disable both.
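If you prefer to check the current setting from code rather than through Device Manager, something along these lines should work. It assumes the checksum-offload settings are exposed through the MSFT_NetAdapterChecksumOffloadSettingData class in the root\StandardCimv2 WMI namespace; the class, property names, and value meanings here are my assumption based on the NetAdapter WMI provider, so verify them on your own system:

using System;
using System.Management;   // reference System.Management.dll

class ChecksumOffloadReport
{
    static void Main()
    {
        // Assumption: the per-adapter checksum-offload settings shown on the
        // Advanced tab are also exposed through this WMI class. The values are
        // believed to mean 0 = Disabled, 1 = Tx enabled, 2 = Rx enabled,
        // 3 = Rx & Tx enabled.
        var scope = new ManagementScope(@"\\.\root\StandardCimv2");
        var query = new ObjectQuery(
            "SELECT * FROM MSFT_NetAdapterChecksumOffloadSettingData");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject setting in searcher.Get())
            {
                Console.WriteLine(setting["Name"]);
                Console.WriteLine("  UDP checksum offload (IPv4): " + setting["UdpIPv4Enabled"]);
                Console.WriteLine("  UDP checksum offload (IPv6): " + setting["UdpIPv6Enabled"]);
            }
        }
    }
}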
Hope this helps and hope the information will be passed along to the devs and officially fixed.
Monday, March 25, 2019 6:28 AM
Hi,
Thanks for your question.
I'm sorry, but I'm not an expert in coding.
I suggest you post your question in the MSDN forum, which is a developer community more closely related to your question.
https://social.msdn.microsoft.com/Forums/en-US/home
Best Regards,
Eric
Please remember to mark the replies as answers if they help and unmark them if they provide no help.
If you have feedback for TechNet Subscriber Support, contact [email protected].
Monday, March 25, 2019 12:18 PM
Thanks - I will try posting it over there as well.
Monday, March 25, 2019 5:37 PM
I posted this in the Visual Studio forum, and the first reply over there indicated that I should post in this forum. Was something in the network configuration changed with build 1809? I am running literally the exact same code on build 1803 and build 1809 of Windows and getting different results.
Tuesday, March 26, 2019 5:13 PM
I tried testing this using Packet Sender, and I see the same issues as I do using my own code.
Thursday, March 28, 2019 6:23 AM
Hi,
Thanks for your reply.
For more information, you can refer to IETF RFC 1122.
I'm sorry, but I'm not an expert in C# coding, so I can't give you a professional suggestion in this area.
Here is the address of the C# forum.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=csharpgeneral
Thanks for your understanding.
Best Regards,
Eric
Monday, April 22, 2019 6:02 PM | 2 votes
Hi Nola,
I can confirm that your findings match what we are experiencing as well. We develop a commercial software suite that sends UDP broadcasts among its components, and after upgrading to 1809, UDP packets larger than the MTU no longer work. After downgrading to 1803, everything works fine. We do not use C#, so I can confirm the issue is not in the C# libraries but somewhere in the core OS. Microsoft definitely broke something in Win10 1809. Hoping for a patch soon.
Friday, May 10, 2019 7:47 PM | 3 votes
I confirm the same findings. Our UDP data sharing failed after upgrading client computers to 1809. The solution was to revert to 1803.
At one point early in the investigation I was able to broadcast large (>4000 byte) packets via UDP from one 1809 PC to another 1809 PC, but not in the opposite direction. On one of those PCs I had issued "netsh winsock reset". Not knowing whether that helped or hindered, I issued "netsh winsock reset" on the other PC as well. From that point on I could only send packets of 1472 bytes or less via UDP between the 1809 PCs, i.e. "netsh winsock reset" made changes that certainly worsened the issue.
I am currently testing with three PCs: one Win10 1709 and two Win10 1809. The following were determined via a "test program" not written in C#:
- Broadcasting 1472 bytes or less via UDP works on both Win10 1709 and 1809 without error.
- Broadcasting more than 1472 bytes works from Win10 1709 to 1809 without error.
- Broadcasting more than 1472 bytes from 1809 to 1809 will work "in a fashion" if the data payload is all zeros.
- When sending 1480 bytes from 1809, the UDP checksum at the source (seen via Wireshark) is different from the checksum received at the destination (the checksum in the reassembled UDP datagram at the destination was manually verified to be correct).
- Moreover, the original 1480-byte "zeros" payload will have 0xFF 0xFF in bytes 1479 and 1480 when received at the destination, i.e. bytes that should have been 0x00. When sending all 0x01 bytes, the last two bytes will be 0xFB; when sending all 0x02 bytes, the last two bytes will be 0xF7, and so on. Sending payloads of double and triple 1480 bytes causes the 0xFF 0xFF to appear 1480 bytes after the previous instance.
Note that 1480 bytes (1472 + 8) corresponds to the maximum UDP payload that fits in a single frame (1472 bytes) plus the 8-byte UDP header, which is the 1500-byte MTU minus the 20-byte IP header.
The current thought is that UDP packets with more than 1472 bytes of payload are being dropped because the checksum does not verify. The exception (a payload of all 0x00) works because the unexpected 0xFF 0xFF bytes do not change the one's-complement checksum calculation.
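To make that last point concrete, here is a small sketch (mine, not the poster's) of the standard Internet one's-complement checksum from RFC 1071. It shows that overwriting a 0x00 0x00 word with 0xFF 0xFF leaves the checksum unchanged as long as the rest of the sum is non-zero, which is always the case for a real UDP packet because the pseudo-header and UDP header contribute non-zero words:

using System;

class ChecksumDemo
{
    // Internet checksum (RFC 1071): sum the data as big-endian 16-bit words,
    // fold the carries back in, then take the one's complement.
    static ushort InternetChecksum(byte[] data)
    {
        uint sum = 0;
        for (int i = 0; i < data.Length; i += 2)
        {
            ushort word = (ushort)(data[i] << 8);
            if (i + 1 < data.Length)
                word |= data[i + 1];
            sum += word;
        }
        while ((sum >> 16) != 0)               // end-around carry
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (ushort)~sum;
    }

    static void Main()
    {
        // Stand-in for "some non-zero header words plus 1480 bytes of zero payload".
        var original = new byte[1482];
        original[0] = 0x12;                    // arbitrary non-zero header-like word
        original[1] = 0x34;

        // Same buffer, but the last two zero bytes replaced by 0xFF 0xFF,
        // mimicking the corruption seen at the destination.
        var corrupted = (byte[])original.Clone();
        corrupted[1480] = 0xFF;
        corrupted[1481] = 0xFF;

        // Adding 0xFFFF is a no-op in one's-complement arithmetic, so both
        // buffers produce the same checksum and the all-zeros case still verifies.
        Console.WriteLine(InternetChecksum(original).ToString("X4"));   // EDCB
        Console.WriteLine(InternetChecksum(corrupted).ToString("X4"));  // EDCB
    }
}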
Thursday, May 23, 2019 1:11 PM | 2 votes
We just tested the newest Windows 10 version 1903 and it still does not resolve this issue.
Thursday, June 13, 2019 11:27 AM
That was on my to-do list, so I am sad to see it is still a problem in build 1903. I engaged with Microsoft support directly (around the time I first posted this), but they have not come up with much.
Monday, June 24, 2019 7:58 PM | 1 vote
I tested this on build 1809 of Windows 10, and it does appear to fix things there. However, I am also running build 1809 of Windows Server 2019, and I have disabled every offload that I could and it still won't send larger packets through. That being said, I think even for Windows Server, this is probably pointing in the right direction.
Wednesday, June 26, 2019 9:59 PM
I can't really speak to the situation on Windows Server 2019; I haven't played with it at all, and I don't think any of our systems or customers are running it. I wonder if some of the networking code is implemented a bit differently there, especially since it aims to maximize networking performance.
I will make a note that disabling checksum offload doesn't fix things for Windows Server 2019, in case it does come up. Thanks!
Thursday, January 16, 2020 12:34 AM
Just want to add that this issue is also present in Windows 10 1909.
Wednesday, July 15, 2020 7:53 AM
Does anyone know if this is fixed in version 2004?