Question
Sunday, September 4, 2016 1:16 PM
Hi,
I have a Windows Server 2012 R2 Hyper-V cluster of three nodes; each node is configured with the NICs below:
1. Management (remote access only)
2. VMs-Connectivity
3. Heartbeat
4. Live Migration
Each VM cannot ping or reach the Hyper-V host on which it is placed, while it can ping and reach that host once you live migrate the VM to another node. VM connectivity to the domain and network is fine. The issue is not tied to a particular Hyper-V host or VM; for instance, if VM01 is hosted on HV01 it cannot ping or reach HV01, but if you move it to HV02 then it can ping HV01. How can we troubleshoot this? Running cluster validation is not an option, as we have 74 VMs running on this cluster and all are production VMs.
Note: We have VLAN tagging configured on the team interface for the management NIC and in each VM's vNIC configuration.
Also, Windows Firewall and network discovery on the Hyper-V hosts are turned off.
All replies (24)
Sunday, October 9, 2016 7:36 AM ✅Answered
After checking SCVMM event errors ID 2915 and 2916 (a WinRM service issue), the solution was divided into two steps:
- Rebuild the WMI repository on the host from the command line, as sketched below; more info according to this solution: CLICK HERE.
- Check the 'Use by management' option for the physical NICs associated with the virtual management NIC team (see the attached screenshot).
After that, the VMs can ping the host and vice versa.
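For reference, a minimal sketch of the WMI repository check/rebuild from the first step, using the standard winmgmt switches (run in an elevated prompt on each affected host; restarting WinRM afterwards is an assumption, so SCVMM can reconnect):

# Verify the consistency of the WMI repository first
winmgmt /verifyrepository

# If the repository is reported inconsistent, attempt an in-place repair
winmgmt /salvagerepository

# Last resort: reset the repository to its initial state
winmgmt /resetrepository

# Restart WinRM so SCVMM can re-establish its connection to the host
Restart-Service WinRM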
Sunday, September 4, 2016 4:51 PM
Dears, your support is much appreciated.
Sunday, September 4, 2016 5:28 PM
Hi,
Refer to the link below to fix the issue:
Sounds like you're using a private virtual switch as your hyper-v switch for that guest.
If this isn't the case, try running a tracert and adding the results here so we can have a better idea of what's going on.
For this particular instance however the issue was caused by the ICMPv4-In rule being disabled. In other words the guest OS firewall was the cause.
Hope this will help you; please mark as answer.
Thanks
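To verify the ICMPv4-In rule mentioned above on a guest, a hedged sketch (the rule display name is the standard built-in one on Windows Server 2012 R2; 10.9.8.101 is the host management IP posted later in this thread):

# Trace the path from the guest to the host's management address
tracert 10.9.8.101

# Check whether the built-in inbound ICMPv4 echo rule is enabled
Get-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)" | Format-Table DisplayName, Enabled, Profile

# Enable it if it is disabled
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"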
Sunday, September 4, 2016 5:31 PM
>>Sounds like you're using a private virtual switch as your hyper-v switch for that guest.
>>For this particular instance however the issue was caused by the ICMPv4-In rule being disabled.
Thanks for that. I am using an external virtual switch, and Windows Firewall is already disabled on the guest VMs.
Monday, September 5, 2016 5:06 AM
Hi AhmadJY,
Would you please post the result of ipconfig /all on both the VM and the host? It could help in analyzing the issue.
>>Note: We have VLAN tagging configured on the team interface for the management NIC and in each VM's vNIC configuration.
Did you configure a VLAN on the physical NIC?
Best Regards,
Leo
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected].
Monday, September 5, 2016 6:34 AM
>>Would you please post the result of ipconfig /all on both the VM and the host?
>>Did you configure a VLAN on the physical NIC?
Hi,
VM:
Windows IP Configuration
Host Name . . . . . . . . . . . . : xxx
Primary Dns Suffix . . . . . . . : domain.com
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : domain.com
Ethernet adapter Ethernet:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
Physical Address. . . . . . . . . : 00-15-5D-08-65-7C
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 10.9.8.61(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.9.8.1
DNS Servers . . . . . . . . . . . : 89.189.226.4
89.189.226.5
NetBIOS over Tcpip. . . . . . . . : Enabled
Tunnel adapter isatap.{C092C396-C00B-440C-9EC5-A978EC2162C8}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Host:
Windows IP Configuration
Host Name . . . . . . . . . . . . : yyy
Primary Dns Suffix . . . . . . . : domain.com
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : domain.com
local
Ethernet adapter Management - VLAN 235:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Network Adapter Multiplexor Driver #3
Physical Address. . . . . . . . . : E0-DB-55-01-F5-40
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::c4ee:187a:dc78:f058%28(Preferred)
IPv4 Address. . . . . . . . . . . : 10.9.8.101(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.9.8.1
DHCPv6 IAID . . . . . . . . . . . : 484498261
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1E-02-C2-74-E0-DB-55-01-F5-46
DNS Servers . . . . . . . . . . . : 89.189.226.4
89.189.226.5
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter Cluster:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Network Adapter Multiplexor Driver #4
Physical Address. . . . . . . . . : E0-DB-55-01-F5-46
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::a1c7:9614:4f00:dffd%29(Preferred)
IPv4 Address. . . . . . . . . . . : 172.16.31.101(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 501275477
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1E-02-C2-74-E0-DB-55-01-F5-46
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter LiveMigration:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Network Adapter Multiplexor Driver #2
Physical Address. . . . . . . . . : E0-DB-55-01-F5-3C
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::490:34d2:26f8:9e5e%27(Preferred)
IPv4 Address. . . . . . . . . . . : 192.168.1.101(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 467721045
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1E-02-C2-74-E0-DB-55-01-F5-46
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Enabled
Ethernet adapter Ethernet 2:
Connection-specific DNS Suffix . : local
Description . . . . . . . . . . . : iDRAC Virtual NIC USB Device
Physical Address. . . . . . . . . : 5C-F9-DD-D9-17-B9
DHCP Enabled. . . . . . . . . . . : Yes
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::e0bb:4ef8:3630:640f%32(Preferred)
IPv4 Address. . . . . . . . . . . : 169.254.0.2(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Lease Obtained. . . . . . . . . . : Sunday, September 4, 2016 12:05:51 PM
Lease Expires . . . . . . . . . . : Wednesday, September 14, 2016 12:05:51 PM
Default Gateway . . . . . . . . . :
DHCP Server . . . . . . . . . . . : 169.254.0.1
DHCPv6 IAID . . . . . . . . . . . : 1297938909
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1E-02-C2-74-E0-DB-55-01-F5-46
DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
fec0:0:0:ffff::2%1
fec0:0:0:ffff::3%1
NetBIOS over Tcpip. . . . . . . . : Enabled
Tunnel adapter Local Area Connection* 12:
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter
Physical Address. . . . . . . . . : 02-E1-3E-53-BF-77
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Link-local IPv6 Address . . . . . : fe80::e4ef:2b29:af41:f54a%25(Preferred)
IPv4 Address. . . . . . . . . . . : 169.254.2.82(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.0.0
Default Gateway . . . . . . . . . :
DHCPv6 IAID . . . . . . . . . . . : 419600908
DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1E-02-C2-74-E0-DB-55-01-F5-46
NetBIOS over Tcpip. . . . . . . . : Enabled
Tunnel adapter isatap.{7C0E97EF-0680-4ED2-B17C-F4AE5E398EA7}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Tunnel adapter isatap.{00455C67-A6F7-416B-B29F-6EBB7536B519}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Tunnel adapter isatap.{7D9F97C6-7310-4F44-A03E-6592D1F2C287}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #3
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Tunnel adapter isatap.{DE07C04E-B56A-41F1-BD70-BE7777471FAF}:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #4
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
Tunnel adapter isatap.local:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . : local
Description . . . . . . . . . . . : Microsoft ISATAP Adapter #5
Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
The VLAN tag is configured on the logical team interface of the two management NICs (VLAN 235), but it is not configured on the team interface on which the VMs' virtual switch is created; the VLAN tag is configured inside each VM's vNIC settings instead.
Also, I noticed that I cannot ping from the host to a VM either while the VM is on that same host; if it is moved to another host, I can ping it from the original host.
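For checking a setup like this, a hedged sketch of how to inspect both places the VLAN can be set (VM01 is a placeholder VM name; 235 is the management VLAN from this thread, and the VM VLAN may differ):

# VLAN assigned on the LBFO team interfaces (host side)
Get-NetLbfoTeamNic | Format-Table Name, Team, VlanID -AutoSize

# VLAN assigned on a VM's vNIC (the per-VM setting used here)
Get-VMNetworkAdapterVlan -VMName VM01

# Setting an access VLAN on a vNIC would look like this
Set-VMNetworkAdapterVlan -VMName VM01 -Access -VlanId 235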
Monday, September 5, 2016 12:32 PM
Did you try to set a VLAN tag on the physical host interface?
Aymanq
Monday, September 5, 2016 12:49 PM
You can run the validation wizard on a running cluster with absolutely no problem to running VMs. The only test that would impact the running VMs is the storage test, so you can safely run the validation wizard and select not to run the storage test. And, if you have a disk witness, you can even run the storage tests by selecting that they run only on the disk witness. Then the storage on which the VMs are stored will not be impacted. Since you are experiencing issues with your network, you can readily run the validation wizard and run all tests but storage to see if the wizard points out something.
I do see from your ipconfig /all that you have one network on the host that is not configured properly for IP usage. It is showing a 169.254.0.0 address, which means that you have not configured it. For clustering, it is best to remove/disable any network that you are not configuring.
Also, it looks like you might be using VLAN 235 on the cluster management network. Have you defined a VLAN on the VM network? VMs require that the VLAN be configured on the NIC properties within each VM.
. : | : . : | : . tim
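A hedged sketch of the two actions suggested above; Test-Cluster's -Ignore parameter skips the named test category, and "Ethernet 2" is the APIPA adapter from the ipconfig output:

# Run cluster validation but skip the storage tests so running VMs are not impacted
Test-Cluster -Ignore "Storage"

# Disable the unconfigured (169.254.x.x) adapter so the cluster does not try to use it
Disable-NetAdapter -Name "Ethernet 2" -Confirm:$false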
Monday, September 5, 2016 1:08 PM
>>Did you try to set a VLAN tag on the physical host interface?
The management team interface has a VLAN tag. The team on which the external virtual switch is configured does not have a VLAN tag; VLAN tagging is configured in each VM's vNIC configuration.
Monday, September 5, 2016 1:08 PM
>>You can safely run the validation wizard and select not to run the storage test.
Thanks for that; I will run cluster validation without the storage test and check the results.
Monday, September 5, 2016 1:13 PM
I have an update: I enabled network discovery in the "Guest or Public" and "Domain" network profiles from "Advanced sharing settings" on each Hyper-V host, and after that I could ping from a VM to its host and vice versa. The weird thing is that even though I turn network discovery on and then click "Save changes", the setting reverts back to "Turn off network discovery"; however, I can now ping from a VM to its host and vice versa.
I will try the ping again tomorrow and confirm the issue is resolved; otherwise I will run cluster validation (without the storage test) and check the results.
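As an aside, the network discovery toggle reverting is often because its dependent services (Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host) are not running; the firewall rule group can also be enabled directly. A hedged sketch:

# Enable the Network Discovery firewall rule group without the GUI
Get-NetFirewallRule -DisplayGroup "Network Discovery" | Enable-NetFirewallRule

# Check the services the toggle depends on; if stopped, the GUI setting tends to revert
Get-Service FDResPub, SSDPSRV, upnphost | Format-Table Name, Status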
Tuesday, September 6, 2016 8:47 AM
Until a few minutes ago it was working fine; suddenly the issue came back. Cluster validation did not show a relevant error; it only shows errors related to Ethernet 2 (you can see its details above in the ipconfig /all results).
Tuesday, September 6, 2016 10:33 AM
>>For clustering, it is best to remove/disable any network that you are not configuring.
I disabled the Ethernet 2 adapter, but it is still the same issue.
Tuesday, September 6, 2016 10:35 AM
One more thing that is weird: I can ping some VMs from the host, and the host from those VMs, but another set of VMs on the same host cannot ping the host, nor can the host ping them!
Tuesday, September 6, 2016 6:33 PM
I think you should look into the firmware of the servers and the NIC drivers; maybe try to update them.
Aymanq
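Before updating, it may help to record the current versions; a hedged sketch using standard cmdlets:

# NIC driver versions and dates on each host
Get-NetAdapter | Format-Table Name, InterfaceDescription, DriverVersion, DriverDate -AutoSize

# Server BIOS/firmware version via WMI
Get-WmiObject Win32_BIOS | Select-Object Manufacturer, SMBIOSBIOSVersion, ReleaseDate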
Tuesday, September 6, 2016 8:18 PM
>>I think you should look into the firmware of the servers and the NIC drivers.
Yeah, you are probably right. I will have to discuss this tomorrow with the customer.
Thursday, September 8, 2016 6:34 AM
Hi AhmadJY,
You are welcome to share the information if you get any updates.
Best Regards,
Leo
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected].
Thursday, September 8, 2016 6:38 AM
>>You are welcome to share the information if you get any updates.
Yes, I checked today with the customer; he will check whether he can update the firmware of the Dell blade center as well as the network drivers. The Hyper-V server model is Dell PowerEdge M260.
But he asked me whether the issue could be related to the Microsoft teaming that is configured; is there any chance the team could cause this issue?
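It's hard to say without seeing the team settings, but a hedged sketch of how to inspect the LBFO configuration, whose teaming mode and load-balancing algorithm are worth comparing against Microsoft's NIC Teaming guidance for Hyper-V (e.g. Dynamic load balancing on 2012 R2):

# Teaming mode, load-balancing algorithm and member NICs
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members

# Team interfaces and their VLANs
Get-NetLbfoTeamNic | Format-Table Name, Team, VlanID, Primary -AutoSize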
Thursday, September 8, 2016 10:10 AM
New update:
Suddenly the host started to ping the VM and vice versa; no change was made!
I am thinking of a network driver issue or a Windows teaming issue. What do you think, experts?
Friday, September 9, 2016 6:29 AM
Hi Ahmad,
It's difficult to tell the reason; both are possible causes.
Anyway, glad to hear the issue was solved. It may be helpful for other people who meet the same issue.
Best Regards,
Leo
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected].
Friday, September 9, 2016 6:34 AM
>>Anyway, glad to hear the issue was solved.
Hi Leo,
In fact, after an hour or two the issue came back...
The customer will check the servers' firmware and network drivers and update them if possible. After that we will check the new status.
Monday, September 12, 2016 5:21 AM
Hi Ahmad,
How is the situation now?
What was the result of updating the firmware?
Best Regards,
Leo
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected].
Tuesday, September 13, 2016 11:31 AM
>>How is the situation now? What was the result of updating the firmware?
Hi Leo,
Thanks for following up.
The customer informed me that he will check and try to update to a new firmware version for the blade center, as well as update the NIC drivers for the servers, within this week.
By next week or the week after, I will let you know the results.
Tuesday, September 20, 2016 9:37 AM
Hi AhmadJY,
Did you get any updates on the issue?
Best Regards,
Leo
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected].