

Windows cluster file share witness fails to configure - Windows Server 2019

Question

Thursday, April 2, 2020 4:42 PM

I encounter this error when I try to configure the file share witness on a 2-node Windows cluster running on AWS EC2 instances. I made sure the cluster and Windows node computer objects have sufficient permissions on the share folder, but I still get this error and have no clue why it occurs.

An error was encountered while modifying the quorum settings.
Your cluster quorum settings have not been changed.

There was an error configuring the file share witness '\\x.x.x.x\share folder'.

Unable to save property changes for 'File Share Witness'.

Unknown error (0xc000006b)
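
For reference, the same change can be attempted from PowerShell, which sometimes returns a more descriptive error than the Failover Cluster Manager wizard. A minimal sketch, where \\fileserver\ClusterWitness is a placeholder for the actual UNC path of the share:

# Requires the FailoverClusters module (installed with the Failover Clustering feature)
Import-Module FailoverClusters

# Show the current quorum configuration before changing anything
Get-ClusterQuorum

# Point the cluster at the file share witness (placeholder UNC path)
Set-ClusterQuorum -FileShareWitness "\\fileserver\ClusterWitness"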

All replies (5)

Friday, April 10, 2020 4:48 PM ✅Answered

Thanks Anne.

After a lot of troubleshooting and digging through many logs, I finally rejoined both nodes to the domain. That resolved the issue.
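
For anyone who lands here with the same symptoms, here is a minimal sketch of what the fix looks like from PowerShell on each node, assuming a placeholder domain contoso.com and a domain admin credential. Repairing the secure channel is worth trying first, since it can avoid the full unjoin/rejoin:

# Check whether this node's machine-account trust with the domain is healthy
Test-ComputerSecureChannel -Verbose

# Attempt an in-place repair of the secure channel (may avoid a full rejoin)
Test-ComputerSecureChannel -Repair -Credential (Get-Credential contoso\admin)

# If that is not enough, unjoin the domain (the node reboots)...
Remove-Computer -UnjoinDomainCredential (Get-Credential contoso\admin) -WorkgroupName "WORKGROUP" -Force -Restart

# ...and after the reboot, rejoin it (another reboot follows)
Add-Computer -DomainName "contoso.com" -Credential (Get-Credential contoso\admin) -Restart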


Thursday, April 2, 2020 5:22 PM

Cluster logs

000013d0.00001900::2020/04/02-17:08:24.784 INFO  [RES] Network Name <Cluster Name>: Identity: Initializing Name: AWGPRWMSSQLCLU2, NetbiosName: AWGPRWMSSQLCLU2, Type: Singleton
000013d0.00001900::2020/04/02-17:08:24.784 INFO  [RES] Network Name <Cluster Name>: Identity: Initializing core resource id
000013d0.00001900::2020/04/02-17:08:24.784 INFO  [RES] Network Name <Cluster Name>: Identity: Obtaining new token
000013d0.00001900::2020/04/02-17:08:24.789 INFO  [RES] Network Name: [NNLIB] Priming local KDC cache to \DC for domain xxxx
000013d0.00001900::2020/04/02-17:08:24.790 INFO  [RES] Network Name: [NNLIB] PopulateKerbKDCLookupCache - DC flags 0
000013d0.00001900::2020/04/02-17:08:24.790 INFO  [RES] Network Name: [NNLIB] LsaCallAuthenticationPackage success with a request of size 112, result size 0 (status: 0, subStatus: 0)
000013d0.00001900::2020/04/02-17:08:24.790 INFO  [RES] Network Name: [NNLIB] Priming local KDC cache to \DC for domain label apps
000013d0.00001900::2020/04/02-17:08:24.790 INFO  [RES] Network Name: [NNLIB] LsaCallAuthenticationPackage success with a request of size 82, result size 0 (status: 0, subStatus: 0)
000013d0.00001900::2020/04/02-17:08:25.053 WARN  [RES] Network Name: [NNLIB] LogonUserExEx fails for user AWGPRWMSSQLCLU2$: 1787 (useSecondaryPassword: 0)
000013d0.00001900::2020/04/02-17:08:25.053 WARN  [RES] Network Name: [NNLIB] LogonUserCall fails for user AWGPRWMSSQLCLU2$: (useSecondaryPassword: 1), password length is 0
000013d0.00001900::2020/04/02-17:08:25.053 INFO  [RES] Network Name: [NNLIB] Logon failed for user AWGPRWMSSQLCLU2$ (Error 3221225579), DC \xxx, domain 
000013d0.00001900::2020/04/02-17:08:25.053 INFO  [RES] Network Name <Cluster Name>: Identity: Obtaining Windows Token for Name: AWGPRWMSSQLCLU2, SamName: AWGPRWMSSQLCLU2$, Type: Singleton, Result: -1073741717, LastDC: \DC
000013d0.00001900::2020/04/02-17:08:25.053 INFO  [RES] Network Name: Agent: OnInitializeReply, Failure on (f4c237fe-4e1b-4549-be21-a3611a6ffe0d,Identity): -1073741717
000013d0.00001900::2020/04/02-17:08:25.053 INFO  [RES] Network Name <Cluster Name>: SyncReplyHandler Configuration, result: -1073741717
000013d0.00000d4c::2020/04/02-17:08:25.054 ERR   [RES] Network Name <Cluster Name>: Initializing Identity module failed with error -1073741717
000013d0.00000d4c::2020/04/02-17:08:25.054 WARN  [RHS] Error 3221225579 from ResourceControl 16777581 for resource Cluster Name.
00001a24.000019b0::2020/04/02-17:08:25.054 ERR   [RES] File Share Witness <File Share Witness>: Failed to get virtual server token from core NetName resource, error 3221225579.
00001a24.000019b0::2020/04/02-17:08:25.054 ERR   [RES] File Share Witness <File Share Witness>: Failed to retrieve the virtual server token from the core netname resource with 3221225579.
00001a24.000019b0::2020/04/02-17:08:25.054 ERR   [RES] File Share Witness <File Share Witness>: Validation of input property buffer failed, error 3221225579.
00001a24.000019b0::2020/04/02-17:08:25.054 WARN  [RHS] Error 3221225579 from ResourceControl 20971654 for resource File Share Witness.
0000217c.00001860::2020/04/02-17:08:25.394 INFO  [RCM] rcm::RcmApi::DeleteResource: (File Share Witness, force=false)
00001a24.00000b38::2020/04/02-17:08:25.394 INFO  [RES] File Share Witness <File Share Witness>: Resource is being deleted. Cleaning up ...
00001a24.00000b38::2020/04/02-17:08:25.394 WARN  [RHS] Error 87 from ResourceControl 22020102 for resource File Share Witness.
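
The telling lines are the LogonUserExEx failures for the cluster name account AWGPRWMSSQLCLU2$: error 1787 is ERROR_NO_TRUST_SAM_ACCOUNT, and 3221225579 is the decimal form of the 0xc000006b shown by the wizard, so the failure looks like a broken machine-account trust rather than missing share permissions. A minimal sketch of checking that from each node, with contoso.com standing in for the real domain:

# Query the secure channel this node has with the domain (placeholder domain name)
nltest /sc_query:contoso.com

# Re-verify the secure channel against a DC and report the status
nltest /sc_verify:contoso.com

# Equivalent check from PowerShell; returns True when the trust is healthy
Test-ComputerSecureChannel -Verbose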


Sunday, April 5, 2020 5:25 PM

Hey, 

You should grant read/write permissions on the file share witness to each cluster node's computer account and to the failover cluster computer object.

This might help: http://www.dbaglobe.com/2014/07/fix-issue-when-configuring-file-share.html
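
If it helps, a minimal sketch of the share-level grant, run on the file server hosting the witness share; the share name Witness, domain CONTOSO, and node names NODE1/NODE2 are placeholders (in this thread the cluster name object appears to be AWGPRWMSSQLCLU2$), and the trailing $ is what targets the computer accounts:

# Grant read/write (Change) at the share level to both node accounts and the cluster name object
Grant-SmbShareAccess -Name "Witness" -AccountName "CONTOSO\NODE1$","CONTOSO\NODE2$","CONTOSO\AWGPRWMSSQLCLU2$" -AccessRight Change -Force

# Confirm the resulting share permissions
Get-SmbShareAccess -Name "Witness"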

As an alternative, you can use StarWind VSAN Free to create a shared iSCSI drive and add it as a disk witness (quorum disk) to your cluster. Check the following guide:

https://www.starwindsoftware.com/resource-library/starwind-virtual-san-for-hyper-v-2-node-hyperconverged-scenario-with-windows-server-2016/
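
Once the shared iSCSI disk from that guide is presented to both nodes, adding it as a disk witness is a short step; a minimal sketch, where "Cluster Disk 1" is a placeholder for whatever name the new cluster disk resource receives:

# Bring the newly presented shared disk under cluster control
Get-ClusterAvailableDisk | Add-ClusterDisk

# Use it as the disk witness for quorum (substitute the actual cluster disk name)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"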

Cheers,

Alex Bykvoskyi

StarWind Software


Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.


Monday, April 6, 2020 3:52 AM | 1 vote

Hi Vinoth B,

Please check that you have added both nodes' computer accounts and the CNO to the file share ACL and granted them "Full Control".
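
A minimal sketch of the NTFS side of that, run on the file server against a placeholder folder D:\Witness and placeholder domain/computer names; again, the trailing $ is needed so the grants land on the computer accounts:

# Grant Full Control on the folder, subfolders and files to both node accounts and the CNO
icacls "D:\Witness" /grant "CONTOSO\NODE1$:(OI)(CI)F" "CONTOSO\NODE2$:(OI)(CI)F" "CONTOSO\CLUSTERCNO$:(OI)(CI)F"

# Review the resulting ACL
icacls "D:\Witness"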

Thanks for your time!

Best Regards,

Anne

Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact [email protected].

