Question
Monday, August 22, 2016 4:09 PM
I have a library that stores files to a UNC location on the destination server. This is used in a WinForms app where most users don't have access to that location, so I use impersonation to impersonate an admin account set up specifically for this purpose.
Part of the algorithm is to write the files to a unique filename on that server. So for instance, a user has MyPhoto.jpg. There may already be MyPhoto.jpg, MyPhoto1.jpg, and MyPhoto2.jpg, which I don't want to overwrite. So I try to find the next photo in the sequence (MyPhoto3.jpg). Here's pseudocode:
using (CreateImpersonator())
{
    // Runs as the admin account while the impersonation context is held.
    var files = Directory.GetFiles(unc, "MyPhoto*.jpg");
    var newFile = GetNextFilenameInSequence(files);
    return newFile;
}
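For completeness, here is a rough sketch of what GetNextFilenameInSequence might look like. The helper name comes from the pseudocode above; the suffix-parsing logic and the hardcoded "MyPhoto" base name are assumptions for illustration only:

```csharp
using System;
using System.IO;
using System.Linq;

static string GetNextFilenameInSequence(string[] existingFiles)
{
    // Assumed logic: find the highest numeric suffix among
    // MyPhoto.jpg, MyPhoto1.jpg, MyPhoto2.jpg, ... and add one.
    int max = 0;
    foreach (var path in existingFiles)
    {
        var name = Path.GetFileNameWithoutExtension(path); // e.g. "MyPhoto2"
        var digits = new string(name.SkipWhile(c => !char.IsDigit(c)).ToArray());
        if (int.TryParse(digits, out int n) && n > max)
            max = n;
    }
    return $"MyPhoto{max + 1}.jpg";
}
```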
Pretty straightforward, and it has been working fine for a long time. However, for the last few months I get "The network BIOS session limit was exceeded." errors, and this is happening at the Directory.GetFiles call. This code has been working fine for a long time and I've verified there have been no code changes in quite a while.
I have to wonder if there is a NetBIOS session still attached to the impersonated user even though I've disposed of the impersonation context. CreateImpersonator() just creates an object that sets up and holds onto a WindowsImpersonationContext. Disposing consists of calling Undo and then calling Dispose on the WindowsImpersonationContext object, which I think should be enough.
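For context, the disposal pattern described above looks roughly like this. This is a sketch, not my exact code; the class name is hypothetical, while WindowsImpersonationContext and WindowsIdentity.Impersonate are the .NET Framework types:

```csharp
using System;
using System.Security.Principal;

sealed class Impersonator : IDisposable
{
    private readonly WindowsImpersonationContext _context;

    public Impersonator(WindowsIdentity identity)
    {
        // Begin impersonating the admin account.
        _context = identity.Impersonate();
    }

    public void Dispose()
    {
        // Revert to the original identity, then release the context.
        _context.Undo();
        _context.Dispose();
    }
}
```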
The other thing I suspected was that I had multiple threads trying to save files all at the same time, thereby generating multiple concurrent Directory.GetFiles calls. However, there's only one process running at the time and even if there are multiple threads in that process, the call to save files is in a lock, so I'm pretty certain there's only one call to Directory.GetFiles at a time.
I'm just about out of ideas of what to look at. Could this be a problem on the server? Or is there something else I need to dispose of to kill the NetBIOS session?
All replies (20)
Thursday, August 25, 2016 2:58 PM ✅Answered | 1 vote
The error leads me to believe that perhaps you have too many active sessions to the UNC path, and therefore you're getting an error trying to connect again. Have you gone to the server and taken a look at the list of active sessions on the share? For Server 2003 I believe this is under System Tools\Shared Folders\Sessions. Have you looked at the server's error log to see if it logged any messages? It could also be a licensing issue.
Monday, August 22, 2016 5:27 PM
Does it happen even if the folder is initially empty?
Monday, August 22, 2016 6:03 PM
This has to do with drives being mapped to that UNC location on the same box.
Do you have a drive mapped to that location on the box where the error is occurring?
If so, remove it, reboot, and the error should stop occurring.
If you like this or another reply, vote it up!
If you think this or another reply answers the original question, mark it or propose it as an answer.
Mauricio Feijo
www.mauriciofeijo.com
Monday, August 22, 2016 6:10 PM
I haven't tried that. I may try to set something up in my dev environment to start with an empty folder.
I'm still not really sure what situation is triggering the problem, so reproducing it may be difficult. This is happening in a nightly process and only happens some of the time. The last time it happened (8/20), the program had saved 35 files successfully. I picked a day earlier that week where it didn't crash and it had saved 44, so I'm not sure I can blame it on larger batch sizes.
Monday, August 22, 2016 6:15 PM
You might want to replace this with a proper SQL database using the FILESTREAM storage attribute for the file contents. Because it sounds like you just reinvented that wheel with shares, which might have simply reached its scalability limit.
It could be that GetFiles is trying to enumerate all files on a network share asynchronously (one of the jobs you should never do asynchronously), and thus the number of files causes that exception/break of the limit.
Remember to mark helpful answers as helpful and close threads by marking answers.
Monday, August 22, 2016 6:55 PM
While that might fix the issue, I would rather not rewrite all the code since it's been tested in production for a few years now. And I'm a bit concerned about the load I would put on the DB since I have to do a byte-for-byte comparison of every file every morning.
I don't think it's a scalability limit. I wrote a test app this morning that made 1,000 requests, 64 at a time, with Directory.GetFiles against this folder, and it worked flawlessly and reasonably fast. In most cases, that should be a lot more taxing than the actual production app.
For reference, I'm dealing with almost 73000 files totaling about 66GB.
Monday, August 22, 2016 7:18 PM
I haven't used Filestream before, so I'm researching performance. I see it only stores an id in the table and the file is actually stored on disk instead of in the table. That sounds fine, but I guess I'm trying to understand what options I have to get the content for my byte comparison. I'd rather not transfer that through the database connection, though I suppose I could store an MD5 hash and fix that problem by comparing hashes.
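The hash idea could look something like the sketch below, assuming the comparison happens after reading each candidate file. The helper name is made up for illustration:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static string ComputeMd5(string path)
{
    using (var md5 = MD5.Create())
    using (var stream = File.OpenRead(path))
    {
        // Hash the file contents in a streaming fashion so large
        // files are never fully buffered in memory.
        byte[] hash = md5.ComputeHash(stream);
        return BitConverter.ToString(hash).Replace("-", "");
    }
}
```

Two files would then match if their hex strings match, with the usual (tiny) collision caveat for MD5; comparing file lengths first is a cheap extra guard.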
Monday, August 22, 2016 7:49 PM
uler3161,
Did you have a chance to read my reply? Did you verify if that is the case? Would you provide feedback?
Mauricio Feijo
www.mauriciofeijo.com
Monday, August 22, 2016 8:23 PM
It will be awhile before I can get it set up to test.
Monday, August 22, 2016 8:31 PM
>> While that might fix the issue, I would rather not rewrite all the code since it's been tested in production for a few years now. And I'm a bit concerned about the load I would put on the DB since I have to do a byte-for-byte comparison of every file every morning.
>> I don't think it's a scalability limit. I wrote a test app this morning that made 1,000 requests, 64 at a time, with Directory.GetFiles against this folder and it worked flawlessly and reasonably fast. In most cases, that should be a lot more taxing than the actual production app.
>> For reference, I'm dealing with almost 73000 files totaling about 66GB.
Rather than do a binary comparison, you might want to just send hash values across the network. Even just calculating the hash on the client and server sides and then sending those across would be a massive reduction of the network load. And the network is known to be an even bigger bottleneck than the disk or CPU in such a case.
More advanced approaches to "exclude change" go first by change date, then size (since you last looked). That is considered sufficient for every backup or update program, as NTFS will keep track of both as reliably as any transactional DB with journaling (which is what NTFS is nowadays).
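The date-then-size short-circuit described above could be sketched like this; FileState and the recorded-state lookup are hypothetical names for whatever was stored on the previous run:

```csharp
using System;
using System.IO;

sealed class FileState
{
    public DateTime LastWriteUtc;
    public long Length;
}

static bool MightHaveChanged(FileInfo file, FileState lastKnown)
{
    // Cheapest checks first: only fall through to hashing (or a
    // byte-for-byte comparison) when timestamp or size differ.
    return file.LastWriteTimeUtc != lastKnown.LastWriteUtc
        || file.Length != lastKnown.Length;
}
```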
FILESTREAM's idea is to reconcile the two opposite approaches:
Store on disk, keep references to the files in the DB.
Store the data in a VARBINARY column.
It is bound to be a bit slower than direct disk access, but the performance is a lot better than the VARBINARY approach. And in your case it only has to beat Windows shares in performance anyway.
Plus, if you choose to go for the hashing approach, the DB could automatically calculate the new hash every time a file's content is changed (an on-update trigger) or re-calculate it every morning. That way the server would not even need to do that job every time it is queried.
Remember to mark helpful answers as helpful and close threads by marking answers.
Monday, August 22, 2016 8:39 PM
>> Has to do with drives being mapped to that UNC location on the same box.
>> Do you have a drive mapped to that location on the box where the error is occurring?
>> If so, remove it, reboot and it should stop occurring.
If I understand what you mean, this does not seem to be the case. The box that is running this nightly process has no drives mapped to the destination UNC location.
Tuesday, August 23, 2016 9:36 AM
Hi uler3161,
Thank you for posting here.
For the error message “The network BIOS session limit was exceeded.”, this behavior could occur if Internet Information Services (IIS) on Windows 2000 uses a mapped drive for a Web or FTP site rather than a universal naming convention (UNC) share. For more details of the error message, please refer to the Microsoft article.
To work around this behavior, use UNC connections to the file server instead of mapping a drive. You could access a file on a shared network resource by entering the file's location in UNC format. For more information of Accessing Network Resources Without Mapping a Drive or Port, please refer to the article.
Here is the link for Using Mapped Drives with IIS. It may help clarify the mapping behavior.
I hope this would be helpful to you.
If you have something else, please feel free to contact us.
Best Regards,
Wendy
Tuesday, August 23, 2016 3:26 PM
This is code used in a WinForms app (on Windows 7 clients) and in a .NET command-line program (running on Server 2008), storing files to a share on a Server 2003 instance using a UNC path. It does not involve IIS, Windows 2000, mapped drives, or FTP. And for clarification, I haven't seen the problem in the WinForms app, but it is happening in the command-line program. Same code, but the command-line program is processing a larger batch of files.
I have added code to try to give me some more information by logging the output of "nbtstat -s" when the error occurs. If I understand what this command does, I should see a rather large table of sessions.
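The logging I added is along these lines; this is a simplified sketch that just captures the command's output for the application log:

```csharp
using System.Diagnostics;

static string CaptureNbtstat()
{
    var psi = new ProcessStartInfo("nbtstat", "-s")
    {
        RedirectStandardOutput = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var process = Process.Start(psi))
    {
        // Read the NetBIOS session table so it can be written
        // to the app log alongside the exception details.
        string output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        return output;
    }
}
```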
Thursday, August 25, 2016 5:39 AM | 1 vote
Hi uler3161,
Thank you for feedback.
>> And for clarification, I haven't seen the problem in the WinForms app, but it is happening in the command line program. Same code, but the command line is processing a larger batch of files.
For your question, perhaps the command-line program does not have permission.
Could you provide more information? Is the command line invoked from within the code, or do you run the command-line program directly?
Based on our company policy, there is one question in one thread. If it is another issue, please post a new thread.
I hope this would be helpful to you.
If you have something else, please feel free to contact us.
Best Regards,
Wendy
Thursday, August 25, 2016 2:29 PM
That is what I meant, yes.
Mauricio Feijo
www.mauriciofeijo.com
Thursday, August 25, 2016 3:00 PM
>> The error leads me to believe that perhaps you have too many active sessions to the UNC path and therefore you're getting an error trying to connect again. Have you gone to the server and taken a look at the list of active sessions on the share? For Server 2003 I believe this is under System Tools\Shared Folders\Sessions. Have you looked at the server's error log to see if it logged any messages? It could also be a licensing issue.
That's what I was thinking. I thought nbtstat -s would tell me, but I think I need to run it at the point I have the problem. I haven't been able to do that yet. If I run it at some other point, I don't see any sessions.
Thursday, August 25, 2016 3:16 PM
I looked at System Tools\Shared Folders\Sessions. There are 28 sessions. The command line app crashed this morning, but I don't know what the sessions looked like at that point. I may try running the app again and monitor this screen to see what the sessions are. Thanks for the suggestion.
Thursday, August 25, 2016 3:25 PM
Alright, now I'm getting somewhere :)
There are a lot of sessions for the user that I'm impersonating. What I don't know is why. I assume the sessions must stick around for a while, even though I've un-impersonated the user.
Is there a way to force a NetBIOS session to close?
Thursday, August 25, 2016 3:43 PM
I would have thought the session would be disconnected once the user is no longer connected but I could imagine that it waits until the idle timeout before disconnecting. A disconnected session should be fine.
If you're running into net session issues then you might consider mapping the drive instead of using the UNC path. But for that to be effective you'd need to map the drive once and share it for all users. It is unclear how you're using impersonation but since you mentioned the users don't have permissions to the remote share I assume you're impersonating some service account and then saving the user's data. So in that case you'd set up the share using that account instead. The share would remain open for the life of the app (ideally). But if the app runs for a long time then you can idle out after a while if you wanted.
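One way to establish a single authenticated connection to the share under the service account, rather than impersonating on every call, is the Win32 WNetAddConnection2 API via P/Invoke. This is a sketch with minimal error handling, and the NETRESOURCE declaration is abbreviated to the fields used here:

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct NETRESOURCE
{
    public int dwScope, dwType, dwDisplayType, dwUsage;
    public string lpLocalName, lpRemoteName, lpComment, lpProvider;
}

static class NetworkShare
{
    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    static extern int WNetAddConnection2(ref NETRESOURCE netResource,
        string password, string username, int flags);

    public static void Connect(string uncPath, string user, string password)
    {
        var resource = new NETRESOURCE
        {
            dwType = 1, // RESOURCETYPE_DISK
            lpRemoteName = uncPath
        };
        // Establishes one session under the service account's
        // credentials; subsequent file I/O to the UNC reuses it.
        int result = WNetAddConnection2(ref resource, password, user, 0);
        if (result != 0)
            throw new System.ComponentModel.Win32Exception(result);
    }
}
```

The connection lives until it is removed (WNetCancelConnection2) or the process ends, so the app would hold one session instead of opening a new one per save.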
Thursday, August 25, 2016 4:01 PM
I think the sessions do time out. I didn't see any that went over about 45 seconds of connected time. I think the problem is that I do a bunch of these calls and impersonate every time, so I get a lot of sessions. And since this seems to be a random problem, I must be right on the edge of going over the session limit. Though I find it odd that I haven't hit this problem long ago.
It definitely does look like there's a timeout. I wish there was a way to explicitly close the session. Apparently NetBIOS allows for this, but I don't see a way to do that in C#. I think all the NetBIOS stuff is abstracted away.
We're impersonating a domain account we set up specifically for accessing this location. Mapping the drive may work for the command line app, but I don't think that's going to work for the WinForms app that the users use.
I think I'm going to look at changing some batch size stuff I'm doing to see if I can cut down the number of sessions. If I can make that work, it should be the easiest solution for me.
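For what it's worth, the Win32 NetSessionDel API in Netapi32.dll can tear down a session on the file server side, and it is callable from C# via P/Invoke; whether doing that programmatically is a good idea is another question. A sketch, assuming the caller has administrative rights on the server:

```csharp
using System.Runtime.InteropServices;

static class SessionCleanup
{
    [DllImport("Netapi32.dll", CharSet = CharSet.Unicode)]
    static extern int NetSessionDel(string serverName,
        string uncClientName, string userName);

    // Ends every session the given user has open on the file server.
    public static void Drop(string server, string user)
    {
        int status = NetSessionDel(server, null, user);
        if (status != 0)
            throw new System.ComponentModel.Win32Exception(status);
    }
}
```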