Wednesday, January 20, 2010 1:00 PM
Hi,
Can multiple transactions run against the same table at the same time, but on different rows?
I have a table in which a "batch" field is set to values 1-10 (or more).
Threads process rows by issuing selects with the condition batch_field = n and work only on those rows.
I get this error: Transaction (Process ID 77) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I tried setting the transaction isolation level to READ UNCOMMITTED, as I figured the select needs access to all rows even though the updates will touch only a given subset. The error is the same.
Any ideas?
Thanks!
Wednesday, January 20, 2010 4:01 PM ✅Answered | 1 vote
Multiple transactions can work on the same table as long as the locks requested by the different transactions do not cause deadlocks. In your case, using the ROWLOCK hint (http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p3.aspx) might help a bit in reducing the number of deadlocks.
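For illustration only, a rough sketch of what the hint looks like, assuming a permanent table named large_table with the columns from the sample further down this thread (a hint is a request to the engine, not a guarantee):

-- Ask for row-level locks instead of page/table locks (a hint, not a guarantee):
UPDATE large_table WITH (ROWLOCK)
SET    lots_of_other_fields = 'processed'
WHERE  batch_number = 1;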
And using SNAPSHOT isolation will also help you. Please read this article, http://msdn.microsoft.com/en-us/library/tcbchxcb(VS.80).aspx, on using SNAPSHOT isolation.
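A minimal sketch of turning it on and using it (the database name is hypothetical; enabling the option is a one-time, per-database step):

-- One-time setup (hypothetical database name):
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then in each worker session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- reads see a consistent snapshot and are not blocked by writers;
-- updates still take exclusive locks on the rows they change
COMMIT TRANSACTION;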
In addition, if you see any of your threads get deadlocked, use an exception handling mechanism to retry the same update after some time. That covers the slight possibility of deadlocks you may still see after using the ROWLOCK hint or SNAPSHOT isolation.
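A rough sketch of that retry idea in T-SQL (error number 1205 identifies the deadlock victim; the table, column, and retry parameters are placeholders):

DECLARE @retries int, @batch int;
SELECT @retries = 3, @batch = 1;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE large_table                -- the batch's real work goes here
        SET    lots_of_other_fields = 'processed'
        WHERE  batch_number = @batch;

        COMMIT TRANSACTION;
        BREAK;                            -- success: stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries = @retries - 1;  -- deadlock victim: back off and retry
            WAITFOR DELAY '00:00:02';
        END
        ELSE
        BEGIN
            SET @retries = 0;             -- any other error: give up
            RAISERROR('Batch update failed; not retrying.', 16, 1);
        END
    END CATCH
END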
But please note that controlling locking with lock hints is not good practice; it is usually better to leave the decision to the SQL Server lock manager to control concurrency and data integrity.
Ranjith | My Blog
Wednesday, January 20, 2010 4:25 PM ✅Answered | 1 vote
Yes, that is definitely doable. Note, though, that if you don't have an index on the column used in the WHERE clause of each session (the batch_number column), then each session will lock all rows, and hence block the others and/or overlap. So start with the basics and make sure that each batch is efficient in how much data it accesses and what data it locks; a sketch of such an index follows below. If that isn't sufficient, you can go further and contemplate other options (for instance those suggested by Ranjith).
Tibor Karaszi, SQL Server MVP | http://www.karaszi.com/sqlserver/default.asp | http://sqlblog.com/blogs/tibor_karaszi
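For illustration, the kind of index meant above (assuming the data lives in a permanent table named large_table as in the sample below; a table variable cannot be indexed this way):

-- Lets each worker seek directly to its own batch, locking only those rows:
CREATE INDEX ix_large_table_batch_number ON large_table (batch_number);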
Wednesday, January 20, 2010 1:49 PM | 1 vote
It all boils down to what locks are acquired for your reads and modifications. If the locks overlap and aren't compatible, then you have blocking. And if you are unlucky, the blocking can turn into deadlocking. One thing to be careful with in SQL Server is to support the select and modification commands with indexes. This results in less data being read, but also in locks of finer granularity, meaning less risk of blocking. I.e., in general, make sure you support your WHERE clauses with indexes. To be more specific, one would work through the execution plans and study what locks are acquired for the modifications one performs (which can be a time-consuming task).
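For example, one way to study those locks while a worker's transaction is still open (SQL Server 2005 and later; the session id 52 is a placeholder for the SPID of the worker being examined):

-- Run from a separate window while the worker's transaction is open:
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM   sys.dm_tran_locks
WHERE  request_session_id = 52
GROUP BY resource_type, request_mode, request_status;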
One thing that can reduce blocking is to use one of the snapshot isolation options available as of SQL Server 2005. But as a start, I suggest making sure that the code is efficient to begin with (the suggestions above).
Tibor Karaszi, SQL Server MVP | http://www.karaszi.com/sqlserver/default.asp | http://sqlblog.com/blogs/tibor_karaszi
Wednesday, January 20, 2010 2:41 PM
Let's say I have this table:
declare @large_table table
(
    id int identity(1,1),             -- identity, so it is omitted from the insert below
    lots_of_other_fields varchar(50),
    batch_number int
)

-- two batches of sample rows, four rows each
insert into @large_table
select '', 1 union all
select '', 1 union all
select '', 1 union all
select '...', 1 union all
select '', 2 union all
select '', 2 union all
select '', 2 union all
select '...', 2

select * from @large_table
I need to do a time- and perhaps CPU-intensive bunch of operations on all the data. I'd like to do them on multiple threads, dividing the data between them; in this case thread 1 would operate on all the rows with batch_number = 1 and thread 2 on all of those with batch_number = 2.
At the end, I may want to undo all of the operations (not commit if some major errors appear). That means all threads need a separate session to the DB (obviously :) ) and all of them will work in transactions.
The rows WILL be modified, but each thread will only modify its own rows... does that make any sense? Can it be done?
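In other words, each thread's session would do something along these lines (sketch only; assuming the real data is in a permanent table named large_table, since a table variable like the one above is visible only to the session that declares it):

DECLARE @batch int;
SET @batch = 1;                 -- thread 1 gets batch 1, thread 2 gets batch 2, ...

BEGIN TRANSACTION;

UPDATE large_table              -- each session touches only its own rows
SET    lots_of_other_fields = 'processed'
WHERE  batch_number = @batch;

-- ... more time-consuming work on the same batch ...

-- at the end, the application decides whether to keep or undo everything:
COMMIT TRANSACTION;             -- or ROLLBACK TRANSACTION on failure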