Within the past few days I was investigating one of the new features of SQL Server 2025: Optimized Locking. I was curious about the capabilities, the behaviour, and also the limits of the feature. Optimized Locking is based on two primary components:

  • Transaction ID (TID)
  • Lock after qualification (LAQ)

The transaction ID works as a unique identifier for a transaction. Each row modified by this transaction is labelled with its transaction ID. As a result, only one single lock is held on the TID instead of many key or RID locks. To be precise: Update and Exclusive Locks will still be placed, but they are released immediately instead of being held until the transaction is committed.
This behaviour helps the lock manager enormously by keeping the number of locks it has to maintain at a minimum, and thus saves a lot of memory.
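One way to make this visible is to query sys.dm_tran_locks while a transaction is open. The following is a minimal monitoring sketch; the filter on the XACT resource type is my addition, and it assumes a modification is in flight in a database with Optimized Locking enabled:

-- Show the TID (XACT) locks held by open transactions;
-- under Optimized Locking this is the single lock held per transaction
SELECT request_session_id, resource_type, resource_description, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'XACT'
GO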

Lock after qualification (LAQ), on the other hand, silently qualifies the rows affected by an update in the background, without the need to place Shared Locks on them while scanning through. Only if a row has been qualified – meaning it will actually be affected by the update – is an attempt made to place an Update Lock.
In order to benefit from this mechanism, Read Committed Snapshot Isolation (RCSI) must be enabled at database level.

My Demo environment looks as follows:

  • SQL Server 2025 CTP 2.0 (17.0.700.9)
  • SQL Server Management Studio 21 (21.3.6) with Copilot activated

My colleague Stéphane Haby wrote blog posts about SQL Server 2025, for example this one:


In the meantime, Microsoft released SQL Server 2025 CTP 2.1 with a few improvements, explained on BOL:

https://learn.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2025?view=sql-server-ver17

Copilot is not part of the game when we discuss Optimized Locking, but as AI is omnipresent these days, I want to mention the blog post of my colleague Steven Naudet, where he describes how to enable Copilot in SSMS:

Now, back to business, back to Optimized Locking – Let’s see this performance tuning feature in action!

First things first – I’ve created a new database called “OptimizedLocking” and checked which features are in place. To determine whether Optimized Locking is enabled on a database, you can use either the function DATABASEPROPERTYEX or grab the information from sys.databases directly:

-- Query the dedicated DATABASEPROPERTYEX for Optimized Locking
USE [OptimizedLocking]
GO
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsOptimizedLockingOn') AS IsOptimizedLockingOn
GO
USE [master]
GO
SELECT name AS DatabaseName, is_read_committed_snapshot_on, is_optimized_locking_on, is_accelerated_database_recovery_on 
FROM sys.databases
WHERE name = 'OptimizedLocking'
GO

Why do I additionally request the columns “is_read_committed_snapshot_on” and “is_accelerated_database_recovery_on”?
Well, the latter is absolutely necessary to get Optimized Locking to work, and RCSI enables lock after qualification (LAQ), which I will show you a little bit later.

Now it’s time to create a simple table within the database “OptimizedLocking”:

-- Create a table to show locks
USE [OptimizedLocking]
GO
DROP TABLE IF EXISTS T1
GO
CREATE TABLE T1
(
    T1_ID INT NOT NULL,
    Value INT
);

INSERT INTO T1 (T1_ID, Value) 
VALUES (1,10),(2,20),(3,30);
GO

SELECT * FROM dbo.T1
GO

SELECT DB_NAME(database_id) AS DatabaseName,
       OBJECT_NAME(object_id) AS TableName,
       index_type_desc
FROM sys.dm_db_index_physical_stats(DB_ID('OptimizedLocking'), OBJECT_ID('dbo.T1'), NULL, NULL, 'DETAILED')
GO

The content of the table T1 looks as follows – and it is of course a Heap, because I neither added a PRIMARY KEY constraint to any of its columns nor specified a CLUSTERED INDEX:

I ran two concurrent update statements: the first changes the row with ID = 1 (in the left pane of SSMS), and the second tries to change the row with ID = 2 (in the middle pane). In the pane on the right I placed the statement that shows the locks which are present.
The second statement is blocked because it cannot acquire the Update Lock that is needed to update the row:
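For reference, here is a sketch of the statements behind the three panes; the concrete values are of my choosing, what matters is that session 1 keeps its transaction open:

-- Session 1 (left pane): update row 1 and keep the transaction open
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 11 WHERE T1_ID = 1;

-- Session 2 (middle pane): tries to update row 2; on a Heap without
-- Optimized Locking this blocks, because the scan cannot acquire an
-- Update Lock on the row exclusively locked by session 1
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 22 WHERE T1_ID = 2;

-- Session 3 (right pane): show all locks held in the demo database
SELECT request_session_id, resource_type, resource_description, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('OptimizedLocking')
GO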

Now I change the database by setting OPTIMIZED_LOCKING to ON, and I run the same demo as above again. As mentioned above, ACCELERATED_DATABASE_RECOVERY is necessary as well:

USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = ON;
GO

As you can see, the behaviour remains the same: the second statement is blocked, this time while requesting a Shared Lock on the first transaction’s TID. But instead of the four “normal” locks we had before, there are only two XACT locks, which means a reduction in the number of locks that have to be managed by the lock manager:

Let’s go one step further, enable RCSI at database level, and see what happens now:

USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = ON;
GO

Et voilà, both statements have successfully had their Exclusive Lock requests (converted from the Update Locks placed in advance) granted:

Heap vs. Clustered Index

But what happens now if we use a Clustered Index instead of leaving our data (un)organized as a Heap? Let’s dig into this as well by sorting the table on the T1_ID column using a clustered index:

USE [OptimizedLocking]
GO
CREATE CLUSTERED INDEX CI_T1 ON dbo.T1 (T1_ID)
GO

All the features that we switched on earlier are now deactivated, so that we can begin from scratch again:
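Here is a sketch of the statements to switch everything off; note that OPTIMIZED_LOCKING has to be disabled before ACCELERATED_DATABASE_RECOVERY, since the former depends on the latter, and changing READ_COMMITTED_SNAPSHOT assumes no other open connections to the database:

USE [master]
GO
ALTER DATABASE [OptimizedLocking] SET READ_COMMITTED_SNAPSHOT = OFF;
ALTER DATABASE [OptimizedLocking] SET OPTIMIZED_LOCKING = OFF;
ALTER DATABASE [OptimizedLocking] SET ACCELERATED_DATABASE_RECOVERY = OFF;
GO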

But what’s going on now? None of the previously mentioned features is enabled, yet both statements have their requested locks granted:

This is because the potentially critical Exclusive Locks (X) are placed on different keys (i.e. different rows) and the Intent Exclusive Locks (IX) set on page level are compatible with each other.

Wrap-Up

During my consulting engagements at customer sites, I often see databases with a huge number of Heaps.
Don’t misunderstand me: Heaps are great for tables where we expect only INSERTs (e.g. a logging table), but if we deal with the other DML statements at the same time as well, a Clustered Index is the better choice.
Related to this demo, it can be said that if we are using a Clustered Index on our tables, we don’t strictly need “Optimized Locking” in terms of concurrency, but regarding the overhead for the lock manager it is definitely worth enabling.

But wait: until now we have dealt with the default isolation level of SQL Server – READ_COMMITTED. What will happen if we raise this to a higher level, for example SERIALIZABLE? Will Optimized Locking support us in having fewer locks and blockings?
The answer is no – not at all. Do you agree?
And this makes sense given the mechanism of an isolation level like SERIALIZABLE. When we choose such a high isolation level, we are (or at least should be) aware that we are blocking others for a (longer) period of time, because we tell SQL Server to behave like this. Everyone who is familiar in depth with the locking behaviour of SQL Server under different isolation levels knows that SQL Server has to adjust its behaviour to comply with the ACID principle.

Isolation Level SERIALIZABLE

Here is the output of the same demo I have run above several times, except that I executed the first UPDATE statement under the transaction isolation level SERIALIZABLE. As you can see, the second UPDATE statement (in the middle pane) is blocked while attempting to place an Exclusive Lock on the row with ID = 2, because the first UPDATE statement had to place an Exclusive Lock on the key range that the row with ID = 2 is part of, as required by this isolation level:
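For completeness, a sketch of how the first session was switched to SERIALIZABLE; the rest of the demo remains unchanged:

-- Session 1 (left pane): same update as before, but under SERIALIZABLE;
-- a key-range lock covering the next key (here the row with ID = 2)
-- is held until COMMIT/ROLLBACK
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.T1 SET Value = 11 WHERE T1_ID = 1;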

Conclusion

From my point of view, “Optimized Locking” is a real game changer. Regardless of the underlying structure (Heap or Clustered Index), locking becomes easier and in a certain way more “lightweight”.

As we saw, the optimal way to use this feature is to also turn on Read Committed Snapshot Isolation (RCSI) and – if it makes sense for the access pattern to store the data this way – to organize tables with a Clustered Index. Either way, however, we benefit from the fact that the Lock Manager has to manage fewer locks, which saves a significant amount of memory in any case.