{"id":36991,"date":"2025-02-07T21:09:22","date_gmt":"2025-02-07T20:09:22","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=36991"},"modified":"2025-02-16T23:56:34","modified_gmt":"2025-02-16T22:56:34","slug":"customer-case-study-sql-server-table-partitioning","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/","title":{"rendered":"Customer case study &#8211; SQL Server table partitioning"},"content":{"rendered":"\n<p><strong>A client submitted the following issues :<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Their database hosts multiple tables, including one table of approximately 1 TB, consisting of several billion rows. This table is continuously fed by various data sources. Additionally, it is also used to generate reports and charts based on the stored data.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The client wanted to archive certain data to be able to delete it (selectively) without affecting the applications that use the table.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>They also wanted to reduce the duration of maintenance jobs for indexes, statistics, and those responsible for verifying data integrity.<\/li>\n<\/ul>\n\n\n\n<p>During an audit of their environment, we also noticed that the database data files were hosted on a volume using an MBR partition type (which is limited to 2 TB).<\/p>\n\n\n\n<p>2 tables were involved (but only one needed to be partitioned, the &#8220;Data&#8221; table) because we had a foreign key constraint between the tables. 
Here&#8217;s what the tables looked like:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Table name<\/th><th>Column names<\/th><\/tr><\/thead><tbody><tr><td>Mode<\/td><td>ID<\/td><\/tr><tr><td>Data<\/td><td>Date, Value, Mode ID<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-possible-solution-table-partitioning\">Possible solution: table partitioning<\/h2>\n\n\n\n<p>Partitioning in SQL Server is a technique used to divide large tables or indexes into smaller, more manageable pieces called partitions, while the object is still treated as a single logical entity. This improves performance, manageability, and scalability, especially for large datasets. In SQL Server, partitioning relies on a partition function, which defines how data is distributed based on a column (e.g., date, ID), and a partition scheme, which maps the partitions of a partitioned table or index to the filegroups where they are physically stored. Queries benefit from partition elimination, meaning SQL Server scans only the relevant partitions instead of the entire table. Partitioning is widely used in data warehousing, archiving, and high-transaction databases to enhance query performance and simplify maintenance.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-production-implementation\">Production implementation?<\/h2>\n\n\n\n<p><strong>How can we implement this with minimal downtime, given that we cannot modify the main table, which is used continuously?<\/strong><\/p>\n\n\n\n<p>In our case, we decided to create a new table that is an exact copy of the source table; however, this new table will be partitioned. Here\u2019s how we proceeded:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-creating-the-corresponding-filegroups\">Creating the corresponding filegroups<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We have one filegroup per year from 2010 to 2060. 
We extend up to 2060 to avoid having to create a job that dynamically generates filegroups based on certain criteria.<\/li>\n\n\n\n<li>The files contained in these filegroups will be created on a new volume using the GPT partition type, allowing us to move the table to the new volume.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Creating the partition function<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The partition function will be based on a datetime column, which will determine the partition ID according to the input value.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Creating the partition scheme<\/h3>\n\n\n\n<p>This will define where the data is physically stored. The partition scheme maps the partition IDs to the filegroups.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-from-here-we-have-at-least-two-possibilities\">From here, we have at least two possibilities:<\/h3>\n\n\n\n<p><strong>Create a partitioned (copy) table<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>This table will have the same structure as the source table, but without indexes.<\/li>\n\n\n\n<li>The new table will be partitioned using CREATE TABLE &#8230; ON partition_scheme(partition_column).<\/li>\n\n\n\n<li>Initially, we copy data from the source table to the destination table up to a fixed date. This limit allows us to define a delta.<\/li>\n\n\n\n<li>We then build the indexes.<\/li>\n\n\n\n<li>The remaining data to be copied is the delta. The delta can be determined by selecting all rows with a date strictly greater than the last copied date. 
This is easier when the right indexes are in place.<\/li>\n<\/ul>\n\n\n\n<p><strong>Build the indexes first (same process, but the indexes are created before the copy)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We build the corresponding indexes (before copying any data); they are an exact copy of those in the source table.<\/li>\n\n\n\n<li>We copy data from the source table to the destination table up to a fixed date.<\/li>\n\n\n\n<li>We copy the delta.<\/li>\n\n\n\n<li>Finally, we switch the tables using the sp_rename stored procedure.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"600\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg\" alt=\"\" class=\"wp-image-37047\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg 1024w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-300x176.jpg 300w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-768x450.jpg 768w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg 1040w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-how-can-we-copy-the-delta\">How can we copy the delta?<\/h3>\n\n\n\n<p>In our case, we stopped the application for a few minutes and then imported all rows from the fixed date onward; since writes were paused during that window, these rows made up the entire delta. 
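<\/p>\n\n\n\n<p>As a rough illustration (this exact script is not part of the original procedure, and the fixed date is a made-up value), the delta copy against the demo tables built later in this post could look like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\n--Hypothetical boundary : the last date covered by the initial copy\nDECLARE @FixedDate DATETIME = '2024-06-01T00:00:00'\n\n--Writes are paused, so everything strictly after @FixedDate is the delta\nINSERT INTO dbo.T_Data_Staging WITH (TABLOCK)\n(\nData_Date,\nData_Value,\nFK_Mode_ID\n)\nSELECT Data_Date, Data_Value, FK_Mode_ID\nFROM dbo.T_Data\nWHERE Data_Date &gt; @FixedDate\nGO<\/code><\/pre>\n\n\n\n<p>The strict &gt; comparison avoids re-inserting the boundary rows that were already covered by the initial copy.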
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"644\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s2-2-1024x644.jpg\" alt=\"\" class=\"wp-image-37060\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s2-2-1024x644.jpg 1024w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s2-2-300x189.jpg 300w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s2-2-768x483.jpg 768w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s2-2.jpg 1036w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-implementation\">Implementation<\/h2>\n\n\n\n<p>In our client&#8217;s case, there is a dependency on another table through a foreign key constraint. This is why two tables appear in the script below (only T_Data is partitioned).<\/p>\n\n\n\n<p>The code below reproduces the case we worked on.<\/p>\n\n\n\n<p>Database creation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE master\nGO\n\nIF DB_ID('partitioning_demo') IS NOT NULL\nBEGIN\nALTER DATABASE partitioning_demo SET SINGLE_USER WITH ROLLBACK IMMEDIATE\nDROP DATABASE partitioning_demo\nEND\n\nCREATE DATABASE partitioning_demo\nON PRIMARY\n(\nNAME = PARTITIONING_0_Dat,\nFILENAME = 'S:\\Data\\partitioning_demo_0_Dat.mdf',\nSIZE = 512MB,\nFILEGROWTH = 512MB\n)\nLOG ON\n(\nNAME = PARTITIONING_0_Log,\nFILENAME = 'S:\\Logs\\partitioning_demo_0_Log.ldf',\nSIZE = 1024MB,\nFILEGROWTH = 512MB\n)\nGO<\/code><\/pre>\n\n\n\n<p>Table creation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nCREATE TABLE dbo.T_Mode\n(\nMode_ID INT NOT NULL\nCONSTRAINT PK_Mode_ID PRIMARY KEY(Mode_ID)\n)\nGO\n\nCREATE TABLE dbo.T_Data\n(\nData_Date DATETIME NOT NULL,\nData_Value DECIMAL(18,3) NOT NULL,\nFK_Mode_ID INT NOT 
NULL\n\nCONSTRAINT PK_T_Data_1 PRIMARY KEY(Data_Date, FK_Mode_ID)\nCONSTRAINT FK_T_Data_T_Mode FOREIGN KEY(FK_Mode_ID) REFERENCES dbo.T_Mode(Mode_ID)\n)\nGO\n\nCREATE NONCLUSTERED INDEX &#091;NCI-1] ON dbo.T_Data(Data_Value)\nGO\n<\/code><\/pre>\n\n\n\n<p>Generate some data for the T_Mode table:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\nUSE partitioning_demo\nGO\n\nSET NOCOUNT ON\n\nDECLARE @i INT = 0, @Max INT = 300\n\nWHILE @i &lt;= @Max\nBEGIN\nINSERT INTO dbo.T_Mode (Mode_ID) VALUES (@i)\nSET @i = @i + 1\nEND\nGO<\/code><\/pre>\n\n\n\n<p>Generate some data for the T_Data table:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nSET NOCOUNT ON\n\nDECLARE\n@i BIGINT,\n@NbLinesMax BIGINT,\n@StartDateTime DATETIME,\n@DataDate DATETIME,\n@DataValue DECIMAL(18,3),\n@NbLinesFKModeID INT,\n@FKModeID INT\n\nSET @i = 0\n\nSET @NbLinesMax = 7884000 --7884000 = number of minutes in 15 years (2010 to 2024)\nSET @StartDateTime = DATEADD(yy, DATEDIFF(yy, 0, DATEADD(year, -15, GETDATE())), 0) --We start on 01.01.2010 00:00:00\nSET @NbLinesFKModeID = (SELECT COUNT(*) FROM partitioning_demo.dbo.T_Mode) - 1\n\nWHILE @i &lt;= @NbLinesMax\nBEGIN\n\nSET @DataDate = DATEADD(mi, @i, @StartDateTime)\nSET @DataValue = ROUND(RAND(CHECKSUM(NEWID())) * (100000000000000), 3) --Generate a random value for the Data_Value column\n\nSET @FKModeID = ABS(CHECKSUM(NEWID()) % @NbLinesFKModeID) + 1 --Generate a random value for the FK_Mode_ID column\n\nINSERT INTO dbo.T_Data (Data_Date, Data_Value, FK_Mode_ID)\nVALUES (@DataDate, @DataValue, @FKModeID)\nSET @i = @i + 1\nEND\n\nGO<\/code><\/pre>\n\n\n\n<p>Here is what our data looks like:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"394\" height=\"419\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-3.png\" alt=\"\" class=\"wp-image-36993\" 
srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-3.png 394w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-3-282x300.png 282w\" sizes=\"auto, (max-width: 394px) 100vw, 394px\" \/><\/figure>\n\n\n\n<p>We create the corresponding filegroups. We create more filegroups than necessary to anticipate future insertions:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE master\nGO\n\nDECLARE @Year INT = 2010, @YearLimit INT = 2060, @SQLCmd NVARCHAR(max) = ''\n\nWHILE (@Year &lt;= @YearLimit)\nBEGIN\n\nSET @SQLCmd = @SQLCmd+'ALTER DATABASE partitioning_demo ADD FILEGROUP PARTITIONING_FG_'+CAST(@Year AS NVARCHAR(4))+'; '\n\nSET @SQLCmd = @SQLCmd+'ALTER DATABASE partitioning_demo ADD FILE (NAME = PARTITIONING_F_'+CAST(@Year AS NVARCHAR(4))+'_Dat, FILENAME = ''S:\\Data\\PARTITIONING_F_'+CAST(@Year AS NVARCHAR(4))+'_Dat.mdf'', SIZE = 64MB, FILEGROWTH = 64MB) TO FILEGROUP PARTITIONING_FG_'+CAST(@Year AS NVARCHAR(4))+';'\n\nSET @Year = @Year + 1\nEND\n\n--PRINT @SQLCMD\nEXEC(@SQLCMD)<\/code><\/pre>\n\n\n\n<p>We create our partition function to process data from 2010 to 2060 :<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\nDECLARE\n\n@i INT = -15,\n@PartitionYear DATETIME = 0,\n@PartitionFunctionName NVARCHAR(50) = '',\n@SQLCMD NVARCHAR(MAX) = ''\n\nSET @PartitionFunctionName = 'PF_T_Data'\n\nSET @SQLCMD = 'CREATE PARTITION FUNCTION ' + @PartitionFunctionName + ' (DATETIME) AS RANGE RIGHT FOR VALUES ('\n\nWHILE (@i &lt;= 35)\nBEGIN\nSET @PartitionYear = (SELECT DATEADD(yy, DATEDIFF(yy, 0, DATEADD(year, @i, GETDATE())), 0)) -- Start of a year, e.g. 
2010-01-01 00:00:00.000\n\nIF (@i &lt;&gt; 35)\nBEGIN\nSET @SQLCMD = @SQLCMD + '''' + CAST(@PartitionYear AS NVARCHAR(150)) + ''', '\nEND\nELSE\nBEGIN\nSET @SQLCMD = @SQLCMD + '''' + CAST(@PartitionYear AS NVARCHAR(150)) + ''')'\nEND\n\nSET @i = @i + 1\nEND\n\n--PRINT @SQLCMD\nEXEC(@SQLCMD)<\/code><\/pre>\n\n\n\n<p>We create our partition scheme:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nDECLARE\n@PartitionFunctionName NVARCHAR(50) = '',\n@PartitionSchemeName NVARCHAR(50) = '',\n@SQLCMD NVARCHAR(MAX) = '',\n@FGName NVARCHAR(100) = '',\n@FGNames NVARCHAR(MAX) = ''\n\nSET @PartitionFunctionName = 'PF_T_Data'\nSET @PartitionSchemeName = 'PSCH_T_Data'\n\nSET @SQLCMD = 'CREATE PARTITION SCHEME ' + @PartitionSchemeName + ' AS PARTITION ' + @PartitionFunctionName + ' TO ('\n\nDECLARE filegroup_cursor CURSOR FOR\n\nSELECT &#091;name] FROM partitioning_demo.sys.filegroups ORDER BY data_space_id ASC\n\nOPEN filegroup_cursor\nFETCH filegroup_cursor INTO @FGName\n\nWHILE @@FETCH_STATUS = 0\nBEGIN\n\nSET @FGNames = @FGNames + '&#091;' + @FGName + '],'\nFETCH filegroup_cursor INTO @FGName\nEND\n\nCLOSE filegroup_cursor\nDEALLOCATE filegroup_cursor\n\nSET @FGNames = LEFT(@FGNames, LEN(@FGNames) - 1) --Remove the trailing ',' character\nSET @SQLCMD = @SQLCMD + @FGNames + ')'\n\n--PRINT @SQLCMD\nEXEC(@SQLCMD)<\/code><\/pre>\n\n\n\n<p>We will now create the new table. This is the one that will be partitioned and to which we will switch later:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nCREATE TABLE dbo.T_Data_Staging\n(\nData_Date DATETIME NOT NULL,\nData_Value DECIMAL(18,3) NOT NULL,\nFK_Mode_ID INT NOT NULL\n\n) ON PSCH_T_Data(Data_Date)\nGO<\/code><\/pre>\n\n\n\n<p>We now copy the data from the source table to the destination table. Since our table is a heap, we can use the TABLOCK query hint. 
This enables minimal logging, table-level locking, and parallel inserts (note that the recovery model must be BULK_LOGGED or SIMPLE for minimal logging). In the example below, we only copy the year 2010:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nDECLARE @YearToProcess INT, @BeginDataDate DATETIME, @EndDataDate DATETIME;\n\nSET @YearToProcess = 2010\nSET @BeginDataDate = DATEADD(yy, DATEDIFF(yy, 0, CAST(@YearToProcess AS NVARCHAR)), 0)\nSET @EndDataDate = DATEADD(ms, -3, DATEADD(yy, DATEDIFF(yy, 0, CAST(@YearToProcess AS NVARCHAR)) + 1, 0))\n\nINSERT INTO dbo.T_Data_Staging WITH (TABLOCK)\n(\nData_Date,\nData_Value,\nFK_Mode_ID\n)\n\nSELECT * FROM dbo.T_Data WHERE Data_Date BETWEEN @BeginDataDate AND @EndDataDate\nGO<\/code><\/pre>\n\n\n\n<p>The value of the @EndDataDate variable can be adjusted to copy the data up to any desired point.<\/p>\n\n\n\n<p>We now create the corresponding indexes, which mirror those present in the source table. Note that these indexes are themselves partitioned.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nALTER TABLE dbo.T_Data_Staging ADD CONSTRAINT PK_T_Data_2 PRIMARY KEY(Data_Date, FK_Mode_ID)\nGO\n\nALTER TABLE dbo.T_Data_Staging ADD CONSTRAINT FK_T_Data_T_Mode_2 FOREIGN KEY(FK_Mode_ID) REFERENCES dbo.T_Mode(Mode_ID)\nGO\n\nCREATE NONCLUSTERED INDEX &#091;NCI-1-2] ON dbo.T_Data_Staging(Data_Value) ON PSCH_T_Data(Data_Date)\nGO<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-another-possible-strategy\">Another possible strategy<\/h3>\n\n\n\n<p>From this point, another strategy is possible, which is as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>We create the partitioned table.<\/li>\n\n\n\n<li>We create the corresponding indexes.<\/li>\n\n\n\n<li>We copy the data.<\/li>\n<\/ul>\n\n\n\n<p>However, with this strategy, the TempDB database may grow significantly, since the large sorts performed while inserting into the already-indexed table can spill there. 
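<\/p>\n\n\n\n<p>It is during this bulk copy into the already-indexed table that the heavy sort activity occurs. As a sketch (reusing the demo objects created above; this exact statement is not from the original runbook):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\n--With the indexes already in place, this insert sorts the rows for each index\n--and can therefore consume large memory grants and TempDB space\nINSERT INTO dbo.T_Data_Staging WITH (TABLOCK)\n(\nData_Date,\nData_Value,\nFK_Mode_ID\n)\nSELECT Data_Date, Data_Value, FK_Mode_ID\nFROM dbo.T_Data\nGO<\/code><\/pre>\n\n\n\n<p>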
In this case, it is possible to use the following query hints:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>min_grant_percent<\/li>\n\n\n\n<li>max_grant_percent<\/li>\n<\/ul>\n\n\n\n<p>We can also temporarily update statistics asynchronously. Once the vast majority of the data has been copied, we simply need to retrieve the delta and switch the tables. In our case, we had to stop the application for a few minutes before performing the switch as follows:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"865\" height=\"813\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s3-2.jpg\" alt=\"\" class=\"wp-image-37071\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s3-2.jpg 865w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s3-2-300x282.jpg 300w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s3-2-768x722.jpg 768w\" sizes=\"auto, (max-width: 865px) 100vw, 865px\" \/><\/figure>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nBEGIN TRANSACTION Table_Switch\n\nEXEC sp_rename 'dbo.T_Data', 'T_Data_Old'\nEXEC sp_rename 'dbo.T_Data_Staging', 'T_Data'\n\nCOMMIT TRANSACTION Table_Switch<\/code><\/pre>\n\n\n\n<p>We can then verify that our data has been properly distributed among the different filegroups:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SELECT\n\nOBJECT_NAME(p.object_id) AS obj_name,\nf.name,\np.partition_number,\np.rows,\np.index_id,\nCASE\nWHEN p.index_id = 1 THEN 'CLUSTERED INDEX'\nWHEN p.index_id &gt;= 2 THEN 'NONCLUSTERED INDEX'\nEND AS index_info\n\nFROM sys.system_internals_allocation_units a\nJOIN sys.partitions p\nON p.partition_id = a.container_id\nJOIN sys.filegroups f\nON a.filegroup_id = f.data_space_id\nWHERE p.object_id = OBJECT_ID(N'dbo.T_Data') --The new table carries the original name after the switch; use N'dbo.T_Data_Staging' if run before the rename\n\nORDER BY f.name ASC<\/code><\/pre>\n\n\n\n<figure 
class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"741\" height=\"381\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-4.png\" alt=\"\" class=\"wp-image-36992\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-4.png 741w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/image-4-300x154.png 300w\" sizes=\"auto, (max-width: 741px) 100vw, 741px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-database-maintenance\">Database maintenance<\/h2>\n\n\n\n<p>For database maintenance, we can use Ola Hallengren\u2019s solution (<a href=\"https:\/\/ola.hallengren.com\/\">https:\/\/ola.hallengren.com<\/a>).<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-database-integrity-check\">Database integrity check<\/h3>\n\n\n\n<p>This solution allows us to verify data integrity and filter by filegroup. In our case, we have defined one filegroup per year, and the filegroup on which we write regularly is the one for the current year.<\/p>\n\n\n\n<p>Thus, we could implement the following strategy to reduce the time needed for data integrity verification:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Create a job that checks the integrity of the current filegroup once per day.<\/li>\n\n\n\n<li>Create a job that checks the integrity of other filegroups once per week.<\/li>\n<\/ul>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nDECLARE @Databases NVARCHAR(100), @FilegroupsToCheck NVARCHAR(max)\n\nSET @Databases = 'partitioning_demo'\nSET @FilegroupsToCheck = @Databases + '.PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)\n\nEXECUTE &#091;dba_tools].&#091;dbo].&#091;DatabaseIntegrityCheck]\n@Databases = @Databases,\n@CheckCommands = 'CHECKFILEGROUP',\n@FileGroups = @FilegroupsToCheck,\n@LogToTable = 'Y'\nGO<\/code><\/pre>\n\n\n\n<p>The verification of the other filegroups can be done as follows 
:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>DECLARE @Database NVARCHAR(250), @FilegroupsToCheck NVARCHAR(MAX)\n\nSET @Database = 'partitioning_demo'\nSET @FilegroupsToCheck = 'ALL_FILEGROUPS, -' + @Database + '.PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)\n\nEXECUTE &#091;dba_tools].&#091;dbo].&#091;DatabaseIntegrityCheck]\n@Databases = @Database,\n@CheckCommands = 'CHECKFILEGROUP',\n@FileGroups = @FilegroupsToCheck,\n@LogToTable = 'Y'\nGO<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Index maintenance<\/h3>\n\n\n\n<p>Index maintenance can be a long and resource-intensive operation in terms of CPU and I\/O. Based on our research, there is no built-in way to rebuild or reorganize one specific partition using Ola Hallengren\u2019s solution.<\/p>\n\n\n\n<p>It may therefore be beneficial to maintain only the current partition (the one where data is updated) and exclude the other partitions. To achieve this, the following example can be used:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>USE partitioning_demo\nGO\n\nDECLARE\n\n@IndexName NVARCHAR(250),\n@IndexId INT,\n@ObjectId INT,\n@PartitionNumber INT,\n@SchemaName NVARCHAR(50),\n@TableName NVARCHAR(100),\n@IndexFragmentationValue INT,\n@FilegroupToCheck NVARCHAR(250),\n@SQLCMD NVARCHAR(MAX) = ''\n\nSET @FilegroupToCheck = 'PARTITIONING_FG_' + CAST(YEAR(GETDATE()) AS NVARCHAR)\nSET @PartitionNumber = (\nSELECT DISTINCT\np.partition_number\nFROM sys.system_internals_allocation_units a\nJOIN sys.partitions p\nON p.partition_id = a.container_id\nJOIN sys.filegroups f ON a.filegroup_id = f.data_space_id\nWHERE f.&#091;name] = @FilegroupToCheck)\n\nDECLARE index_cursor CURSOR FOR\n\nSELECT DISTINCT idx.&#091;name], idx.index_id, idx.&#091;object_id], pts.partition_number, scs.&#091;name], obj.&#091;name]\nFROM sys.indexes idx\nINNER JOIN sys.partitions pts\nON idx.object_id = pts.object_id\nINNER 
JOIN sys.objects obj\nON idx.object_id = obj.object_id\nINNER JOIN sys.schemas scs\nON obj.schema_id = scs.schema_id\nWHERE pts.partition_number = @PartitionNumber\nOPEN index_cursor\nFETCH index_cursor INTO @IndexName, @IndexId, @ObjectId, @PartitionNumber, @SchemaName, @TableName\n\nWHILE @@FETCH_STATUS = 0\n\nBEGIN\nSELECT @IndexFragmentationValue = MAX(avg_fragmentation_in_percent)\nFROM sys.dm_db_index_physical_stats(DB_ID('partitioning_demo'), @ObjectId, @IndexId, @PartitionNumber, 'LIMITED')\nWHERE alloc_unit_type_desc = 'IN_ROW_DATA' AND index_level = 0\n\nIF (@IndexFragmentationValue &lt; 5)\nBEGIN\nPRINT 'No action to perform for the index &#091;' + @IndexName + '] ON &#091;' + @SchemaName + '].&#091;' + @TableName + ']'\nEND\n\nELSE IF (@IndexFragmentationValue BETWEEN 5 AND 20)\nBEGIN\nSET @SQLCMD = @SQLCMD + 'ALTER INDEX &#091;' + @IndexName + '] ON &#091;' + @SchemaName + '].&#091;' + @TableName + '] REORGANIZE PARTITION = ' + CAST(@PartitionNumber AS NVARCHAR) + ';' + CHAR(13)\nEND\n\nELSE IF (@IndexFragmentationValue &gt; 20)\nBEGIN\nSET @SQLCMD = @SQLCMD + 'ALTER INDEX &#091;' + @IndexName + '] ON &#091;' + @SchemaName + '].&#091;' + @TableName + '] REBUILD PARTITION = ' + CAST(@PartitionNumber AS NVARCHAR) + ';' + CHAR(13)\nEND\n\nFETCH index_cursor INTO @IndexName, @IndexId, @ObjectId, @PartitionNumber, @SchemaName, @TableName\n\nEND\n\nCLOSE index_cursor\nDEALLOCATE index_cursor\n\n--PRINT @SQLCMD\nEXEC(@SQLCMD)<\/code><\/pre>\n\n\n\n<p>Once the partition maintenance is completed, we can then maintain all other indexes (such as those of other tables) in the following way:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>EXECUTE &#091;dba_tools].&#091;dbo].&#091;IndexOptimize]\n@Databases = 'USER_DATABASES',\n@UpdateStatistics = 'ALL',\n@Indexes = 'ALL_INDEXES,-&#091;partitioning_demo].&#091;dbo].&#091;PK_T_Data_2],-&#091;partitioning_demo].&#091;dbo].&#091;NCI-1-2]',\n@LogToTable = 'Y'<\/code><\/pre>\n\n\n\n<p>Thank you, 
Amine Haloui.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A client submitted the following issues : During an audit of their environment, we also noticed that the database data files were hosted on a volume using an MBR partition type (which is limited to 2 TB). 2 tables were involved (but only one needed to be partitioned, the &#8220;Data&#8221; table) because we had a [&hellip;]<\/p>\n","protected":false},"author":147,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229,198,368,99],"tags":[994,366],"type_dbi":[2874],"class_list":["post-36991","post","type-post","status-publish","format-standard","hentry","category-database-administration-monitoring","category-database-management","category-development-performance","category-sql-server","tag-partitioned-table","tag-partitioning","type-sql-server"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Customer case study - SQL Server table partitioning - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Customer case study - SQL Server table partitioning\" \/>\n<meta property=\"og:description\" content=\"A client submitted the following issues : During an audit of their environment, we also noticed that the database data files were hosted on a volume using an MBR partition type (which is limited to 2 TB). 
2 tables were involved (but only one needed to be partitioned, the &#8220;Data&#8221; table) because we had a [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2025-02-07T20:09:22+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-02-16T22:56:34+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1040\" \/>\n\t<meta property=\"og:image:height\" content=\"609\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Amine Haloui\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Amine Haloui\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\"},\"author\":{\"name\":\"Amine Haloui\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe\"},\"headline\":\"Customer case study &#8211; SQL Server table partitioning\",\"datePublished\":\"2025-02-07T20:09:22+00:00\",\"dateModified\":\"2025-02-16T22:56:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\"},\"wordCount\":1196,\"commentCount\":0,\"image\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg\",\"keywords\":[\"partitioned table\",\"Partitioning\"],\"articleSection\":[\"Database Administration &amp; Monitoring\",\"Database management\",\"Development &amp; Performance\",\"SQL Server\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\",\"name\":\"Customer case study - SQL Server table partitioning - dbi 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg\",\"datePublished\":\"2025-02-07T20:09:22+00:00\",\"dateModified\":\"2025-02-16T22:56:34+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg\",\"contentUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg\",\"width\":1040,\"height\":609},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Customer case study &#8211; SQL Server table partitioning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe\",\"name\":\"Amine Haloui\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g\",\"caption\":\"Amine Haloui\"},\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/aminehaloui\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Customer case study - SQL Server table partitioning - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/","og_locale":"en_US","og_type":"article","og_title":"Customer case study - SQL Server table partitioning","og_description":"A client submitted the following issues : During an audit of their environment, we also noticed that the database data files were hosted on a volume using an MBR partition type (which is limited to 2 TB). 
2 tables were involved (but only one needed to be partitioned, the &#8220;Data&#8221; table) because we had a [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/","og_site_name":"dbi Blog","article_published_time":"2025-02-07T20:09:22+00:00","article_modified_time":"2025-02-16T22:56:34+00:00","og_image":[{"width":1040,"height":609,"url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg","type":"image\/jpeg"}],"author":"Amine Haloui","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Amine Haloui","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/"},"author":{"name":"Amine Haloui","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe"},"headline":"Customer case study &#8211; SQL Server table partitioning","datePublished":"2025-02-07T20:09:22+00:00","dateModified":"2025-02-16T22:56:34+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/"},"wordCount":1196,"commentCount":0,"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg","keywords":["partitioned table","Partitioning"],"articleSection":["Database Administration &amp; Monitoring","Database management","Development &amp; Performance","SQL 
Server"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/","url":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/","name":"Customer case study - SQL Server table partitioning - dbi Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage"},"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1-1024x600.jpg","datePublished":"2025-02-07T20:09:22+00:00","dateModified":"2025-02-16T22:56:34+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#primaryimage","url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg","contentUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/02\/s1.jpg","width":1040,"height":609},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/customer-case-study-sql-server-table-partitioning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","ite
m":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Customer case study &#8211; SQL Server table partitioning"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/221331d69d49c63fca67069b49b813fe","name":"Amine Haloui","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/64707272207cd8d2667aefcb212f3ff5d19a15813da5aad6553f109d1f1afec1?s=96&d=mm&r=g","caption":"Amine 
Haloui"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/aminehaloui\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/36991","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/147"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=36991"}],"version-history":[{"count":53,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/36991\/revisions"}],"predecessor-version":[{"id":37285,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/36991\/revisions\/37285"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=36991"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=36991"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=36991"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=36991"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}