{"id":10733,"date":"2018-01-03T20:56:34","date_gmt":"2018-01-03T19:56:34","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/experiencing-updating-statistics-on-a-big-table-by-unusual-ways\/"},"modified":"2023-07-17T17:02:50","modified_gmt":"2023-07-17T15:02:50","slug":"experiencing-updating-statistics-on-a-big-table-by-unusual-ways","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/experiencing-updating-statistics-on-a-big-table-by-unusual-ways\/","title":{"rendered":"Experiencing update statistics on a big table with circuitous ways"},"content":{"rendered":"<p>This is my first blog post of the new year, and my first in a while. Indeed, last year I put all my energy into realigning my Linux skills with Microsoft\u2019s new strategy that opens SQL Server to the Open Source world. But at the same time, I did a lot of interesting work at customer sites, and I decided to write about one of them to begin this new year 2018.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-20297 size-full\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-0-stats-e1515001301801.jpg\" alt=\"blog 124 - 0 - stats\" width=\"400\" height=\"267\" \/><\/p>\n<p>In this blog post, I will highlight what I consider a distinctive approach to optimizing an update statistics operation for one particular, big table. I already had to manage such maintenance tasks in one of my previous jobs as a DBA, and I continue to learn more about the topic, now from a different position. The fact is, as a consultant, I usually try to provide my customers with the best approach regarding both the issue and the context. In reality, from my experience, the latter is much more important than you may think, and sometimes we have to consider other ways to get the expected outcome. 
I think this is not necessarily a bad thing, because following a different path (not the best one) may reveal other interesting options we may consider for our final recipe.<\/p>\n<p>Let\u2019s go back to my customer\u2019s case and set the context: one big database (1.7TB) on SQL Server 2014 SP2, and an update statistics operation that is part of a global database maintenance strategy and takes a very long time (roughly 30 hours in the best-case scenario). We identified that the biggest part of the execution time is related to one big, non-partitioned table (let\u2019s say <strong><em>dbo.bigtable<\/em><\/strong>) with the following figures: 148 million rows \/ 222GB in size \/ 119 columns \/ 94 statistics \/ 14 indexes. Regarding the last two figures, the majority of the statistics were in fact auto-generated by the SQL Server engine over time and across different application release campaigns. Furthermore, it is worth mentioning that the DBA had to use a customized sample value (between 10 and 50 percent) to minimize the impact of the update statistics operation on this particular table, given the table size and the configured maintenance window timeframe.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">My first and obvious approach<\/span><\/strong><\/p>\n<p>My first approach consisted of warning the database administrator about the number of statistics on this table (probably a lot of them are not in use anymore) as well as the development team about the badly designed table. In addition, the number of rows in this table may also indicate that it contains a lot of archive data, and we could reduce the overall size by using archiving processes (and why not the built-in partitioning features and incremental statistics, since we&#8217;re running Enterprise Edition?). 
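<\/p>\n<p>As a side note, auto-created statistics on such a table \u2013 and when and how they were last updated \u2013 can be inventoried with a query along these lines. This is just a sketch, with <em>dbo.bigtable<\/em> standing in for the real table name; <em>sys.dm_db_stats_properties<\/em> is available from SQL Server 2008 R2 SP2 \/ 2012 SP1 onwards:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">-- List statistics on the table, flagging auto-created ones\n-- and showing when and with how many sampled rows they were last updated\nSELECT\n\tst.name AS stats_name,\n\tst.auto_created,\n\tsp.last_updated,\n\tsp.rows,\n\tsp.rows_sampled\nFROM sys.stats AS st\nCROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id) AS sp\nWHERE st.object_id = OBJECT_ID(N'dbo.bigtable')\nORDER BY sp.last_updated;<\/pre>\n<p>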
However, reviewing the model was not an option for my customer because it would require a lot of work, and the DBA wasn&#8217;t keen on the idea of archiving data (business constraints) or removing auto-generated statistics on this specific table. So, what I considered a good approach was not adopted, and the human aspect was definitely a big part of it. In a nutshell, at this stage we were in a standby situation \u2026<\/p>\n<p><strong><span style=\"text-decoration: underline;\">When high-performance storage comes to the rescue \u2026<\/span><\/strong><\/p>\n<p>In parallel, my customer was considering replacing its old storage with a new one, and Pure Storage was in the loop. Pure Storage is one of the flash storage providers on the market, and the good news is that I was already in touch with <a href=\"https:\/\/twitter.com\/scrimi78\" target=\"_blank\" rel=\"noopener noreferrer\">@scrimi78 <\/a> (Systems Engineer at Pure Storage) in Switzerland. During this project, we had a lot of interesting discussions and interactions about Pure Storage products, and I appreciated his availability to provide technical documentation and explanations. At the same time, they generously lent us a Pure Storage array to play with snapshot volumes to be used for database refreshes between production and dev environments.<\/p>\n<p>In the context of my customer, we were already aware of the poor performance capabilities of the old storage, and the replacement by a Pure Storage \/\/M20 model was very beneficial for the database environments, as shown by the following figures:<\/p>\n<p>I only included the IO-related statistics of the production environment for the last year, and we can notice a significant drop in average latency after moving the production database files to the new storage layout. 
Figures are by year and month.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20320\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-1-wait-stats-e1515013710936.jpg\" alt=\"blog 124 - 1 - wait stats\" width=\"800\" height=\"317\" \/><\/p>\n<p>We noticed the same trend in the file IO statistics for the database concerned.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20302\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-3-file-io-stats-e1515007319939.jpg\" alt=\"blog 124 - 3 - file io stats\" width=\"800\" height=\"371\" \/><\/p>\n<p>Very impressive, isn\u2019t it? But what about our update statistics? We naturally observed a significant drop in execution time, down to 6 hours (an 80% improvement), because generally speaking this is an IO-bound operation, and especially so in our case. The yellow columns represent operations we had to stop manually to avoid impacting the current application workload (&gt; 35 hours of execution time). 
You may also notice we changed the sample value to 100 percent after installing the Pure Storage \/\/M20 model, compared to previous runs with a sample value of 50 percent.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20305\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-3-update-stats-execution-time-e1515010135471.jpg\" alt=\"blog 124 - 3 - update stats execution time\" width=\"800\" height=\"427\" \/><\/p>\n<p><strong><span style=\"text-decoration: underline;\">Changing the initial way to update statistics for this table \u2026<\/span><\/strong><\/p>\n<p>The point here is that we know we may now rely on the storage performance to update statistics efficiently \u2013 so why not push the limits of the storage by changing the way our update statistics operation runs, which by default is sequential, with one statistic at a time? So, we decided to write a custom script to carry out the update operation in parallel to boost the overall execution performance. Since <a href=\"https:\/\/blogs.msdn.microsoft.com\/sql_server_team\/boosting-update-statistics-performance-with-sql-2014-sp1cu6\/\" target=\"_blank\" rel=\"noopener noreferrer\">SQL Server 2014 SP1 CU6<\/a> we may benefit from improved support for parallel statistics updates by using <a href=\"https:\/\/support.microsoft.com\/en-us\/help\/3156157\/running-multiple-update-statistics-for-different-statistics-on-a-singl\" target=\"_blank\" rel=\"noopener noreferrer\">trace flag 7471<\/a>, which changes the locking behavior so that the SQL Server engine no longer acquires an X lock on the UPDSTATS resource of the table. The script consists of creating a pool of parallel SQL jobs, each updating one particular statistic on a single table. 
I put it below if you want to use it, but in the meantime, note that it is also possible to go through an interesting alternative solution based on Service Broker capabilities <a href=\"https:\/\/sqlperformance.com\/2016\/05\/sql-statistics\/parallel-rebuilds\">here<\/a>. What is certain is that we will integrate one or the other version \u2013 with some adjustments \u2013 into our <a href=\"https:\/\/www.dbi-services.com\/offering\/products\/dmk-management-kit\/\" target=\"_blank\" rel=\"noopener noreferrer\">DMK management kit tool<\/a>. Here is my proposal based on concurrent SQL jobs (feel free to comment):<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SET NOCOUNT ON;\n\nDECLARE \n\t@database_name sysname = N'AdventureWorks2012', -- Target database\n\t@table_name sysname = N'bigTransactionHistory', -- Target table\n\t@schema_name sysname = N'dbo',                  -- Target schema\n\t@batch_upd_stats INT = 3,                       -- Number of simultaneous jobs\n\t@stat_sample INT = 0, \t\t\t\t\t        -- 0 = default sample rate value | 100 = FULLSCAN | Otherwise WITH SAMPLE @stat_sample PERCENT\n\t@debug TINYINT = 1\t\t\t\t\t\t\t\t-- 0 = silent execution | 1 = execute and print commands | 99 = verbose dry run (no execution)\n\t\n\n-- @stats_sample variable table\n-- Will contain statistics with a custom sampling rate value\n-- Otherwise the default sample rate value from SQL Server is used\n-- You may also use a user table for more flexibility; it will \n-- be used from this script \nDECLARE @stats_sample TABLE\n(\n\tcolumn_name sysname,\n\tstats_name sysname,\n\tsample_stat tinyint\n);\n\nINSERT @stats_sample VALUES ('TransactionID', '_WA_Sys_00000001_4CC05EF3', 100),\n\t\t\t\t\t\t    ('ProductID', '_WA_Sys_00000002_4CC05EF3', 100);\n\n\n\n-- working parameters\nDECLARE\n\t@nb_jobs_running INT = 0,\n\t@count INT = 0,\n\t@i INT,\n\t@j INT,\n\t@max INT,\n\t@sqlcmd NVARCHAR(MAX) = N'',\n\t@job_name_pattern sysname,\n\t@start_date DATETIME2(0),\n\t@end_date 
DATETIME2(0),\n\t@stats_name sysname,\n\t@sample_stat int\n\t;\n\n\n-- Creation of update stat jobs\nIF OBJECT_ID('tempdb..#update_stats_tt', 'U') IS NOT NULL\n\tDROP TABLE #update_stats_tt;\n\nSELECT \n\tid = IDENTITY(INT, 1,1),\n\ts.name AS [schema_name],\n\tt.name AS table_name,\n\tst.name AS stats_name,\n\tsample_stat,\n\t'USE [msdb]\n\t\n\tDECLARE @jobId BINARY(16);\n\t\n\tEXEC  msdb.dbo.sp_add_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', \n\t\t\t@enabled=1, \n\t\t\t@notify_level_eventlog=0, \n\t\t\t@notify_level_email=2, \n\t\t\t@notify_level_netsend=2, \n\t\t\t@notify_level_page=2, \n\t\t\t@delete_level=0, \n\t\t\t@category_name=N''[Uncategorized (Local)]'', \n\t\t\t@owner_login_name=N''sa'', @job_id = @jobId OUTPUT\n\t--select @jobId\n\t\n\tEXEC msdb.dbo.sp_add_jobserver @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @server_name = N''' + @@SERVERNAME + '''\n\t\n\tEXEC msdb.dbo.sp_add_jobstep @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @step_name=N''UPDATE STATS'', \n\t\t@step_id=1, \n\t\t@cmdexec_success_code=0, \n\t\t@on_success_action=1, \n\t\t@on_fail_action=2, \n\t\t@retry_attempts=0, \n\t\t@retry_interval=0, \n\t\t@os_run_priority=0, @subsystem=N''TSQL'', \n\t\t@command=N''UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' 
+ QUOTENAME(t.name) + ' ' + st.name + CASE COALESCE(sample_stat, @stat_sample, 0)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tWHEN 0 THEN ' '\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tWHEN 100 THEN ' WITH FULLSCAN'\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tELSE ' WITH SAMPLE ' + CAST(COALESCE(sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tEND + ''', \n\t\t@database_name=N''' + @database_name + ''', \n\t\t@flags=0\n\n\tEXEC msdb.dbo.sp_update_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', \n\t\t\t@enabled=1, \n\t\t\t@start_step_id=1, \n\t\t\t@notify_level_eventlog=0, \n\t\t\t@notify_level_email=2, \n\t\t\t@notify_level_netsend=2, \n\t\t\t@notify_level_page=2, \n\t\t\t@delete_level=0, \n\t\t\t@description=N'''', \n\t\t\t@category_name=N''[Uncategorized (Local)]'', \n\t\t\t@owner_login_name=N''sa'', \n\t\t\t@notify_email_operator_name=N'''', \n\t\t\t@notify_netsend_operator_name=N'''', \n\t\t\t@notify_page_operator_name=N''''\n\t' AS upd_stats_cmd\nINTO  #update_stats_tt\nFROM \n\tsys.stats AS st\nJOIN \n\tsys.tables AS t ON st.object_id = t.object_id\nJOIN\n\tsys.schemas AS s ON s.schema_id = t.schema_id\nLEFT JOIN \n\t@stats_sample AS ss ON ss.stats_name = st.name\nWHERE \n\tt.name =  @table_name\n\tAND s.name = @schema_name\nORDER BY \n\tstats_id;\n\nIF @debug = 99\n\tSELECT * FROM #update_stats_tt;\n\n-- Enable traceflag 7471 to allow U lock while stat is updating\nPRINT '--&gt; Enable trace flag 7471 during update stats operation';\n\nSET @sqlcmd = N'DBCC TRACEON(7471, -1);' + CHAR(13)\nEXEC sp_executesql @sqlcmd;\n\nPRINT '-----------------------------------';\n\nSET @start_date = CURRENT_TIMESTAMP;\n\nSET @max = (SELECT MAX(id) FROM #update_stats_tt);\nSET @i = 0;\n\nIF @debug = 99\n\tSELECT 'DEBUG --&gt;  @max (nb stats) : ' + CAST(@max AS VARCHAR(15));\n\n-- Entering to the loop ...\nWHILE (@i &lt;= @max OR 
@nb_jobs_running &lt;&gt; 0)\nBEGIN\n\n\tSET @j = @i + 1;\n\n\tIF @debug = 99\n\tBEGIN\n\t\tSELECT 'DEBUG --&gt;  @i : ' + CAST(@i AS VARCHAR(15));\n\t\tSELECT 'DEBUG --  @j = @i + 1 : ' + CAST(@j AS VARCHAR(15));\n\tEND\n\n\t-- Computing number of update stats jobs to create\n\t-- regarding both the max configured of simulataneous jobs and current running jobs\n\tSET @count = @batch_upd_stats - @nb_jobs_running;\n\n\tIF @debug = 99\n\t\tSELECT 'DEBUG --  @count : ' + CAST(@count AS VARCHAR(15));\n\n\t-- Here we go ... creating update stats sql_jobs\n\tWHILE (@j &lt;= @i + @count)\n\tBEGIN\n\n\t\tSET @sqlcmd = '';\n\t\tSET @stats_name = NULL;\n\t\tSET @sample_stat = NULL;\n\t\tSET @sqlcmd = NULL;\n\n\t\tSELECT \n\t\t\t@stats_name = stats_name,\n\t\t\t@sample_stat = sample_stat,\n\t\t\t@sqlcmd = upd_stats_cmd + CHAR(13) + '---------------------' + CHAR(13)\n\t\tFROM \n\t\t\t#update_stats_tt\n\t\tWHERE \n\t\t\tid = @j;\n\n\t\tIF @debug = 99\n\t\tBEGIN\n\t\t\tSELECT 'DEBUG --  @j loop : ' + CAST(@j AS VARCHAR(15));\n\t\t\tSELECT @stats_name, @sample_stat\n\t\tEND\n\n\t\tIF @debug = 1\n\t\t\tPRINT 'UPDATE STATISTICS ' + QUOTENAME(@schema_name) + '.' 
+ QUOTENAME(@table_name) + ' ' + @stats_name + CASE COALESCE(@sample_stat, @stat_sample, 0)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tWHEN 0 THEN ' '\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tWHEN 100 THEN ' WITH FULLSCAN'\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tELSE ' WITH SAMPLE ' + CAST(COALESCE(@sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t  END + '';\n\t\tIF @debug IN (0,1) \n\t\tBEGIN\n\t\t\tPRINT '--&gt; Create SQL job UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + @stats_name + '';\n\t\t\t\n\t\t\tEXEC sp_executesql @sqlcmd;\n\t\tEND\n\n\t\tSET @j += 1;\n\n\tEND\n\n\t-- We need to rewind by 1 to target the next stat to update\n\tSET @j -= 1;\n\n\tPRINT '-----------------------------------';\n\n\t-- Start all related update stats jobs \n\tSET @sqlcmd = N'';\n\n\tSELECT @sqlcmd += 'EXEC msdb.dbo.sp_start_job @job_name = ''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + stats_name + ''';' + CHAR(13)\n\tFROM #update_stats_tt\n\tWHERE id BETWEEN (@i + 1) AND (@i + @count);\n\n\tIF @debug = 1\n\t\tPRINT @sqlcmd;\n\t\n\tIF @debug IN (0,1)\n\tBEGIN\n\t\tPRINT '--&gt; Starting UPDATE_STATS_' + @schema_name + '_' + @table_name + ' jobs';\n\t\tEXEC sp_executesql @sqlcmd;\n\t\tPRINT '-----------------------------------';\n\tEND\n\n\t-- Waiting 10 seconds before checking running jobs\n\tWAITFOR DELAY '00:00:10';\n\n\t-- Construction job pattern to the next steps - check running jobs and stop terminated jobs\n\tSET @job_name_pattern = 'UPDATE_STATS_' + @schema_name + '_' + @table_name + '_';\n\t\n\tIF @debug = 99\n\t\tSELECT 'DEBUG - @job_name_pattern = ' + @job_name_pattern\n\n\tSELECT \n\t\t@nb_jobs_running = COUNT(*)\n\tFROM \n\t\tmsdb.dbo.sysjobactivity ja (NOLOCK)\n\tLEFT JOIN \n\t\tmsdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id\n\tJOIN \n\t\tmsdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id 
= j.job_id \n\tJOIN \n\t\tmsdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id\n\tWHERE \n\t\tja.session_id = (\n\t\t\t\t\t\t\tSELECT TOP 1 session_id \n\t\t\t\t\t\t\tFROM msdb.dbo.syssessions  (NOLOCK) \n\t\t\t\t\t\t\tORDER BY agent_start_date DESC)\n\t\tAND start_execution_date is not null\n\t\tAND stop_execution_date is NULL\n\t\tAND j.name LIKE @job_name_pattern + '%';\n\n\tIF @debug = 99\n\t\tSELECT 'DEBUG --  @nb_jobs_running : ' + CAST(@nb_jobs_running AS VARCHAR(15));\n\t\n\tIF @nb_jobs_running = @batch_upd_stats\n\t\tPRINT '--&gt; All SQL jobs are running. Waiting for 5s ...';\n\n\t-- Waiting until at least one job is terminated ...\n\tWHILE (@nb_jobs_running = @batch_upd_stats)\n\tBEGIN\n\n\t\t-- Count nb of running jobs only\n\t\tSELECT \n\t\t\t@nb_jobs_running = COUNT(*)\n\t\tFROM \n\t\t\tmsdb.dbo.sysjobactivity ja (NOLOCK)\n\t\tLEFT JOIN \n\t\t\tmsdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id\n\t\tJOIN \n\t\t\tmsdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id \n\t\tJOIN \n\t\t\tmsdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id\n\t\tWHERE ja.session_id = (\n\t\t\t\t\t\t\t\tSELECT TOP 1 session_id \n\t\t\t\t\t\t\t\tFROM msdb.dbo.syssessions  (NOLOCK) \n\t\t\t\t\t\t\t\tORDER BY agent_start_date DESC)\n\t\t\tAND start_execution_date is not null\n\t\t\tAND stop_execution_date is NULL\n\t\t\tAND j.name LIKE @job_name_pattern + '%'\n\n\t\tWAITFOR DELAY '00:00:05';\n\n\tEND\n\n\tPRINT '-----------------------------------';\n\n\t-- Delete terminated SQL jobs \n\tSET @sqlcmd = '';\n\n\tSELECT \n\t\t@sqlcmd += 'EXEC msdb.dbo.sp_delete_job  @job_name = ''' + j.name + ''';' + CHAR(13)\n\tFROM \n\t\tmsdb.dbo.sysjobactivity ja (NOLOCK)\n\tLEFT JOIN \n\t\tmsdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id\n\tJOIN \n\t\tmsdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id \n\tWHERE \n\t\tj.name LIKE 
@job_name_pattern + '%'\n\t\tAND start_execution_date is not null\n\t\tAND stop_execution_date is not null\n\n\tIF @debug = 1\n\t\tPRINT @sqlcmd;\n\n\tIF @debug IN (0,1)\n\tBEGIN\n\t\tPRINT '--&gt; Removing terminated UPDATE_STATS_' + @schema_name + '_' + @table_name + '_XXX jobs'\n\t\tEXEC sp_executesql @sqlcmd;\n\t\tPRINT '-----------------------------------';\n\tEND\n\n\tSET @i = @j;\n\n\tIF @debug = 99\n\t\tSELECT 'DEBUG --  @i = @j : ' + CAST(@j AS VARCHAR(15));\n\t\nEND\n\n-- Disable traceflag 7471 (only part of the update stats maintenance)\nPRINT '-----------------------------------';\nPRINT '--&gt; Disable trace flag 7471';\nSET @sqlcmd = N'DBCC TRACEOFF(7471, -1);' + CHAR(13)\nEXEC sp_executesql @sqlcmd;\n\nSET @end_date = CURRENT_TIMESTAMP;\n\n-- Display execution time in seconds\nSELECT DATEDIFF(SECOND, @start_date, @end_date) AS duration_S;\n\n<\/pre>\n<p>We initially ran the test on a QA environment with 4 vCPUs and 22GB of RAM &#8211; that was pretty close to the production environment. We noticed that when we increased the number of parallel jobs above 3, we encountered RESOURCE_SEMAPHORE waits. This is because of the memory grants required for each update statistics command. Unfortunately, there was no chance here to increase the amount of memory to push the limit further, but we still noticed an average improvement factor of 1.5 (still with a sample of 100 percent).<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20306\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-5-update-stats-execution-time-e1515010752651.jpg\" alt=\"blog 124 - 5 - update stats execution time\" width=\"800\" height=\"396\" \/><\/p>\n<p>At this point I asked myself whether we should rely only on the storage performance to update statistics. 
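<\/p>\n<p>As an aside, the RESOURCE_SEMAPHORE waits mentioned above can be confirmed while the jobs are running with a query such as the following \u2013 a simple sketch based on <em>sys.dm_exec_query_memory_grants<\/em>, nothing specific to our script:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">-- Sessions waiting for a memory grant (grant_time stays NULL while queued)\nSELECT\n\tsession_id,\n\trequested_memory_kb,\n\tgranted_memory_kb,\n\twait_time_ms\nFROM sys.dm_exec_query_memory_grants\nWHERE grant_time IS NULL;<\/pre>\n<p>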
After all, we managed to reduce the execution time below the maximum maintenance window timeframe \u2013 fixed at 8 hours in our context.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">Analyzing the data distribution further<\/span><\/strong><\/p>\n<p>Relying on the storage performance and the new SQL Server capabilities was a good thing for us, but I kept in mind that updating 94 statistics was probably not a smart idea, because I was convinced a big part of them were pretty useless. There is no easy way to verify this, because we had a mix of stored procedures and ad-hoc statements from different applications that refer to this database (let\u2019s say we also have super users who run queries directly from SSMS). So I decided to put the question differently: if we may not remove some auto-generated statistics, do we necessarily have to update all of them with FULLSCAN for this specific table? What about the data distribution of the columns covered by auto-generated statistics? In the context of <strong><em>dbo.bigTable<\/em><\/strong>, and given the number of rows, we may easily run into cardinality estimation issues if the data distribution is not correctly represented, especially in the case of skewed data. Nevertheless, manually analyzing the histogram steps of each statistic may be cumbersome, so we decided to go through the stored procedures provided by Kimberly Tripp <a href=\"https:\/\/www.sqlskills.com\/blogs\/kimberly\/sqlskills-procs-analyze-data-skew-create-filtered-statistics\/\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>. However, the version we got did not support analyzing columns not involved in an index, as mentioned below:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">-- Considering for v2\n--\t, @slowanalyze\tchar(5) = 'FALSE'\t\t\n-- No index that exists with this column as the \n-- high-order element...analyze anyway. \n-- NOTE: This might require MULTIPLE table scans. \n-- I would NOT RECOMMEND THIS. 
I'm not even sure why I'm allowing this...<\/pre>\n<p>We had to update the code a little to fix this, but don\u2019t get me wrong: Kimberly\u2019s recommendation still makes sense, because in the background SQL Server orders the data of the concerned column to get a picture of the data distribution. Without any index on that column, analyzing the data distribution may be a very time- and resource-consuming task (including tempdb for sorting data), especially when the table becomes big, as illustrated by the following sample of code executed by SQL Server while updating statistics.<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">SELECT StatMan([SC0]) \nFROM \n(\n\tSELECT TOP 100 PERCENT [TransactionID] AS [SC0] \n\tFROM [dbo].[bigTransactionHistory] WITH (READUNCOMMITTED)  \n\tORDER BY [SC0] \n) AS _MS_UPDSTATS_TBL  OPTION (MAXDOP 16)<\/pre>\n<p>This was the case for my <em><strong>dbo.bigTable<\/strong><\/em>, and I had to cut the operation into small pieces spread across several maintenance windows.<\/p>\n<p>So, we had to add to the <strong><em>sp_SQLskills_AnalyzeColumnSkew<\/em><\/strong> stored procedure the creation of a temporary index on the concerned column before analyzing the data. This compromise allowed us to gain a lot of time and was validated on the QA environment.<\/p>\n<p>We tried different combinations of parameters and finally ended up using the following ones <strong><em>(@difference = 10000<\/em><\/strong>, <strong><em>@numofsteps = 5)<\/em><\/strong>, based on our experience with query performance and cardinality estimation issues on this specific table. 
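<\/p>\n<p>To give an idea, the compromise looks roughly like the following. This is a hedged sketch: the index and column names are purely illustrative, and only the <em>@difference<\/em> and <em>@numofsteps<\/em> parameters are taken from our real runs \u2013 the other parameter names of <em>sp_SQLskills_AnalyzeColumnSkew<\/em> may differ in your version of the procedure:<\/p>\n<pre class=\"brush: sql; gutter: true; first-line: 1\">-- Temporary index so the skew analysis does not trigger a full table sort\nCREATE NONCLUSTERED INDEX IX_tmp_skew_SomeColumn\nON dbo.bigTable (SomeColumn);\nGO\n\n-- Analyze the data distribution of the column (illustrative parameter names)\nEXEC dbo.sp_SQLskills_AnalyzeColumnSkew\n\t@schemaname = N'dbo',\n\t@objectname = N'bigTable',\n\t@columnname = N'SomeColumn',\n\t@difference = 10000,\n\t@numofsteps = 5;\nGO\n\n-- Drop the temporary index once the analysis is done\nDROP INDEX IX_tmp_skew_SomeColumn ON dbo.bigTable;\nGO<\/pre>\n<p>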
We got an interesting output, as shown below:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20309\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-6-update-stats-execution-time-e1515011513709.jpg\" alt=\"blog 124 - 6 - update stats execution time\" width=\"800\" height=\"599\" \/><\/p>\n<p>The number of columns with skewed data is low compared to those with uniformly distributed data. A <strong>[Partial skew]<\/strong> value means we only got differences from the <strong><em>[Nb steps (factor)]<\/em><\/strong> column (=&gt; 2.5 by default).<\/p>\n<p>That was pretty good news, because for columns with no skewed data we were able to consider updating them by either using the default sampling rate applied by SQL Server (a nonlinear algorithm under the control of the optimizer) or specifying a custom sampling rate value to make sure we are not scanning too much data. In my context, mixing parallel jobs and a default sampling rate value for column statistics with no skewed data seems to be good enough (no query plan regression at this moment), but we will probably have to change that in the future. Anyway, we managed to reduce the execution time to one hour, as shown below:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20310\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/blog-124-7-update-stats-execution-time-e1515011894984.jpg\" alt=\"blog 124 - 7 - update stats execution time\" width=\"850\" height=\"374\" \/><\/p>\n<p>For columns with skewed data we are still keeping the FULLSCAN option, and we plan to investigate filtered statistics to further enhance cardinality estimates in the future.<\/p>\n<p>The bottom line of this story is that I probably would never have thought of going through all the aforementioned options if the customer had accepted my first proposal (who knows?). 
Technically and humanly speaking, it was a good learning experience. Obviously, you might think it was not the best or the simplest approach, and you would be right. Indeed, there are drawbacks here, such as the overhead and complexity of writing custom scripts and maintaining individual statistics over time. But from my experience in the consulting world, things are rarely black or white, and we also have to work with plenty of variations in customers\u2019 contexts to achieve not necessarily what we consider the best outcome, but a satisfactory one for the business.<\/p>\n<p>Happy new year 2018!<\/p>\n<p><span style=\"float: none; background-color: #ffffff; color: #333333; cursor: text; font-family: Georgia,'Times New Roman','Bitstream Charter',Times,serif; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 400; letter-spacing: normal; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none;\">By David Barbarin<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is my first blog of this new year and since a while by the way. Indeed, last year, I put all my energy to realign my Linux skills with the new Microsoft\u2019s strategy that opens SQL Server to Open Source world. 
But at the same time, I did a lot of interesting stuff at [&hellip;]<\/p>\n","protected":false},"author":26,"featured_media":10734,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[229],"tags":[1244,67,51,1245,1218,1246],"type_dbi":[],"class_list":["post-10733","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-database-administration-monitoring","tag-parallel-update-statistics","tag-performance","tag-sql-server","tag-tf-7471","tag-tuning","tag-update-statistics"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Experiencing update statistics on a big table with circuitous ways<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/experiencing-updating-statistics-on-a-big-table-by-unusual-ways\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Experiencing update statistics on a big table with circuitous ways\" \/>\n<meta property=\"og:description\" content=\"This is my first blog of this new year and since a while by the way. Indeed, last year, I put all my energy to realign my Linux skills with the new Microsoft\u2019s strategy that opens SQL Server to Open Source world. 