{"id":43157,"date":"2026-02-25T21:06:48","date_gmt":"2026-02-25T20:06:48","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=43157"},"modified":"2026-02-25T21:06:50","modified_gmt":"2026-02-25T20:06:50","slug":"dctm-detect-and-analyze-slow-sql-queries","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/","title":{"rendered":"Dctm &#8211; Detect and analyze slow SQL queries"},"content":{"rendered":"\n<p>Documentum is a powerful but also complex piece of software. Therefore, debugging performance issues often comes down to being able to implement and review various traces. In this blog, I will talk about how you can detect and analyze slow SQL queries from a Documentum Server. Several years ago, I wrote a <a href=\"https:\/\/www.dbi-services.com\/blog\/documentum-dfc-traces-setup-investigation\/\" id=\"10660\" target=\"_blank\" rel=\"noreferrer noopener\">blog post about DFC traces<\/a>. It included the setup and investigation process, using the EMC Python\/AWK parser scripts as a base. In addition, I also wrote <a href=\"https:\/\/www.dbi-services.com\/blog\/howto-detect-and-analyze-long-stuck-threads-in-tomcat\/\" id=\"29794\" target=\"_blank\" rel=\"noreferrer noopener\">another one about Tomcat slow\/stuck threads<\/a>. With these two posts, you could already cover the Documentum DFC clients (like JMS Apps, DA, D2, xPlore, etc.).<\/p>\n\n\n\n<p>As you might imagine, SQL traces in Documentum can be quite large. A lot of queries are sent to the Database to retrieve, insert or update information: things like document metadata, users, permissions, audittrail entries, document creation, etc. It is less intensive than DFC or SSL\/Network traces, because those write dozens of lines for each and every action. 
On the other hand, SQL traces limit themselves to the query being executed, the execution time, and potentially some commits.<\/p>\n\n\n\n<p>A customer recently provided me with a Repository log file containing SQL traces related to a problematic action in DA. I was expecting maybe a 10- or 20-minute log file at most. But what I received was a 4-hour log file, because the action took that long to complete in DA. I thought this would be a good use case to share in a blog, to explain how you can quickly extract interesting details from such SQL traces.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-1-defining-variables\">1. Defining variables<\/h2>\n\n\n\n<p>As is often the case, I find it easier for both you and me to use predefined variables. This allows you to just change these details and, for the rest of the blog, simply copy\/paste the exact same queries without having to wonder whether they will work.<\/p>\n\n\n\n<p>I&#8217;m using my Mac and I have a folder which contains the Repository log file:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ls\ndbi01.log\nmac:~$\nmac:~$ # Variables definition\nmac:~$ source_repo_log_file=&quot;dbi01.log&quot;\nmac:~$ query_times=&quot;query_times.log&quot;\nmac:~$ extracted_session_log_file=&quot;extracted_session_log_file.log&quot;\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-2-extracting-the-sql-execution-times\">2. Extracting the SQL execution times<\/h2>\n\n\n\n<p>Once done, let&#8217;s proceed directly with extracting the SQL execution times. The durations will be sorted to easily understand whether there is a problem with some queries. In the initial log file, there are more than a hundred thousand SQL queries executed. Out of these, I selected only the 47 slowest &#8220;<em><strong>EXEC<\/strong><\/em>&#8221; entries below. That is not an arbitrary number; I initially retrieved the last 100 lines, but most of them were less than 0.1s. 
Therefore, I reduced that number to only execution times that might be interesting to investigate.<\/p>\n\n\n\n<p>In the first command, I am selecting all &#8220;<strong><em>EXEC<\/em><\/strong>&#8221;, which will include the base &#8220;<strong><em>EXEC<\/em><\/strong>&#8221; but also other types like &#8220;<strong><em>EXECBATCH<\/em><\/strong>&#8221;, if any. There are also some &#8220;<strong><em>FETCH<\/em><\/strong>&#8221; entries in the SQL traces, but I will ignore these in this blog. The second and third commands simply show what was extracted:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Extracting all &quot;EXEC&quot; times from the Repo log\nmac:~$ ### keeping the 47 slowest (&gt;0.5s in this case)\nmac:~$ grep &quot;EXEC&quot; ${source_repo_log_file} | \\\n         awk &#039;{print $7}&#039; | \\\n         sort --version-sort | \\\n         tail -47 &gt; ${query_times}\nmac:~$\nmac:~$ wc -l ${query_times}\n    47 query_times.log\nmac:~$\nmac:~$\nmac:~$ ### Displaying the outcome sorted by asc time\nmac:~$ cat ${query_times}\n0.5566090000\n1.0761300000\n9.9041350000\n11.2433810000\n14.6858060000\n...\n786.4434090000\n961.8524350000\n1028.2339260000\n1500.6862730000\n1970.6185150000\nmac:~$\n<\/pre><\/div>\n\n\n<p>The above command can be useful if you only want to extract the execution time (7th column). But most of the time, what is useful is the full line. It will display the <strong><em>date<\/em><\/strong> (1st column), of course, as well as the <em><strong>OS PID<\/strong><\/em> (2nd column) that is executing this query, which can be useful in some cases. But most importantly, it will display the <em><strong>Session ID<\/strong><\/em> (3rd column) linked to the request. 
For that purpose, you can slightly modify the first command from above, to extract the full line for all &#8220;<em><strong>EXEC<\/strong><\/em>&#8221; and sort based on the 7th column:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Extracting all &quot;EXEC&quot; complete lines from the Repo log\nmac:~$ ### keeping the 47 slowest (&gt;0.5s in this case)\nmac:~$ grep &quot;EXEC&quot; ${source_repo_log_file} | \\\n         sort -k 7 --version-sort | \\\n         tail -47\n2025-12-19T12:25:33.056 102&#x5B;102] 0112345682105da7 &#x5B;SQL] 0 EXEC 0.5566090000\n2025-12-19T11:09:40.772 104&#x5B;104] 0112345682105d65 &#x5B;SQL] 0 EXEC 1.0761300000\n2025-12-19T12:01:28.042 110&#x5B;110] 0112345682105d9b &#x5B;SQL] 0 EXEC 9.9041350000\n2025-12-19T12:01:13.943 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 11.2433810000\n2025-12-19T11:59:16.078 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 14.6858060000\n...\n2025-12-19T11:03:02.001 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 786.4434090000\n2025-12-19T09:59:20.090 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 961.8524350000\n2025-12-19T11:59:01.360 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1028.2339260000\n2025-12-19T10:49:55.553 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1500.6862730000\n2025-12-19T09:29:55.637 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1970.6185150000\nmac:~$\n<\/pre><\/div>\n\n\n<p>Another way to do (almost) the same thing is based on the previously exported execution times. If you loop over them with a certain command, you can retrieve the same lines again. The only difference is that this second method will be significantly slower (since it parses the log file for each execution time instead of only once as in the previous command) and slightly less accurate in some edge cases. 
Technically, if you have two lines with the exact same execution time, you would not get only 2 results, but 2 x 2 results (so there would be duplicates). In my case, I do not have duplicate execution times, so it produces 100% the same outcome:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Extracting all &quot;EXEC&quot; complete lines from the Repo log\nmac:~$ ### based on previously extracted execution time\nmac:~$ while read line; do \\\n         grep &quot;EXEC.*&#x5B;&#x5B;:space:]]\\+${line}$&quot; ${source_repo_log_file}; \\\n       done &lt; &lt;(cat ${query_times})\n2025-12-19T12:25:33.056 102&#x5B;102] 0112345682105da7 &#x5B;SQL] 0 EXEC 0.5566090000\n2025-12-19T11:09:40.772 104&#x5B;104] 0112345682105d65 &#x5B;SQL] 0 EXEC 1.0761300000\n2025-12-19T12:01:28.042 110&#x5B;110] 0112345682105d9b &#x5B;SQL] 0 EXEC 9.9041350000\n2025-12-19T12:01:13.943 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 11.2433810000\n2025-12-19T11:59:16.078 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 14.6858060000\n...\n2025-12-19T11:03:02.001 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 786.4434090000\n2025-12-19T09:59:20.090 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 961.8524350000\n2025-12-19T11:59:01.360 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1028.2339260000\n2025-12-19T10:49:55.553 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1500.6862730000\n2025-12-19T09:29:55.637 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1970.6185150000\nmac:~$\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-3-how-long-is-really-spent-on-queries\">3. How long is really spent on queries?<\/h2>\n\n\n\n<p>Based on the above results, the first thing that stood out was that <em><strong>44<\/strong><\/em> of the slowest queries appeared to be linked to the <em><strong>same Session ID<\/strong><\/em>. 
Therefore, except for the first 3 rows (0.5s, 1s, and 9.9s), all other queries came from a single source. This clearly indicates that the problematic action in DA is associated with that Session ID, and it is therefore what I need to investigate further. The next step was to extract all logs related to that Session ID:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Extracting logs related to the specific session ID\nmac:~$ session_id=&quot;01123456821058fb&quot;\nmac:~$ grep &quot;${session_id}&quot; ${source_repo_log_file} \\\n         &gt; ${extracted_session_log_file}\nmac:~$\nmac:~$ wc -l ${extracted_session_log_file}\n    765 extracted_session_log_file.log\nmac:~$\n<\/pre><\/div>\n\n\n<p>Compared to the size of the log file (a few hundred thousand lines), there are only 765 lines related to that Session ID. But how much time is actually spent on these few requests? For that purpose, you can compare the sum of all &#8220;<em><strong>EXEC<\/strong><\/em>&#8221; <em><strong>times for the Repository log file<\/strong><\/em> vs <em><strong>the Session ID log file<\/strong><\/em> vs <em><strong>the 44 slowest queries<\/strong><\/em>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [12,26,40]; title: ; notranslate\" title=\"\">\nmac:~$ ### Checking the SQL execution time\nmac:~$ ### for all SQL queries in the Repo log file\nmac:~$ grep &quot;EXEC&quot; ${source_repo_log_file} | \\\n         awk &#039;{print $7}&#039; | \\\n         awk &#039;{sum += $1}\n         END {\n           h = int(sum \/ 3600)\n           m = int((sum % 3600) \/ 60)\n           s = sum % 60\n           printf &quot;%02dh %02dm %.3fs\\n&quot;, h, m, s\n         }&#039;\n03h 11m 50.477s\nmac:~$\nmac:~$\nmac:~$ ### Checking the SQL execution time\nmac:~$ ### related to the specific session ID\nmac:~$ grep &quot;EXEC&quot; ${extracted_session_log_file} | \\\n         
awk &#039;{print $7}&#039; | \\\n         awk &#039;{sum += $1}\n         END {\n           h = int(sum \/ 3600)\n           m = int((sum % 3600) \/ 60)\n           s = sum % 60\n           printf &quot;%02dh %02dm %.3fs\\n&quot;, h, m, s\n         }&#039;\n03h 09m 56.340s\nmac:~$\nmac:~$\nmac:~$ ### Checking the SQL execution time\nmac:~$ ### for the 44 slowest related to the specific session ID\nmac:~$ cat ${query_times} | \\\n         tail -44 | \\\n         awk &#039;{sum += $1}\n         END {\n           h = int(sum \/ 3600)\n           m = int((sum % 3600) \/ 60)\n           s = sum % 60\n           printf &quot;%02dh %02dm %.3fs\\n&quot;, h, m, s\n         }&#039;\n03h 09m 56.015s\nmac:~$\n<\/pre><\/div>\n\n\n<p>The above means that inside that ~4-hour log file, the Documentum Server executed more than a hundred thousand queries for a total sum of 3h 11m 50.5s. Of course, that is not necessarily consecutive time, as some queries might be executed in parallel by different processes. Out of these 3h 11m 50s spent on Database queries, the time related to the problematic Session ID, which represents slightly less than 0.2% of the executed SQL queries, is 3h 9m 56.3s. And out of these 0.2%, the 44 slowest queries represent 3h 9m 56.0s. 
That means:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>100% SQL queries &gt;&gt; 3h 11m 50.5s = 11&#8217;510.5s\n<ul class=\"wp-block-list\">\n<li><em><strong>99.8% SQL queries<\/strong><\/em> &gt;&gt; 1m 54s = <em><strong>114s<\/strong><\/em><\/li>\n\n\n\n<li>0.2% SQL queries &gt;&gt; 3h 9m 56.3s = 11&#8217;396.3s\n<ul class=\"wp-block-list\">\n<li><em><strong>0.165% SQL queries<\/strong><\/em> &gt;&gt; <em><strong>0.3s<\/strong><\/em><\/li>\n\n\n\n<li><em><strong>0.035% SQL queries<\/strong><\/em> &gt;&gt; 3h 9m 56.0s = <em><strong>11&#8217;396.0s<\/strong><\/em><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>In other words, <em><strong>0.035% of the SQL queries executed represent 99% of the time spent on the Database<\/strong><\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-4-finding-the-responsible-queries\">4. Finding the responsible queries<\/h2>\n\n\n\n<p>Now that we have clear evidence related to the execution times, the next step is to retrieve the associated queries that are the source of the performance issue. Below, I am simply retrieving the line containing the execution time, as well as the previous line, which is supposed to contain the triggered query.<\/p>\n\n\n\n<p>This is normally safe because I am checking only the extracted Session ID logs. Since a session is most likely sequential, it is relatively straightforward to retrieve the query responsible for the long execution times. There might be parallel executions for some batch processing, but I am not sure whether that would use the same session ID or a dedicated one per batch\/thread.<\/p>\n\n\n\n<p>If you have long-running SQL queries across multiple sessions, then you might need to account for that and perform the check session by session.<\/p>\n\n\n\n<p>If you try to use the full Repository log instead, you will probably end up with an incorrect N-1 line. In that case, you would need a more complex parsing mechanism based on the Session ID. 
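<\/p>\n\n\n\n<p>Such a parser does not have to be complicated, though. As a minimal sketch (assuming the line layout shown in this blog, with the Session ID in the 3rd column, &#8220;<em><strong>EXEC<\/strong><\/em>&#8221; in the 6th and the duration in the 7th, and without any special handling of &#8220;<em><strong>FETCH<\/strong><\/em>&#8221; entries), you could have awk remember the last SQL statement seen for each Session ID and print it together with the timing line whenever an &#8220;<em><strong>EXEC<\/strong><\/em>&#8221; entry exceeds a threshold:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Sketch: pair each slow &quot;EXEC&quot; (&gt;0.5s here) with the\nmac:~$ ### last SQL statement seen for the same session ID\nmac:~$ awk &#039;\n         $6 == &quot;EXEC&quot; {\n           # Slow execution: print the remembered query + the timing line\n           if ($7 &gt; 0.5 &amp;&amp; last&#x5B;$3] != &quot;&quot;) { print last&#x5B;$3]; print; print &quot;&quot; }\n           next\n         }\n         { last&#x5B;$3] = $0 }  # Remember the last non-EXEC line per session ID\n       &#039; ${source_repo_log_file}\n<\/pre><\/div>\n\n\n<p>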
You can check the blog I mentioned earlier for an example of DFC trace parsing using awk. For this blog, I am keeping it simple, but I might create another post with a more advanced parser that performs everything automatically. So, let&#8217;s extract the SQL queries related to the 44 slowest execution times:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nmac:~$ ### Extracting all 44 slowest queries\nmac:~$ ### from the specific session ID log file\nmac:~$ while read line; do \\\n         grep -B1 &quot;${line}&quot; ${extracted_session_log_file}; \\\n       done &lt; &lt;(cat ${query_times})\n2025-12-19T12:01:02.700 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dm_audittrail.session_id, dm_audittrail.user_name, dm_audittrail.event_name, dm_audittrail.event_description, dm_audittrail.time_stamp, dm_audittrail.object_type from dm_audittrail_sp  dm_audittrail where (dm_audittrail.event_description like &#039;&#x5B;TRANSACTION_TRACKING_EVENTS]%&#039;)\n2025-12-19T12:01:13.943 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 11.2433810000\n2025-12-19T11:59:01.393 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dm_audittrail.user_name, dm_audittrail.session_id, dm_audittrail.event_name, dm_audittrail.string_1, dm_audittrail.string_2, dm_audittrail.string_3, dm_audittrail.string_4, dm_audittrail.time_stamp from dm_audittrail_sp  dm_audittrail where (dm_audittrail.event_name in (&#039;dm_connect&#039;, &#039;dm_logon_failure&#039;))\n2025-12-19T11:59:16.078 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 14.6858060000\n2025-12-19T12:00:09.566 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dm_audittrail.session_id, dm_audittrail.user_name, dm_audittrail.time_stamp from dm_audittrail_sp  dm_audittrail where ((dm_audittrail.event_name=&#039;dm_connect&#039;) and dm_audittrail.user_name in (select all dm_repeating.users_names from dm_group_sp  dm_group, dm_group_rp dm_repeating 
where ((dm_group.group_name=&#039;dm_occasional_user_role&#039;)) and dm_repeating.r_object_id=dm_group.r_object_id )) order by dm_audittrail.r_object_id desc\n2025-12-19T12:01:02.696 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 53.1301340000\n2025-12-19T11:59:16.083 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dm_audittrail.session_id, dm_audittrail.user_name, dm_audittrail.time_stamp from dm_audittrail_sp  dm_audittrail where ((dm_audittrail.event_name=&#039;dm_disconnect&#039;) and dm_audittrail.user_name in (select all dm_repeating.users_names from dm_group_sp  dm_group, dm_group_rp dm_repeating where ((dm_group.group_name=&#039;dm_occasional_user_role&#039;)) and dm_repeating.r_object_id=dm_group.r_object_id )) order by dm_audittrail.r_object_id desc\n2025-12-19T12:00:09.563 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 53.4798570000\n2025-12-19T11:13:44.269 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000153&#039;)) ))\n2025-12-19T11:15:01.033 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 76.7641310000\n2025-12-19T11:18:59.434 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000154&#039;)) ))\n2025-12-19T11:20:16.256 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 76.8221030000\n2025-12-19T11:08:31.916 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000152&#039;)) ))\n2025-12-19T11:09:49.809 105&#x5B;105] 01123456821058fb 
&#x5B;SQL] 0 EXEC 77.8924590000\n2025-12-19T10:10:44.752 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010b&#039;)) ))\n2025-12-19T10:12:03.103 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 78.3507420000\n2025-12-19T11:03:02.009 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000151&#039;)) ))\n2025-12-19T11:04:20.605 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 78.5954180000\n2025-12-19T11:36:05.711 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001b4&#039;)) ))\n2025-12-19T11:37:27.600 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 81.8885210000\n2025-12-19T11:24:26.958 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000155&#039;)) ))\n2025-12-19T11:25:49.718 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 82.7602900000\n2025-12-19T11:30:19.107 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001aa&#039;)) ))\n2025-12-19T11:31:44.148 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 85.0403360000\n2025-12-19T09:32:33.679 
105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000102&#039;)) ))\n2025-12-19T09:34:06.477 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 92.7986000000\n2025-12-19T10:03:31.061 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010a&#039;)) ))\n2025-12-19T10:05:07.569 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 96.5074910000\n2025-12-19T11:11:47.250 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000152&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:13:44.264 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 117.0139310000\n2025-12-19T11:09:49.815 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000152&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:11:47.247 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 117.4327020000\n2025-12-19T11:17:00.411 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000153&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and 
(dmr_content.is_archived=0))\n2025-12-19T11:18:59.431 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 119.0201020000\n2025-12-19T11:15:01.040 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000153&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:17:00.406 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 119.3658150000\n2025-12-19T11:04:20.613 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000151&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:06:23.110 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 122.4972100000\n2025-12-19T11:22:23.775 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000154&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:24:26.953 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 123.1771960000\n2025-12-19T10:12:03.114 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010b&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T10:14:09.285 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 126.1710020000\n2025-12-19T10:14:09.290 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) 
&quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010b&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T10:16:15.546 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 126.2559010000\n2025-12-19T11:20:16.262 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000154&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:22:23.770 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 127.5082070000\n2025-12-19T11:33:56.973 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001aa&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:36:05.705 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 128.7315320000\n2025-12-19T11:06:23.115 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000151&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:08:31.910 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 128.7948250000\n2025-12-19T11:37:27.606 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001b4&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and 
parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:39:39.265 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 131.6585850000\n2025-12-19T11:31:44.156 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001aa&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:33:56.971 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 132.8144410000\n2025-12-19T11:39:39.270 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;28123456800001b4&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:41:53.017 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 133.7473340000\n2025-12-19T11:25:49.725 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000155&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T11:28:03.796 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 134.0701630000\n2025-12-19T11:28:03.801 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000155&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:30:19.104 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 135.3037450000\n2025-12-19T09:29:55.642 105&#x5B;105] 01123456821058fb &#x5B;SQL] 
0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000100&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T09:32:33.673 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 158.0316130000\n2025-12-19T09:34:06.482 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000102&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T09:36:51.885 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 165.4027790000\n2025-12-19T10:07:57.988 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010a&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T10:10:44.749 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 166.7611230000\n2025-12-19T10:05:07.576 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;281234568000010a&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T10:07:57.984 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 170.4081960000\n2025-12-19T09:36:51.889 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000102&#039;) and  exists (select r_object_id from dmr_content_r where 
dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T09:39:43.525 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 171.6363780000\n2025-12-19T09:39:43.530 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;6d1234568000011e&#039;)) ))\n2025-12-19T09:43:18.230 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 214.6996070000\n2025-12-19T09:59:20.094 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;total_file_count&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;6d1234568000011e&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T10:03:31.053 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 250.9586420000\n2025-12-19T08:51:17.559 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000100&#039;)) ))\n2025-12-19T08:57:05.010 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 347.4517930000\n2025-12-19T10:16:15.551 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all dmr_content.set_time from dmr_content_sp  dmr_content where (dmr_content.set_time=(select all max(dmr_content.set_time) from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000150&#039;)) ))\n2025-12-19T10:24:54.860 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 519.3084910000\n2025-12-19T10:49:55.557 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where 
((dmr_content.storage_id=&#039;2812345680000150&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is null) and (dmr_content.is_archived=0))\n2025-12-19T11:03:02.001 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 786.4434090000\n2025-12-19T09:43:18.238 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;total_file_count&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;6d1234568000011e&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T09:59:20.090 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 961.8524350000\n2025-12-19T11:41:53.126 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all u.r_object_id, u.user_name, u.user_privileges, u.user_xprivileges, count(s.object_name) &quot;document_count&quot; from dm_user_sp  u, dm_sysobject_sp  s where ((u.user_name=s.owner_name)) and (s.i_has_folder = 1 and s.i_is_deleted = 0) group by u.user_name, u.user_privileges, u.user_xprivileges, u.r_object_id\n2025-12-19T11:59:01.360 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1028.2339260000\n2025-12-19T10:24:54.866 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000150&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is not null) and (dmr_content.is_archived=0))\n2025-12-19T10:49:55.553 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1500.6862730000\n2025-12-19T08:57:05.019 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 select all count(*) &quot;totalsize&quot; from dmr_content_sp  dmr_content where ((dmr_content.storage_id=&#039;2812345680000100&#039;) and  exists (select r_object_id from dmr_content_r where dmr_content.r_object_id = r_object_id and parent_id is 
not null) and (dmr_content.is_archived=0))\n2025-12-19T09:29:55.637 105&#x5B;105] 01123456821058fb &#x5B;SQL] 0 EXEC 1970.6185150000\nmac:~$\n<\/pre><\/div>\n\n\n<p>There are quite a few interesting things we can see above. The <strong><em>slowest queries (&gt;76s) are all related to documents\/contents<\/em><\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>For each dm_filestore that exists in the Repository (12 above):\n<ul class=\"wp-block-list\">\n<li>1 x &#8216;<em><strong>select all dmr_content.set_time from dmr_content_sp \u2026<\/strong><\/em>&#8216;<\/li>\n\n\n\n<li>2 x &#8216;<strong><em>select all count(*) &#8220;totalsize&#8221; from dmr_content_sp \u2026<\/em><\/strong>&#8216; (objects with and without parent_id)<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>For each dm_ca_store that exists in the Repository (1 above):\n<ul class=\"wp-block-list\">\n<li>1 x &#8216;<em><strong>select all dmr_content.set_time from dmr_content_sp \u2026<\/strong><\/em>&#8216;<\/li>\n\n\n\n<li>2 x &#8216;<em><strong>select all count(*) &#8220;total_file_count&#8221; from dmr_content_sp \u2026<\/strong><\/em>&#8216; (objects with and without parent_id)<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>1 x &#8216;<em><strong>select all u.r_object_id, u.user_name, u.user_privileges, u.user_xprivileges, count(s.object_name) &#8220;document_count&#8221; from dm_user_sp u, dm_sysobject_sp s \u2026<\/strong><\/em>&#8216;<\/li>\n<\/ul>\n\n\n\n<p>This shows that the 40 slowest queries are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>retrieving the set_time for all documents in a storage<\/li>\n\n\n\n<li>counting the number of documents in a storage<\/li>\n\n\n\n<li>retrieving all users and their associated documents<\/li>\n<\/ul>\n\n\n\n<p>The <em><strong>slightly faster (but still slow) queries (11s &lt; x &lt; 53s)<\/strong><\/em> are linked to the <em><strong>audittrail<\/strong><\/em>, retrieving entries related to &#8216;<em><strong>[TRANSACTION_TRACKING_EVENTS]<\/strong><\/em>&#8216;, 
&#8216;<em><strong>dm_connect<\/strong><\/em>&#8216;, &#8216;<em><strong>dm_logon_failure<\/strong><\/em>&#8216;, or &#8216;<em><strong>dm_disconnect<\/strong><\/em>&#8216; events, as well as the &#8216;<em><strong>dm_occasional_user_role<\/strong><\/em>&#8216; group.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-5-linking-queries-to-the-initial-issue\">5. Linking queries to the initial issue<\/h2>\n\n\n\n<p>Once you have identified the problematic\/slowest queries, the final step is either to improve them or, at the very least, to find a way to avoid executing them. This can take various forms, such as changing the query (if it comes from custom development\/configuration), improving the execution plan at the DB level, or adding indexes.<\/p>\n\n\n\n<p>In this case, as a reminder, the issue was a problematic action in DA. A user was trying to execute the &#8220;<em><strong>External Transaction Activity<\/strong><\/em>&#8221; report. If I am not mistaken, this is a relatively new System Report available since version 23.4 and backported to earlier versions through a hotfix. However, even in 23.4, it still requires a hotfix to work properly, as the OOTB version is unusable because of some logging errors.<\/p>\n\n\n\n<p>This report is supposed to fetch and display:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>the list of users considered &#8220;<em><strong>external<\/strong><\/em>&#8221; (e.g. part of the <em><strong>dm_external_users<\/strong><\/em> or <em><strong>dm_extuser_data_access<\/strong><\/em> groups)<\/li>\n\n\n\n<li>the number of transactions performed by these external users for the selected year, split by quarter<\/li>\n\n\n\n<li>the total number of transactions for all external users<\/li>\n<\/ul>\n\n\n\n<p>Nothing in this report should require document storage queries to be executed. Even worse, these long-running SQL queries are actually <em><strong>triggered twice<\/strong><\/em>. 
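<\/p>\n\n\n\n<p>One quick way to confirm this kind of duplication is to strip the timestamp and session prefix from the SQL lines of the trace and count identical statements. The following is a minimal, self-contained sketch: the log excerpt it generates is made up for illustration, so adapt the grep\/sed patterns to your own trace format.<\/p>\n

```shell
# Minimal sketch (the sample log lines below are made up for illustration):
# strip the "<timestamp> <thread> <session> [SQL] 0 " prefix from each SQL
# line of a Repository trace, then count each distinct statement.
log_file="$(mktemp)"
cat > "${log_file}" <<'EOF'
2025-12-19T09:32:33.673 105[105] 01123456821058fb [SQL] 0 select all count(*) "totalsize" from dmr_content_sp dmr_content
2025-12-19T09:34:06.482 105[105] 01123456821058fb [SQL] 0 EXEC 158.0316130000
2025-12-19T11:32:33.673 105[105] 01123456821058fb [SQL] 0 select all count(*) "totalsize" from dmr_content_sp dmr_content
2025-12-19T11:34:06.482 105[105] 01123456821058fb [SQL] 0 EXEC 161.1234560000
EOF
# keep only the SQL text and count occurrences of each distinct statement;
# a count of 2 means the same query was executed twice
grep ' \[SQL\] 0 select' "${log_file}" \
  | sed 's/.*\[SQL\] 0 //' \
  | sort | uniq -c | sort -rn
```

\n<p>Against a real Repository log, you would run the same pipeline on the actual file (for example the log file variable defined in section 1) instead of the generated sample. If the report indeed re-executes its queries, each of the slow statements should show up with a count of 2.<\/p>\n\n\n\n<p>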
When you access the &#8220;<em><strong>External Transaction Activity<\/strong><\/em>&#8221; report page in DA, it immediately triggers all the report queries in the background, even if you have not selected a year or clicked any button. Then, when you select a year and click &#8220;<em><strong>Generate<\/strong><\/em>&#8221; or &#8220;<em><strong>Export<\/strong><\/em>&#8221;, it <em><strong>re-triggers the exact same queries a second time<\/strong><\/em>.<\/p>\n\n\n\n<p>The Repository traces discussed in this blog only covered the initial access to the form page &#8211; and that alone already took 4 hours. That customer would therefore likely need around 8 hours to generate and display the report. And this was not even their PROD environment, but a smaller DEV one.<\/p>\n\n\n\n<p>That is obviously a bug (and arguably a design issue). I am currently in discussion with <a href=\"https:\/\/www.opentext.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">OpenText<\/a> to find a way to fix the problem in version 23.4. Please note that version 25.4 brought several improvements to the System Report, which appear to have fixed this specific issue.<\/p>\n\n\n\n<p>This blog and its trace file are just one example, but one that really happened at a customer site. I hope it provided some useful insights and that it will help you in your investigations!<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Documentum is a powerful but also complex software. Therefore, debugging performance issues often comes down to being able to implement and review various traces. In this blog, I will talk about how you can detect and analyze slow SQL queries from a Documentum Server. Several years ago, I wrote a blog post about DFC traces. 
[&hellip;]<\/p>\n","protected":false},"author":20,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[525],"tags":[2609,67,98,3879,339],"type_dbi":[],"class_list":["post-43157","post","type-post","status-publish","format-standard","hentry","category-enterprise-content-management","tag-documentum-2","tag-performance","tag-sql","tag-sqltrace","tag-trace"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Dctm - Detect and analyze slow SQL queries - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Dctm - Detect and analyze slow SQL queries\" \/>\n<meta property=\"og:description\" content=\"Documentum is a powerful but also complex software. Therefore, debugging performance issues often comes down to being able to implement and review various traces. In this blog, I will talk about how you can detect and analyze slow SQL queries from a Documentum Server. Several years ago, I wrote a blog post about DFC traces. 
[&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-25T20:06:48+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-25T20:06:50+00:00\" \/>\n<meta name=\"author\" content=\"Morgan Patou\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@MorganPatou\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Morgan Patou\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"24 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\"},\"author\":{\"name\":\"Morgan Patou\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/c4d05b25843a9bc2ab20415dae6bd2d8\"},\"headline\":\"Dctm &#8211; Detect and analyze slow SQL queries\",\"datePublished\":\"2026-02-25T20:06:48+00:00\",\"dateModified\":\"2026-02-25T20:06:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\"},\"wordCount\":1813,\"commentCount\":0,\"keywords\":[\"Documentum\",\"Performance\",\"SQL\",\"sqltrace\",\"Trace\"],\"articleSection\":[\"Enterprise content 
management\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\",\"name\":\"Dctm - Detect and analyze slow SQL queries - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2026-02-25T20:06:48+00:00\",\"dateModified\":\"2026-02-25T20:06:50+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/c4d05b25843a9bc2ab20415dae6bd2d8\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/dctm-detect-and-analyze-slow-sql-queries\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Dctm &#8211; Detect and analyze slow SQL queries\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/c4d05b25843a9bc2ab20415dae6bd2d8\",\"name\":\"Morgan Patou\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/5d7f5bec8b597db68a09107a6f5309e3870d6296ef94fb10ead4b09454ca67e5?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5d7f5bec8b597db68a09107a6f5309e3870d6296ef94fb10ead4b09454ca67e5?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5d7f5bec8b597db68a09107a6f5309e3870d6296ef94fb10ead4b09454ca67e5?s=96&d=mm&r=g\",\"caption\":\"Morgan Patou\"},\"description\":\"Morgan Patou has over 12 years of experience in Enterprise Content Management (ECM) systems, with a strong focus in recent years on platforms such as Alfresco, Documentum, and M-Files. He specializes in the architecture, setup, customization, and maintenance of ECM infrastructures in complex &amp; critical environments. Morgan is well-versed in both engineering and operations aspects, including high availability design, system integration, and lifecycle management. He also has a solid foundation in open-source and proprietary technologies - ranging from Apache, OpenLDAP or Kerberos to enterprise-grade systems like WebLogic. Morgan Patou holds an Engineering Degree in Computer Science from ENSISA (\u00c9cole Nationale Sup\u00e9rieure d'Ing\u00e9nieurs Sud Alsace) in Mulhouse, France. He is Alfresco Content Services Certified Administrator (ACSCA), Alfresco Content Services Certified Engineer (ACSCE) as well as OpenText Documentum Certified Administrator. 
His industry experience spans the Public Sector, IT Services, Financial Services\/Banking, and the Pharmaceutical industry.\",\"sameAs\":[\"https:\/\/blog.dbi-services.com\/author\/morgan-patou\/\",\"https:\/\/x.com\/MorganPatou\"],\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/morgan-patou\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->"}