{"id":13359,"date":"2020-01-30T00:45:31","date_gmt":"2020-01-29T23:45:31","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/"},"modified":"2025-10-24T09:32:08","modified_gmt":"2025-10-24T07:32:08","slug":"tracking-logs-inside-a-documentum-container-part-ii","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/","title":{"rendered":"Tracking Logs Inside a Documentum Container (part II)"},"content":{"rendered":"<h2>Testing the log watcher<\/h2>\n<p>This is part II of the article. Part I is <a title=\"Tracking Logs Inside a Documentum Container (part I)\" href=\"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-i\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<br \/>\nAll the above code has to be included in the entrypoint script so it gets executed at container start up time but it can also be tested more simply in a traditional repository installation.<br \/>\nFirst, we&#8217;ll move the code into a excutable script, e.g. entrypoint.sh, and run it in the background in a first terminal. Soon, we will notice that lots of log messages get displayed, e.g. 
from jobs executing into the repository:<br \/>\n<code><br \/>\n---------------------------------------<br \/>\nJob Arguments:<br \/>\n(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c35080007042 method_trace_level: 0 )<br \/>\nwindow_interval=120<br \/>\nqueueperson=null<br \/>\nmax_job_threads=3<br \/>\n&nbsp;<br \/>\n---------------------------------------<br \/>\n2020-01-02T07:03:15.807617\t1600[1600]\t0100c3508000ab80\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab80 is owned by user dmadmin now.\"<br \/>\nExecuting query to collect XCP automatic tasks to be processed:<br \/>\nselect r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc<br \/>\nReport End  2020\/01\/02 07:03:15<br \/>\n2020-01-02T07:03:16.314586\t5888[5888]\t0100c3508000ab90\t[DM_SESSION_I_SESSION_QUIT]info:  \"Session 0100c3508000ab90 quit.\"<br \/>\nThu Jan 02 07:04:44 2020 [INFORMATION] [LAUNCHER 6311] Detected during program initialization: Version: 16.4.0080.0129  Linux64<br \/>\n2020-01-02T07:04:44.736720\t6341[6341]\t0100c3508000ab91\t[DM_SESSION_I_SESSION_START]info:  \"Session 0100c3508000ab91 started for user dmadmin.\"<br \/>\nThu Jan 02 07:04:45 2020 [INFORMATION] [LAUNCHER 6311] Detected while preparing job dm_Initialize_WQ for execution: Agent Exec connected to server dmtest:  [DM_SESSION_I_SESSION_START]info:  \"Session 0100c3508000ab91 started for user dmadmin.\"<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\n2020-01-02T07:04:45.686337\t1600[1600]\t0100c3508000ab80\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab80 is owned by user dmadmin now.\"<br \/>\n2020-01-02T07:04:45.698970\t1597[1597]\t0100c3508000ab7f\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab7f is owned by user dmadmin now.\"<br \/>\nInitialize_WQ Report For DocBase dmtest.dmtest As Of 2020\/01\/02 07:04:45<br \/>\n&nbsp;<br 
\/>\n---------------------------------------<br \/>\nJob Arguments:<br \/>\n(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c3508000218b method_trace_level: 0 )<br \/>\nwindow_interval=120<br \/>\nqueueperson=null<br \/>\nmax_job_threads=3<br \/>\n&nbsp;<br \/>\n---------------------------------------<br \/>\nStarting WQInitialisation job:<br \/>\n2020-01-02T07:04:45.756339\t1600[1600]\t0100c3508000ab80\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab80 is owned by user dmadmin now.\"<br \/>\nExecuting query to collect Unassigned worktiems to be processed:<br \/>\nselect r_object_id, r_act_def_id from dmi_workitem where a_held_by = 'dm_system' and r_runtime_state = 0 order by r_creation_date<br \/>\nTotal no. of workqueue tasks initialized 0<br \/>\nReport End  2020\/01\/02 07:04:45<br \/>\n2020-01-02T07:04:46.222728\t6341[6341]\t0100c3508000ab91\t[DM_SESSION_I_SESSION_QUIT]info:  \"Session 0100c3508000ab91 quit.\"<br \/>\nThu Jan 02 07:05:14 2020 [INFORMATION] [LAUNCHER 6522] Detected during program initialization: Version: 16.4.0080.0129  Linux64<br \/>\n2020-01-02T07:05:14.828073\t6552[6552]\t0100c3508000ab92\t[DM_SESSION_I_SESSION_START]info:  \"Session 0100c3508000ab92 started for user dmadmin.\"<br \/>\nThu Jan 02 07:05:15 2020 [INFORMATION] [LAUNCHER 6522] Detected while preparing job dm_bpm_XCPAutoTaskMgmt for execution: Agent Exec connected to server dmtest:  [DM_SESSION_I_SESSION_START]info:  \"Session 0100c3508000ab92 started for user dmadmin.\"<br \/>\n&nbsp;<br \/>\n&nbsp;<br \/>\n2020-01-02T07:05:15.714803\t1600[1600]\t0100c3508000ab80\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab80 is owned by user dmadmin now.\"<br \/>\n2020-01-02T07:05:15.726601\t1597[1597]\t0100c3508000ab7f\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab7f is owned by user dmadmin now.\"<br \/>\nbpm_XCPAutoTaskMgmt Report For DocBase dmtest.dmtest As Of 2020\/01\/02 07:05:15<br \/>\n&nbsp;<br 
\/>\n---------------------------------------<br \/>\n&nbsp;<br \/>\n<\/code><br \/>\nThen, from a second terminal, we&#8217;ll start and stop several idql sessions and observe the resulting output. We will notice the familiar lines *_START and *_QUIT from the session&#8217;s logs:<br \/>\n<code><br \/>\n---------------------------------------<br \/>\n2020-01-02T07:09:16.076945\t1600[1600]\t0100c3508000ab80\t[DM_SESSION_I_ASSUME_USER]info:  \"Session 0100c3508000ab80 is owned by user dmadmin now.\"<br \/>\nExecuting query to collect XCP automatic tasks to be processed:<br \/>\nselect r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc<br \/>\nReport End  2020\/01\/02 07:09:16<br \/>\n2020-01-02T07:09:16.584776\t7907[7907]\t0100c3508000ab97\t[DM_SESSION_I_SESSION_QUIT]info:  \"Session 0100c3508000ab97 quit.\"<br \/>\n2020-01-02T07:09:44.969770\t8080[8080]\t0100c3508000ab98\t[DM_SESSION_I_SESSION_START]info:  \"Session 0100c3508000ab98 started for user dmadmin.\"<br \/>\n2020-01-02T07:09:47.329406\t8080[8080]\t0100c3508000ab98\t[DM_SESSION_I_SESSION_QUIT]info:  \"Session 0100c3508000ab98 quit.\"<br \/>\n...<br \/>\n<\/code><br \/>\nSo, inotifywatch is pretty effective as a file watcher.<\/p>\n<p>Let&#8217;se see how many tail processes are currently running:<br \/>\n<code><br \/>\n$ psg tail | grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n    1  4818 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/agentexec.log<br \/>\n    1  4846 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab80<br \/>\n    1  4850 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab7f<br 
\/>\n    1  8375 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c3508000218b<br \/>\n    1  8389 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab99<br \/>\n    1  8407 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/Initialize_WQDoc.txt<br \/>\n    1  8599 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c35080000386<br \/>\n    1  8614 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9a<br \/>\n    1  8657 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9b<br \/>\n    1  8673 24411  9328 pts\/1     8723 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/DataDictionaryPublisherDoc.txt<br \/>\n<\/code><br \/>\nAnd after a while:<br \/>\n<code><br \/>\n$ psg tail | grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n    1  4818 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/agentexec.log<br \/>\n    1  4846 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab80<br \/>\n    1  4850 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab7f<br \/>\n    1  8599 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c35080000386<br \/>\n    1  8614 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry 
\/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9a<br \/>\n    1  8657 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9b<br \/>\n    1  8673 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/DataDictionaryPublisherDoc.txt<br \/>\n    1  8824 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c35080007042<br \/>\n    1  8834 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9c<br \/>\n    1  8852 24411  9328 pts\/1     9132 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/bpm_XCPAutoTaskMgmtDoc.txt<br \/>\n<\/code><\/p>\n<p>Again:<br \/>\n<code><br \/>\n$ psg tail | grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n    1 10058 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/agentexec.log<br \/>\n    1 10078 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c3508000218b<br \/>\n    1 10111 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab9f<br \/>\n    1 10131 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab80<br \/>\n    1 10135 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab7f<br \/>\n    1 10139 24411  9328 pts\/1    10252 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/Initialize_WQDoc.txt<br \/>\n<\/code><\/p>\n<p>And again:<br \/>\n<code><br \/>\n$ psg tail 
| grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n    1 10892 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/agentexec.log<br \/>\n    1 10896 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/job_0800c3508000218b<br \/>\n    1 10907 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000aba1<br \/>\n    1 10921 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab80<br \/>\n    1 10925 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab7f<br \/>\n    1 10930 24411  9328 pts\/1    11022 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/sysadmin\/Initialize_WQDoc.txt<br \/>\n<\/code><\/p>\n<p>And, after disabling, the aggregation of new logs:<br \/>\n<code><br \/>\n$ echo 0 &gt; $tail_on_off<br \/>\n$ psg tail | grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n    1 24676 24277  9328 pts\/1    26096 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/agentexec\/agentexec.log<br \/>\n    1 24710 24277  9328 pts\/1    26096 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab80<br \/>\n    1 24714 24277  9328 pts\/1    26096 S     1001   0:00 tail --follow=name --retry \/app\/dctm\/dba\/log\/0000c350\/dmadmin\/0100c3508000ab7f<br \/>\n<\/code><br \/>\nAnd eventually:<br \/>\n<code><br \/>\n$ psg tail | grep -v gawk<br \/>\n PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND<br \/>\n&lt;nothing&gt;<br \/>\n<\/code><br \/>\nFrom now on, until the aggregation is turned back on, this list will be empty as no new 
tail commands will be started.<br \/>\nThe number of tail commands grows and shrinks as expected, so the process cleanup is working.<br \/>\nThe agentexec.log file shows up frequently because jobs are launched all the time, so this file regularly triggers MODIFY events. dmadmin- and sysadmin-owned session logs come and go with each started job. DataDictionaryPublisherDoc.txt, bpm_XCPAutoTaskMgmtDoc.txt and Initialize_WQDoc.txt are a few of the followed jobs&#8217; logs.<br \/>\nOn the test system used here, an out-of-the-box docbase without any running applications, the output is obviously very quiet, with long pauses in between, but it can become extremely dense on a busy system. When investigating problems on such systems, it can be useful to redirect the whole output to a text file and use one&#8217;s favorite editor to search it at leisure. It is also possible to restrict the output to a time window as narrow as desired (if docker logs is used) in order to reduce the noise, to exclude files from being watched (see the next paragraph) and even to stop aggregating new logs via the $tail_on_off file, so that only the current ones are followed for as long as they are active (i.e. written to).<\/p>\n<h2>Dynamically reconfiguring inotifywait via a parameter file<\/h2>\n<p>In the preceding examples, the inotifywait command was passed a fixed, hard-coded subtree name to watch. In certain cases, it could be useful to watch a different, arbitrary sub-directory, or a list of arbitrary files in unrelated sub-directories, with a possible list of excluded files; for even more flexibility, inotifywait&#8217;s other parameters could also be changed dynamically. In such cases, the running inotifywait command has to be stopped and restarted with new command-line parameters. One could imagine checking a communication file (like the $tail_on_off file above) inside the entrypoint&#8217;s main loop, as shown below. 
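<\/p>\n<p>Sketched in plain bash, such a poll could look as follows; everything here is illustrative (the file and variable names, and the echo that stands in for actually restarting the watcher):<\/p>

```shell
#!/bin/bash
# illustrative sketch of polling a parameter file from a main loop;
# in the real entrypoint, the echo would be a call restarting the watcher;
workdir=$(mktemp -d)
new_inotify_parameters_file=$workdir/inotify_parameters_file
pause_duration=1

# simulate another session dropping new watcher parameters:
echo '--quiet --monitor --event create,modify,attrib' > $new_inotify_parameters_file

for i in 1 2; do
   if [[ -f $new_inotify_parameters_file ]]; then
      new_params=$(cat $new_inotify_parameters_file)
      rm $new_inotify_parameters_file
      echo "restarting the watcher with: $new_params"
   fi
   sleep $pause_duration
done
```

<p>The file is consumed at most $pause_duration seconds after its creation, which is the behavior the following diff adds to the entrypoint.<\/p>\n<p>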
Here is the diff relative to the preceding static-parameters version:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: []\">\n10a11,13\n&gt; # file that contains the watcher's parameters;\n&gt; export new_inotify_parameters_file=${watcher_workdir}\/inotify_parameters_file\n&gt; \n15a19,20\n&gt;    inotify_params=\"$1\"\n&gt;  \n19c24\n&gt;    eval $private_inotify ${inotify_params} $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files=\"$excluded_files\" -v bMust_tail=1 -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS=\"|\" -v Apo=\"'\" 'BEGIN {\n122a128,142\n&gt; process_new_inotify_parameters() {\n&gt;    # read in the new parameters from file $new_inotify_parameters_file;\n&gt;    # first line of the $new_inotify_parameters_file must contain the non-target parameters, e.g. -m -e create,modify,attrib -r --timefmt \"%Y\/%m\/%d-%H:%M:%S\" --format \"%T %e %w%f\";\n&gt;    new_params=$(cat $new_inotify_parameters_file)\n&gt;    rm $new_inotify_parameters_file\n&gt;  \n&gt;    # kill current watcher;\n&gt;    pkill -f $private_inotify\n&gt; \n&gt;    # kill the current private tail commands too;\n&gt;    pkill -f $private_tail\n&gt; \n&gt;    # restart inotify with new command-line parameters taken from $new_inotify_parameters_file;\n&gt;    follow_logs \"$new_params\" &amp;\n&gt; }\n138,139c158,162\n&lt; # start the watcher;\n&gt; # default inotifywait parameters;\n&gt; # the creation of this file will trigger a restart of the watcher using the new parameters;\n&gt; cat - &lt;&lt;-eot &gt; $new_inotify_parameters_file\n&gt;    --quiet --monitor --event create,modify,attrib --recursive --timefmt \"%Y\/%m\/%d-%H:%M:%S\" --format '%T|%e|%w%f' ${DOCUMENTUM}\/dba\/log --exclude $new_inotify_parameters_file\n&gt; eot\n143a167\n&gt;    [[ -f $new_inotify_parameters_file ]] &amp;&amp; process_new_inotify_parameters\n<\/pre>\n<p>To apply the patch, save the first listing into a file, e.g. 
inotify_static_parameters.sh, the above diff output into a file, e.g. static2dynamic.diff, and use the command below to create the script inotify_dynamic_parameters.sh:<br \/>\n<code><br \/>\npatch inotify_static_parameters.sh static2dynamic.diff -o inotify_dynamic_parameters.sh<br \/>\n<\/code><br \/>\nIn order not to trigger events when it is created, the $new_inotify_parameters_file must be excluded from inotifywait&#8217;s watched directories.<br \/>\nIf, for instance, we later want to exclude jobs&#8217; logs in order to focus on more relevant logs, we could use the following inotifywait parameters:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: [2]\">\n# cat - &lt;&lt;-eot &gt; $new_inotify_parameters_file\n   --quiet --monitor --event create,modify,attrib --recursive --timefmt \"%Y\/%m\/%d-%H:%M:%S\" --format \"%T|%e|%w%f\" ${DOCUMENTUM}\/dba\/log --exclude '${DOCUMENTUM}\/dba\/log\/dmtest\/agentexec\/job_.*' --exclude $new_inotify_parameters_file\neot\n<\/pre>\n<p>From the outside of a container:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: [2]\">\ndocker exec &lt;container&gt; \/bin\/bash -c 'cat - &lt;&lt;-eot &gt; $new_inotify_parameters_file\n   --quiet --monitor --event create,modify,attrib --recursive --timefmt \"%Y\/%m\/%d-%H:%M:%S\" --format \"%T|%e|%w%f\" ${DOCUMENTUM}\/dba\/log --exclude \"${DOCUMENTUM}\/dba\/log\/dmtest\/agentexec\/job_.*\" --exclude $new_inotify_parameters_file\neot\n'\n<\/pre>\n<p>where the new option &minus;&minus;exclude specifies a POSIX-compliant regular expression; it is quoted so that the shell does not perform variable expansion on it.<br \/>\nAfter at most $pause_duration seconds, this file is detected (line 34), read (line 14), deleted (line 15) and the current watcher gets restarted in the background with the new parameters (line 24).<br \/>\nThis approach amounts to polling the parameter file and looks a bit unsophisticated. 
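<\/p>\n<p>Incidentally, since &minus;&minus;exclude takes a POSIX extended regular expression and bash&#8217;s =~ operator uses the same dialect, a candidate pattern can be sanity-checked before it is written to the parameter file; a small illustrative check, using the paths from this article:<\/p>

```shell
#!/bin/bash
# check which paths the --exclude pattern would filter out; bash's =~
# also takes POSIX EREs, the dialect inotifywait's --exclude expects;
DOCUMENTUM=/app/dctm
pattern="${DOCUMENTUM}/dba/log/dmtest/agentexec/job_.*"

is_excluded() {
   # an unanchored ERE matches anywhere in the path;
   if [[ $1 =~ $pattern ]]; then echo excluded; else echo watched; fi
}

is_excluded ${DOCUMENTUM}/dba/log/dmtest/agentexec/job_0800c3508000218b   # excluded
is_excluded ${DOCUMENTUM}/dba/log/dmtest/agentexec/agentexec.log          # watched
```

<p>Keeping the pattern in an unquoted variable on the right-hand side of =~ is what makes bash treat it as a regular expression rather than a literal string.<\/p>\n<p>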
We use events to control the tailing through the $tail_on_off file and for the heartbeat. Why not for the parameter file ? The next paragraph will show just that.<\/p>\n<h2>Dynamically reconfiguring inotifywait via an event<\/h2>\n<p>A better way is to use the watcher itself to check whether the parameter file has changed ! After all, we shall put our money where our mouth is, right ?<br \/>\nTo do this, a dedicated watcher is set up to watch the parameter file. For technical reasons (a file must already exist in order to be watched; moreover, once a watched file is erased, inotifywait no longer reacts to any events on it and just sits there idly), instead of watching $new_inotify_parameters_file itself, it watches the file&#8217;s parent directory. To avoid scattering files over too many locations, the parameter file is stored along with the technical files in ${watcher_workdir} and, in order not to impact performance, it is excluded from the main watcher (parameter &minus;&minus;exclude $new_inotify_parameters_file; do not forget to append it, otherwise the main watcher will raise events for nothing; this does not prevent the parameter file from being processed, though).<br \/>\nWhen a MODIFY event occurs on this file (or a CREATE event if the file did not pre-exist), the event is processed as shown previously.<br \/>\nHere is the diff compared to the preceding polled-parameter-file version:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: [3,9,18,21]\">\n143a144,151\n&gt; # the parameter file's watcher;\n&gt; param_watcher() {\n&gt;    IFS=\"|\"\n&gt;    inotifywait --quiet --monitor --event create,modify,attrib --timefmt \"%Y\/%m\/%d-%H:%M:%S\" --format '%T|%e|%w%f' $watcher_workdir --exclude $tail_on_off $heartbeat_file | while read timestamp event target_file; do\n&gt;       [[ $target_file == $new_inotify_parameters_file ]] &amp;&amp; process_new_inotify_parameters\n&gt;    done\n&gt; }\n&gt; \n162a171,173\n&gt; 
\n&gt; # starts the parameter file&#8217;s watcher;\n&gt; param_watcher &amp;\n<\/pre>\n<p>The delta is quite short: just the function param_watcher(), which starts the watcher on the parameter file and processes its events by invoking the same function process_new_inotify_parameters() introduced in the polling version.<br \/>\nTo apply the patch, save the above diff output into a file, e.g. dynamic2event.diff, and use the command below to create the script inotify_dynamic_parameters_event.sh from the polling version&#8217;s script inotify_dynamic_parameters.sh created above:<br \/>\n<code><br \/>\npatch inotify_dynamic_parameters.sh dynamic2event.diff -o inotify_dynamic_parameters_event.sh<br \/>\n<\/code><br \/>\nThe dedicated watcher runs in the background and restarts the main watcher whenever new parameters are received. Quite simple.<\/p>\n<h2>Dynamically reconfiguring inotifywait via signals<\/h2>\n<p>Yet another way to interact with inotifywait&#8217;s script (e.g. the container&#8217;s entrypoint) is through signals. To this effect, we&#8217;ll first need to choose suitable signals, say SIGUSR[12], define trap handlers and implement them. 
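<\/p>\n<p>The trap mechanics can be sketched independently of the watcher; the handler bodies below are simplified stand-ins (the real ones, shown in the diff that follows, reload the watcher&#8217;s parameters and toggle the tailing through $tail_on_off):<\/p>

```shell
#!/bin/bash
# illustrative sketch of the signal interface; handler bodies are
# simplified stand-ins for the real bounce/toggle actions;
bMust_tail=1

bounce_watcher() {
   echo "parameters reloaded"
}

toggle_watcher() {
   bMust_tail=$(( (bMust_tail + 1) % 2 ))
   echo "toggled the watcher, it is $bMust_tail now"
}

trap 'bounce_watcher' SIGUSR1
trap 'toggle_watcher' SIGUSR2

# send ourselves the signals, as docker kill --signal would from outside;
kill -SIGUSR1 $$
kill -SIGUSR2 $$
```

<p>bash runs each handler as soon as the command being executed at signal delivery completes, so the two kill commands above trigger the handlers in order.<\/p>\n<p>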
And while we are at it, let&#8217;s also add switching the watcher on and off, as illustrated below:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: [32]\">\n2a3,5\n&gt; trap 'bounce_watcher' SIGUSR1\n&gt; trap 'toggle_watcher' SIGUSR2\n&gt; \n7a11,14\n&gt; # new global variable to hold the current tail on\/off status;\n&gt; # used to toggle the watcher;\n&gt; export bMust_tail=1\n&gt; \n144,149c151,162\n&lt; # the parameter file&#039;s watcher;\n&lt; param_watcher() {\n&lt;    IFS=&quot;|&quot;\n&lt;    inotifywait --quiet --monitor --event create,modify,attrib --timefmt &quot;%Y\/%m\/%d-%H:%M:%S&quot; --format &#039;%T|%e|%w%f&#039; $watcher_workdir --exclude $tail_on_off $heartbeat_file | while read timestamp event target_file; do\n&lt;       [[ $target_file == $new_inotify_parameters_file ]] &amp;&amp; process_new_inotify_parameters\n&lt;    done\n---\n&gt; # restart the watcher with new parameters;\n&gt; bounce_watcher() {\n&gt;    [[ -f $new_inotify_parameters_file ]] &amp;&amp; process_new_inotify_parameters\n&gt;    echo \"parameters reloaded\"\n&gt; }\n&gt; \n&gt; # flip the watcher on\/off;\n&gt; # as file $tail_on_off is watched, modifying it will trigger the processing of the boolean bMust_tail in the gawk script;\n&gt; toggle_watcher() {\n&gt;    (( bMust_tail = (bMust_tail + 1) % 2 ))\n&gt;    echo $bMust_tail &gt; $tail_on_off\n&gt;    echo \"toggled the watcher, it is $bMust_tail now\"\n172,173c185,187\n&lt; # starts the parameter file&#039;s watcher;\n&lt; param_watcher &amp;\n---\n&gt; process_new_inotify_parameters\n&gt; \n&gt; echo \"process $$ started\"\n178d191\n&lt;    [[ -f $new_inotify_parameters_file ]] &amp;&amp; process_new_inotify_parameters\n<\/pre>\n<p>To apply the patch, save the above diff output into a file, e.g. 
event2signals.diff, and use the command below to create the script inotify_dynamic_parameters_signals.sh from the event version&#8217;s script inotify_dynamic_parameters_event.sh created above:<br \/>\n<code><br \/>\npatch inotify_dynamic_parameters_event.sh event2signals.diff -o inotify_dynamic_parameters_signals.sh<br \/>\n<\/code><br \/>\nThe loop is simpler now as the functions are directly invoked by outside signals.<br \/>\nTo use it, just send the SIGUSR[12] signals to the container, as shown below:<br \/>\n<code><br \/>\n# write a new configuration file:<br \/>\n...<br \/>\n# and restart the watcher:<br \/>\n$ docker kill --signal SIGUSR1 &lt;container&gt;<br \/>\n&nbsp;<br \/>\n# toggle the watcher;<br \/>\n$ docker kill --signal SIGUSR2 &lt;container&gt;<br \/>\n<\/code><br \/>\nIf testing outside a container, the commands would be:<br \/>\n<code><br \/>\n\/bin\/kill --signal SIGUSR1 <em>pid<\/em><br \/>\n# or:<br \/>\n\/bin\/kill --signal SIGUSR2 <em>pid<\/em><br \/>\n<\/code><br \/>\nwhere <em>pid<\/em> is the pid displayed when the script starts.<br \/>\nWe won&#8217;t dwell on signals any further; see <a title=\"How to stop Documentum processes in a docker container, and more (part I)\" href=\"https:\/\/www.dbi-services.com\/blog\/how-to-stop-documentum-processes-in-a-docker-container-and-more-part-i\/\" target=\"_blank\" rel=\"noopener noreferrer\">How to stop Documentum processes in a docker container, and more (part I)<\/a> for many more examples of using signals to talk to containers.<\/p>\n<h2>Conclusion<\/h2>\n<p>inotifywait is simple to configure and extremely fast, practically instantaneous, at sending notifications. Although the aggregated output looks a bit confusing when the system is loaded, there are several ways to reduce its volume and make it easier to use. It is an interesting addition to an administrator&#8217;s toolbox to help troubleshoot repositories and their client applications. 
With suitable parameters, it can support any containerized or traditional installation. It is one of those no-frills tools that just work (mostly) as expected. If you ever happen to need a file watcher, consider it: you won&#8217;t be disappointed, and it is fun to use.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Testing the log watcher This is part II of the article. Part I is here. All the above code has to be included in the entrypoint script so it gets executed at container start up time but it can also be tested more simply in a traditional repository installation. First, we&#8217;ll move the code into [&hellip;]<\/p>\n","protected":false},"author":40,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[525],"tags":[],"type_dbi":[],"class_list":["post-13359","post","type-post","status-publish","format-standard","hentry","category-enterprise-content-management"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.5) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Tracking Logs Inside a Documentum Container (part II) - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Tracking Logs Inside a Documentum Container (part II)\" \/>\n<meta property=\"og:description\" content=\"Testing the log watcher This is part II of the article. Part I is here. 
All the above code has to be included in the entrypoint script so it gets executed at container start up time but it can also be tested more simply in a traditional repository installation. First, we&#8217;ll move the code into [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2020-01-29T23:45:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-24T07:32:08+00:00\" \/>\n<meta name=\"author\" content=\"Middleware Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Middleware Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"16 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/\"},\"author\":{\"name\":\"Middleware Team\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d8563acfc6e604cce6507f45bac0ea1\"},\"headline\":\"Tracking Logs Inside a Documentum Container (part II)\",\"datePublished\":\"2020-01-29T23:45:31+00:00\",\"dateModified\":\"2025-10-24T07:32:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/\"},\"wordCount\":1310,\"commentCount\":0,\"articleSection\":[\"Enterprise content 
management\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/\",\"name\":\"Tracking Logs Inside a Documentum Container (part II) - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\"},\"datePublished\":\"2020-01-29T23:45:31+00:00\",\"dateModified\":\"2025-10-24T07:32:08+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d8563acfc6e604cce6507f45bac0ea1\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/tracking-logs-inside-a-documentum-container-part-ii\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Tracking Logs Inside a Documentum Container (part II)\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\",\"name\":\"dbi 
Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d8563acfc6e604cce6507f45bac0ea1\",\"name\":\"Middleware Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g\",\"caption\":\"Middleware Team\"},\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/author\\\/middleware-team\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Tracking Logs Inside a Documentum Container (part II) - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/","og_locale":"en_US","og_type":"article","og_title":"Tracking Logs Inside a Documentum Container (part II)","og_description":"Testing the log watcher This is part II of the article. Part I is here. All the above code has to be included in the entrypoint script so it gets executed at container start up time but it can also be tested more simply in a traditional repository installation. 
First, we&#8217;ll move the code into [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/","og_site_name":"dbi Blog","article_published_time":"2020-01-29T23:45:31+00:00","article_modified_time":"2025-10-24T07:32:08+00:00","author":"Middleware Team","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Middleware Team","Est. reading time":"16 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/"},"author":{"name":"Middleware Team","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d8563acfc6e604cce6507f45bac0ea1"},"headline":"Tracking Logs Inside a Documentum Container (part II)","datePublished":"2020-01-29T23:45:31+00:00","dateModified":"2025-10-24T07:32:08+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/"},"wordCount":1310,"commentCount":0,"articleSection":["Enterprise content management"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/","url":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/","name":"Tracking Logs Inside a Documentum Container (part II) - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2020-01-29T23:45:31+00:00","dateModified":"2025-10-24T07:32:08+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d8563acfc6e604cce6507f45bac0ea1"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/tracking-logs-inside-a-documentum-container-part-ii\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Tracking Logs Inside a Documentum Container (part II)"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d8563acfc6e604cce6507f45bac0ea1","name":"Middleware Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ddcae7ba0f9d1a0e7ae707f0e689e4a9c95bb48ec49c8e6d9cc86d43f4121cb6?s=96&d=mm&r=g","caption":"Middleware 
Team"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/middleware-team\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/13359","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/40"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=13359"}],"version-history":[{"count":1,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/13359\/revisions"}],"predecessor-version":[{"id":41178,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/13359\/revisions\/41178"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=13359"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=13359"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=13359"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=13359"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}