<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Database Administration &amp; Monitoring Archives - dbi Blog</title>
	<atom:link href="https://www.dbi-services.com/blog/category/database-administration-monitoring/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dbi-services.com/blog/category/database-administration-monitoring/</link>
	<description></description>
	<lastBuildDate>Mon, 20 Apr 2026 09:45:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/cropped-favicon_512x512px-min-32x32.png</url>
	<title>Database Administration &amp; Monitoring Archives - dbi Blog</title>
	<link>https://www.dbi-services.com/blog/category/database-administration-monitoring/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>PostgreSQL 19: Importing statistics from remote servers</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 08:15:22 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43948</guid>

					<description><![CDATA[<p>Usually we do not see many foreign data wrappers being used by our customers. Most of them use the foreign data wrapper for Oracle to fetch data from Oracle systems. Some of them use the foreign data wrapper for files but that&#8217;s mostly it. Only one (I am aware of) actually uses the foreign data [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/">PostgreSQL 19: Importing statistics from remote servers</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Usually we do not see many foreign data wrappers in use at our customers. Most of them use the <a href="https://github.com/laurenz/oracle_fdw" target="_blank" rel="noreferrer noopener">foreign data wrapper for Oracle</a> to fetch data from Oracle systems. Some of them use the <a href="https://www.dbi-services.com/blog/external-tables-in-postgresql/">foreign data wrapper for files</a>, but that&#8217;s mostly it. Only one (that I am aware of) actually uses the <a href="https://www.postgresql.org/docs/18/postgres-fdw.html" target="_blank" rel="noreferrer noopener">foreign data wrapper for PostgreSQL</a>, which obviously connects PostgreSQL to PostgreSQL. Some foreign data wrappers allow for collecting optimizer statistics on foreign tables, and the foreign data wrappers for Oracle and PostgreSQL are examples of this. These local statistics are better than nothing, but you need to make sure they are up to date, and for that you need a fresh copy of the statistics of the remote data. PostgreSQL 19 will come with a solution for this when it comes to the foreign data wrapper for PostgreSQL. Actually, the solution is not in the foreign data wrapper itself but in the underlying framework, and postgres_fdw can use it from version 19 on.</p>



<p>To look at this we need a simple setup, so we initialize two new PostgreSQL 19 clusters and connect them with postgres_fdw:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,3,4,5,6,7,8,9,11,13,15,17,19,21]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] initdb --version
initdb (PostgreSQL) 19devel
postgres@:/home/postgres/ &#x5B;pgdev] initdb --pgdata=/var/tmp/pg1
postgres@:/home/postgres/ &#x5B;pgdev] initdb --pgdata=/var/tmp/pg2
postgres@:/home/postgres/ &#x5B;pgdev] echo &quot;port=8888&quot; &gt;&gt; /var/tmp/pg1/postgresql.auto.conf 
postgres@:/home/postgres/ &#x5B;pgdev] echo &quot;port=8889&quot; &gt;&gt; /var/tmp/pg2/postgresql.auto.conf 
postgres@:/home/postgres/ &#x5B;pgdev] pg_ctl --pgdata=/var/tmp/pg1/ start
postgres@:/home/postgres/ &#x5B;pgdev] pg_ctl --pgdata=/var/tmp/pg2/ start
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create extension postgres_fdw&quot;
CREATE EXTENSION
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;create table t ( a int, b text, c timestamptz )&quot;
CREATE TABLE
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;insert into t select i, md5(i::text), now() from generate_series(1,1000000) i&quot;
INSERT 0 1000000
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create server srv_pg2 foreign data wrapper postgres_fdw options(port &#039;8889&#039;, dbname &#039;postgres&#039;)&quot;
CREATE SERVER
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create user mapping for postgres server srv_pg2 options (user &#039;postgres&#039;, password &#039;postgres&#039;)&quot;
CREATE USER MAPPING
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create foreign table ft (a int, b text, c timestamptz) server srv_pg2 options (schema_name &#039;public&#039;, table_name &#039;t&#039;)&quot;
CREATE FOREIGN TABLE
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;select count(*) from ft&quot;
  count  
---------
 1000000
(1 row)
</pre></div>


<p>What we have now is one table in the cluster on port 8889 and this table is attached as a foreign table in the cluster on port 8888.</p>



<p>We already have statistics on the source table in the cluster on port 8889:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;t&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>&#8230; but we do not have any statistics on the foreign table in the cluster on port 8888:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
        -1
(1 row)
</pre></div>


<p>Only after manually analyzing the foreign table do the statistics show up:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;analyze ft&quot;
ANALYZE
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>The issue with these local statistics is that they become outdated once the source table is modified:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3,10]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;insert into t select i, md5(i::text), now() from generate_series(1000001,2000000) i&quot;
INSERT 0 1000000
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8889 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;t&#039;&quot;

 reltuples 
-----------
   2000000
(1 row)

postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>As you can see, the row counts do not match anymore. Once the local statistics are gathered again, we have the same picture on both sides:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;analyze ft&quot;
ANALYZE
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   2000000
(1 row)
</pre></div>


<p>One way to avoid this issue, even before PostgreSQL 19, is to tell postgres_fdw to run analyze on the remote table and use those statistics:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;alter foreign table ft options ( use_remote_estimate &#039;true&#039; )&quot;
</pre></div>


<p>In this case the local statistics will not be used, but of course this comes with the overhead of an additional analyze on the remote side.</p>



<p>From PostgreSQL 19 on, there is another option:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;alter foreign table ft options ( restore_stats &#039;true&#039; )&quot;
ALTER FOREIGN TABLE
</pre></div>


<p>This option tells postgres_fdw to import the statistics from the remote side and store them locally. If that fails, it falls back to running analyze as above; the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=28972b6fc3dcd1296e844246b635eddfa29c38e1" target="_blank" rel="noreferrer noopener">commit message</a> explains this nicely:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
Add support for importing statistics from remote servers.

Add a new FDW callback routine that allows importing remote statistics
for a foreign table directly to the local server, instead of collecting
statistics locally.  The new callback routine is called at the beginning
of the ANALYZE operation on the table, and if the FDW failed to import
the statistics, the existing callback routine is called on the table to
collect statistics locally.

Also implement this for postgres_fdw.  It is enabled by &quot;restore_stats&quot;
option both at the server and table level.  Currently, it is the user&#039;s
responsibility to ensure remote statistics to import are up-to-date, so
the default is false.
</pre></div>
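<p>The flow described in the commit message can be sketched in a few lines. The following is an illustrative Python sketch of that try-import-then-fall-back logic, not PostgreSQL internals; every function and dictionary name here is made up for the example:</p>

```python
def import_remote_stats(table, remote_stats):
    """Stand-in for the new v19 FDW callback: fetch the remote side's
    statistics, or raise if they are not available (illustrative)."""
    if table not in remote_stats:
        raise LookupError("no remote statistics for " + table)
    return remote_stats[table]

def analyze_foreign_table(table, remote_stats, restore_stats):
    """Mimics the ANALYZE flow from the commit message: with restore_stats
    enabled, try to import the remote statistics first; on failure, fall
    back to the existing local sampling path."""
    if restore_stats:
        try:
            return ("imported", import_remote_stats(table, remote_stats))
        except LookupError:
            pass  # import failed, fall through to local analyze
    # placeholder for the existing callback that samples the foreign table
    return ("analyzed", {"reltuples": None})

print(analyze_foreign_table("ft", {"ft": {"reltuples": 2000000}}, True))
print(analyze_foreign_table("ft", {}, True))
```

<p>The key point is the ordering: the import callback runs first, and the pre-19 local sampling path is only used when the import fails or restore_stats is off.</p>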


<p>As usual, thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/">PostgreSQL 19: Importing statistics from remote servers</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: Online enabling of data checksums</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 06:00:00 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43935</guid>

					<description><![CDATA[<p>Since PostgreSQL 18 was released last year checksums are enabled by default when a new cluster is initialized. This also means, that you either need to explicitly disable that when you upgrade from a previous version of PostgreSQL or you need to enable this in the old version of PostgreSQL you want to upgrade from. [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/">PostgreSQL 19: Online enabling of data checksums</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Since PostgreSQL 18 was released last year, checksums are enabled by default when a new cluster is initialized. This also means that you either need to explicitly disable them when you upgrade from a previous version of PostgreSQL, or you need to enable them in the old version you want to upgrade from. The reason is that <a href="https://www.postgresql.org/docs/current/pgupgrade.html">pg_upgrade</a> will complain if the old and new versions of PostgreSQL do not have the same setting for this.</p>



<p>Enabling and disabling checksums offline has been possible for several versions of PostgreSQL using <a href="https://www.postgresql.org/docs/current/app-pgchecksums.html" target="_blank" rel="noreferrer noopener">pg_checksums</a>, but as mentioned: this will not work while the cluster is running:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,3,9]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;181] pg_checksums --version
pg_checksums (PostgreSQL) 18.1 
postgres@:/home/postgres/ &#x5B;181] pg_checksums --pgdata=$PGDATA
Checksum operation completed
Files scanned:   966
Blocks scanned:  2969
Bad checksums:  0
Data checksum version: 1  -&gt; This means &quot;enabled&quot;
postgres@:/home/postgres/ &#x5B;181] pg_checksums --pgdata=$PGDATA --disable
pg_checksums: error: cluster must be shut down
</pre></div>


<p>Even in PostgreSQL 19 this is still the same: You cannot use pg_checksums to enable or disable checksums while the cluster is running.</p>



<p>What will change in version 19 is that two new functions have been added, one for enabling checksums and one for disabling checksums:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# \dfS *checksums*
                                                        List of functions
   Schema   |           Name            | Result data type |                     Argument data types                      | Type 
------------+---------------------------+------------------+--------------------------------------------------------------+------
 pg_catalog | pg_disable_data_checksums | void             |                                                              | func
 pg_catalog | pg_enable_data_checksums  | void             | cost_delay integer DEFAULT 0, cost_limit integer DEFAULT 100 | func
(2 rows)
</pre></div>


<p>As mentioned in the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f19c0eccae9680f5785b11cdc58ef571998caec9" target="_blank" rel="noreferrer noopener">commit message</a>, this is implemented by background workers. To actually see those processes on the operating system, let&#8217;s create some data so the workers really have something to do:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, md5(i::text), now() from generate_series(1,10000000) i;
INSERT 0 10000000
</pre></div>


<p>As this is version 19 of PostgreSQL, checksums are currently enabled:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# show data_checksums;
 data_checksums 
----------------
 on
(1 row)
</pre></div>


<p>To disable that online, pg_disable_data_checksums is the function to use:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,7]; title: ; notranslate">
postgres=# select * from pg_disable_data_checksums();
 pg_disable_data_checksums 
---------------------------
 
(1 row)

postgres=# show data_checksums;
 data_checksums 
----------------
 off
(1 row)
</pre></div>


<p>To enable checksums online, pg_enable_data_checksums is the function to use. If you want to see the background workers, you can grep for them in a second session on the operating system:</p>





<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,8,15]; title: ; notranslate">
-- first session, connected to PostgreSQL
postgres=# select pg_enable_data_checksums();
 pg_enable_data_checksums 
--------------------------
 
(1 row)

postgres=# show data_checksums ;
 data_checksums 
----------------
 on
(1 row)

-- second session, on the OS
postgres@:/home/postgres/postgresql/ &#x5B;pgdev] watch &quot;ps -ef | grep checksum | grep -v watch&quot;
Every 2.0s: ps -ef | grep checksum | grep -v watch                                                                                                                                                    pgbox.it.dbi-services.com: 09:49:20 AM
                                                                                                                                                                                                                               in 0.006s (0)
postgres    4931    2510  0 09:49 ?        00:00:00 postgres: pgdev: datachecksum launcher
postgres    4932    2510 25 09:49 ?        00:00:00 postgres: pgdev: datachecksum worker
postgres    4964    4962  0 09:49 pts/2    00:00:00 grep checksum
</pre></div>


<p>Because enabling checksums comes with some overhead, there is throttling control, as is already the case for autovacuum:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# select pg_enable_data_checksums(cost_delay=&gt;1,cost_limit=&gt;3000);
 pg_enable_data_checksums 
--------------------------
 
(1 row)
</pre></div>
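<p>The cost model behind these two parameters follows the same general idea as autovacuum&#8217;s cost-based throttling. As a rough, illustrative Python sketch (the real accounting lives in the backend, so every name below is made up for the example):</p>

```python
def checksum_worker_sleeps(n_pages, page_cost, cost_delay_ms, cost_limit):
    """Rough sketch of autovacuum-style cost throttling: each processed
    page adds page_cost to a budget; once the budget reaches cost_limit,
    the worker sleeps for cost_delay_ms and the budget resets.  With a
    cost_delay of 0 (the default) no throttling happens at all."""
    budget = 0
    sleeps = 0
    for _ in range(n_pages):
        budget += page_cost
        if cost_delay_ms > 0 and budget >= cost_limit:
            sleeps += 1  # in reality: time.sleep(cost_delay_ms / 1000)
            budget = 0
    return sleeps

# 10000 pages at cost 1 with cost_limit 100: the worker pauses every 100 pages
print(checksum_worker_sleeps(10000, 1, cost_delay_ms=1, cost_limit=100))
```

<p>So a higher cost_limit means longer bursts of work between pauses, and a higher cost_delay means longer pauses, which is the knob to turn if enabling checksums impacts your production workload.</p>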


<p>Very nice, thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/">PostgreSQL 19: Online enabling of data checksums</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: get_*_ddl functions</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 04:00:00 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43925</guid>

					<description><![CDATA[<p>PostgreSQL already comes with plenty of system information functions to reconstruct the commands to create various objects, e.g. constraints or indexes. Starting with PostgreSQL 19 more functions will be available, namely those: As the names imply they can be used to recreate the commands to create a database, a role, or a tablespace. To see [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/">PostgreSQL 19: get_*_ddl functions</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>PostgreSQL already comes with plenty of <a href="https://www.postgresql.org/docs/current/functions-info.html" target="_blank" rel="noreferrer noopener">system information functions</a> to reconstruct the commands to create various objects, e.g. constraints or indexes. Starting with PostgreSQL 19 more functions will be available, namely those:</p>



<ul class="wp-block-list">
<li>pg_get_database_ddl</li>



<li>pg_get_role_ddl</li>



<li>pg_get_tablespace_ddl</li>
</ul>



<p>As the names imply, they can be used to reconstruct the commands needed to create a database, a role, or a tablespace.</p>



<p>To see what they do, let&#8217;s create a small setup:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,8,10,11,13,15,17]; title: ; notranslate">
postgres=# select version();

                                        version                                        
---------------------------------------------------------------------------------------
 PostgreSQL 19devel dbi services build on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)

postgres=# create user u with login password &#039;u&#039;;
CREATE ROLE
postgres=# \! mkdir /var/tmp/tbs
postgres=# create tablespace tbs location &#039;/var/tmp/tbs&#039; with ( random_page_cost = 1.1 );
CREATE TABLESPACE
postgres=# create database d with owner = u tablespace = tbs;
CREATE DATABASE
postgres=# alter database d connection limit = 10;
ALTER DATABASE
postgres=# \l
                                                        List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    |   Locale    | ICU Rules |   Access privileges   
-----------+----------+----------+-----------------+-------------+-------------+-------------+-----------+-----------------------
 d         | u        | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 postgres  | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 template0 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgres
(4 rows)

</pre></div>


<p>To get the commands to recreate that database, the new function &#8220;pg_get_database_ddl&#8221; can be used:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase );
                                                                   pg_get_database_ddl                                                                   
---------------------------------------------------------------------------------------------------------------------------------------------------------
 CREATE DATABASE d WITH TEMPLATE = template0 ENCODING = &#039;UTF8&#039; LOCALE_PROVIDER = icu LOCALE = &#039;en_US.UTF-8&#039; ICU_LOCALE = &#039;en-US-x-icu&#039; TABLESPACE = tbs;
 ALTER DATABASE d OWNER TO u;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)
</pre></div>


<p>There are some options to control the output format and what gets reconstructed, e.g.:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,15,28]; title: ; notranslate">
postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;         +
     TABLESPACE = tbs;
 ALTER DATABASE d OWNER TO u;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)

postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;         +
     TABLESPACE = tbs;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)

postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039;, &#039;tablespace&#039;, &#039;false&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)
</pre></div>


<p>The other two functions behave the same (but do not have exactly the same options):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,8,17]; title: ; notranslate">
postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;);
                     pg_get_tablespace_ddl                     
---------------------------------------------------------------
 CREATE TABLESPACE tbs OWNER postgres LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)

postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;, &#039;pretty&#039;, &#039;true&#039;);
               pg_get_tablespace_ddl                
----------------------------------------------------
 CREATE TABLESPACE tbs                             +
     OWNER postgres                                +
     LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)

postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039;);
               pg_get_tablespace_ddl                
----------------------------------------------------
 CREATE TABLESPACE tbs                             +
     LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)
</pre></div>


<p>&#8230; and finally for the roles:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,7,20]; title: ; notranslate">
postgres=# select * from pg_get_role_ddl (&#039;u&#039;);
                                      pg_get_role_ddl                                       
--------------------------------------------------------------------------------------------
 CREATE ROLE u NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;
(1 row)

postgres=# select * from pg_get_role_ddl (&#039;u&#039;, &#039;pretty&#039;, &#039;true&#039;);
  pg_get_role_ddl  
-------------------
 CREATE ROLE u    +
     NOSUPERUSER  +
     INHERIT      +
     NOCREATEROLE +
     NOCREATEDB   +
     LOGIN        +
     NOREPLICATION+
     NOBYPASSRLS;
(1 row)

postgres=# select * from pg_get_role_ddl (&#039;u&#039;, &#039;pretty&#039;, &#039;true&#039;, &#039;memberships&#039;, &#039;false&#039;);
  pg_get_role_ddl  
-------------------
 CREATE ROLE u    +
     NOSUPERUSER  +
     INHERIT      +
     NOCREATEROLE +
     NOCREATEDB   +
     LOGIN        +
     NOREPLICATION+
     NOBYPASSRLS;
(1 row)
</pre></div>


<p>Nice, and again: Thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/">PostgreSQL 19: get_*_ddl functions</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: json format for &#8220;copy to&#8221;</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 04:41:59 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43920</guid>

					<description><![CDATA[<p>PostgreSQL already has impressive support for working with data in json format. If you look at the jsonb data type and all the built-in functions and operators you can use, there is so much you can do with it by default. Starting with PostgreSQL 19 there is one feature more when it comes to working [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/">PostgreSQL 19: json format for &#8220;copy to&#8221;</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>PostgreSQL already has impressive support for working with data in <a href="https://www.json.org/json-en.html" target="_blank" rel="noreferrer noopener">json</a> format. If you look at the <a href="https://www.postgresql.org/docs/current/datatype-json.html">jsonb</a> data type and all the <a href="https://www.postgresql.org/docs/current/functions-json.html">built-in functions and operators</a> you can use, there is a lot you can do with it by default. Starting with PostgreSQL 19, there is one more feature when it comes to working with data in json format.</p>



<p>&#8220;<a href="https://www.postgresql.org/docs/current/sql-copy.html">COPY</a>&#8221; is already quite powerful and the fastest way to get data in and out of PostgreSQL (you may read some previous posts about copy <a href="https://www.dbi-services.com/blog/postgresql-17-copy-and-save_error_to/" target="_blank" rel="noreferrer noopener">here</a>, <a href="https://www.dbi-services.com/blog/postgresql-18-reject_limit-for-copy/">here</a>, and <a href="https://www.dbi-services.com/blog/postgresql-17-track-skipped-rows-from-copy-in-pg_stat_progress_copy/">here</a>).</p>



<p>As usual, let&#8217;s start with a simple table:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# create table t ( a int primary key, b text );
CREATE TABLE
postgres=# insert into t select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
</pre></div>


<p>To get that data out in text format you might simply do this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# copy t to &#039;/var/tmp/t&#039;;
COPY 1000000
postgres=# \! head /var/tmp/t
1       c4ca4238a0b923820dcc509a6f75849b
2       c81e728d9d4c2f636f067f89cc14862c
3       eccbc87e4b5ce2fe28308fd9f2a7baf3
4       a87ff679a2f3e71d9181a67b7542122c
5       e4da3b7fbbce2345d7772b0674a318d5
6       1679091c5a880faf6fb5e6087eb1b2dc
7       8f14e45fceea167a5a36dedd4bea2543
8       c9f0f895fb98ab9159f51fd0297e236d
9       45c48cce2e2d7fbdea1afc51c7c6ad26
10      d3d9446802a44259755d38e6d163e820
</pre></div>


<p>Starting with PostgreSQL 19 you can do the same in json format:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; highlight: [1,3]; title: ; notranslate">
postgres=# copy t to &#039;/var/tmp/t1&#039; with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{&quot;a&quot;:1,&quot;b&quot;:&quot;c4ca4238a0b923820dcc509a6f75849b&quot;}
{&quot;a&quot;:2,&quot;b&quot;:&quot;c81e728d9d4c2f636f067f89cc14862c&quot;}
{&quot;a&quot;:3,&quot;b&quot;:&quot;eccbc87e4b5ce2fe28308fd9f2a7baf3&quot;}
{&quot;a&quot;:4,&quot;b&quot;:&quot;a87ff679a2f3e71d9181a67b7542122c&quot;}
{&quot;a&quot;:5,&quot;b&quot;:&quot;e4da3b7fbbce2345d7772b0674a318d5&quot;}
{&quot;a&quot;:6,&quot;b&quot;:&quot;1679091c5a880faf6fb5e6087eb1b2dc&quot;}
{&quot;a&quot;:7,&quot;b&quot;:&quot;8f14e45fceea167a5a36dedd4bea2543&quot;}
{&quot;a&quot;:8,&quot;b&quot;:&quot;c9f0f895fb98ab9159f51fd0297e236d&quot;}
{&quot;a&quot;:9,&quot;b&quot;:&quot;45c48cce2e2d7fbdea1afc51c7c6ad26&quot;}
{&quot;a&quot;:10,&quot;b&quot;:&quot;d3d9446802a44259755d38e6d163e820&quot;}
</pre></div>


<p>Specifying a SQL query is also supported:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# copy (select a from t) to &#039;/var/tmp/t1&#039; with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{&quot;a&quot;:1}
{&quot;a&quot;:2}
{&quot;a&quot;:3}
{&quot;a&quot;:4}
{&quot;a&quot;:5}
{&quot;a&quot;:6}
{&quot;a&quot;:7}
{&quot;a&quot;:8}
{&quot;a&quot;:9}
{&quot;a&quot;:10}
</pre></div>


<p>As noted in the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7dadd38cda95bf5bc0c4715d9ab71766d1693379">commit message</a> there are some options which are not compatible with the json format:</p>



<ul class="wp-block-list">
<li>HEADER</li>



<li>DEFAULT</li>



<li>NULL</li>



<li>DELIMITER</li>



<li>FORCE QUOTE</li>



<li>FORCE NOT NULL</li>



<li>and FORCE NULL</li>
</ul>



<p>Also not supported (currently) is &#8220;copy from&#8221;.</p>
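<p>Until &#8220;copy from&#8221; supports the json format, a possible workaround (just a sketch; the table name and the quote/delimiter trick are my own choice, not part of the feature) is to stage the file into a single jsonb column and extract the fields from there. Using csv format with control characters as quote and delimiter makes copy take each line almost verbatim, and the input conversion parses it into jsonb:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
postgres=# create table staging ( line jsonb );
CREATE TABLE
postgres=# copy staging from &#039;/var/tmp/t1&#039; with (format csv, quote e&#039;\x01&#039;, delimiter e&#039;\x02&#039;);
COPY 1000000
postgres=# select (line-&gt;&gt;&#039;a&#039;)::int as a from staging limit 3;
</pre></div>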
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/">PostgreSQL 19: json format for &#8220;copy to&#8221;</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: The &#8220;repack&#8221; command</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 03:15:44 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43912</guid>

					<description><![CDATA[<p>Before PostgreSQL 19 you had two commands to completely rewrite a table: Either you can use the &#8220;vacuum full&#8221; or the &#8220;cluster&#8221; command to achieve this. Both operations are blocking and the table cannot be used until those operations complete. This can easily be verified with the following simple test cases: The same is true [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-the-repack-command/">PostgreSQL 19: The &#8220;repack&#8221; command</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Before PostgreSQL 19 there were two commands to completely rewrite a table: you could either use the &#8220;<a href="https://www.postgresql.org/docs/current/sql-vacuum.html" target="_blank" rel="noreferrer noopener">vacuum full</a>&#8221; or the &#8220;<a href="https://www.postgresql.org/docs/current/sql-cluster.html" target="_blank" rel="noreferrer noopener">cluster</a>&#8221; command to achieve this. Both operations are blocking and the table cannot be used until they complete. This can easily be verified with the following simple test cases:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,4,6,9]; title: ; notranslate">
-- session 1
postgres=# create table t ( a int primary key, b text );
CREATE TABLE
postgres=# insert into t select i, md5(i::text) from generate_series(1,10000000) i;
INSERT 0 10000000
postgres=# vacuum full t;

-- session 2
postgres=# select count(*) from t;  -- this blocks until vacuum full completes
</pre></div>


<p>The same is true for the &#8220;cluster&#8221; command:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,11,14]; title: ; notranslate">
-- session 1
postgres=# \d t
                 Table &quot;public.t&quot;
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 
Indexes:
    &quot;t_pkey&quot; PRIMARY KEY, btree (a)

postgres=# cluster t using t_pkey;

-- session 2
postgres=# select count(*) from t;  -- this blocks until clustering completes
</pre></div>


<p>Starting with PostgreSQL 19 (scheduled to be released later this year) these two functionalities are combined into the &#8220;<a href="https://www.postgresql.org/docs/devel/sql-repack.html">repack</a>&#8221; command. The <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ac58465e0618941842439eb3f5a2cf8bebd5a3f1" target="_blank" rel="noreferrer noopener">commit message</a> makes the reason behind this pretty clear:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
Introduce the REPACK command

REPACK absorbs the functionality of VACUUM FULL and CLUSTER in a single
command.  Because this functionality is completely different from
regular VACUUM, having it separate from VACUUM makes it easier for users
to understand; as for CLUSTER, the term is heavily overloaded in the
IT world and even in Postgres itself, so it&#039;s good that we can avoid it.

We retain those older commands, but de-emphasize them in the
documentation, in favor of REPACK; the difference between VACUUM FULL
and CLUSTER (namely, the fact that tuples are written in a specific
ordering) is neatly absorbed as two different modes of REPACK.

This allows us to introduce further functionality in the future that
works regardless of whether an ordering is being applied, such as (and
especially) a concurrent mode.
</pre></div>


<p>So, instead of spreading the functionality over two commands, there is a new command which combines both:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# \h repack
Command:     REPACK
Description: rewrite a table to reclaim disk space
Syntax:
REPACK &#x5B; ( option &#x5B;, ...] ) ] &#x5B; table_and_columns &#x5B; USING INDEX &#x5B; index_name ] ] ]
REPACK &#x5B; ( option &#x5B;, ...] ) ] USING INDEX

where option can be one of:

    VERBOSE &#x5B; boolean ]
    ANALYZE &#x5B; boolean ]
    CONCURRENTLY &#x5B; boolean ]

and table_and_columns is:

    table_name &#x5B; ( column_name &#x5B;, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-repack.html
</pre></div>


<p>The really cool thing about this is that it can be run concurrently, which means the table is not locked for others while the command is doing its work:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,4,7]; title: ; notranslate">
-- session 1
postgres=# repack (concurrently) t;
-- or
postgres=# repack (concurrently) t using index t_pkey;

-- session 2
postgres=# select count(*) from t;  -- not blocking
</pre></div>
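<p>To see the effect, you can compare the size of the table before and after the repack (just a sketch, the sizes depend on your data; the delete simply creates some bloat which the repack can then reclaim):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
postgres=# delete from t where a % 2 = 0;
postgres=# select pg_size_pretty(pg_table_size(&#039;t&#039;));
postgres=# repack (concurrently) t;
postgres=# select pg_size_pretty(pg_table_size(&#039;t&#039;));  -- should be roughly half of what it was before
</pre></div>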


<p>Nice, thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-the-repack-command/">PostgreSQL 19: The &#8220;repack&#8221; command</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Remove grant to public in Oracle databases</title>
		<link>https://www.dbi-services.com/blog/remove-grant-to-public-in-oracle-databases/</link>
					<comments>https://www.dbi-services.com/blog/remove-grant-to-public-in-oracle-databases/#respond</comments>
		
		<dc:creator><![CDATA[Martin Bracher]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 14:18:48 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=40309</guid>

					<description><![CDATA[<p>CIS recommendations The Center for Internet Security publishes the &#8220;CIS Oracle database 19c Benchmark&#8221; with recommendations to enhance the security of Oracle databases. One type of recommendations is to remove grant execute to public (chapter 5.1.1.1-5.1.1.7 Public Privileges). There is a list of powerful SYS packages. And for security reasons, only users that really need [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/remove-grant-to-public-in-oracle-databases/">Remove grant to public in Oracle databases</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="h-cis-recommendations">CIS recommendations</h2>



<p>The <a href="https://en.wikipedia.org/wiki/Center_for_Internet_Security">Center for Internet Security</a> publishes the &#8220;CIS Oracle database 19c Benchmark&#8221; with recommendations to enhance the security of Oracle databases.</p>



<p>One type of recommendation is to remove the execute grants to public (chapters 5.1.1.1-5.1.1.7, Public Privileges). There is a list of powerful SYS packages, and for security reasons, only users that really need this functionality should have access to it. But by default, execute is granted to public and all users can use these packages.</p>



<p>In theory, fixing that is easy, e.g.:</p>



<pre class="wp-block-code"><code>REVOKE EXECUTE ON DBMS_LDAP FROM PUBLIC;</code></pre>



<p>But is that really a good idea?</p>



<h2 class="wp-block-heading" id="h-who-is-using-an-object-from-another-schema">Who is using an object from another schema?</h2>



<p>If the object is used in a program unit, a named PL/SQL block (package, function, procedure, trigger), you can see the dependency in the view dba_dependencies.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
select distinct owner from dba_dependencies 
where referenced_name=&#039;DBMS_LDAP&#039; and owner&lt;&gt;&#039;SYS&#039;
order by 1;
</pre></div>


<p>And for these objects, the owning users already have a direct grant, so removing the public grant does not affect these user objects.<br>But wait! Although rarely used, there are named blocks with invoker&#8217;s rights (<code>create procedure procname AUTHID CURRENT_USER is</code>&#8230;). See <a href="https://docs.oracle.com/cd/E29597_01/network.1111/e16543/authorization.htm" id="https://docs.oracle.com/cd/E29597_01/network.1111/e16543/authorization.htm">How Roles Work in PL/SQL Blocks</a>.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
select owner, object_name from dba_procedures where authid=&#039;CURRENT_USER&#039;;
</pre></div>


<p>In this case, the user can also access objects used in program units through privileges granted via a role. You have to check which users have access to these program units: these users are potentially affected by the change!</p>



<p>For objects used outside of above program units: If a user has a direct grant, or an indirect grant via a role to the object, removing the grant to public does not affect the work of this user with these objects.</p>



<p>So, what about the other users without direct/indirect grants to the object (except &#8220;public&#8221;)? How can we see if the above-mentioned objects are used (e.g. from external code in a Perl script or an application server connecting to the database)?</p>



<p>To see the usage of an object, we can use unified auditing and create an audit policy for the object.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
create audit policy CIS_CHECK_USAGE
actions
execute on sys.dbms_ldap
when &#039;SYS_CONTEXT(&#039;&#039;USERENV&#039;&#039;, &#039;&#039;CURRENT_USER&#039;&#039;) != &#039;&#039;SYS&#039;&#039;&#039; EVALUATE PER STATEMENT;

audit policy CIS_CHECK_USAGE;
alter audit policy cis_check_usage add actions EXECUTE on SYS.DBMS_LOB;
alter audit policy cis_check_usage add actions ...
</pre></div>


<p>Hint: Unified auditing can also be used if the Oracle binary is not relinked for unified auditing (the relink only deactivates traditional auditing; unified auditing is always active).</p>



<p>To automate the above steps, you can do it dynamically with the Perl script below (run it with $ORACLE_HOME/perl/bin/perl, so the required Oracle modules are present):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: perl; title: ; notranslate">
  use DBI;
  my $dbh = DBI-&gt;connect(&#039;dbi:Oracle:&#039;, &#039;&#039;, &#039;&#039;,{ PrintError =&gt; 1, ora_session_mode=&gt;2 });
  my @pdblist;
  my $sth=$dbh-&gt;prepare(q{select PDB_NAME from cdb_pdbs where pdb_name&lt;&gt;&#039;PDB$SEED&#039; union select &#039;CDB$ROOT&#039; from dual});
  $sth-&gt;execute();
  while (my @row = $sth-&gt;fetchrow_array) {
    push(@pdblist, $row&#x5B;0]);
  }

  foreach my $pdb (@pdblist){
    # switch PDB
    print &quot;PDB=$pdb\n&quot;;
    $dbh-&gt;do(&quot;alter session set container=$pdb&quot;);

    # create cis_check_usage
    print q{ create audit policy cis_check_usage actions all on sys.AUD$ when &#039;SYS_CONTEXT(&#039;&#039;USERENV&#039;&#039;, &#039;&#039;CURRENT_USER&#039;&#039;) != &#039;&#039;SYS&#039;&#039;&#039; EVALUATE PER STATEMENT}.&quot;\n&quot;;
    $dbh-&gt;do(q{ create audit policy cis_check_usage actions all on sys.AUD$ when &#039;SYS_CONTEXT(&#039;&#039;USERENV&#039;&#039;, &#039;&#039;CURRENT_USER&#039;&#039;) != &#039;&#039;SYS&#039;&#039;&#039; EVALUATE PER STATEMENT});
    $dbh-&gt;do(q{ audit policy cis_check_usage});

    # add execute to public
    my $sql=q{
     SELECT  PRIVILEGE||&#039; on &#039;||owner||&#039;.&#039;||table_name FROM DBA_TAB_PRIVS WHERE GRANTEE=&#039;PUBLIC&#039; AND PRIVILEGE=&#039;EXECUTE&#039; AND TABLE_NAME IN (
     &#039;DBMS_LDAP&#039;,&#039;UTL_INADDR&#039;,&#039;UTL_TCP&#039;,&#039;UTL_MAIL&#039;,&#039;UTL_SMTP&#039;,&#039;UTL_DBWS&#039;,&#039;UTL_ORAMTS&#039;,&#039;UTL_HTTP&#039;,&#039;HTTPURITYPE&#039;,
     &#039;DBMS_ADVISOR&#039;,&#039;DBMS_LOB&#039;,&#039;UTL_FILE&#039;,
     &#039;DBMS_CRYPTO&#039;,&#039;DBMS_OBFUSCATION_TOOLKIT&#039;, &#039;DBMS_RANDOM&#039;,
     &#039;DBMS_JAVA&#039;,&#039;DBMS_JAVA_TEST&#039;,
     &#039;DBMS_SCHEDULER&#039;,&#039;DBMS_JOB&#039;,
     &#039;DBMS_SQL&#039;, &#039;DBMS_XMLGEN&#039;, &#039;DBMS_XMLQUERY&#039;,&#039;DBMS_XMLSTORE&#039;,&#039;DBMS_XMLSAVE&#039;,&#039;DBMS_AW&#039;,&#039;OWA_UTIL&#039;,&#039;DBMS_REDACT&#039;,
     &#039;DBMS_CREDENTIAL&#039;
      )};
    $sth=$dbh-&gt;prepare(&quot;$sql&quot;);
    $sth-&gt;execute();
    while (my @result = $sth-&gt;fetchrow_array) {
      print  &quot;alter audit policy cis_check_usage add actions $result&#x5B;0]\n&quot;;
      $dbh-&gt;do(&quot;alter audit policy cis_check_usage add actions $result&#x5B;0]&quot;);
    }
  }

</pre></div>


<h2 class="wp-block-heading" id="h-revoke-the-grants">Revoke the grants</h2>



<p>After some days/weeks, you can evaluate the usage of dbms_ldap or the other objects audited by the cis_check_usage policy:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
select dbusername, current_user, object_schema||&#039;.&#039;||object_name as object, 
      sql_text, system_privilege_used,
       system_privilege, unified_audit_policies, con_id , event_timestamp 
from cdb_unified_audit_trail 
where unified_audit_policies like &#039;%CIS_CHECK_USAGE%&#039;;
</pre></div>


<p>With this query, we see the usage of the objects we audited with the CIS_CHECK_USAGE policy. If there are no rows, check if you really enabled the policy (<code>select * from audit_unified_enabled_policies where policy_name='CIS_CHECK_USAGE';</code>)</p>



<p>With the next query, we exclude, per user, the objects that can be accessed via a direct grant or a grant through a role; a revoke from public will not affect these users.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
select distinct current_user, action_name, object_schema, object_name, con_id 
from cdb_unified_audit_trail a
where unified_audit_policies like &#039;%CIS_CHECK_USAGE%&#039;
and current_user not in ( 
  select grantee from cdb_tab_privs -- direct grant
  where owner=a.object_schema and table_name=a.object_name and con_id=a.con_id
union all
  select r.grantee from cdb_role_privs r, cdb_tab_privs t -- grant via role
  where r.granted_role=t.grantee and r.con_id=t.con_id 
  and r.grantee=a.current_user   and t.owner=a.object_schema 
  and t.table_name=a.object_name and r.con_id=a.con_id
);
</pre></div>


<p>And what is left, needs attention.</p>



<p>Sometimes the objects are used by a background process: e.g. if you see the object_name DBMS_SQL but it is not used in sql_text, then the user probably does not need it. But if it is present in sql_text, then the user definitely needs a grant. I recommend granting the object via a role, so it behaves as before: the user can use it directly, but not in procedures/functions/packages.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
create  role cis_dbms_sql ;
grant execute on sys.dbms_sql to cis_dbms_sql;
grant cis_dbms_sql to user1;
</pre></div>


<p>Then pragmatically, remove the execute rights from public on a test system and check if the application still works as expected. Generate the revoke commands dynamically, and do not forget to also dynamically generate an undo script in case of problems:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SELECT  &#039;revoke &#039;||PRIVILEGE||&#039; on &#039;||owner||&#039;.&#039;||table_name||&#039; from PUBLIC;&#039; 
FROM DBA_TAB_PRIVS 
WHERE GRANTEE=&#039;PUBLIC&#039; AND PRIVILEGE=&#039;EXECUTE&#039; AND TABLE_NAME IN (
   &#039;DBMS_LDAP&#039;,&#039;UTL_INADDR&#039;,&#039;UTL_TCP&#039;,&#039;UTL_MAIL&#039;,&#039;UTL_SMTP&#039;,&#039;UTL_DBWS&#039;,
 &#039;UTL_ORAMTS&#039;,&#039;UTL_HTTP&#039;,&#039;HTTPURITYPE&#039;,
&#039;DBMS_ADVISOR&#039;,&#039;DBMS_LOB&#039;,&#039;UTL_FILE&#039;,
&#039;DBMS_CRYPTO&#039;,&#039;DBMS_OBFUSCATION_TOOLKIT&#039;, &#039;DBMS_RANDOM&#039;,
&#039;DBMS_JAVA&#039;,&#039;DBMS_JAVA_TEST&#039;,
&#039;DBMS_SCHEDULER&#039;,&#039;DBMS_JOB&#039;,
&#039;DBMS_SQL&#039;, &#039;DBMS_XMLGEN&#039;, &#039;DBMS_XMLQUERY&#039;,&#039;DBMS_XMLSTORE&#039;,&#039;DBMS_XMLSAVE&#039;,&#039;DBMS_AW&#039;,&#039;OWA_UTIL&#039;,&#039;DBMS_REDACT&#039;,
&#039;DBMS_CREDENTIAL&#039;
);

SELECT  &#039;grant &#039;||PRIVILEGE||&#039; on &#039;||owner||&#039;.&#039;||table_name||&#039; to PUBLIC;&#039;
FROM DBA_TAB_PRIVS 
WHERE GRANTEE=&#039;PUBLIC&#039; AND PRIVILEGE=&#039;EXECUTE&#039; AND TABLE_NAME IN (
   &#039;DBMS_LDAP&#039;,&#039;UTL_INADDR&#039;,&#039;UTL_TCP&#039;,&#039;UTL_MAIL&#039;,&#039;UTL_SMTP&#039;,&#039;UTL_DBWS&#039;,
 &#039;UTL_ORAMTS&#039;,&#039;UTL_HTTP&#039;,&#039;HTTPURITYPE&#039;,
&#039;DBMS_ADVISOR&#039;,&#039;DBMS_LOB&#039;,&#039;UTL_FILE&#039;,
&#039;DBMS_CRYPTO&#039;,&#039;DBMS_OBFUSCATION_TOOLKIT&#039;, &#039;DBMS_RANDOM&#039;,
&#039;DBMS_JAVA&#039;,&#039;DBMS_JAVA_TEST&#039;,
&#039;DBMS_SCHEDULER&#039;,&#039;DBMS_JOB&#039;,
&#039;DBMS_SQL&#039;, &#039;DBMS_XMLGEN&#039;, &#039;DBMS_XMLQUERY&#039;,&#039;DBMS_XMLSTORE&#039;,&#039;DBMS_XMLSAVE&#039;,&#039;DBMS_AW&#039;,&#039;OWA_UTIL&#039;,&#039;DBMS_REDACT&#039;,
&#039;DBMS_CREDENTIAL&#039;
);
</pre></div>


<p>It has to be run in each PDB and CDB$ROOT.</p>
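<p>One way to do this is to spool the generated statements per container and run the resulting file afterwards (just a sketch; the container and file names are arbitrary examples):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
SQL&gt; alter session set container=PDB1;
SQL&gt; set heading off feedback off pagesize 0
SQL&gt; spool revoke_public_pdb1.sql
SQL&gt; -- run the generating SELECT from above here
SQL&gt; spool off
SQL&gt; @revoke_public_pdb1.sql
</pre></div>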



<p>If all works as expected, then it is fine.</p>



<h2 class="wp-block-heading" id="h-installation-of-patches-and-new-components">Installation of patches and new components</h2>



<p>But keep that in mind if you want to install something later. It may fail. For example, install an rman catalog:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
RMAN&gt; create catalog;
create catalog;
error creating dbms_rcvcat package body
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06433: error installing recovery catalog
RMAN Client Diagnostic Trace file : /u01/app/oracle/diag/clients/user_oracle/RMAN_1732619876_110/trace/ora_rman_635844_0.trc
</pre></div>


<p>To create a valid rman catalog, you need to grant the execute right on UTL_HTTP, DBMS_LOB, DBMS_XMLGEN and DBMS_SQL directly to the rman user. Strange to me: it does not work if you grant it via a role (e.g. recovery_catalog_owner), but it works with a grant to public.</p>



<p>My recommendation for installing new software or patches is:</p>



<ul class="wp-block-list">
<li>Run the undo-script mentioned above (grant execute to public)</li>



<li>Apply the Oracle or application patch or new application installation</li>



<li>Check for invalid objects</li>



<li>Run the hardening-script (revoke execute from public)</li>



<li>Check for additional invalid objects and determine the missing grants</li>



<li>Extend your hardening script with the required grants and re-run it.</li>
</ul>



<h2 class="wp-block-heading" id="h-conclusion">Conclusion</h2>



<p>Generally, implementing the CIS hardening recommendation of revoking execute from public is possible. But there is a real danger that the functionality of the application gets compromised. Especially with components that are used very rarely, this might only be noticed very late, e.g. during end-of-year processing.</p>
<p>The article <a href="https://www.dbi-services.com/blog/remove-grant-to-public-in-oracle-databases/">Remove grant to public in Oracle databases</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/remove-grant-to-public-in-oracle-databases/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: pg_plan_advice</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-pg_plan_advice/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-pg_plan_advice/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 07:16:59 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43477</guid>

					<description><![CDATA[<p>In our performance tuning workshop, especially when attendees have an Oracle background, one question for sure pops up every time: How can I use optimizer hints in PostgreSQL. Up until today there are three answers to this: Well, now we need to update the workshop material because this was committed for PostgreSQL 19 yesterday. The [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_plan_advice/">PostgreSQL 19: pg_plan_advice</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In our <a href="https://www.dbi-services.com/courses/postgresql-performance-tuning/" target="_blank" rel="noreferrer noopener">performance tuning workshop</a>, especially when attendees have an Oracle background, one question pops up every time for sure: How can I use optimizer hints in PostgreSQL? Up until now there have been three answers to this:</p>



<ul class="wp-block-list">
<li>You simply can&#8217;t, there are no hints</li>



<li>You might consider using the <a href="https://github.com/ossc-db/pg_hint_plan" target="_blank" rel="noreferrer noopener">pg_hint_plan</a> extension</li>



<li>Not really hints, but you can tell the optimizer to <a href="https://www.postgresql.org/docs/current/runtime-config-query.html" target="_blank" rel="noreferrer noopener">make certain operations more expensive</a>, so other operations might be chosen</li>
</ul>



<p>Well, now we need to update the workshop material because this was <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5883ff30b02ceed3c5eabba4d9c09a7766f9a8fc" target="_blank" rel="noreferrer noopener">committed</a> for PostgreSQL 19 yesterday. The feature is not called &#8220;hints&#8221; but it does exactly that: Tell the optimizer what to do because you (might) know it better and you want a specific plan for a given query. Just be aware that this comes with the same issues as listed <a href="https://wiki.postgresql.org/wiki/OptimizerHintsDiscussion" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>The new feature comes as an extension so you need to enable it before you can use it. There are three ways to do this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,8,15,17]; title: ; notranslate">
-- current session
postgres=# load &#039;pg_plan_advice&#039;;
LOAD

-- for all new sessions
postgres=# alter system set session_preload_libraries = &#039;pg_plan_advice&#039;;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

-- instance wide
postgres=# alter system set shared_preload_libraries = &#039;pg_plan_advice&#039;;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
</pre></div>


<p>To see this in action, let&#8217;s create a small demo setup:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3,5,7,9]; title: ; notranslate">
postgres=# create table t1 ( a int primary key, b text );
CREATE TABLE
postgres=# create table t2 ( a int, b int references t1(a), v text );
CREATE TABLE
postgres=# insert into t1 select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# insert into t2 select i, i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# insert into t2 select i, 1, md5(i::text) from generate_series(1000000,2000000) i;
INSERT 0 1000001
</pre></div>


<p>A simple parent-child relation: the values from one to one million each have a single match, plus one million and one additional matches for the value one of the parent table.</p>



<p><a href="https://www.postgresql.org/docs/devel/using-explain.html">EXPLAIN</a> comes with a new option to generate the so-called advice string for a given query, e.g.:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,15,16,17,18,19,20]; title: ; notranslate">
postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.43..111805.81 rows=2000001 width=78)
   -&gt;  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
   -&gt;  Memoize  (cost=0.43..0.47 rows=1 width=37)
         Cache Key: t2.b
         Cache Mode: logical
         Estimates: capacity=29629 distinct keys=29629 lookups=2000001 hit percent=98.52%
         -&gt;  Index Scan using t1_pkey on t1  (cost=0.42..0.46 rows=1 width=37)
               Index Cond: (a = t2.b)
 JIT:
   Functions: 8
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Generated Plan Advice:
   JOIN_ORDER(t2 t1)
   NESTED_LOOP_MEMOIZE(t1)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(17 rows)
</pre></div>


<p>What you see here are advice tags, and the full list of those tags is documented in the <a href="https://www.postgresql.org/docs/devel/pgplanadvice.html#PGPLANADVICE-TAGS" target="_blank" rel="noreferrer noopener">documentation</a> of the extension. First we have the join order, then the nested loop memoize, a sequential scan on t2, an index scan on the primary key of the parent table, and finally an instruction that neither t1 nor t2 should appear under a gather node.</p>



<p>This can be given as an advice to the optimizer/planner:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,2,16,17,18,19,20,21,22,23,24,25,26,27,28]; title: ; notranslate">
postgres=# SET pg_plan_advice.advice = &#039;JOIN_ORDER(t2 t1) NESTED_LOOP_MEMOIZE(t1) SEQ_SCAN(t2) INDEX_SCAN(t1 public.t1_pkey) NO_GATHER(t1 t2)&#039;;
SET
postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.43..111805.81 rows=2000001 width=78)
   -&gt;  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
   -&gt;  Memoize  (cost=0.43..0.47 rows=1 width=37)
         Cache Key: t2.b
         Cache Mode: logical
         Estimates: capacity=29629 distinct keys=29629 lookups=2000001 hit percent=98.52%
         -&gt;  Index Scan using t1_pkey on t1  (cost=0.42..0.46 rows=1 width=37)
               Index Cond: (a = t2.b)
 JIT:
   Functions: 8
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Supplied Plan Advice:
   SEQ_SCAN(t2) /* matched */
   INDEX_SCAN(t1 public.t1_pkey) /* matched */
   JOIN_ORDER(t2 t1) /* matched */
   NESTED_LOOP_MEMOIZE(t1) /* matched */
   NO_GATHER(t1) /* matched */
   NO_GATHER(t2) /* matched */
 Generated Plan Advice:
   JOIN_ORDER(t2 t1)
   NESTED_LOOP_MEMOIZE(t1)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(24 rows)
</pre></div>


<p>Running the next explain with that advice will show you what you&#8217;ve advised the planner to do and what was actually done. In this case all the advice matched and you get the same plan as before.</p>



<p>Once you play e.g. with the join order, the plan will change because you told the planner to do so:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3,16,17,18,19,20,21,22,23]; title: ; notranslate">
postgres=# SET pg_plan_advice.advice = &#039;JOIN_ORDER(t1 t2)&#039;;
SET
postgres=# explain (plan_advice) select * from t1 join t2 on t1.a = t2.b;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Merge Join  (cost=323875.24..390697.00 rows=2000001 width=78)
   Merge Cond: (t1.a = t2.b)
   -&gt;  Index Scan using t1_pkey on t1  (cost=0.42..34317.43 rows=1000000 width=37)
   -&gt;  Materialize  (cost=318880.31..328880.31 rows=2000001 width=41)
         -&gt;  Sort  (cost=318880.31..323880.31 rows=2000001 width=41)
               Sort Key: t2.b
               -&gt;  Seq Scan on t2  (cost=0.00..48038.01 rows=2000001 width=41)
 JIT:
   Functions: 7
   Options: Inlining false, Optimization false, Expressions true, Deforming true
 Supplied Plan Advice:
   JOIN_ORDER(t1 t2) /* matched */
 Generated Plan Advice:
   JOIN_ORDER(t1 t2)
   MERGE_JOIN_MATERIALIZE(t2)
   SEQ_SCAN(t2)
   INDEX_SCAN(t1 public.t1_pkey)
   NO_GATHER(t1 t2)
(18 rows)
</pre></div>


<p>Really nice: there is now an official way to influence the planner using advice, but please be aware of the current <a href="https://www.postgresql.org/docs/devel/pgplanadvice.html#PGPLANADVICE-LIMITATIONS" target="_blank" rel="noreferrer noopener">limitations</a>. Needless to say, you should use this with caution, because you can easily make things slower by advising something that is not optimal for a query.</p>
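<p>Since the advice only lives in the <code>pg_plan_advice.advice</code> parameter used above, an experiment gone wrong can be undone per session with a plain RESET (a sketch based on the parameter shown in this post; the extension is still under development, so check the current documentation):</p>

```sql
-- Clear previously supplied plan advice for the current session;
-- afterwards the planner behaves as if no advice had been given.
RESET pg_plan_advice.advice;
```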



<p>Thanks to all involved with this, this is really a great improvement.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_plan_advice/">PostgreSQL 19: pg_plan_advice</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-pg_plan_advice/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup</title>
		<link>https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-3-cybertec-postgresql-ee-1-setup/</link>
					<comments>https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-3-cybertec-postgresql-ee-1-setup/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 07:40:21 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43415</guid>

					<description><![CDATA[<p>In the last posts in this series we&#8217;ve looked at Fujitsu&#8217;s distribution of PostgreSQL (here and here) and EnterpriseDB&#8217;s distribution of PostgreSQL (here and here), which both come with support for TDE (Transparent Data Encryption). A third player is Cybertec with its Cybertec PostgreSQL EE distribution of PostgreSQL and this is the distribution we&#8217;re looking [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-3-cybertec-postgresql-ee-1-setup/">Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In the last posts in this series we&#8217;ve looked at Fujitsu&#8217;s distribution of PostgreSQL (<a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-1-fujitsu-enterprise-postgres-1-setup/" target="_blank" rel="noreferrer noopener">here</a> and <a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-1-fujitsu-enterprise-postgres-2-tde/" target="_blank" rel="noreferrer noopener">here</a>) and EnterpriseDB&#8217;s distribution of PostgreSQL (<a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-2-edb-postgres-extended-server-1-setup/" target="_blank" rel="noreferrer noopener">here</a> and <a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-2-edb-postgres-extended-server-2-tde/" target="_blank" rel="noreferrer noopener">here</a>), which both come with support for TDE (Transparent Data Encryption). A third player is Cybertec with its <a href="https://www.cybertec-postgresql.com/en/products/cybertec-postgresql-enterprise-edition-pgee/" target="_blank" rel="noreferrer noopener">Cybertec PostgreSQL EE</a> distribution of PostgreSQL, and this is the distribution we&#8217;re looking at in this and the next post.</p>



<p>Cybertec provides free access to their <a href="https://repository.cybertec.at/" target="_blank" rel="noreferrer noopener">repositories</a> with the limitation of 1GB of data per table. As with Fujitsu, the supported Linux distributions are based on <a href="https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux" target="_blank" rel="noreferrer noopener">RHEL</a> (8, 9 &amp; 10) and <a href="https://www.suse.com/products/server/" target="_blank" rel="noreferrer noopener">SLES</a> (15 &amp; 16).</p>



<p>Installing Cybertec&#8217;s distribution of PostgreSQL is, as with Fujitsu and EnterpriseDB, just a matter of attaching the repository and installing the packages. Before doing that, I&#8217;ll disable the EnterpriseDB repositories to avoid running into issues with them when installing another distribution of PostgreSQL:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,9,11]; title: ; notranslate">
&#x5B;root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                         repo name
enterprisedb-enterprise                         enterprisedb-enterprise
enterprisedb-enterprise-noarch                  enterprisedb-enterprise-noarch
enterprisedb-enterprise-source                  enterprisedb-enterprise-source
rhel-9-for-x86_64-appstream-rpms                Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                   Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
&#x5B;root@postgres-tde ~]$ dnf config-manager --disable enterprisedb-*
Updating Subscription Management repositories.
&#x5B;root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                                                                                   repo name
rhel-9-for-x86_64-appstream-rpms                                                                          Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                                                                             Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
&#x5B;root@postgres-tde ~]$
</pre></div>


<p>Attaching the Cybertec repository for version 18 of PostgreSQL:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,24]; title: ; notranslate">
&#x5B;root@postgres-tde ~]$ version=18
&#x5B;root@postgres-tde ~]$ sudo tee /etc/yum.repos.d/cybertec-pg$version.repo &lt;&lt;EOF
&#x5B;cybertec_pg$version]
name=CYBERTEC PostgreSQL $version repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/$version/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
&#x5B;cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/common/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
EOF
&#x5B;cybertec_pg18]
name=CYBERTEC PostgreSQL 18 repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/18/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
&#x5B;cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/common/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
&#x5B;root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id                                                                                                 repo name
cybertec_common                                                                                         CYBERTEC common repository for RHEL/CentOS 9 - x86_64
cybertec_pg18                                                                                           CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64
rhel-9-for-x86_64-appstream-rpms                                                                        Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms                                                                           Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
&#x5B;root@postgres-tde ~]$
</pre></div>
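<p>Note the backslashes in the heredoc above: <code>\$releasever</code> and <code>\$basearch</code> are escaped so they reach dnf literally, while <code>$version</code> is expanded by the shell at write time (which is why tee echoes back the mixed, already-expanded content). A minimal sketch of the difference:</p>

```shell
# In an unquoted heredoc the shell expands $version immediately,
# while \$releasever is passed through literally for dnf to expand later.
version=18
line=$(cat <<EOF
baseurl=https://repository.cybertec.at/public/$version/redhat/\$releasever
EOF
)
echo "$line"
# -> baseurl=https://repository.cybertec.at/public/18/redhat/$releasever
```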


<p>Let&#8217;s check what we have available:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;root@postgres-tde ~]$ dnf search postgresql18-ee
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:10 ago on Mon 09 Mar 2026 09:33:05 AM CET.
================================================================================================== Name Exactly Matched: postgresql18-ee ===================================================================================================
postgresql18-ee.x86_64 : PostgreSQL client programs and libraries
================================================================================================= Name &amp; Summary Matched: postgresql18-ee ==================================================================================================
postgresql18-ee-contrib-debuginfo.x86_64 : Debug information for package postgresql18-ee-contrib
postgresql18-ee-debuginfo.x86_64 : Debug information for package postgresql18-ee
postgresql18-ee-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-devel
postgresql18-ee-ecpg-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-devel
postgresql18-ee-ecpg-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-libs
postgresql18-ee-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs
postgresql18-ee-libs-oauth-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs-oauth
postgresql18-ee-llvmjit-debuginfo.x86_64 : Debug information for package postgresql18-ee-llvmjit
postgresql18-ee-plperl-debuginfo.x86_64 : Debug information for package postgresql18-ee-plperl
postgresql18-ee-plpython3-debuginfo.x86_64 : Debug information for package postgresql18-ee-plpython3
postgresql18-ee-pltcl-debuginfo.x86_64 : Debug information for package postgresql18-ee-pltcl
postgresql18-ee-server-debuginfo.x86_64 : Debug information for package postgresql18-ee-server
====================================================================================================== Name Matched: postgresql18-ee =======================================================================================================
postgresql18-ee-contrib.x86_64 : Contributed source and binaries distributed with PostgreSQL
postgresql18-ee-devel.x86_64 : PostgreSQL development header files and libraries
postgresql18-ee-docs.x86_64 : Extra documentation for PostgreSQL
postgresql18-ee-ecpg-devel.x86_64 : Development files for ECPG (Embedded PostgreSQL for C)
postgresql18-ee-ecpg-libs.x86_64 : Run-time libraries for ECPG programs
postgresql18-ee-libs.x86_64 : The shared libraries required for any PostgreSQL clients
postgresql18-ee-libs-oauth.x86_64 : The shared libraries required for any PostgreSQL clients - OAuth flow
postgresql18-ee-llvmjit.x86_64 : Just-in-time compilation support for PostgreSQL
postgresql18-ee-plperl.x86_64 : The Perl procedural language for PostgreSQL
postgresql18-ee-plpython3.x86_64 : The Python3 procedural language for PostgreSQL
postgresql18-ee-pltcl.x86_64 : The Tcl procedural language for PostgreSQL
postgresql18-ee-server.x86_64 : The programs needed to create and run a PostgreSQL server
postgresql18-ee-test.x86_64 : The test suite distributed with PostgreSQL
</pre></div>


<p>These are the usual suspects, so let&#8217;s get it installed:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;root@postgres-tde ~]$ dnf install postgresql18-ee-server postgresql18-ee postgresql18-ee-contrib
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:29 ago on Mon 09 Mar 2026 10:30:17 AM CET.
Dependencies resolved.
============================================================================================================================================================================================================================================
 Package                                                       Architecture                                 Version                                                               Repository                                           Size
============================================================================================================================================================================================================================================
Installing:
 postgresql18-ee                                               x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       2.0 M
 postgresql18-ee-contrib                                       x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       755 k
 postgresql18-ee-server                                        x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       7.2 M
Installing dependencies:
 postgresql18-ee-libs                                          x86_64                                       18.3-EE~demo.rhel9.cybertec2                                          cybertec_pg18                                       299 k

Transaction Summary
============================================================================================================================================================================================================================================
Install  4 Packages

Total download size: 10 M
Installed size: 46 M
Is this ok &#x5B;y/N]: y
Downloading Packages:
(1/4): postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                         1.4 MB/s | 299 kB     00:00    
(2/4): postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                      3.1 MB/s | 755 kB     00:00    
(3/4): postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                              6.8 MB/s | 2.0 MB     00:00    
(4/4): postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm                                                                                                                                        13 MB/s | 7.2 MB     00:00    
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                        13 MB/s |  10 MB     00:00     
CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64                                                                                                                                                 42 kB/s | 3.1 kB     00:00    
Importing GPG key 0x2D1B5F59:
 Userid     : &quot;Cybertec International (Software Signing Key) &lt;build@cybertec.at&gt;&quot;
 Fingerprint: FCFF 012F 4B39 9019 1352 BB03 AA6F 3CC1 2D1B 5F59
 From       : https://repository.cybertec.at/assets/cybertec-rpm.asc
Is this ok &#x5B;y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                                    1/1 
  Installing       : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           1/4 
  Running scriptlet: postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           1/4 
  Installing       : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                2/4 
  Running scriptlet: postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                2/4 
  Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Installing       : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         3/4 
  Installing       : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        4/4 
  Running scriptlet: postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        4/4 
  Verifying        : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                                1/4 
  Verifying        : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                        2/4 
  Verifying        : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                           3/4 
  Verifying        : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64                                                                                                                                                         4/4 
Installed products updated.

Installed:
  postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64  postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 

Complete!
</pre></div>


<p>&#8230; and that&#8217;s it. As with the other posts in this little series, we&#8217;ll have a look at how to start the instance and enable TDE in the next post.</p>



<p></p>
<p>L’article <a href="https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-3-cybertec-postgresql-ee-1-setup/">Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/commercial-postgresql-distributions-with-tde-3-cybertec-postgresql-ee-1-setup/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Reading data from PostgreSQL into Oracle</title>
		<link>https://www.dbi-services.com/blog/reading-data-from-postgresql-into-oracle/</link>
					<comments>https://www.dbi-services.com/blog/reading-data-from-postgresql-into-oracle/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 13:34:57 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43361</guid>

					<description><![CDATA[<p>Usually the requests we get are around getting data from Oracle into PostgreSQL, but sometimes also the opposite is true and so it happened recently. Depending on the requirements, usually real time vs. delayed/one-shot, there are several options when you want to read from Oracle into PostgreSQL. One common way of doing this is [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/reading-data-from-postgresql-into-oracle/">Reading data from PostgreSQL into Oracle</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Usually the requests we get are around getting data from Oracle into PostgreSQL, but sometimes the opposite is requested, and so it happened recently. Depending on the requirements (usually real time vs. delayed/one-shot) there are several options when you want to read from Oracle into PostgreSQL. One common way of doing this is to use the <a href="https://www.dbi-services.com/blog/connecting-your-postgresql-instance-to-an-oracle-database/" target="_blank" rel="noreferrer noopener">foreign data wrapper for Oracle</a> (the post is quite old but still valid), or to use some kind of logical replication when data needs to be up to date. The question is: what options do you have for the other way around? When it comes to logical replication there are several tools out there which might work for your needs, but what options do you have that compare more to the foreign data wrapper for Oracle when data does not need to be up to date?</p>



<p>Quite old, but still available and usable, is <a href="https://en.wikipedia.org/wiki/Open_Database_Connectivity" target="_blank" rel="noreferrer noopener">ODBC</a>, and if you combine this with <a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/heter/heterogeneous-services-agent-types.html#GUID-07CFE202-2439-4867-93B2-52955BABBEF7" target="_blank" rel="noreferrer noopener">Oracle&#8217;s Database Heterogeneous Connectivity</a> you get one option for reading data from PostgreSQL into Oracle. Initially I wanted to write this down in a document for the customer, but as we like to share, it turned into a blog post available to everybody.</p>



<p>My target Oracle system is an <a href="https://www.oracle.com/database/technologies/appdev/xe.html" target="_blank" rel="noreferrer noopener">Oracle Database 21c Express Edition Release 21.0.0.0.0</a> running on Oracle <a href="https://www.oracle.com/linux/" target="_blank" rel="noreferrer noopener">Linux 8.10</a>:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,21,33,34]; title: ; notranslate">
&#x5B;oracle@ora ~]$ cat /etc/os-release
NAME=&quot;Oracle Linux Server&quot;
VERSION=&quot;8.10&quot;
ID=&quot;ol&quot;
ID_LIKE=&quot;fedora&quot;
VARIANT=&quot;Server&quot;
VARIANT_ID=&quot;server&quot;
VERSION_ID=&quot;8.10&quot;
PLATFORM_ID=&quot;platform:el8&quot;
PRETTY_NAME=&quot;Oracle Linux Server 8.10&quot;
ANSI_COLOR=&quot;0;31&quot;
CPE_NAME=&quot;cpe:/o:oracle:linux:8:10:server&quot;
HOME_URL=&quot;https://linux.oracle.com/&quot;
BUG_REPORT_URL=&quot;https://github.com/oracle/oracle-linux&quot;

ORACLE_BUGZILLA_PRODUCT=&quot;Oracle Linux 8&quot;
ORACLE_BUGZILLA_PRODUCT_VERSION=8.10
ORACLE_SUPPORT_PRODUCT=&quot;Oracle Linux&quot;
ORACLE_SUPPORT_PRODUCT_VERSION=8.10

&#x5B;oracle@ora ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Mar 6 04:14:38 2026
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.


Connected to:
Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL&gt; set lines 300
SQL&gt; select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production

SQL&gt; 
</pre></div>


<p>My source system is PostgreSQL 17.5 running on <a href="https://get.opensuse.org/leap/16.0/?type=server" target="_blank" rel="noreferrer noopener">openSUSE Leap 16</a>:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,15,30,36,42,48]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;175] cat /etc/os-release
NAME=&quot;openSUSE Leap&quot;
VERSION=&quot;16.0&quot;
ID=&quot;opensuse-leap&quot;
ID_LIKE=&quot;suse opensuse&quot;
VERSION_ID=&quot;16.0&quot;
PRETTY_NAME=&quot;openSUSE Leap 16.0&quot;
ANSI_COLOR=&quot;0;32&quot;
CPE_NAME=&quot;cpe:/o:opensuse:leap:16.0&quot;
BUG_REPORT_URL=&quot;https://bugs.opensuse.org&quot;
HOME_URL=&quot;https://www.opensuse.org/&quot;
DOCUMENTATION_URL=&quot;https://en.opensuse.org/Portal:Leap&quot;
LOGO=&quot;distributor-logo-Leap&quot;

postgres@:/home/postgres/ &#x5B;175] ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp1s0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:01:dd:de brd ff:ff:ff:ff:ff:ff
    altname enx52540001ddde
    inet 192.168.122.158/24 brd 192.168.122.255 scope global dynamic noprefixroute enp1s0
       valid_lft 3480sec preferred_lft 3480sec
    inet6 fe80::b119:5142:93ab:b6aa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

postgres@:/home/postgres/ &#x5B;175] psql -c &quot;select version()&quot;
                                      version
------------------------------------------------------------------------------------
 PostgreSQL 17.5 dbi services build on x86_64-linux, compiled by gcc-15.0.1, 64-bit
(1 row)

postgres@:/home/postgres/ &#x5B;175] psql -c &quot;show port&quot;
 port
------
 5433
(1 row)

postgres@:/home/postgres/ &#x5B;175] psql -c &quot;show listen_addresses&quot;
 listen_addresses
------------------
 *
(1 row)

postgres@:/home/postgres/ &#x5B;175] cat $PGDATA/pg_hba.conf | grep &quot;192.168.122&quot;
host    all             all             192.168.122.0/24        trust
</pre></div>


<p>So much for the baseline.</p>



<p>Obviously, the first step is to have an ODBC connection working from the Oracle host to the PostgreSQL host, without involving the Oracle database. For this we need <a href="https://www.unixodbc.org/" target="_blank" rel="noreferrer noopener">unixODBC</a> and, on top of that, the <a href="https://odbc.postgresql.org/" target="_blank" rel="noreferrer noopener">ODBC driver for PostgreSQL</a>. Both are available as packages on Oracle Linux 8 (this should be true for any distribution based on Red Hat), so they are easy to install:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;oracle@ora ~]$ sudo dnf install -y unixODBC postgresql-odbc
Last metadata expiration check: 0:09:06 ago on Fri 06 Mar 2026 01:38:40 AM EST.
Dependencies resolved.
=============================================================================================
 Package                        Architecture    Version             Repository          Size
=============================================================================================
Installing:
 postgresql-odbc                x86_64          10.03.0000-3.el8_6  ol8_appstream      430 k
 unixODBC                       x86_64          2.3.7-2.el8_10      ol8_appstream      453 k
Installing dependencies:
 libpq                          x86_64          13.23-1.el8_10      ol8_appstream      199 k
 libtool-ltdl                   x86_64          2.4.6-25.el8        ol8_baseos_latest   58 k

Transaction Summary
=============================================================================================
Install  4 Packages

Total download size: 1.1 M
Installed size: 3.4 M
Downloading Packages:
(1/4): libtool-ltdl-2.4.6-25.el8.x86_64.rpm                778 kB/s |  58 kB     00:00
(2/4): libpq-13.23-1.el8_10.x86_64.rpm                     2.2 MB/s | 199 kB     00:00
(3/4): postgresql-odbc-10.03.0000-3.el8_6.x86_64.rpm       4.4 MB/s | 430 kB     00:00
(4/4): unixODBC-2.3.7-2.el8_10.x86_64.rpm                  14 MB/s  | 453 kB     00:00
---------------------------------------------------------------------------------------
Total                                                      9.9 MB/s | 1.1 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                 1/1
  Installing       : libpq-13.23-1.el8_10.x86_64                     1/4
  Installing       : libtool-ltdl-2.4.6-25.el8.x86_64                2/4
  Running scriptlet: libtool-ltdl-2.4.6-25.el8.x86_64                2/4
  Installing       : unixODBC-2.3.7-2.el8_10.x86_64                  3/4
  Running scriptlet: unixODBC-2.3.7-2.el8_10.x86_64                  3/4
  Installing       : postgresql-odbc-10.03.0000-3.el8_6.x86_64       4/4
  Running scriptlet: postgresql-odbc-10.03.0000-3.el8_6.x86_64       4/4
  Verifying        : libtool-ltdl-2.4.6-25.el8.x86_64                1/4
  Verifying        : libpq-13.23-1.el8_10.x86_64                     2/4
  Verifying        : postgresql-odbc-10.03.0000-3.el8_6.x86_64       3/4
  Verifying        : unixODBC-2.3.7-2.el8_10.x86_64                  4/4

Installed:
  libpq-13.23-1.el8_10.x86_64 libtool-ltdl-2.4.6-25.el8.x86_64 postgresql-odbc-10.03.0000-3.el8_6.x86_64 unixODBC-2.3.7-2.el8_10.x86_64

Complete!
</pre></div>


<p>Having that in place, let&#8217;s check which configuration files we need to touch:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,11]; title: ; notranslate">
&#x5B;oracle@ora ~]$ odbcinst -j
unixODBC 2.3.7
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/oracle/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8

&#x5B;oracle@ora ~]$ odbc_config --odbcini --odbcinstini
/etc/odbc.ini
/etc/odbcinst.ini
</pre></div>


<p>odbcinst.ini is used to configure one or more ODBC drivers, while <a href="https://www.unixodbc.org/odbcinst.html" target="_blank" rel="noreferrer noopener">odbc.ini</a> is used to configure the data sources. There are several examples in the driver configuration file, but we&#8217;re only interested in PostgreSQL:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;oracle@ora ~]$ grep pgodbc -A 6 /etc/odbcinst.ini 
&#x5B;pgodbc]
Description     = ODBC for PostgreSQL
Driver          = /usr/lib/psqlodbcw.so
Setup           = /usr/lib/libodbcpsqlS.so
Driver64        = /usr/lib64/psqlodbcw.so
Setup64         = /usr/lib64/libodbcpsqlS.so
FileUsage       = 1
</pre></div>


<p>For the data source, get the IP address/hostname, port, user, and password for your PostgreSQL database, and adapt the configuration below:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;oracle@ora ~]$ cat /etc/odbc.ini 
&#x5B;pgdsn]
Driver = pgodbc
Description = PostgreSQL ODBC Driver
Database = postgres
Servername = 192.168.122.158
Username = postgres
Password = postgres
Port = 5433
UseDeclareFetch = 1
CommLog = /tmp/pgodbclink.log
Debug = 1
LowerCaseIdentifier = 1
</pre></div>
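<p>Since a typo in the DSN section is the most common reason for a failing connection, a quick sanity check on the file can save time. The snippet below is a hypothetical helper (the file location and key list are assumptions for illustration, mirroring the odbc.ini shown above) that verifies the keys the gateway will later rely on are present:</p>

```shell
#!/bin/sh
# Hypothetical sanity check: confirm a DSN section contains the keys
# needed for the connection. The sample file mirrors the odbc.ini above.
cat > /tmp/odbc_sample.ini <<'EOF'
[pgdsn]
Driver = pgodbc
Database = postgres
Servername = 192.168.122.158
Username = postgres
Port = 5433
EOF

for key in Driver Database Servername Username Port; do
  if grep -q "^${key}[[:space:]]*=" /tmp/odbc_sample.ini; then
    echo "OK: ${key}"
  else
    echo "MISSING: ${key}"
  fi
done
```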


<p>If you got it all right, you should be able to establish a connection to PostgreSQL using the &#8220;<a href="https://www.unixodbc.org/doc/UserManual/" target="_blank" rel="noreferrer noopener">isql</a>&#8221; utility:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,11,21]; title: ; notranslate">
&#x5B;oracle@ora ~]$ isql -v pgdsn
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help &#x5B;tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+

SQL&gt; select datname from pg_database;
+----------------------------------------------------------------+
| datname                                                        |
+----------------------------------------------------------------+
| postgres                                                       |
| template1                                                      |
| template0                                                      |
+----------------------------------------------------------------+
SQLRowCount returns -1
3 rows fetched
SQL&gt; quit;
&#x5B;oracle@ora ~]$
</pre></div>


<p>This proves that connectivity from the Oracle host to the PostgreSQL database is fine and ODBC is working properly.</p>



<p>Now we need to tell the Oracle listener and the Oracle database how to use this configuration. This requires a configuration for the listener and one for the heterogeneous services. For configuring the listener we first need to know which configuration file it is using, which is easy to find out:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;oracle@ora ~]$ lsnrctl status | grep &quot;Listener Parameter File&quot;
Listener Parameter File   /opt/oracle/homes/OraDBHome21cXE/network/admin/listener.ora
</pre></div>


<p>The content that needs to go into this file is:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; highlight: [7,8,9,10,11,12]; title: ; notranslate">
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
    (SID_NAME = orcl)
    (ORACLE_HOME = /opt/oracle/product/21c/dbhomeXE/)
    )
   (SID_DESC=
    (SID_NAME = pgdsn)
    (ORACLE_HOME = /opt/oracle/product/21c/dbhomeXE/)
    (ENVS=&quot;LD_LIBRARY_PATH=/usr/local/lib:/usr/lib64:/opt/oracle/product/21c/dbhomeXE/lib/&quot;)
    (PROGRAM=dg4odbc)
   )
)
</pre></div>


<p>LD_LIBRARY_PATH must include the path to the ODBC driver library, and &#8220;PROGRAM&#8221; must be &#8220;dg4odbc&#8221;.</p>
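<p>A broken entry in that path is typically only noticed once the first connection over the database link fails, so it can be worth validating the directories up front. The following is a hypothetical helper, not part of the setup itself:</p>

```shell
#!/bin/sh
# Hypothetical helper: report which entries of an LD_LIBRARY_PATH-style
# string exist on disk, to validate the ENVS line before restarting the
# listener. Prints "exists: DIR" or "missing: DIR" per entry.
check_ldpath() {
  echo "$1" | tr ':' '\n' | while read -r dir; do
    if [ -d "$dir" ]; then
      echo "exists: $dir"
    else
      echo "missing: $dir"
    fi
  done
}

# The path from the listener configuration above:
check_ldpath "/usr/local/lib:/usr/lib64:/opt/oracle/product/21c/dbhomeXE/lib/"
```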



<p>Continue by adding the configuration for the heterogeneous services, which in my case goes here:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
&#x5B;oracle@ora ~]$ cat /opt/oracle/homes/OraDBHome21cXE/hs/admin/initpgdsn.ora 
HS_FDS_CONNECT_INFO = pgdsn
HS_FDS_TRACE_LEVEL = DEBUG
HS_FDS_TRACE_FILE_NAME = /tmp/hs.trc
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_LANGUAGE=AMERICAN_AMERICA.WE8ISO8859P15
set ODBCINI=/etc/odbc.ini
</pre></div>
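<p>Note that the file name is not arbitrary: heterogeneous services expects init&lt;SID&gt;.ora, where &lt;SID&gt; matches the SID_NAME from listener.ora (pgdsn in this setup). A small, hypothetical consistency check, run here against sample copies of the files under /tmp:</p>

```shell
#!/bin/sh
# Hypothetical check (sample files under /tmp stand in for the real
# ones): extract SID_NAME from a listener.ora snippet and verify the
# matching init<SID>.ora exists under hs/admin.
BASE=/tmp/hs_check
mkdir -p "$BASE/hs/admin"
printf '    (SID_NAME = pgdsn)\n' > "$BASE/listener_snippet.ora"
touch "$BASE/hs/admin/initpgdsn.ora"

sid=$(sed -n 's/.*SID_NAME *= *\([A-Za-z0-9_]*\).*/\1/p' "$BASE/listener_snippet.ora" | head -1)
if [ -f "$BASE/hs/admin/init${sid}.ora" ]; then
  echo "found init${sid}.ora"
else
  echo "no init${sid}.ora for SID ${sid}"
fi
```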


<p>Create the connection definition in tnsnames.ora:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,13,14,15,16,17,18]; title: ; notranslate">
&#x5B;oracle@ora ~]$ cat /opt/oracle/homes/OraDBHome21cXE/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /opt/oracle/homes/OraDBHome21cXE/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora.it.dbi-services.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )

pgdsn =
   (DESCRIPTION=
   (ADDRESS=(PROTOCOL=tcp)(HOST = ora.it.dbi-services.com)(PORT = 1521))
     (CONNECT_DATA=(SID=pgdsn))
     (HS=OK)
)
</pre></div>


<p>Restart the listener and make sure that the service &#8220;pgdsn&#8221; shows up:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,9]; title: ; notranslate">
&#x5B;oracle@ora ~]$ lsnrctl stop

LSNRCTL for Linux: Version 21.0.0.0.0 - Production on 06-MAR-2026 08:22:52

Copyright (c) 1991, 2021, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ora.it.dbi-services.com)(PORT=1521)))
The command completed successfully
&#x5B;oracle@ora ~]$ lsnrctl start

LSNRCTL for Linux: Version 21.0.0.0.0 - Production on 06-MAR-2026 08:22:53

Copyright (c) 1991, 2021, Oracle.  All rights reserved.

Starting /opt/oracle/product/21c/dbhomeXE//bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 21.0.0.0.0 - Production
System parameter file is /opt/oracle/homes/OraDBHome21cXE/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/ora/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora.it.dbi-services.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ora.it.dbi-services.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 21.0.0.0.0 - Production
Start Date                06-MAR-2026 08:22:53
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /opt/oracle/homes/OraDBHome21cXE/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/ora/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora.it.dbi-services.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service &quot;orcl&quot; has 1 instance(s).
  Instance &quot;orcl&quot;, status UNKNOWN, has 1 handler(s) for this service...
Service &quot;pgdsn&quot; has 1 instance(s).
  Instance &quot;pgdsn&quot;, status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

</pre></div>


<p>Finally, create a database link in Oracle and verify that you can query data from PostgreSQL:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,12,16]; title: ; notranslate">
&#x5B;oracle@ora ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Fri Mar 6 08:27:23 2026
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL&gt; create database link pglink connect to &quot;postgres&quot; identified by &quot;postgres&quot; using &#039;pgdsn&#039;;

Database link created.

SQL&gt; select &quot;datname&quot; from &quot;pg_database&quot;@pglink;

datname
--------------------------------------------------------------------------------
postgres
template1
template0

SQL&gt; 
</pre></div>


<p>That&#8217;s it.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/reading-data-from-postgresql-into-oracle/">Reading data from PostgreSQL into Oracle</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/reading-data-from-postgresql-into-oracle/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: pg_dumpall in binary format</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-pg_dumpall-in-binary-format/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-pg_dumpall-in-binary-format/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 27 Feb 2026 10:51:38 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43239</guid>

					<description><![CDATA[<p>One of the limitations of pg_dumpall up to PostgreSQL version 18 is that it can only dump in plain text, while pg_dump has been able to dump in various formats (plain, custom, tar and directory) for ages. This will change with PostgreSQL 19 because this was committed recently. The default is still plain [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_dumpall-in-binary-format/">PostgreSQL 19: pg_dumpall in binary format</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>One of the limitations of <a href="https://www.postgresql.org/docs/18/app-pg-dumpall.html" target="_blank" rel="noreferrer noopener">pg_dumpall</a> up to PostgreSQL version 18 is that it can only dump in plain text. While <a href="https://www.postgresql.org/docs/18/app-pgdump.html" target="_blank" rel="noreferrer noopener">pg_dump</a> has been able to dump in various formats (plain, custom, tar and directory) for ages, pg_dumpall was always limited to text mode. This will change with PostgreSQL 19 because <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=763aaa06f03401584d07db71256fc0ab47235cce" target="_blank" rel="noreferrer noopener">this was committed recently</a>.</p>



<p>The default is still plain text, but now you have the same options as with pg_dump:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_dumpall --help | grep -A 1 -w format
  -F, --format=c|d|t|p         output file format (custom, directory, tar,
                               plain text (default))
</pre></div>


<p>Let&#8217;s assume we have five user databases like this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] for i in {1..5}; do createdb ${i}; done
postgres@:/home/postgres/ &#x5B;pgdev] psql -l
                                                        List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    |   Locale    | ICU Rules |   Access privileges   
-----------+----------+----------+-----------------+-------------+-------------+-------------+-----------+-----------------------
 1         | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 2         | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 3         | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 4         | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 5         | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 postgres  | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 template0 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgre
</pre></div>


<p>&#8230; and a table in each:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] for i in {1..5}; do psql -c &quot;create table t${i} as select * from generate_series(1,100)&quot; ${i}; done
SELECT 100
SELECT 100
SELECT 100
SELECT 100
SELECT 100
</pre></div>


<p>Dumping that using the default mode of pg_dumpall results in a plain text dump, as usual:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_dumpall &gt; dmp.sql
postgres@:/home/postgres/ &#x5B;pgdev] head -50 dmp.sql | egrep -v &quot;^$|--&quot;
\restrict fkHQ08xl5cQnk99WM6prphaZBPdqlARNru7uAdgFRFMUJm7RdeI6Elk8zoOr8mg
SET default_transaction_read_only = off;
SET client_encoding = &#039;UTF8&#039;;
SET standard_conforming_strings = on;
CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;
\unrestrict fkHQ08xl5cQnk99WM6prphaZBPdqlARNru7uAdgFRFMUJm7RdeI6Elk8zoOr8mg
\connect template1
\restrict EFrJ3U9GH8T5SSHUVhWam0FHhJyxp1pAMXSTXTLVaiHayK4ASUZzweTUywMJbmq
</pre></div>


<p>Starting with PostgreSQL 19 this can be changed to any of the other supported formats. The tar format will give you a directory structure like this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,9]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_dumpall --format=t -f dmp_tar
postgres@:/home/postgres/ &#x5B;pgdev] ls -la dmp_tar/
total 12
drwx------. 1 postgres postgres   46 Feb 27 08:53 .
drwx------. 1 postgres postgres  436 Feb 27 08:53 ..
drwx------. 1 postgres postgres  110 Feb 27 08:53 databases
-rw-r--r--. 1 postgres postgres  448 Feb 27 08:53 map.dat
-rw-r--r--. 1 postgres postgres 1810 Feb 27 08:53 toc.glo
postgres@:/home/postgres/ &#x5B;pgdev] ls -la dmp_tar/databases/
total 56
drwx------. 1 postgres postgres  110 Feb 27 08:53 .
drwx------. 1 postgres postgres   46 Feb 27 08:53 ..
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 16388.tar
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 16389.tar
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 16390.tar
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 16391.tar
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 16392.tar
-rw-r--r--. 1 postgres postgres 6656 Feb 27 08:53 1.tar
-rw-r--r--. 1 postgres postgres 5632 Feb 27 08:53 5.tar
</pre></div>


<p>The &#8220;map&#8221; file maps OIDs to database names:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] cat dmp_tar/map.dat 
#################################################################
# map.dat
#
# This file maps oids to database names
#
# pg_restore will restore all the databases listed here, unless
# otherwise excluded. You can also inhibit restoration of a
# database by removing the line or commenting out the line with
# a # mark.
#################################################################
1 template1
16388 1
16389 2
16390 3
16391 4
16392 5
5 postgres
</pre></div>
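<p>Because the per-database archives are named by OID, map.dat is what you need to find the archive belonging to a given database. One quick way to pull the OID/name pairs out of it (shown here against a local sample copy; the real file lives in the dump directory):</p>

```shell
#!/bin/sh
# Parse a map.dat-style file: skip comment lines, print "OID NAME".
# The sample reproduces part of the map.dat shown above.
cat > /tmp/map_sample.dat <<'EOF'
# This file maps oids to database names
1 template1
16388 1
5 postgres
EOF

awk '!/^#/ && NF == 2 { print $1, $2 }' /tmp/map_sample.dat
```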


<p>The &#8220;toc&#8221; file, as usual, is the table of contents, which can be listed with <a href="https://www.postgresql.org/docs/18/app-pgrestore.html" target="_blank" rel="noreferrer noopener">pg_restore</a>:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_restore -l dmp_tar/toc.glo
;
; Archive created at 2026-02-27 08:53:53 CET
;     dbname: postgres
;     TOC Entries: 10
;     Compression: none
;     Dump Version: 1.16-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped by pg_dump version: 19devel dbi services build
;
;
; Selected TOC Entries:
;
1; 0 0 default_transaction_read_only - default_transaction_read_only 
2; 0 0 client_encoding - client_encoding 
3; 0 0 standard_conforming_strings - standard_conforming_strings 
4; 0 0 DROP_GLOBAL - DATABASE &quot;1&quot; 
5; 0 0 DROP_GLOBAL - DATABASE &quot;2&quot; 
6; 0 0 DROP_GLOBAL - DATABASE &quot;3&quot; 
7; 0 0 DROP_GLOBAL - DATABASE &quot;4&quot; 
8; 0 0 DROP_GLOBAL - DATABASE &quot;5&quot; 
9; 0 0 DROP_GLOBAL - ROLE postgres 
10; 0 0 ROLE - ROLE postgres 
</pre></div>


<p>This comes with the possibility (as with the plain text format) to restore individual databases out of this global dump, or to reload global objects only:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,16]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_restore --globals-only dmp_tar/ -d postgres --verbose
pg_restore: connecting to database for restore
pg_restore: executing SELECT pg_catalog.set_config(&#039;search_path&#039;, &#039;&#039;, false);
pg_restore: creating default_transaction_read_only &quot;default_transaction_read_only&quot;
pg_restore: creating client_encoding &quot;client_encoding&quot;
pg_restore: creating standard_conforming_strings &quot;standard_conforming_strings&quot;
pg_restore: creating ROLE &quot;ROLE postgres&quot;
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 10; 0 0 ROLE ROLE postgres (no owner)
pg_restore: error: could not execute query: ERROR:  role &quot;postgres&quot; already exists
Command was: CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;

pg_restore: database restoring skipped because option -g/--globals-only was specified
pg_restore: warning: errors ignored on restore: 1

postgres@:/home/postgres/ &#x5B;pgdev] pg_restore --globals-only dmp_tar/ -d postgres --verbose
pg_restore: connecting to database for restore
pg_restore: executing SELECT pg_catalog.set_config(&#039;search_path&#039;, &#039;&#039;, false);
pg_restore: creating default_transaction_read_only &quot;default_transaction_read_only&quot;
pg_restore: creating client_encoding &quot;client_encoding&quot;
pg_restore: creating standard_conforming_strings &quot;standard_conforming_strings&quot;
pg_restore: creating ROLE &quot;ROLE postgres&quot;
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 10; 0 0 ROLE ROLE postgres (no owner)
pg_restore: error: could not execute query: ERROR:  role &quot;postgres&quot; already exists
Command was: CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS;

pg_restore: database restoring skipped because option -g/--globals-only was specified
pg_restore: warning: errors ignored on restore: 1
</pre></div>


<p>Another option is to run multiple restore processes to load several databases in parallel. Nice to have that option, and as usual: thanks to all involved.</p>
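<p>One way to sketch such a parallel restore is to drive one pg_restore process per entry in map.dat, pointing each at the corresponding per-database archive. The loop below only echoes the commands (a dry run over a sample dump directory); the paths and the exact pg_restore invocation are assumptions to be adapted to your environment:</p>

```shell
#!/bin/sh
# Dry-run sketch: build one restore command per database listed in
# map.dat and launch them concurrently. Replace "echo" with the real
# invocation once paths and options fit your setup.
DUMP_DIR=/tmp/dmp_tar_sample
mkdir -p "$DUMP_DIR/databases"
cat > "$DUMP_DIR/map.dat" <<'EOF'
# oid name
16388 1
16389 2
EOF

awk '!/^#/ && NF == 2 { print $1, $2 }' "$DUMP_DIR/map.dat" | \
while read -r oid db; do
  echo pg_restore -d "$db" "$DUMP_DIR/databases/${oid}.tar" &
done
wait
```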
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_dumpall-in-binary-format/">PostgreSQL 19: pg_dumpall in binary format</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-pg_dumpall-in-binary-format/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
