<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Database Administration &amp; Monitoring Archives - dbi Blog</title>
	<atom:link href="https://www.dbi-services.com/blog/category/database-administration-monitoring/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dbi-services.com/blog/category/database-administration-monitoring/</link>
	<description></description>
	<lastBuildDate>Fri, 15 May 2026 19:38:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/cropped-favicon_512x512px-min-32x32.png</url>
	<title>Database Administration &amp; Monitoring Archives - dbi Blog</title>
	<link>https://www.dbi-services.com/blog/category/database-administration-monitoring/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; REST API with SQL Server 2025 (3/3)</title>
		<link>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-rest-api-with-sql-server-2025-3-3/</link>
					<comments>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-rest-api-with-sql-server-2025-3-3/#respond</comments>
		
		<dc:creator><![CDATA[Amine Haloui]]></dc:creator>
		<pubDate>Thu, 14 May 2026 21:39:18 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Operating systems]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=44525</guid>

					<description><![CDATA[<p>The proposed architecture consists of adding a small internal REST API on the Proxmox server to expose a controlled ZFS snapshot operation. SQL Server 2025 can then call this API through sp_invoke_external_rest_endpoint instead of running SSH commands directly or relying on an external tool. The role of the API is deliberately limited: it [&#8230;]</p>
<p>The post <a href="https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-rest-api-with-sql-server-2025-3-3/">SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; REST API with SQL Server 2025 (3/3)</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The proposed architecture consists of adding a small internal REST API on the Proxmox server to expose a controlled ZFS snapshot operation. SQL Server 2025 can then call this API through sp_invoke_external_rest_endpoint instead of running SSH commands directly or relying on an external tool.</p>



<p>The role of the API is deliberately limited: it receives a snapshot request, checks that the requested zvol is authorized, and then runs the zfs snapshot command on the Proxmox side. An allowlist restricts which ZFS volumes can be targeted, so a REST call can never manipulate arbitrary datasets on the server.</p>
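<p>The allowlist principle can be sketched in a few lines of Python. This is a simplified stand-in for the checks implemented later in the helper and the API, assuming the same file format (one dataset per line, comments starting with #):</p>

```python
def load_allowlist(text: str) -> set[str]:
    """Parse an allowlist: one dataset per line, '#' starts a comment."""
    return {
        line.strip()
        for line in text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }

def is_allowed(dataset: str, allowed: set[str]) -> bool:
    # Exact match only: a sibling, parent, or child dataset is rejected.
    return dataset in allowed

allowed = load_allowlist("# SQL zvols\nsqlpool/pve/vm-302-disk-0\n")
print(is_allowed("sqlpool/pve/vm-302-disk-0", allowed))  # True
print(is_allowed("sqlpool/pve/vm-302-disk-1", allowed))  # False
```

The exact-match comparison is the important detail: prefix or substring matching would let a request escape to neighboring datasets.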



<p>With this approach, we can reproduce behavior close to what an enterprise storage array provides, using only Proxmox and ZFS. Note that Proxmox does not natively offer the same level of SQL Server snapshot integration as a dedicated array such as Pure Storage, with its purpose-built mechanisms, so we have to build the orchestration layer ourselves. The REST API therefore acts as an adapter between SQL Server, which drives the snapshot backup workflow, and ZFS, which actually performs the storage-level snapshot.</p>



<h2 class="wp-block-heading" id="h-architecture">Architecture</h2>



<p>Here is a global overview of the architecture:</p>



<ul class="wp-block-list">
<li>SQL Server freezes the database I/Os</li>



<li>SQL Server 2025 calls the internal REST API</li>



<li>The REST API validates the request and checks the zvol allowlist</li>



<li>The API triggers the ZFS snapshot on Proxmox</li>



<li>The API returns the snapshot information to SQL Server</li>



<li>SQL Server creates the metadata-only backup</li>



<li>The database I/Os are released</li>
</ul>
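<p>The critical invariant in the sequence above is that the database I/Os are released even if a middle step fails. A minimal Python sketch of the orchestration; the four callables are hypothetical stand-ins for the real SQL and REST operations:</p>

```python
def snapshot_backup(freeze, take_snapshot, write_metadata_backup, unfreeze):
    """Run the snapshot backup sequence, guaranteeing the unfreeze step."""
    freeze()  # ALTER DATABASE ... SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON
    try:
        info = take_snapshot()           # REST call that triggers the zfs snapshot
        write_metadata_backup(info)      # BACKUP DATABASE ... WITH METADATA_ONLY
        return info
    finally:
        unfreeze()  # I/Os are released even if a step above failed

steps = []
snapshot_backup(
    freeze=lambda: steps.append("freeze"),
    take_snapshot=lambda: steps.append("snapshot") or {"status": "ok"},
    write_metadata_backup=lambda info: steps.append("backup"),
    unfreeze=lambda: steps.append("unfreeze"),
)
print(steps)  # ['freeze', 'snapshot', 'backup', 'unfreeze']
```

The try/finally shape is exactly what the T-SQL procedure and the PowerShell script later in this series implement with their CATCH blocks.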



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="998" height="1024" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-998x1024.png" alt="" class="wp-image-44526" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-998x1024.png 998w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-292x300.png 292w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-768x788.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-1496x1536.png 1496w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-65-1995x2048.png 1995w" sizes="(max-width: 998px) 100vw, 998px" /></figure>



<h2 class="wp-block-heading">REST API implementation</h2>



<p>On the Proxmox host, we install the required packages:</p>



<pre class="wp-block-code"><code>apt update
apt install -y python3-venv sudo openssl</code></pre>



<p>We create a dedicated user:</p>



<pre class="wp-block-code"><code>useradd --system \
  --home /opt/sql-zfs-api \
  --shell /usr/sbin/nologin \
  sqlsnap</code></pre>



<p>We create the following folders:</p>



<pre class="wp-block-code"><code>mkdir -p /opt/sql-zfs-api
mkdir -p /etc/sql-zfs-api</code></pre>



<p>We declare the authorized zvols:</p>



<pre class="wp-block-code"><code>cat &gt;/etc/sql-zfs-api/allowed-zvols &lt;&lt;'EOF'
sqlpool/pve/vm-302-disk-0
EOF</code></pre>



<p>We restrict the allowlist file to root only:</p>



<pre class="wp-block-code"><code>chown root:root /etc/sql-zfs-api/allowed-zvols
chmod 600 /etc/sql-zfs-api/allowed-zvols</code></pre>



<p>Then we create the secured ZFS helper. This script is executed as root through sudo, but it rejects any dataset that is not defined in the allowlist.</p>



<pre class="wp-block-code"><code>cat &gt;/usr/local/sbin/sql-zfs-helper &lt;&lt;'EOF'
#!/usr/bin/env bash
set -euo pipefail

ALLOW_FILE="/etc/sql-zfs-api/allowed-zvols"
LOCK_FILE="/run/sql-zfs-helper.lock"

die() {
  echo "$*" &gt;&amp;2
  exit 1
}

exec 9&gt;"$LOCK_FILE"
flock -n 9 || die "another snapshot operation is already running"

&#091;&#091; -r "$ALLOW_FILE" ]] || die "allowlist not readable: $ALLOW_FILE"

mapfile -t ALLOWED_DATASETS &lt; &lt;(grep -Ev '^\s*(#|$)' "$ALLOW_FILE")

is_allowed() {
  local ds="$1"
  local allowed
  for allowed in "${ALLOWED_DATASETS&#091;@]}"; do
    &#091;&#091; "$ds" == "$allowed" ]] &amp;&amp; return 0
  done
  return 1
}

valid_snapname() {
  &#091;&#091; "$1" =~ ^&#091;A-Za-z0-9_.:-]{1,120}$ ]]
}

ACTION="${1:-}"
shift || true

case "$ACTION" in
  snapshot)
    SNAPNAME="${1:-}"
    shift || true

    valid_snapname "$SNAPNAME" || die "invalid snapshot name: $SNAPNAME"
    &#091;&#091; "$#" -ge 1 ]] || die "no zvol specified"
    &#091;&#091; "$#" -le 8 ]] || die "too many zvols"

    SNAPSHOTS=()

    for DS in "$@"; do
      is_allowed "$DS" || die "dataset not allowed: $DS"
      /sbin/zfs list -H -t volume -o name "$DS" &gt;/dev/null 2&gt;&amp;1 || die "zvol not found: $DS"

      FULLSNAP="${DS}@${SNAPNAME}"

      if /sbin/zfs list -H -t snapshot -o name "$FULLSNAP" &gt;/dev/null 2&gt;&amp;1; then
        die "snapshot already exists: $FULLSNAP"
      fi

      SNAPSHOTS+=("$FULLSNAP")
    done

    /sbin/zfs snapshot "${SNAPSHOTS&#091;@]}"
    /sbin/zfs hold sqlsnap "${SNAPSHOTS&#091;@]}"

    printf '{"status":"ok","snapshots":&#091;'
    SEP=""
    for S in "${SNAPSHOTS&#091;@]}"; do
      printf '%s"%s"' "$SEP" "$S"
      SEP=","
    done
    printf ']}\n'
    ;;

  list)
    /sbin/zfs list -H -t snapshot -o name -r sqlpool | grep '@sql_' || true
    ;;

  *)
    die "usage: sql-zfs-helper snapshot SNAPNAME ZVOL &#091;ZVOL...]"
    ;;
esac
EOF

chown root:root /usr/local/sbin/sql-zfs-helper
chmod 750 /usr/local/sbin/sql-zfs-helper
</code></pre>



<p>We allow the sqlsnap user to run only this helper through sudo:</p>



<pre class="wp-block-code"><code>cat &gt;/etc/sudoers.d/sql-zfs-helper &lt;&lt;'EOF'
sqlsnap ALL=(root) NOPASSWD: /usr/local/sbin/sql-zfs-helper *
EOF

chmod 440 /etc/sudoers.d/sql-zfs-helper
visudo -cf /etc/sudoers.d/sql-zfs-helper</code></pre>



<p>We install FastAPI and uvicorn in a dedicated virtual environment:</p>



<pre class="wp-block-code"><code>python3 -m venv /opt/sql-zfs-api/venv
/opt/sql-zfs-api/venv/bin/pip install fastapi "uvicorn&#091;standard]"</code></pre>



<p>We create the application file:</p>



<pre class="wp-block-code"><code>cat &gt;/opt/sql-zfs-api/app.py &lt;&lt;'EOF'
import os
import re
import json
import socket
import secrets
import subprocess
from datetime import datetime, timezone
from fastapi import FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

API_KEY = os.environ.get("SQL_ZFS_API_KEY", "")
ALLOW_FILE = "/etc/sql-zfs-api/allowed-zvols"
SNAP_RE = re.compile(r"^&#091;A-Za-z0-9_.:-]{1,120}$")

app = FastAPI(title="SQL ZFS Snapshot API", version="1.0.0")


class SnapshotRequest(BaseModel):
    database: str = Field(..., min_length=1, max_length=128)
    vmid: int = 302
    snapname: str = Field(..., min_length=1, max_length=120)
    zvols: list&#091;str] = Field(..., min_length=1, max_length=8)


def load_allowed_zvols() -&gt; set&#091;str]:
    with open(ALLOW_FILE, "r", encoding="utf-8") as f:
        return {
            line.strip()
            for line in f
            if line.strip() and not line.strip().startswith("#")
        }


def check_api_key(x_sqlsnap_key: str | None) -&gt; None:
    if not API_KEY:
        raise HTTPException(status_code=500, detail="API key not configured")

    if not x_sqlsnap_key:
        raise HTTPException(status_code=401, detail="missing API key")

    if not secrets.compare_digest(x_sqlsnap_key, API_KEY):
        raise HTTPException(status_code=403, detail="invalid API key")


@app.get("/health")
def health():
    return {
        "status": "ok",
        "host": socket.gethostname(),
        "utc": datetime.now(timezone.utc).isoformat(),
    }


@app.post("/v1/sql-zfs/snapshot")
def create_snapshot(
    req: SnapshotRequest,
    x_sqlsnap_key: str | None = Header(default=None, alias="x-sqlsnap-key"),
):
    check_api_key(x_sqlsnap_key)

    if not SNAP_RE.fullmatch(req.snapname):
        raise HTTPException(status_code=400, detail="invalid snapname")

    allowed = load_allowed_zvols()

    for zvol in req.zvols:
        if zvol not in allowed:
            raise HTTPException(status_code=403, detail=f"zvol not allowed: {zvol}")

    cmd = &#091;
        "sudo",
        "/usr/local/sbin/sql-zfs-helper",
        "snapshot",
        req.snapname,
        *req.zvols,
    ]

    try:
        completed = subprocess.run(
            cmd,
            text=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            timeout=30,
            check=False,
        )
    except subprocess.TimeoutExpired:
        raise HTTPException(status_code=504, detail="zfs snapshot timeout")

    if completed.returncode != 0:
        raise HTTPException(
            status_code=500,
            detail={
                "error": completed.stderr.strip(),
                "stdout": completed.stdout.strip(),
            },
        )

    snapshots = &#091;f"{zvol}@{req.snapname}" for zvol in req.zvols]

    return {
        "status": "ok",
        "database": req.database,
        "vmid": req.vmid,
        "snapname": req.snapname,
        "snapshots": snapshots,
        "media_description": "zfs|" + socket.gethostname() + "|" + ";".join(snapshots),
    }
EOF

chown -R root:root /opt/sql-zfs-api
chmod 755 /opt/sql-zfs-api
chmod 644 /opt/sql-zfs-api/app.py
</code></pre>



<p>We generate the API key:</p>



<pre class="wp-block-code"><code>APIKEY="$(openssl rand -hex 32)"
echo "$APIKEY"</code></pre>



<p>We create the environment file:</p>



<pre class="wp-block-code"><code>cat &gt;/etc/sql-zfs-api/sql-zfs-api.env &lt;&lt;EOF
SQL_ZFS_API_KEY=$APIKEY
EOF

chown root:root /etc/sql-zfs-api/sql-zfs-api.env
chmod 600 /etc/sql-zfs-api/sql-zfs-api.env</code></pre>



<p>Keep the generated key somewhere safe: it will be needed on the SQL Server side when creating the credential.</p>



<p>Next, we enable HTTPS: the sp_invoke_external_rest_endpoint documentation specifies that only HTTPS endpoints are supported, so the API must serve TLS.</p>



<pre class="wp-block-code"><code>openssl req -x509 -newkey rsa:4096 -sha256 -days 360 -nodes \
  -keyout /etc/sql-zfs-api/tls.key \
  -out /etc/sql-zfs-api/tls.crt \
  -subj "/CN=promox1" \
  -addext "subjectAltName=DNS:promox1,IP:192.168.1.110"

chown root:sqlsnap /etc/sql-zfs-api/tls.key /etc/sql-zfs-api/tls.crt
chmod 640 /etc/sql-zfs-api/tls.key
chmod 644 /etc/sql-zfs-api/tls.crt</code></pre>



<p>The /etc/sql-zfs-api/tls.crt certificate must be imported into the trusted root certification authorities store on the SQL Server host; otherwise, the HTTPS call fails certificate validation.</p>



<p>We create the systemd service:</p>



<pre class="wp-block-code"><code>cat &gt;/etc/systemd/system/sql-zfs-api.service &lt;&lt;'EOF'
&#091;Unit]
Description=SQL Server to ZFS Snapshot API
After=network-online.target
Wants=network-online.target

&#091;Service]
User=sqlsnap
Group=sqlsnap
WorkingDirectory=/opt/sql-zfs-api
EnvironmentFile=/etc/sql-zfs-api/sql-zfs-api.env
ExecStart=/opt/sql-zfs-api/venv/bin/uvicorn app:app --host 0.0.0.0 --port 8443 --ssl-keyfile /etc/sql-zfs-api/tls.key --ssl-certfile /etc/sql-zfs-api/tls.crt
Restart=on-failure
RestartSec=3

&#091;Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now sql-zfs-api
systemctl status sql-zfs-api
</code></pre>



<p>We check the status of our API:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="697" height="186" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-67.png" alt="" class="wp-image-44528" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-67.png 697w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-67-300x80.png 300w" sizes="(max-width: 697px) 100vw, 697px" /></figure>



<p>We can test the API with Invoke-RestMethod; PowerShell 7 is required for the -SkipCertificateCheck parameter:</p>



<pre class="wp-block-code"><code>$headers = @{
    "Content-Type"  = "application/json"
    "x-sqlsnap-key" = "MyKey"
}

$body = @{
    database = "StackOverflow"
    vmid     = 302
    snapname = "StackOverflow_test010"
    zvols    = @("sqlpool/pve/vm-302-disk-0")
} | ConvertTo-Json -Depth 5

Invoke-RestMethod `
    -Uri "https://192.168.1.110:8443/v1/sql-zfs/snapshot" `
    -Method Post `
    -Headers $headers `
    -Body $body `
    -ContentType "application/json" `
    -SkipCertificateCheck
</code></pre>



<p>This gives:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="833" height="510" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-80.png" alt="" class="wp-image-44590" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-80.png 833w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-80-300x184.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-80-768x470.png 768w" sizes="(max-width: 833px) 100vw, 833px" /></figure>



<h2 class="wp-block-heading" id="h-test-from-sql-server">Test from SQL Server</h2>



<p>The certificate generated on Proxmox needs to be imported on the SQL Server host. In my case, it was located here:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="404" height="79" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-69.png" alt="" class="wp-image-44530" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-69.png 404w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-69-300x59.png 300w" sizes="auto, (max-width: 404px) 100vw, 404px" /></figure>



<p>I then imported it on Windows Server:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="788" height="149" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-70.png" alt="" class="wp-image-44531" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-70.png 788w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-70-300x57.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-70-768x145.png 768w" sizes="auto, (max-width: 788px) 100vw, 788px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="118" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-71-1024x118.png" alt="" class="wp-image-44532" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-71-1024x118.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-71-300x34.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-71-768x88.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-71.png 1384w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>For testing purposes, I kept things simple. On the SQL Server side, we create a database to host the stored procedure that will interact with the API. In my case, I created a database called dbi_tools:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="244" height="131" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-72.png" alt="" class="wp-image-44533" /></figure>



<p>This database holds a DATABASE SCOPED CREDENTIAL, which securely stores the authentication information required to call the REST API from SQL Server, in this case the API key:</p>



<pre class="wp-block-code"><code>USE &#091;dbi_tools]
GO

IF NOT EXISTS (
    SELECT 1
    FROM sys.symmetric_keys
    WHERE name = '##MS_DatabaseMasterKey##'
)
BEGIN
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyStrongPassword_%99';
END
GO

CREATE DATABASE SCOPED CREDENTIAL &#091;https://192.168.1.110:8443/v1/sql-zfs/snapshot]
WITH
    IDENTITY = 'HTTPEndpointHeaders',
    SECRET = '{"x-sqlsnap-key":"MyAPIKey"}';
GO</code></pre>



<p>We then create a stored procedure to encapsulate the code used to call the API:</p>



<pre class="wp-block-code"><code>USE dbi_tools;
GO

CREATE OR ALTER PROCEDURE dbo.usp_BackupDatabase_WithZfsSnapshot
    @DatabaseName sysname,
    @BackupDirectory nvarchar(4000) = N'D:\Backups\'
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Url nvarchar(4000) =
        N'https://192.168.1.110:8443/v1/sql-zfs/snapshot';

    DECLARE @Vmid int = 302;

    DECLARE @ZvolsJson nvarchar(max) =
        N'&#091;"sqlpool/pve/vm-302-disk-0"]';

    DECLARE @Stamp varchar(20) =
        REPLACE(REPLACE(CONVERT(varchar(19), SYSUTCDATETIME(), 126), '-', ''), ':', '') + 'Z';

    DECLARE @SafeDbName nvarchar(128) =
        REPLACE(REPLACE(REPLACE(@DatabaseName, N' ', N'_'), N'&#091;', N''), N']', N'');

    DECLARE @SnapName nvarchar(128) =
        CONCAT(N'sql_', @SafeDbName, N'_', @Stamp);

    DECLARE @BackupFile nvarchar(4000) =
        CONCAT(@BackupDirectory, N'\', @SafeDbName, N'_', @Stamp, N'.bkm');

    DECLARE @Payload nvarchar(max) =
    (
        SELECT
            @DatabaseName AS &#091;database],
            @Vmid AS &#091;vmid],
            @SnapName AS &#091;snapname],
            JSON_QUERY(@ZvolsJson) AS &#091;zvols]
        FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
    );

    DECLARE @ReturnCode int;
    DECLARE @Response nvarchar(max);
    DECLARE @SnapshotList nvarchar(max);

    SELECT @SnapshotList =
        STRING_AGG(CONCAT(&#091;value], N'@', @SnapName), N';')
    FROM OPENJSON(@ZvolsJson);

    DECLARE @MediaDescription nvarchar(max) =
        CONCAT(N'zfs|promox1|', @SnapshotList);

    DECLARE @Sql nvarchar(max);

    BEGIN TRY
        SET @Sql =
            N'ALTER DATABASE ' + QUOTENAME(@DatabaseName) +
            N' SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;';

        EXEC sys.sp_executesql @Sql;

        EXEC @ReturnCode = sys.sp_invoke_external_rest_endpoint
            @url = @Url,
            @method = N'POST',
            @headers = N'{"Content-Type":"application/json","Accept":"application/json"}',
            @payload = @Payload,
            @credential = &#091;https://192.168.1.110:8443/v1/sql-zfs/snapshot],
            @timeout = 30,
            @response = @Response OUTPUT;

        IF @ReturnCode &lt;&gt; 0
        BEGIN
            DECLARE @Err nvarchar(max) =
                CONCAT(N'ZFS snapshot API failed. ReturnCode=', @ReturnCode, N' Response=', @Response);
            THROW 51001, @Err, 1;
        END;

        SET @Sql =
            N'BACKUP DATABASE ' + QUOTENAME(@DatabaseName) + N'
              TO DISK = @BackupFile
              WITH METADATA_ONLY,
                   FORMAT,
                   MEDIANAME = @MediaName,
                   MEDIADESCRIPTION = @MediaDescription,
                   NAME = @BackupName;';

        EXEC sys.sp_executesql
            @Sql,
            N'@BackupFile nvarchar(4000),
              @MediaName nvarchar(128),
              @MediaDescription nvarchar(max),
              @BackupName nvarchar(128)',
            @BackupFile = @BackupFile,
            @MediaName = @SnapName,
            @MediaDescription = @MediaDescription,
            @BackupName = @SnapName;

        SELECT
            @DatabaseName AS database_name,
            @SnapName AS zfs_snapshot_name,
            @SnapshotList AS zfs_snapshots,
            @BackupFile AS metadata_backup_file,
            @MediaDescription AS media_description,
            @Response AS api_response;
    END TRY
    BEGIN CATCH
        IF DATABASEPROPERTYEX(@DatabaseName, 'IsDatabaseSuspendedForSnapshotBackup') = 1
        BEGIN
            SET @Sql =
                N'ALTER DATABASE ' + QUOTENAME(@DatabaseName) +
                N' SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF;';

            EXEC sys.sp_executesql @Sql;
        END;

        THROW;
    END CATCH
END;
GO
</code></pre>



<p>We then call the stored procedure:</p>



<pre class="wp-block-code"><code>EXEC dbi_tools.dbo.usp_BackupDatabase_WithZfsSnapshot
    @DatabaseName = N'StackOverflow',
    @BackupDirectory = N'D:\Backups\';</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="137" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-73-1024x137.png" alt="" class="wp-image-44534" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-73-1024x137.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-73-300x40.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-73-768x102.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-73.png 1432w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The backup was generated:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="630" height="149" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-74.png" alt="" class="wp-image-44535" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-74.png 630w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-74-300x71.png 300w" sizes="auto, (max-width: 630px) 100vw, 630px" /></figure>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="777" height="411" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-75.png" alt="" class="wp-image-44536" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-75.png 777w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-75-300x159.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-75-768x406.png 768w" sizes="auto, (max-width: 777px) 100vw, 777px" /></figure>
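<p>The MEDIADESCRIPTION recorded in the metadata backup (zfs|&lt;host&gt;|&lt;snap1&gt;;&lt;snap2&gt;;...) carries everything a restore script needs to find the matching ZFS snapshots. A small Python sketch of parsing it, as an illustration of how a restore tool could consume the convention (not code from the post):</p>

```python
def parse_media_description(desc: str) -> tuple[str, list[str]]:
    """Split 'zfs|host|snap1;snap2' into the host and the snapshot list."""
    kind, host, snaps = desc.split("|", 2)
    if kind != "zfs":
        raise ValueError(f"unexpected media description kind: {kind}")
    return host, [s for s in snaps.split(";") if s]

host, snaps = parse_media_description(
    "zfs|promox1|sqlpool/pve/vm-302-disk-0@sql_StackOverflow_20260514T213918Z"
)
print(host)   # promox1
print(snaps)  # ['sqlpool/pve/vm-302-disk-0@sql_StackOverflow_20260514T213918Z']
```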



<h2 class="wp-block-heading" id="h-references">References</h2>



<p><a href="https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-invoke-external-rest-endpoint-transact-sql?view=sql-server-ver17&amp;tabs=request-headers">sp_invoke_external_rest_endpoint</a></p>



<p>Thank you. <a href="https://www.linkedin.com/in/amine-haloui-76968056/">Amine Haloui</a></p>
<p>The post <a href="https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-rest-api-with-sql-server-2025-3-3/">SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; REST API with SQL Server 2025 (3/3)</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-rest-api-with-sql-server-2025-3-3/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; Powershell implementation (2/3)</title>
		<link>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-2-3/</link>
					<comments>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-2-3/#respond</comments>
		
		<dc:creator><![CDATA[Amine Haloui]]></dc:creator>
		<pubDate>Thu, 14 May 2026 21:35:41 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Operating systems]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[PowerShell]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=44497</guid>

					<description><![CDATA[<p>In the previous section, we discussed the drawbacks of running the commands manually: the process took too long, and the database I/Os stayed frozen for the whole duration of the manual steps. To address this issue, it is possible to automate the solution with PowerShell. The idea is to automate the different operations [&#8230;]</p>
<p>The post <a href="https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-2-3/">SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; Powershell implementation (2/3)</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In the previous section, we discussed the drawbacks of running the commands manually: the process took too long, and the database I/Os stayed frozen for the whole duration of the manual steps.</p>



<p>To address this issue, it is possible to automate the solution with PowerShell. The idea is to automate the different operations involved in the snapshot backup and restore process.</p>



<p>We will use two scripts:</p>



<ul class="wp-block-list">
<li>One script to perform the backups and create the snapshots.</li>



<li>One script to perform the restores.</li>
</ul>



<h2 class="wp-block-heading" id="h-backup-process">Backup process</h2>



<p>Here is how the backup process works:</p>



<ul class="wp-block-list">
<li>We connect to the corresponding SQL Server instance.</li>



<li>We change the state of the database using ALTER DATABASE &#8230; SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON. At this point, the I/Os are frozen.</li>



<li>We connect to the hypervisor through SSH.</li>



<li>We create the snapshot.</li>



<li>We back up the database using BACKUP DATABASE &#8230; WITH METADATA_ONLY.</li>



<li>We change the state of the database using ALTER DATABASE &#8230; SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF. At this point, the I/Os are unfrozen.</li>
</ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="627" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-1024x627.png" alt="" class="wp-image-44499" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-1024x627.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-300x184.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-768x470.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-1536x941.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-50-2048x1254.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">PowerShell implementation (backup)</h2>



<p>Here is the code used to perform the backup:</p>



<pre class="wp-block-code"><code>param(
    &#091;string]$SqlInstance = "VM-WS25-SQL2",
    &#091;string]$Database    = "StackOverflow",
    &#091;string]$BackupDir   = "D:\Backups",
    &#091;string]$PveHost     = "192.168.1.110",
    &#091;string]$PveUser     = "MyUser",
    &#091;string&#091;]]$Zvols     = @("sqlpool/pve/vm-302-disk-0")
)

$Timestamp = Get-Date -Format "yyyyMMddTHHmmss"
$SnapName  = "sql_${Database}_${Timestamp}"

$DbSafe = $Database.Replace("]", "]]")
$BackupFile = Join-Path $BackupDir "${Database}_${Timestamp}.bkm"

$ZfsSnapshots = $Zvols | ForEach-Object { "$_@$SnapName" }
$ZfsSnapshotArgs = $ZfsSnapshots -join " "

$MediaDescription = "zfs|$PveHost|$ZfsSnapshotArgs"

$BackupFileSql = $BackupFile.Replace("'", "''")
$MediaSql = $MediaDescription.Replace("'", "''")

$connString = "Server=$SqlInstance;Database=master;Integrated Security=True;TrustServerCertificate=True;Application Name=ZFS-TSQL-Snapshot;"
$conn = New-Object System.Data.SqlClient.SqlConnection $connString

function Invoke-SqlNonQuery {
    param(&#091;string]$Sql)

    $cmd = $conn.CreateCommand()
    $cmd.CommandTimeout = 0
    $cmd.CommandText = $Sql
    &#091;void]$cmd.ExecuteNonQuery()
}

try {
    $conn.Open()

    Write-Host "Freezing SQL database writes..."
    Invoke-SqlNonQuery "ALTER DATABASE &#091;$DbSafe] SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;"

    Write-Host "Taking ZFS snapshot on Proxmox..."
    ssh "$PveUser@$PveHost" "zfs snapshot $ZfsSnapshotArgs &amp;&amp; zfs hold sqlsnap $ZfsSnapshotArgs"

    if ($LASTEXITCODE -ne 0) {
        throw "ZFS snapshot failed on $PveHost"
    }

    Write-Host "Writing SQL metadata backup..."
    # A successful BACKUP ... WITH METADATA_ONLY automatically releases the I/O suspend

    Invoke-SqlNonQuery @"
BACKUP DATABASE &#091;$DbSafe]
TO DISK = N'$BackupFileSql'
WITH METADATA_ONLY,
     MEDIADESCRIPTION = N'$MediaSql',
     NAME = N'$SnapName';
"@

    Write-Host "Snapshot backup completed:"
    Write-Host "  Snapshot: $ZfsSnapshotArgs"
    Write-Host "  Metadata: $BackupFile"
}
catch {
    Write-Warning $_

    try {
        Write-Warning "Attempting to unfreeze SQL database..."
        Invoke-SqlNonQuery "ALTER DATABASE &#091;$DbSafe] SET SUSPEND_FOR_SNAPSHOT_BACKUP = OFF;"
    }
    catch {
        Write-Warning "Could not unfreeze cleanly. Check SQL Server error log."
    }

    throw
}
finally {
    $conn.Close()
}</code></pre>



<h2 class="wp-block-heading">Restore process</h2>



<p>Here is how the restore process works:</p>



<ul class="wp-block-list">
<li>We connect to the corresponding SQL Server instance.</li>



<li>We take the database offline.</li>



<li>The volume dedicated to the StackOverflow database is taken offline.</li>



<li>We connect to the hypervisor through SSH.</li>



<li>We roll back the corresponding snapshot.</li>



<li>We restore the database using the corresponding backup, which was created at the same time as the snapshot.</li>
</ul>
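<p>Condensed to its essence, and using the example values from the script's defaults (the Windows disk number is illustrative), the sequence corresponds to the following commands:</p>



<pre class="wp-block-code"><code>-- 1. In SQL Server: detach users and take the database offline
ALTER DATABASE [StackOverflow] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [StackOverflow] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- 2. In Windows (PowerShell): take the data disk offline
--      Set-Disk -Number 1 -IsOffline $true

-- 3. On the Proxmox host (via SSH): roll back the snapshot
--      zfs rollback -r sqlpool/pve/vm-302-disk-0@sql_StackOverflow_20260514T122642

-- 4. Back in Windows: bring the disk online, then restore the metadata-only backup
--      Set-Disk -Number 1 -IsOffline $false
RESTORE DATABASE [StackOverflow]
FROM DISK = N'D:\Backups\StackOverflow_20260514T122642.bkm'
WITH METADATA_ONLY, REPLACE;</code></pre>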



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="627" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-1024x627.png" alt="" class="wp-image-44501" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-1024x627.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-300x184.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-768x470.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-1536x941.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-51-2048x1254.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Powershell implementation (restore)</h2>



<p>Here is the code used to perform the restore:</p>



<pre class="wp-block-code"><code>param(
    &#091;string]$SqlInstance = "VM-WS25-SQL2",
    &#091;string]$Database    = "StackOverflow",
    &#091;string]$BackupFile  = "D:\Backups\StackOverflow_20260514T122642.bkm",
    &#091;string]$SnapName    = "sql_StackOverflow_20260514T122642",
    &#091;string]$PveHost     = "192.168.1.110",
    &#091;string]$PveUser     = "MyUser",
    &#091;string&#091;]]$Zvols     = @("sqlpool/pve/vm-302-disk-0"),
    &#091;string&#091;]]$DatabaseDriveLetters = @("T"),
    &#091;switch]$NoRecovery
)

$ErrorActionPreference = "Stop"

function Assert-SafeName {
    param(
        &#091;string]$Value,
        &#091;string]$Name,
        &#091;string]$Pattern
    )

    if ($Value -notmatch $Pattern) {
        throw "$Name contained not allowed characters : $Value"
    }
}

function Normalize-DriveLetter {
    param(&#091;string]$DriveLetter)

    $letter = $DriveLetter.Trim().TrimEnd(":").ToUpperInvariant()

    if ($letter -notmatch '^&#091;A-Z]$') {
        throw "Drive letter invalid : $DriveLetter"
    }

    return $letter
}

function Get-DiskForDriveLetter {
    param(&#091;string]$DriveLetter)

    $letter = Normalize-DriveLetter $DriveLetter

    $partition = Get-Partition -DriveLetter $letter -ErrorAction Stop
    $disk = $partition | Get-Disk -ErrorAction Stop

    return &#091;pscustomobject]@{
        DriveLetter = $letter
        DiskNumber  = &#091;int]$disk.Number
        IsOffline   = &#091;bool]$disk.IsOffline
        FriendlyName = $disk.FriendlyName
        Size        = $disk.Size
    }
}

function Invoke-SshChecked {
    param(&#091;string]$Command)

    Write-Host "SSH $PveUser@$PveHost :: $Command"

    &amp; ssh "$PveUser@$PveHost" "$Command"

    if ($LASTEXITCODE -ne 0) {
        throw "SSH command failed with code $LASTEXITCODE : $Command"
    }
}

function New-SqlConnection {
    $connString = "Server=$SqlInstance;Database=master;Integrated Security=True;TrustServerCertificate=True;Application Name=ZFS-TSQL-Restore-NoVmRestart;"
    return New-Object System.Data.SqlClient.SqlConnection $connString
}

function Invoke-SqlNonQuery {
    param(&#091;string]$Sql)

    $conn = New-SqlConnection

    try {
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandTimeout = 0
        $cmd.CommandText = $Sql
        &#091;void]$cmd.ExecuteNonQuery()
    }
    finally {
        $conn.Close()
    }
}

function Invoke-SqlScalar {
    param(&#091;string]$Sql)

    $conn = New-SqlConnection

    try {
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandTimeout = 0
        $cmd.CommandText = $Sql
        return $cmd.ExecuteScalar()
    }
    finally {
        $conn.Close()
    }
}

function Set-DatabaseDisksOffline {
    param(&#091;object&#091;]]$DiskInfos)

    $offlinedByScript = @()

    foreach ($diskInfo in ($DiskInfos | Sort-Object DiskNumber -Unique)) {
        if ($diskInfo.IsOffline) {
            Write-Host "Disque $($diskInfo.DiskNumber) déjà offline. Lecteur $($diskInfo.DriveLetter):"
            continue
        }

        Write-Host "Taking the Windows disk offline $($diskInfo.DiskNumber), drive $($diskInfo.DriveLetter):"
        Set-Disk -Number $diskInfo.DiskNumber -IsOffline $true

        $offlinedByScript += $diskInfo
    }

    return $offlinedByScript
}

function Set-DatabaseDisksOnline {
    param(&#091;object&#091;]]$DiskInfos)

    foreach ($diskInfo in ($DiskInfos | Sort-Object DiskNumber -Unique)) {
        Write-Host "Bringing the Windows disk back online. $($diskInfo.DiskNumber), drive $($diskInfo.DriveLetter):"
        Set-Disk -Number $diskInfo.DiskNumber -IsOffline $false
    }

    Write-Host "Update-HostStorageCache..."
    Update-HostStorageCache
}

Assert-SafeName -Value $SnapName -Name "SnapName" -Pattern '^&#091;A-Za-z0-9_.:-]{1,160}$'

foreach ($zvol in $Zvols) {
    Assert-SafeName -Value $zvol -Name "Zvol" -Pattern '^&#091;A-Za-z0-9_.:/-]{1,240}$'
}

$DbQuoted = "&#091;" + $Database.Replace("]", "]]") + "]"
$DbLiteral = $Database.Replace("'", "''")
$BackupFileSql = $BackupFile.Replace("'", "''")

$ZfsSnapshots = $Zvols | ForEach-Object { "$_@$SnapName" }
$ZfsSnapshotArgs = ($ZfsSnapshots | ForEach-Object { "'$_'" }) -join " "

$RecoveryOption = if ($NoRecovery) { "NORECOVERY" } else { "RECOVERY" }

$DatabaseDiskInfos = @()
$DisksOfflinedByScript = @()

Write-Host ""
Write-Host "Restore SQL Server from a ZFS snapshot, without restarting the VM"
Write-Host "SQL Instance : $SqlInstance"
Write-Host "Database     : $Database"
Write-Host "BackupFile   : $BackupFile"
Write-Host "DB volumes   : $($DatabaseDriveLetters -join ', ')"
Write-Host "Snapshots    :"
$ZfsSnapshots | ForEach-Object { Write-Host "  $_" }
Write-Host ""

try {
    Write-Host "Checking ZFS snapshots..."
    Invoke-SshChecked "zfs list -H -t snapshot -o name $ZfsSnapshotArgs &gt;/dev/null"

    Write-Host "Identifying Windows disks containing SQL Server files..."
    foreach ($driveLetter in $DatabaseDriveLetters) {
        $diskInfo = Get-DiskForDriveLetter $driveLetter
        $DatabaseDiskInfos += $diskInfo

        Write-Host "Drive $($diskInfo.DriveLetter): -&gt; Windows disk $($diskInfo.DiskNumber) &#091;$($diskInfo.FriendlyName)]"
    }

    $backupDrive = $null
    if ($BackupFile -match '^(&#091;A-Za-z]):\\') {
        $backupDrive = Normalize-DriveLetter $Matches&#091;1]

        try {
            $backupDiskInfo = Get-DiskForDriveLetter $backupDrive
            $targetDiskNumbers = @($DatabaseDiskInfos | ForEach-Object { $_.DiskNumber } | Select-Object -Unique)

            if ($targetDiskNumbers -contains $backupDiskInfo.DiskNumber) {
                throw @"
The backup file $BackupFile is located on drive $backupDrive, which is on the same Windows disk as the SQL Server data volume.
Taking the data disk offline would make the .bkm file inaccessible, and a rollback could also make the .bkm file disappear.
Move the .bkm file to C:, a network share, or another disk that is not rolled back.
"@
            }
        }
        catch {
            throw
        }
    }

    Write-Host "Checking whether the SQL Server database exists..."
    $DbExists = Invoke-SqlScalar "SELECT CASE WHEN DB_ID(N'$DbLiteral') IS NULL THEN 0 ELSE 1 END;"

    if ($DbExists -eq 1) {
        Write-Host "Taking database $Database OFFLINE..."
        Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE $DbQuoted SET OFFLINE WITH ROLLBACK IMMEDIATE;
"@
    }
    else {
        Write-Host "Database $Database does not exist in SQL Server. Continuing with disk offline and ZFS rollback."
    }

    Write-Host "Taking Windows disks containing MDF/LDF files offline..."
    $DisksOfflinedByScript = Set-DatabaseDisksOffline -DiskInfos $DatabaseDiskInfos

    Write-Host "Rolling back ZFS snapshot..."
    $RollbackCommands = ($ZfsSnapshots | ForEach-Object { "zfs rollback -r '$_'" }) -join "; "
    Invoke-SshChecked "set -e; $RollbackCommands"

    Write-Host "Bringing Windows disks back online..."
    Set-DatabaseDisksOnline -DiskInfos $DisksOfflinedByScript
    $DisksOfflinedByScript = @()

    Write-Host "Short pause to let Windows and SQL Server detect the restored disk state..."
    Start-Sleep -Seconds 5

    Write-Host "Restoring SQL Server metadata-only backup..."

    $RestoreSql = @"
RESTORE DATABASE $DbQuoted
FROM DISK = N'$BackupFileSql'
WITH METADATA_ONLY,
     REPLACE,
     $RecoveryOption;
"@

    Invoke-SqlNonQuery $RestoreSql

    if (-not $NoRecovery) {
        Write-Host "Setting database back to MULTI_USER..."
        Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET MULTI_USER;
"@
    }

    Write-Host ""
    Write-Host "Restore completed."
    Write-Host "Database : $Database"
    Write-Host "Snapshot : $SnapName"
    Write-Host "Backup   : $BackupFile"
}
catch {
    Write-Warning "Restore failed: $_"

    if ($DisksOfflinedByScript.Count -gt 0) {
        try {
            Write-Warning "Attempting to bring disks offlined by the script back online..."
            Set-DatabaseDisksOnline -DiskInfos $DisksOfflinedByScript
            $DisksOfflinedByScript = @()
        }
        catch {
            Write-Warning "Unable to automatically bring the disks back online. Check with Get-Disk."
        }
    }

    try {
        $DbExistsAfterError = Invoke-SqlScalar "SELECT CASE WHEN DB_ID(N'$DbLiteral') IS NULL THEN 0 ELSE 1 END;"

        if ($DbExistsAfterError -eq 1 -and -not $NoRecovery) {
            Write-Warning "Attempting to set the database back ONLINE/MULTI_USER..."
            Invoke-SqlNonQuery @"
ALTER DATABASE $DbQuoted SET ONLINE;
ALTER DATABASE $DbQuoted SET MULTI_USER;
"@
        }
    }
    catch {
        Write-Warning "Unable to automatically set the database back ONLINE/MULTI_USER."
    }

    throw
}</code></pre>



<h2 class="wp-block-heading">What does it look like?</h2>



<p>We start the backup process:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="530" height="82" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-52.png" alt="" class="wp-image-44503" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-52.png 530w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-52-300x46.png 300w" sizes="auto, (max-width: 530px) 100vw, 530px" /></figure>



<p>We verify that the snapshot is present:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="750" height="131" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-53.png" alt="" class="wp-image-44504" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-53.png 750w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-53-300x52.png 300w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>



<p>We verify that the backup is present:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="601" height="36" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-54.png" alt="" class="wp-image-44505" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-54.png 601w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-54-300x18.png 300w" sizes="auto, (max-width: 601px) 100vw, 601px" /></figure>



<p>We drop the StackOverflow database:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="314" height="301" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-55.png" alt="" class="wp-image-44506" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-55.png 314w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-55-300x288.png 300w" sizes="auto, (max-width: 314px) 100vw, 314px" /></figure>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="310" height="231" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-56.png" alt="" class="wp-image-44507" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-56.png 310w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-56-300x224.png 300w" sizes="auto, (max-width: 310px) 100vw, 310px" /></figure>



<p>We start the restore process:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="951" height="384" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-57.png" alt="" class="wp-image-44508" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-57.png 951w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-57-300x121.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-57-768x310.png 768w" sizes="auto, (max-width: 951px) 100vw, 951px" /></figure>



<p>The database is available again. The restore took only a few seconds for a database of approximately 200 GB.</p>



<h2 class="wp-block-heading">Major drawbacks</h2>



<p>In my case, the solution is executed from the SQL Server host itself. Ideally, it should be hosted on another server or a client machine. We could also imagine running these scripts from a scheduler such as Rundeck, for example.</p>



<p>During the database restore, the database is switched to SINGLE_USER mode. This could be an issue if the applications using the database reconnect very frequently. A better approach would probably be to explicitly terminate the active sessions using the KILL command.</p>
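<p>Such an explicit termination could look like the following sketch (this is not part of the script above; the database name is taken from the example):</p>



<pre class="wp-block-code"><code>DECLARE @kill nvarchar(max) = N'';

-- Build a KILL statement for every session connected to the database,
-- excluding the current one
SELECT @kill = @kill + N'KILL ' + CAST(session_id AS nvarchar(10)) + N'; '
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'StackOverflow')
  AND session_id &lt;&gt; @@SPID;

EXEC sp_executesql @kill;</code></pre>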



<p>We have also not yet covered the use of a REST API.</p>



<p>Thank you. <a href="https://www.linkedin.com/in/amine-haloui-76968056/">Amine Haloui</a></p>
<p>L’article <a href="https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-2-3/">SQL Server Snapshot Backup and Restore with Proxmox ZFS &#8211; Powershell implementation (2/3)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/sql-server-snapshot-backup-and-restore-with-proxmox-zfs-2-3/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A Misleading SSAS Error in Power BI Report Server When Using DirectQuery Mode</title>
		<link>https://www.dbi-services.com/blog/a-misleading-ssas-error-in-power-bi-report-server-when-using-directquery-mode/</link>
					<comments>https://www.dbi-services.com/blog/a-misleading-ssas-error-in-power-bi-report-server-when-using-directquery-mode/#respond</comments>
		
		<dc:creator><![CDATA[Amine Haloui]]></dc:creator>
		<pubDate>Thu, 14 May 2026 21:17:45 +0000</pubDate>
				<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Power BI Report Server]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=44400</guid>

					<description><![CDATA[<p>Our client was experiencing issues after publishing a report that used Direct Query mode. Specifically, when the report was queried, the following error occurred: Error :&#160; We couldn&#8217;t connect to the Analysis Services server. Make sure you&#8217;ve entered the connection string correctly. However, this issue did not occur in Power BI Desktop. In Power BI, [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/a-misleading-ssas-error-in-power-bi-report-server-when-using-directquery-mode/">A Misleading SSAS Error in Power BI Report Server When Using DirectQuery Mode</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Our client was experiencing issues after publishing a report that used DirectQuery mode. Specifically, when the report was queried, the following error occurred:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="738" height="154" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-20.png" alt="" class="wp-image-44402" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-20.png 738w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-20-300x63.png 300w" sizes="auto, (max-width: 738px) 100vw, 738px" /></figure>



<p>Error :&nbsp; We couldn&#8217;t connect to the Analysis Services server. Make sure you&#8217;ve entered the connection string correctly.</p>



<p>However, this issue did not occur in Power BI Desktop.</p>



<p>In Power BI, several data loading modes are available. Import mode loads data into the Power BI model, which usually provides faster performance and richer modeling capabilities. DirectQuery mode does not store the data in the model; instead, each interaction sends queries to the source system in real time. Import is generally better for speed and flexibility, while DirectQuery is useful when data must stay in the source or remain near real time. The trade-off is that DirectQuery depends more heavily on source performance, network latency, and source-system limitations.</p>



<h2 class="wp-block-heading" id="h-configuration">Configuration</h2>



<p>At first glance, one might think that the corresponding report is trying to connect to an SSAS service and that there is a connectivity issue between Power BI Report Server and a SQL Server Analysis Services instance.</p>



<p>However, after reviewing the data source, there was no connection to SSAS:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="667" height="388" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-22.png" alt="" class="wp-image-44405" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-22.png 667w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-22-300x175.png 300w" sizes="auto, (max-width: 667px) 100vw, 667px" /></figure>



<p>We did not have this type of configuration:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="357" height="145" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-21.png" alt="" class="wp-image-44407" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-21.png 357w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-21-300x122.png 300w" sizes="auto, (max-width: 357px) 100vw, 357px" /></figure>



<p><strong>The questions that arise</strong></p>



<p>Why are we getting an error message even though the report is not trying to connect to a SQL Server Analysis Services instance?</p>



<p>Why is our client seeing this error message and unable to query the report?</p>



<h2 class="wp-block-heading" id="h-troubleshooting">Troubleshooting</h2>



<p>By reviewing the Power BI Report Server logs, it was possible to see this type of message:</p>



<p>Failed to get CSDL. &#8212;&gt; MsolapWrapper.MsolapWrapperException: Failure encountered while getting schema.</p>



<p>CannotRetrieveModelException: An error occurred while loading the model&#8230; Verify that the connection information is correct and that you have permissions to access the data source.</p>



<p>It is also possible to retrieve some information from the ExecutionLog3 table:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="40" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-19-1024x40.png" alt="" class="wp-image-44401" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-19-1024x40.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-19-300x12.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-19-768x30.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-19.png 1329w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Indeed, whenever a Power BI report is rendered or a scheduled refresh is executed, new entries are written to the execution log. These entries can be queried through the ExecutionLog3 view in the Report Server catalog database. The ConceptualSchema event corresponds to a user viewing the report.</p>
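<p>A query along these lines can be used to inspect the most recent entries (the column list is only a subset of what the view exposes):</p>



<pre class="wp-block-code"><code>SELECT TOP (50)
       TimeStart,
       UserName,
       ItemPath,
       ItemAction,   -- e.g. ConceptualSchema when a user views the report
       Status
FROM   dbo.ExecutionLog3
ORDER BY TimeStart DESC;</code></pre>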



<p>When querying the Event Viewer, it returned these errors at the time we tried to query the report:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="146" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-25-1024x146.png" alt="" class="wp-image-44404" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-25-1024x146.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-25-300x43.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-25-768x109.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-25.png 1348w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading" id="h-more-details-about-the-first-errors">More details about the first errors</h2>



<p>We have two error messages that seem to point in two different directions. In reality, the first error messages are not very useful: although they refer to Analysis Services, the report was not connecting to an external SSAS instance. Power BI Report Server uses an internal Analysis Services engine to load and query Power BI report models, so the error was raised by this internal engine, not by a standalone SQL Server Analysis Services instance.</p>



<p>Power BI Report Server may report an Analysis Services-related error even when the report does not connect to an external SSAS instance. This is because PBIRS uses an internal Analysis Services engine to host and execute the Power BI semantic model behind the report. In DirectQuery mode, the data remains in SQL Server, but the report model, metadata, relationships, measures, and DAX queries are still processed through this internal engine.</p>



<p>When a user opens the report, PBIRS asks this local Analysis Services process to load the model and generate the queries sent to SQL Server.</p>



<p>Therefore, if the internal engine fails while loading the model, validating metadata, or connecting to the SQL Server data source, the error may mention Analysis Services. This does not mean that the report is connected to a standalone SSAS instance.</p>



<h2 class="wp-block-heading" id="h-more-details-about-the-second-errors">More details about the second errors</h2>



<p>This was the second error that pointed us in the right direction to actually resolve the issue. After looking at it more closely, we started considering connection encryption and certificates. This problem is documented, and several solutions are available.</p>



<p>Indeed, the SQL Server instance queried to retrieve the data did not have a certificate issued by a trusted certificate authority. It was using a self-generated certificate.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="832" height="257" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-24.png" alt="" class="wp-image-44403" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-24.png 832w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-24-300x93.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-24-768x237.png 768w" sizes="auto, (max-width: 832px) 100vw, 832px" /></figure>



<p>This can lead to errors such as the ones mentioned above, or errors like the following:</p>



<p>Microsoft SQL: A connection was successfully established with the server, but then an error occurred during the login process. Provider: SSL Provider, error: 0 &#8211; The certificate chain was issued by an authority that is not trusted.</p>



<h2 class="wp-block-heading" id="h-solutions">Solutions</h2>



<p>We had at least three options to resolve this issue:</p>



<ul class="wp-block-list">
<li>Change the connection mode to Import</li>



<li>Install a certificate issued by a trusted certificate authority; however, this would represent a major change</li>



<li>Create a new environment variable on the Power BI Report Server</li>
</ul>



<p>The client chose the easiest solution to implement: creating the corresponding environment variable.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="833" height="538" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-23.png" alt="" class="wp-image-44406" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-23.png 833w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-23-300x194.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/05/image-23-768x496.png 768w" sizes="auto, (max-width: 833px) 100vw, 833px" /></figure>



<p>We then restarted the corresponding Power BI Report Server service and this resolved the issue.</p>
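<p>For reference, the variable Microsoft documents for this scenario is <code>PBI_SQL_TRUSTED_SERVERS</code> (see the troubleshooting article in the references). It can be created at machine level, for example from an elevated PowerShell session (the server name here is illustrative):</p>



<pre class="wp-block-code"><code># Declare the SQL Server instance(s) whose self-signed certificates
# PBIRS should trust, then restart the Power BI Report Server service
[System.Environment]::SetEnvironmentVariable(
    "PBI_SQL_TRUSTED_SERVERS",
    "MySqlServer01",
    [System.EnvironmentVariableTarget]::Machine
)</code></pre>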



<h2 class="wp-block-heading" id="h-references">References :</h2>



<p><a href="https://learn.microsoft.com/en-us/power-bi/report-server/scheduled-refresh-troubleshoot">https://learn.microsoft.com/en-us/power-bi/report-server/scheduled-refresh-troubleshoot</a></p>



<p><a href="https://learn.microsoft.com/en-us/power-query/connectors/sql-server#sql-server-certificate-isnt-trusted-on-the-client-power-bi-desktop-or-on-premises-data-gateway">https://learn.microsoft.com/en-us/power-query/connectors/sql-server#sql-server-certificate-isnt-trusted-on-the-client-power-bi-desktop-or-on-premises-data-gateway</a></p>



<p>Thank you. <a href="https://www.linkedin.com/in/amine-haloui-76968056/">Amine Haloui</a></p>
<p>L’article <a href="https://www.dbi-services.com/blog/a-misleading-ssas-error-in-power-bi-report-server-when-using-directquery-mode/">A Misleading SSAS Error in Power BI Report Server When Using DirectQuery Mode</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/a-misleading-ssas-error-in-power-bi-report-server-when-using-directquery-mode/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: Dynamically adjust the I/O worker pool</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-dynamically-adjust-the-i-o-worker-pool/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-dynamically-adjust-the-i-o-worker-pool/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Wed, 13 May 2026 05:12:15 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=44393</guid>

					<description><![CDATA[<p>When PostgreSQL 18 was released last year one of the major features was the introduction of the asynchronous I/O subsystem. The main configuration parameter for this was (and still is) io_method, which can be &#8220;worker&#8221; (the default), io_uring or sync (the old behavior). If you opted for &#8220;workers&#8221; the number of those workers is controlled [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-dynamically-adjust-the-i-o-worker-pool/">PostgreSQL 19: Dynamically adjust the I/O worker pool</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When <a href="https://www.postgresql.org/docs/current/release-18.html#RELEASE-18-CHANGES" target="_blank" rel="noreferrer noopener">PostgreSQL 18 was released</a> last year, one of the major features was the <a href="https://www.dbi-services.com/blog/postgresql-18-support-for-asynchronous-i-o/" target="_blank" rel="noreferrer noopener">introduction of the asynchronous I/O subsystem</a>. The main configuration parameter for this was (and still is) <a href="https://www.postgresql.org/docs/18/runtime-config-resource.html#GUC-IO-METHOD" target="_blank" rel="noreferrer noopener">io_method</a>, which can be &#8220;worker&#8221; (the default), <a href="https://en.wikipedia.org/wiki/Io_uring" target="_blank" rel="noreferrer noopener">io_uring</a>, or &#8220;sync&#8221; (the old behavior). If you opted for &#8220;worker&#8221;, the number of those workers is controlled by &#8220;<a href="https://www.postgresql.org/docs/18/runtime-config-resource.html#GUC-IO-WORKERS" target="_blank" rel="noreferrer noopener">io_workers</a>&#8221;, which defaults to 3. PostgreSQL 19 will most probably change how those workers are launched: instead of using the static &#8220;io_workers&#8221; value, workers are started dynamically from a predefined pool.</p>



<p>The configuration parameter &#8220;io_workers&#8221; is gone and four additional parameters show up to control this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# \dconfig io_*work*
 List of configuration parameters
         Parameter         | Value 
---------------------------+-------
 io_max_workers            | 8
 io_min_workers            | 2
 io_worker_idle_timeout    | 1min
 io_worker_launch_interval | 100ms
(4 rows)
</pre></div>


<p>&#8220;io_min_workers&#8221; (as the name implies) controls the minimum number of workers that are always running, which defaults to two:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] ps -ef | grep postgres | grep worker | grep -v grep
postgres    8564    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1
</pre></div>


<p>&#8220;io_max_workers&#8221; (again, as the name implies) controls the maximum number of worker processes that can be launched for the whole instance.</p>
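<p>Assuming these parameters remain reloadable at runtime, as &#8220;io_workers&#8221; was in PostgreSQL 18, raising the ceiling would look like this (the value is just an example):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# alter system set io_max_workers = 16;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
</pre></div>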



<p>To see this dynamic startup of workers in action, let&#8217;s create a simple table containing twenty million rows:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, i::text, now() from generate_series(1,20000000) i;
INSERT 0 20000000
</pre></div>


<p>While watching the workers in a separate session:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] watch &quot;ps -ef | grep postgres | grep worker | grep -v grep&quot;

Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 06:52:20 AM
                                                                                                       in 0.022s (0)
postgres    8564    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1
</pre></div>


<p>&#8230; and doing a count(*) over the whole table in session one:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# select count(*) from t;
  count   
----------
 20000000
(1 row)
</pre></div>


<p>&#8230; you&#8217;ll notice that an additional worker (io worker 2) shows up in the second session watching the processes (maybe you have to play a bit with the number of rows depending on your configuration of PostgreSQL):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [6]; title: ; notranslate">
Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 07:02:40 AM
                                                                                                       in 0.018s (0)
postgres    8564    8562  0 06:34 ?        00:00:02 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1
postgres   11914    8562  0 07:02 ?        00:00:00 postgres: pgdev: io worker 2
</pre></div>


<p>Once this additional worker is idle for one minute it will disappear and we&#8217;re back to two worker processes:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
Every 2.0s: ps -ef | grep postgres | grep worker | grep -v grep               pgbox.it.dbi-services.com: 07:04:24 AM
                                                                                                       in 0.020s (0)
postgres    8564    8562  0 06:34 ?        00:00:02 postgres: pgdev: io worker 0
postgres    8565    8562  0 06:34 ?        00:00:00 postgres: pgdev: io worker 1
</pre></div>


<p>This is controlled by &#8220;io_worker_idle_timeout&#8221;, which defaults to one minute.</p>



<p>The remaining configuration knob is &#8220;io_worker_launch_interval&#8221;, the minimum interval between launches of additional workers. This prevents too many workers from being started at once.</p>
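<p>The launch and retire rules described above can be modelled in a few lines (a simplified sketch, not the actual PostgreSQL implementation; the function name and the exact decision logic are assumptions for illustration):</p>

```python
def adjust_pool(n_workers, io_pending, idle_for, since_last_launch,
                io_min_workers=2, io_max_workers=8,
                launch_interval=0.1, idle_timeout=60.0):
    """Return the new worker count for one decision cycle.

    Simplified model of the dynamic pool: launch at most one worker
    per launch_interval while I/O is queued and the pool is below
    io_max_workers; retire a worker that has been idle longer than
    idle_timeout, but never drop below io_min_workers.
    """
    if io_pending and n_workers < io_max_workers \
            and since_last_launch >= launch_interval:
        return n_workers + 1
    if idle_for >= idle_timeout and n_workers > io_min_workers:
        return n_workers - 1
    return n_workers


# A burst of I/O grows the pool one worker at a time ...
print(adjust_pool(2, True, 0.0, 0.2))     # -> 3
# ... the launch interval rate-limits further launches ...
print(adjust_pool(3, True, 0.0, 0.05))    # -> 3
# ... and an idle worker is retired, but never below the minimum.
print(adjust_pool(3, False, 61.0, 0.0))   # -> 2
print(adjust_pool(2, False, 120.0, 0.0))  # -> 2
```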



<p>This will make tuning the workers easier, compared to PostgreSQL 18. Again, thanks to all involved, the commit is <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d1c01b79d4ae90e52bf9db9c05c9de17b7313e85">here</a>.</p>



<p></p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-dynamically-adjust-the-i-o-worker-pool/">PostgreSQL 19: Dynamically adjust the I/O worker pool</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-dynamically-adjust-the-i-o-worker-pool/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: pg_waldump can now read from archives</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-pg_waldump-can-now-read-from-archives/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-pg_waldump-can-now-read-from-archives/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Mon, 11 May 2026 04:48:04 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=44025</guid>

					<description><![CDATA[<p>When PostgreSQL 18 introduced the ability to verify tar based (and compressed) backups with pg_verifybackup there was one limitation: The verification of the WAL files in the tars (or compressed files) had to be skipped (--no-parse-wal) because pg_waldump in that version of PostgreSQL is not able to cope with that (and pg_waldump is used by [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_waldump-can-now-read-from-archives/">PostgreSQL 19: pg_waldump can now read from archives</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When <a href="https://www.postgresql.org/docs/current/release-18.html#RELEASE-18-HIGHLIGHTS" target="_blank" rel="noreferrer noopener">PostgreSQL 18 introduced the ability to verify tar based (and compressed) backups with pg_verifybackup</a> there was one limitation: <a href="https://www.dbi-services.com/blog/postgresql-18-verify-tar-format-and-compressed-backups/" target="_blank" rel="noreferrer noopener">The verification of the WAL files in the tars (or compressed files) had to be skipped</a> (<code>--no-parse-wal</code>) because <a href="https://www.postgresql.org/docs/18/pgwaldump.html" target="_blank" rel="noreferrer noopener">pg_waldump</a> in that version of PostgreSQL is not able to cope with that (and pg_waldump is used by pg_verifybackup). This will change with PostgreSQL 19 because of this <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b15c1513984e6eafd264bf6e84a08549905621f1" target="_blank" rel="noreferrer noopener">commit</a>: &#8220;pg_waldump: Add support for reading WAL from tar archives&#8221;.</p>



<p>This is maybe not a feature a lot of people have been waiting for, but it makes two tasks a lot easier:</p>



<ul class="wp-block-list">
<li>As mentioned above: pg_verifybackup can now read WAL from tar and compressed files and can therefore verify it</li>



<li>When you have WAL in a tar or compressed file and you know what you&#8217;re looking for, you do not need to manually extract the archive before using pg_waldump</li>
</ul>



<p>To see this in action, one can create a tar or compressed backup with pg_basebackup:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,3]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] mkdir /var/tmp/dummy
postgres@:/home/postgres/ &#x5B;pgdev] pg_basebackup --checkpoint=fast --format=t --pgdata=/var/tmp/dummy
postgres@:/home/postgres/ &#x5B;pgdev] ls -la /var/tmp/dummy
total 128476
drwxr-xr-x. 1 postgres postgres        66 May 11 06:36 .
drwxrwxrwt. 1 root     root           762 May 11 06:33 ..
-rw-------. 1 postgres postgres    149515 May 11 06:36 backup_manifest
-rw-------. 1 postgres postgres 114619904 May 11 06:36 base.tar
-rw-------. 1 postgres postgres  16778752 May 11 06:36 pg_wal.tar
</pre></div>


<p>Looking at the PostgreSQL log file while the backup is running gives us an LSN we can pass to pg_waldump:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; highlight: [3]; title: ; notranslate">
2026-05-11 06:36:18.397 CEST - 2 - 1731 -  - @ - 0LOG:  checkpoint complete: fast force wait: wrote 2 buffers (0.0%), wrote 3 SLRU buffers; 0 WAL file(s) added, 1 removed, 0 recycled; write=0.002 s, sync=0.005 s, total=0.019 s; sync files=4, longest=0.003 s, average=0.002 s; distance=16384 kB, estimate=16384 kB; lsn=0/0D000088, redo lsn=0/0D000028

postgres@:/home/postgres/ &#x5B;pgdev] pg_waldump --path=/var/tmp/dummy/pg_wal.tar -s &quot;0/0D000088&quot; 
rmgr: XLOG        len (rec/tot):    122/   122, tx:          0, lsn: 0/0D000088, prev 0/0D000050, desc: CHECKPOINT_ONLINE redo 0/0D000028; tli 1; prev tli 1; fpw true; wal_level replica; logical decoding false; xid 0:729; oid 16420; multi 1; offset 1; oldest xid 684 in DB 1; oldest multi 1 in DB 1; oldest/newest commit timestamp xid: 0/0; oldest running xid 729; checksums on; online
rmgr: Standby     len (rec/tot):     54/    54, tx:          0, lsn: 0/0D000108, prev 0/0D000088, desc: RUNNING_XACTS nextXid 729 latestCompletedXid 728 oldestRunningXid 729; dbid: 0
rmgr: XLOG        len (rec/tot):     34/    34, tx:          0, lsn: 0/0D000140, prev 0/0D000108, desc: BACKUP_END 0/0D000028
rmgr: XLOG        len (rec/tot):     24/    24, tx:          0, lsn: 0/0D000168, prev 0/0D000140, desc: SWITCH 
pg_waldump: error: could not find WAL &quot;00000001000000000000000E&quot; in archive &quot;pg_wal.tar&quot;
</pre></div>
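<p>The error at the end is expected: after the SWITCH record, pg_waldump looks for the next WAL segment, which is not part of the archive. For reference, the mapping from an LSN to a WAL segment file name can be computed like this (a sketch assuming the default 16MB segment size and timeline 1; the helper name is made up):</p>

```python
def wal_segment_name(lsn: str, tli: int = 1,
                     seg_size: int = 16 * 1024 * 1024) -> str:
    """Map an LSN like '0/0D000088' to its 24-character WAL segment
    file name, assuming the default 16MB segment size."""
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    segno = ((hi << 32) | lo) // seg_size
    segs_per_xlogid = 0x100000000 // seg_size  # 256 for 16MB segments
    return f"{tli:08X}{segno // segs_per_xlogid:08X}{segno % segs_per_xlogid:08X}"


# The checkpoint LSN from the log above lives in segment ...0D:
print(wal_segment_name("0/0D000088"))  # -> 00000001000000000000000D
# The SWITCH record ends that segment, so pg_waldump then asks for ...0E:
print(wal_segment_name("0/0E000000"))  # -> 00000001000000000000000E
```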


<p>This helps pg_verifybackup fully verify a backup (in previous versions you had to use <code>--no-parse-wal</code>):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] pg_verifybackup --progress /var/tmp/dummy/
111933/111933 kB (100%) verified
backup successfully verified
</pre></div>


<p>As usual, thanks to all involved.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-pg_waldump-can-now-read-from-archives/">PostgreSQL 19: pg_waldump can now read from archives</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-pg_waldump-can-now-read-from-archives/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: Importing statistics from remote servers</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 08:15:22 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43948</guid>

					<description><![CDATA[<p>Usually we do not see many foreign data wrappers being used by our customers. Most of them use the foreign data wrapper for Oracle to fetch data from Oracle systems. Some of them use the foreign data wrapper for files but that&#8217;s mostly it. Only one (I am aware of) actually uses the foreign data [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/">PostgreSQL 19: Importing statistics from remote servers</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Usually we do not see many foreign data wrappers being used by our customers. Most of them use the <a href="https://github.com/laurenz/oracle_fdw" target="_blank" rel="noreferrer noopener">foreign data wrapper for Oracle</a> to fetch data from Oracle systems. Some of them use the <a href="https://www.dbi-services.com/blog/external-tables-in-postgresql/">foreign data wrapper for files</a>, but that&#8217;s mostly it. Only one customer (that I am aware of) actually uses the <a href="https://www.postgresql.org/docs/18/postgres-fdw.html" target="_blank" rel="noreferrer noopener">foreign data wrapper for PostgreSQL</a>, which connects PostgreSQL to PostgreSQL. Some foreign data wrappers allow collecting optimizer statistics for foreign tables; the foreign data wrappers for Oracle and PostgreSQL are examples of this. These local statistics are better than nothing, but you need to make sure they stay up to date, and for that you need to refresh them regularly against the remote data. PostgreSQL 19 will come with a solution for this for the foreign data wrapper for PostgreSQL. Actually, the solution is not in postgres_fdw itself but in the underlying framework, and postgres_fdw makes use of it from version 19 on.</p>



<p>To look at this we need a simple setup: we initialize two new PostgreSQL 19 clusters and connect them with postgres_fdw:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,3,4,5,6,7,8,9,11,13,15,17,19,21]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] initdb --version
initdb (PostgreSQL) 19devel
postgres@:/home/postgres/ &#x5B;pgdev] initdb --pgdata=/var/tmp/pg1
postgres@:/home/postgres/ &#x5B;pgdev] initdb --pgdata=/var/tmp/pg2
postgres@:/home/postgres/ &#x5B;pgdev] echo &quot;port=8888&quot; &gt;&gt; /var/tmp/pg1/postgresql.auto.conf 
postgres@:/home/postgres/ &#x5B;pgdev] echo &quot;port=8889&quot; &gt;&gt; /var/tmp/pg2/postgresql.auto.conf 
postgres@:/home/postgres/ &#x5B;pgdev] pg_ctl --pgdata=/var/tmp/pg1/ start
postgres@:/home/postgres/ &#x5B;pgdev] pg_ctl --pgdata=/var/tmp/pg2/ start
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create extension postgres_fdw&quot;
CREATE EXTENSION
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;create table t ( a int, b text, c timestamptz )&quot;
CREATE TABLE
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;insert into t select i, md5(i::text), now() from generate_series(1,1000000) i&quot;
INSERT 0 1000000
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create server srv_pg2 foreign data wrapper postgres_fdw options(port &#039;8889&#039;, dbname &#039;postgres&#039;)&quot;
CREATE SERVER
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create user mapping for postgres server srv_pg2 options (user &#039;postgres&#039;, password &#039;postgres&#039;)&quot;
CREATE USER MAPPING
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;create foreign table ft (a int, b text, c timestamptz) server srv_pg2 options (schema_name &#039;public&#039;, table_name &#039;t&#039;)&quot;
CREATE FOREIGN TABLE
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;select count(*) from ft&quot;
  count  
---------
 1000000
(1 row)
</pre></div>


<p>What we have now is one table in the cluster on port 8889 and this table is attached as a foreign table in the cluster on port 8888.</p>



<p>We already have statistics on the source table in the cluster on port 8889:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;t&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>&#8230; but we do not have any statistics on the foreign table in the cluster on port 8888:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
        -1

(1 row)
</pre></div>


<p>Only after manually analyzing the foreign table do the statistics show up:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;analyze ft&quot;
ANALYZE
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>The issue with these local statistics is that they become outdated when the source table is modified:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3,10]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8889 -c &quot;insert into t select i, md5(i::text), now() from generate_series(1000001,2000000) i&quot;
INSERT 0 1000000
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8889 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;t&#039;&quot;

 reltuples 
-----------
   2000000
(1 row)

postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   1000000
(1 row)
</pre></div>


<p>As you can see, the row counts do not match anymore. Once the local statistics are gathered again, we have the same picture on both sides:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;analyze ft&quot;
ANALYZE
postgres@:/home/postgres/ &#x5B;DEV] psql -p 8888 -c &quot;select reltuples::bigint from pg_class  where relname = &#039;ft&#039;&quot;

 reltuples 
-----------
   2000000
(1 row)
</pre></div>


<p>One way to mitigate this issue, even before PostgreSQL 19, is to tell postgres_fdw to obtain estimates from the remote server (by issuing EXPLAIN remotely) instead of relying on the local statistics:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;alter foreign table ft options ( use_remote_estimate &#039;true&#039; )&quot;
</pre></div>


<p>In this case the local statistics will not be used for planning, but of course this comes with the overhead of additional queries against the remote side.</p>



<p>From PostgreSQL 19 there is another option:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;pgdev] psql -p 8888 -c &quot;alter foreign table ft options ( restore_stats &#039;true&#039; )&quot;
ALTER FOREIGN TABLE
</pre></div>


<p>This option tells postgres_fdw to import the statistics from the remote side and store them locally when the foreign table is analyzed. If the import fails, a local analyze is performed as before; the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=28972b6fc3dcd1296e844246b635eddfa29c38e1" target="_blank" rel="noreferrer noopener">commit message</a> explains this nicely:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
Add support for importing statistics from remote servers.

Add a new FDW callback routine that allows importing remote statistics
for a foreign table directly to the local server, instead of collecting
statistics locally.  The new callback routine is called at the beginning
of the ANALYZE operation on the table, and if the FDW failed to import
the statistics, the existing callback routine is called on the table to
collect statistics locally.

Also implement this for postgres_fdw.  It is enabled by &quot;restore_stats&quot;
option both at the server and table level.  Currently, it is the user&#039;s
responsibility to ensure remote statistics to import are up-to-date, so
the default is false.
</pre></div>
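<p>As the commit message notes, the option exists at the server level as well, so it can apply to all foreign tables of a server. A minimal sketch, assuming the server and foreign table from the setup above:</p>

```sql
-- Enable statistics import for every foreign table of srv_pg2
-- (the server from the example setup above).
ALTER SERVER srv_pg2 OPTIONS (ADD restore_stats 'true');
-- A subsequent ANALYZE on a foreign table will then try to import
-- the remote statistics first and only fall back to local sampling.
ANALYZE ft;
```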


<p>As usual, thanks to all involved.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/">PostgreSQL 19: Importing statistics from remote servers</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-importing-statistics-from-remote-servers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: Online enabling of data checksums</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 06:00:00 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43935</guid>

					<description><![CDATA[<p>Since PostgreSQL 18 was released last year checksums are enabled by default when a new cluster is initialized. This also means, that you either need to explicitly disable that when you upgrade from a previous version of PostgreSQL or you need to enable this in the old version of PostgreSQL you want to upgrade from. [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/">PostgreSQL 19: Online enabling of data checksums</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Since PostgreSQL 18 was released last year, data checksums are enabled by default when a new cluster is initialized. This also means that when you upgrade from a previous version of PostgreSQL, you either need to explicitly disable checksums in the new cluster or enable them in the old cluster you want to upgrade from. The reason is that <a href="https://www.postgresql.org/docs/current/pgupgrade.html">pg_upgrade</a> will complain if the old and new clusters do not have the same setting for this.</p>



<p>Enabling and disabling checksums offline has been possible for several versions of PostgreSQL using <a href="https://www.postgresql.org/docs/current/app-pgchecksums.html" target="_blank" rel="noreferrer noopener">pg_checksums</a>, but this does not work while the cluster is running:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,3,9]; title: ; notranslate">
postgres@:/home/postgres/ &#x5B;181] pg_checksums --version
pg_checksums (PostgreSQL) 18.1 
postgres@:/home/postgres/ &#x5B;181] pg_checksums --pgdata=$PGDATA
Checksum operation completed
Files scanned:   966
Blocks scanned:  2969
Bad checksums:  0
Data checksum version: 1  -&gt; This means &quot;enabled&quot;
postgres@:/home/postgres/ &#x5B;181] pg_checksums --pgdata=$PGDATA --disable
pg_checksums: error: cluster must be shut down
</pre></div>


<p>Even in PostgreSQL 19 this is still the same: you cannot use pg_checksums to enable or disable checksums while the cluster is running.</p>



<p>What will change in version 19 is that two new functions have been added, one for enabling and one for disabling checksums:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# \dfS *checksums*
                                                        List of functions
   Schema   |           Name            | Result data type |                     Argument data types                      | Type 
------------+---------------------------+------------------+--------------------------------------------------------------+------
 pg_catalog | pg_disable_data_checksums | void             |                                                              | func
 pg_catalog | pg_enable_data_checksums  | void             | cost_delay integer DEFAULT 0, cost_limit integer DEFAULT 100 | func
(2 rows)
</pre></div>


<p>As mentioned in the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=f19c0eccae9680f5785b11cdc58ef571998caec9" target="_blank" rel="noreferrer noopener">commit message</a>, this is implemented using background workers. To actually see those processes on the operating system, let&#8217;s create some data so the workers really have something to do:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# create table t ( a int, b text, c timestamptz );
CREATE TABLE
postgres=# insert into t select i, md5(i::text), now() from generate_series(1,10000000) i;
INSERT 0 10000000
</pre></div>


<p>As this is a PostgreSQL 19 cluster, checksums are currently enabled:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# show data_checksums;
 data_checksums 
----------------
 on
(1 row)
</pre></div>


<p>To disable that online, pg_disable_data_checksums is the function to use:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,7]; title: ; notranslate">
postgres=# select * from pg_disable_data_checksums();
 pg_disable_data_checksums 
---------------------------
 
(1 row)

postgres=# show data_checksums;
 data_checksums 
----------------
 off
(1 row)
</pre></div>


<p>To enable checksums online, pg_enable_data_checksums is the function to use. If you want to see the background workers, you can grep for them in a second session on the operating system:</p>





<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,8,15]; title: ; notranslate">
-- first session, connected to PostgreSQL
postgres=# select pg_enable_data_checksums();
 pg_enable_data_checksums 
--------------------------
 
(1 row)

postgres=# show data_checksums ;
 data_checksums 
----------------
 on
(1 row)

-- second session, on the OS
postgres@:/home/postgres/postgresql/ &#x5B;pgdev] watch &quot;ps -ef | grep checksum | grep -v watch&quot;
Every 2.0s: ps -ef | grep checksum | grep -v watch                                                                                                                                                    pgbox.it.dbi-services.com: 09:49:20 AM
                                                                                                                                                                                                                               in 0.006s (0)
postgres    4931    2510  0 09:49 ?        00:00:00 postgres: pgdev: datachecksum launcher
postgres    4932    2510 25 09:49 ?        00:00:00 postgres: pgdev: datachecksum worker
postgres    4964    4962  0 09:49 pts/2    00:00:00 grep checksum
</pre></div>


<p>Because enabling checksums comes with some overhead, throttling can be controlled in the same way as for autovacuum:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# select pg_enable_data_checksums(cost_delay=&gt;1,cost_limit=&gt;3000);
 pg_enable_data_checksums 
--------------------------
 
(1 row)
</pre></div>
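<p>The cost model behind these two parameters can be sketched as follows (a simplified model in the spirit of the cost-based vacuum delay, not the actual implementation; note that with the default cost_delay of 0 there is no throttling at all):</p>

```python
def total_delay_ms(n_pages, page_cost=1, cost_delay=0, cost_limit=100):
    """Simplified model of cost-based throttling: every processed page
    adds page_cost to a balance; once the balance reaches cost_limit,
    the worker sleeps for cost_delay milliseconds and resets the
    balance. Returns the accumulated sleep time in milliseconds."""
    if cost_delay <= 0:          # the default: no throttling at all
        return 0
    balance, delayed = 0, 0
    for _ in range(n_pages):
        balance += page_cost
        if balance >= cost_limit:
            delayed += cost_delay
            balance = 0
    return delayed


# Checksumming 100000 pages with cost_delay=1 and cost_limit=3000
# (the values from the call above) sleeps for about 33 ms in total:
print(total_delay_ms(100_000, cost_delay=1, cost_limit=3000))  # -> 33
```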


<p>Very nice, thanks to all involved.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/">PostgreSQL 19: Online enabling of data checksums</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-online-enabling-of-data-checksums/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: get_*_ddl functions</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Thu, 16 Apr 2026 04:00:00 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43925</guid>

					<description><![CDATA[<p>PostgreSQL already comes with plenty of system information functions to reconstruct the commands to create various objects, e.g. constraints or indexes. Starting with PostgreSQL 19 more functions will be available, namely those: As the names imply they can be used to recreate the commands to create a database, a role, or a tablespace. To see [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/">PostgreSQL 19: get_*_ddl functions</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>PostgreSQL already comes with plenty of <a href="https://www.postgresql.org/docs/current/functions-info.html" target="_blank" rel="noreferrer noopener">system information functions</a> to reconstruct the commands to create various objects, e.g. constraints or indexes. Starting with PostgreSQL 19, three more functions will be available:</p>



<ul class="wp-block-list">
<li>pg_get_database_ddl</li>



<li>pg_get_role_ddl</li>



<li>pg_get_tablespace_ddl</li>
</ul>



<p>As the names imply, they can be used to reconstruct the commands needed to recreate a database, a role, or a tablespace.</p>



<p>To see what they do, let&#8217;s create a small setup:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,8,10,11,13,15,17]; title: ; notranslate">
postgres=# select version();

                                        version                                        
---------------------------------------------------------------------------------------
 PostgreSQL 19devel dbi services build on x86_64-linux, compiled by gcc-15.1.1, 64-bit
(1 row)

postgres=# create user u with login password &#039;u&#039;;
CREATE ROLE
postgres=# \! mkdir /var/tmp/tbs
postgres=# create tablespace tbs location &#039;/var/tmp/tbs&#039; with ( random_page_cost = 1.1 );
CREATE TABLESPACE
postgres=# create database d with owner = u tablespace = tbs;
CREATE DATABASE
postgres=# alter database d connection limit = 10;
ALTER DATABASE
postgres=# \l
                                                        List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    |   Locale    | ICU Rules |   Access privileges   
-----------+----------+----------+-----------------+-------------+-------------+-------------+-----------+-----------------------
 d         | u        | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 postgres  | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | 
 template0 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US-x-icu |           | =c/postgres          +
           |          |          |                 |             |             |             |           | postgres=CTc/postgres
(4 rows)

</pre></div>


<p>To get the commands to recreate that database, the new function &#8220;pg_get_database_ddl&#8221; can be used:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase );
                                                                   pg_get_database_ddl                                                                   
---------------------------------------------------------------------------------------------------------------------------------------------------------
 CREATE DATABASE d WITH TEMPLATE = template0 ENCODING = &#039;UTF8&#039; LOCALE_PROVIDER = icu LOCALE = &#039;en_US.UTF-8&#039; ICU_LOCALE = &#039;en-US-x-icu&#039; TABLESPACE = tbs;
 ALTER DATABASE d OWNER TO u;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)
</pre></div>


<p>There are some options to control the output format and what gets reconstructed, e.g.:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,15,28]; title: ; notranslate">
postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;         +
     TABLESPACE = tbs;
 ALTER DATABASE d OWNER TO u;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(3 rows)

postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;         +
     TABLESPACE = tbs;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)

postgres=# select * from  pg_get_database_ddl ( &#039;d&#039;::regdatabase, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039;, &#039;tablespace&#039;, &#039;false&#039; );
           pg_get_database_ddl           
-----------------------------------------
 CREATE DATABASE d                      +
     WITH TEMPLATE = template0          +
     ENCODING = &#039;UTF8&#039;                  +
     LOCALE_PROVIDER = icu              +
     LOCALE = &#039;en_US.UTF-8&#039;             +
     ICU_LOCALE = &#039;en-US-x-icu&#039;;
 ALTER DATABASE d CONNECTION LIMIT = 10;
(2 rows)
</pre></div>
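<p>All of these functions take their options as alternating name/value pairs. Just as an illustration (the helper name below is made up, not part of PostgreSQL), such a call could be assembled client-side like this:</p>

```python
def build_ddl_call(func, target, **options):
    """Assemble a SELECT over one of the pg_get_*_ddl functions.

    Options are passed as alternating name/value string pairs,
    matching the psql examples above. No quoting/escaping of the
    inputs is done here, so only use it with trusted values.
    """
    args = ["'%s'" % target]
    for name, value in options.items():
        args.append("'%s'" % name)
        args.append("'%s'" % value)
    return "select * from %s(%s);" % (func, ", ".join(args))

stmt = build_ddl_call("pg_get_database_ddl", "d", pretty="true", owner="false")
# select * from pg_get_database_ddl('d', 'pretty', 'true', 'owner', 'false');
```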


<p>The other two functions behave the same way (though they do not offer exactly the same options):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,8,17]; title: ; notranslate">
postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;);
                     pg_get_tablespace_ddl                     
---------------------------------------------------------------
 CREATE TABLESPACE tbs OWNER postgres LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)

postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;, &#039;pretty&#039;, &#039;true&#039;);
               pg_get_tablespace_ddl                
----------------------------------------------------
 CREATE TABLESPACE tbs                             +
     OWNER postgres                                +
     LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)

postgres=# select * from pg_get_tablespace_ddl(&#039;tbs&#039;, &#039;pretty&#039;, &#039;true&#039;, &#039;owner&#039;, &#039;false&#039;);
               pg_get_tablespace_ddl                
----------------------------------------------------
 CREATE TABLESPACE tbs                             +
     LOCATION &#039;/var/tmp/tbs&#039;;
 ALTER TABLESPACE tbs SET (random_page_cost=&#039;1.1&#039;);
(2 rows)
</pre></div>


<p>&#8230; and finally for the roles:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,7,20]; title: ; notranslate">
postgres=# select * from pg_get_role_ddl (&#039;u&#039;);
                                      pg_get_role_ddl                                       
--------------------------------------------------------------------------------------------
 CREATE ROLE u NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN NOREPLICATION NOBYPASSRLS;
(1 row)

postgres=# select * from pg_get_role_ddl (&#039;u&#039;, &#039;pretty&#039;, &#039;true&#039;);
  pg_get_role_ddl  
-------------------
 CREATE ROLE u    +
     NOSUPERUSER  +
     INHERIT      +
     NOCREATEROLE +
     NOCREATEDB   +
     LOGIN        +
     NOREPLICATION+
     NOBYPASSRLS;
(1 row)

postgres=# select * from pg_get_role_ddl (&#039;u&#039;, &#039;pretty&#039;, &#039;true&#039;, &#039;memberships&#039;, &#039;false&#039;);
  pg_get_role_ddl  
-------------------
 CREATE ROLE u    +
     NOSUPERUSER  +
     INHERIT      +
     NOCREATEROLE +
     NOCREATEDB   +
     LOGIN        +
     NOREPLICATION+
     NOBYPASSRLS;
(1 row)
</pre></div>


<p>Nice, and again: Thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/">PostgreSQL 19: get_*_ddl functions</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-get__ddl-functions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: json format for &#8220;copy to&#8221;</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 04:41:59 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43920</guid>

					<description><![CDATA[<p>PostgreSQL already has impressive support for working with data in json format. If you look at the jsonb data type and all the built-in functions and operators you can use, there is so much you can do with it by default. Starting with PostgreSQL 19 there is one feature more when it comes to working [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/">PostgreSQL 19: json format for &#8220;copy to&#8221;</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>PostgreSQL already has impressive support for working with data in <a href="https://www.json.org/json-en.html" target="_blank" rel="noreferrer noopener">json</a> format. If you look at the <a href="https://www.postgresql.org/docs/current/datatype-json.html">jsonb</a> data type and all the <a href="https://www.postgresql.org/docs/current/functions-json.html">built-in functions and operators</a> you can use, there is so much you can do with it by default. Starting with PostgreSQL 19 there is one more feature when it comes to working with data in json format.</p>



<p>&#8220;<a href="https://www.postgresql.org/docs/current/sql-copy.html">COPY</a>&#8221; is already quite powerful and the fastest way to get data in and out of PostgreSQL (you may read some previous posts about copy <a href="https://www.dbi-services.com/blog/postgresql-17-copy-and-save_error_to/" target="_blank" rel="noreferrer noopener">here</a>, <a href="https://www.dbi-services.com/blog/postgresql-18-reject_limit-for-copy/">here</a>, and <a href="https://www.dbi-services.com/blog/postgresql-17-track-skipped-rows-from-copy-in-pg_stat_progress_copy/">here</a>). </p>



<p>As usual, let&#8217;s start with a simple table:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# create table t ( a int primary key, b text );
CREATE TABLE
postgres=# insert into t select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
</pre></div>


<p>To get that data out in text format you might simply do this:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# copy t to &#039;/var/tmp/t&#039;;
COPY 1000000
postgres=# \! head /var/tmp/t
1       c4ca4238a0b923820dcc509a6f75849b
2       c81e728d9d4c2f636f067f89cc14862c
3       eccbc87e4b5ce2fe28308fd9f2a7baf3
4       a87ff679a2f3e71d9181a67b7542122c
5       e4da3b7fbbce2345d7772b0674a318d5
6       1679091c5a880faf6fb5e6087eb1b2dc
7       8f14e45fceea167a5a36dedd4bea2543
8       c9f0f895fb98ab9159f51fd0297e236d
9       45c48cce2e2d7fbdea1afc51c7c6ad26
10      d3d9446802a44259755d38e6d163e820
</pre></div>


<p>Starting with PostgreSQL 19 you can do the same in json format:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; highlight: [1,3]; title: ; notranslate">
postgres=# copy t to &#039;/var/tmp/t1&#039; with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{&quot;a&quot;:1,&quot;b&quot;:&quot;c4ca4238a0b923820dcc509a6f75849b&quot;}
{&quot;a&quot;:2,&quot;b&quot;:&quot;c81e728d9d4c2f636f067f89cc14862c&quot;}
{&quot;a&quot;:3,&quot;b&quot;:&quot;eccbc87e4b5ce2fe28308fd9f2a7baf3&quot;}
{&quot;a&quot;:4,&quot;b&quot;:&quot;a87ff679a2f3e71d9181a67b7542122c&quot;}
{&quot;a&quot;:5,&quot;b&quot;:&quot;e4da3b7fbbce2345d7772b0674a318d5&quot;}
{&quot;a&quot;:6,&quot;b&quot;:&quot;1679091c5a880faf6fb5e6087eb1b2dc&quot;}
{&quot;a&quot;:7,&quot;b&quot;:&quot;8f14e45fceea167a5a36dedd4bea2543&quot;}
{&quot;a&quot;:8,&quot;b&quot;:&quot;c9f0f895fb98ab9159f51fd0297e236d&quot;}
{&quot;a&quot;:9,&quot;b&quot;:&quot;45c48cce2e2d7fbdea1afc51c7c6ad26&quot;}
{&quot;a&quot;:10,&quot;b&quot;:&quot;d3d9446802a44259755d38e6d163e820&quot;}
</pre></div>


<p>Specifying a query instead of a table is also supported:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1,3]; title: ; notranslate">
postgres=# copy (select a from t) to &#039;/var/tmp/t1&#039; with (format json);
COPY 1000000
postgres=# \! head /var/tmp/t1
{&quot;a&quot;:1}
{&quot;a&quot;:2}
{&quot;a&quot;:3}
{&quot;a&quot;:4}
{&quot;a&quot;:5}
{&quot;a&quot;:6}
{&quot;a&quot;:7}
{&quot;a&quot;:8}
{&quot;a&quot;:9}
{&quot;a&quot;:10}
</pre></div>
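<p>Because every line of the export is a complete JSON object (the JSON Lines convention), it is easy to consume from other tools without any PostgreSQL client library. A minimal Python sketch, assuming a file exported as shown above:</p>

```python
import json

def read_copy_json(path):
    """Yield one dict per line from a file written with
    copy ... to '...' with (format json)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# e.g. the total of column "a" of the export from above:
# total = sum(row["a"] for row in read_copy_json("/var/tmp/t1"))
```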


<p>As noted in the <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=7dadd38cda95bf5bc0c4715d9ab71766d1693379">commit message</a> there are some options which are not compatible with the json format:</p>



<ul class="wp-block-list">
<li>HEADER</li>



<li>DEFAULT</li>



<li>NULL</li>



<li>DELIMITER</li>



<li>FORCE QUOTE</li>



<li>FORCE NOT NULL</li>



<li>and FORCE NULL</li>
</ul>



<p>Also not supported (currently) is &#8220;copy from&#8221; in json format.</p>
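<p>Until &#8220;copy from&#8221; learns the json format, one way to load such an export back is to rewrite it into PostgreSQL&#8217;s default text format (tab separated, \N for NULL) and then use a plain &#8220;copy from&#8221;. A sketch in Python (file names and column order are assumptions, and escaping of tabs, newlines and backslashes inside values is omitted for brevity):</p>

```python
import json

def jsonl_to_copy_text(src, dst, columns):
    """Rewrite a (format json) export as PostgreSQL's default text
    COPY format: tab-separated columns, \\N for NULL.

    Note: values containing tabs, newlines or backslashes would
    need additional escaping; this sketch skips that.
    """
    with open(src, encoding="utf-8") as fin, \
         open(dst, "w", encoding="utf-8") as fout:
        for line in fin:
            line = line.strip()
            if not line:
                continue
            row = json.loads(line)
            fields = ["\\N" if row.get(c) is None else str(row[c])
                      for c in columns]
            fout.write("\t".join(fields) + "\n")

# afterwards, inside psql: copy t from '/var/tmp/t.txt';
```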
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/">PostgreSQL 19: json format for &#8220;copy to&#8221;</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-json-format-for-copy-to/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 19: The &#8220;repack&#8221; command</title>
		<link>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/</link>
					<comments>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Tue, 14 Apr 2026 03:15:44 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=43912</guid>

					<description><![CDATA[<p>Before PostgreSQL 19 you had two commands to completely rewrite a table: Either you can use the &#8220;vacuum full&#8221; or the &#8220;cluster&#8221; command to achieve this. Both operations are blocking and the table cannot be used until those operations complete. This can easily be verified with the following simple test cases: The same is true [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-the-repack-command/">PostgreSQL 19: The &#8220;repack&#8221; command</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Before PostgreSQL 19 there were two commands to completely rewrite a table: you could use either the &#8220;<a href="https://www.postgresql.org/docs/current/sql-vacuum.html" target="_blank" rel="noreferrer noopener">vacuum full</a>&#8221; or the &#8220;<a href="https://www.postgresql.org/docs/current/sql-cluster.html" target="_blank" rel="noreferrer noopener">cluster</a>&#8221; command. Both operations are blocking, and the table cannot be used until they complete. This can easily be verified with the following simple test cases:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,4,6,9]; title: ; notranslate">
-- session 1
postgres=# create table t ( a int primary key, b text );
CREATE TABLE
postgres=# insert into t select i, md5(i::text) from generate_series(1,1000000) i;
INSERT 0 1000000
postgres=# vacuum full t;

-- session 2
postgres=# select count(*) from t;  -- this blocks until vacuum full completes
</pre></div>


<p>The same is true for the &#8220;cluster&#8221; command:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,11,14]; title: ; notranslate">
-- session 1
postgres=# \d t
                 Table &quot;public.t&quot;
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 
Indexes:
    &quot;t_pkey&quot; PRIMARY KEY, btree (a)

postgres=# cluster t using t_pkey;

-- session 2
postgres=# select count(*) from t;  -- this blocks until clustering completes
</pre></div>


<p>Starting with PostgreSQL 19 (scheduled to be released later this year) these two functionalities are combined into the &#8220;<a href="https://www.postgresql.org/docs/devel/sql-repack.html">repack</a>&#8221; command. The <a href="https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ac58465e0618941842439eb3f5a2cf8bebd5a3f1" target="_blank" rel="noreferrer noopener">commit message</a> makes the reason behind this pretty clear:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
Introduce the REPACK command

REPACK absorbs the functionality of VACUUM FULL and CLUSTER in a single
command.  Because this functionality is completely different from
regular VACUUM, having it separate from VACUUM makes it easier for users
to understand; as for CLUSTER, the term is heavily overloaded in the
IT world and even in Postgres itself, so it&#039;s good that we can avoid it.

We retain those older commands, but de-emphasize them in the
documentation, in favor of REPACK; the difference between VACUUM FULL
and CLUSTER (namely, the fact that tuples are written in a specific
ordering) is neatly absorbed as two different modes of REPACK.

This allows us to introduce further functionality in the future that
works regardless of whether an ordering is being applied, such as (and
especially) a concurrent mode.
</pre></div>


<p>So, instead of spreading the functionality over two commands, there is a new command which combines both:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [1]; title: ; notranslate">
postgres=# \h repack
Command:     REPACK
Description: rewrite a table to reclaim disk space
Syntax:
REPACK &#x5B; ( option &#x5B;, ...] ) ] &#x5B; table_and_columns &#x5B; USING INDEX &#x5B; index_name ] ] ]
REPACK &#x5B; ( option &#x5B;, ...] ) ] USING INDEX

where option can be one of:

    VERBOSE &#x5B; boolean ]
    ANALYZE &#x5B; boolean ]
    CONCURRENTLY &#x5B; boolean ]

and table_and_columns is:

    table_name &#x5B; ( column_name &#x5B;, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-repack.html
</pre></div>


<p>The really cool part is that this can be run concurrently, which means the table is not locked for other sessions while the command is doing its work:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; highlight: [2,4,7]; title: ; notranslate">
-- session 1
postgres=# repack (concurrently) t;
-- or
postgres=# repack (concurrently) t using index t_pkey;

-- session 2
postgres=# select count(*) from t;  -- not blocking
</pre></div>
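<p>For maintenance scripts it can be handy to assemble such statements programmatically. A tiny Python sketch following the syntax shown in &#8220;\h repack&#8221; above (the function name is made up):</p>

```python
def repack_statement(table, index=None, concurrently=False, verbose=False):
    """Build a REPACK statement following the syntax from \\h repack."""
    options = []
    if concurrently:
        options.append("CONCURRENTLY")
    if verbose:
        options.append("VERBOSE")
    stmt = "REPACK"
    if options:
        stmt += " (" + ", ".join(options) + ")"
    stmt += " " + table
    if index:
        stmt += " USING INDEX " + index
    return stmt + ";"

# repack_statement("t", index="t_pkey", concurrently=True)
# -> REPACK (CONCURRENTLY) t USING INDEX t_pkey;
```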


<p>Nice, thanks to all involved.</p>
<p>The article <a href="https://www.dbi-services.com/blog/postgresql-19-the-repack-command/">PostgreSQL 19: The &#8220;repack&#8221; command</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/postgresql-19-the-repack-command/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
