Sooner or later all of us have to change a parameter on the database. But how do you do that when running a Patroni cluster? There are some specifics you have to consider.
This post gives you a short introduction to the topic.
When you want to change a parameter on a Patroni cluster you have several possibilities:
– Dynamic configuration in the DCS. These changes are applied asynchronously to every node.
– Local configuration in patroni.yml. This takes precedence over the dynamic configuration (a short sketch follows below).
– Cluster configuration using “alter system” (written to postgresql.auto.conf).
– Environment configuration using local environment variables.
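For illustration, a minimal sketch of such a local configuration in patroni.yml could look like the excerpt below. The data_dir and the parameter values are just examples, not taken from the cluster used in this post:

# excerpt of a patroni.yml (example values only)
postgresql:
  listen: 192.168.22.111:5432
  connect_address: 192.168.22.111:5432
  data_dir: /u02/pgdata/11/PG1/
  parameters:
    # a parameter set here locally takes precedence over the value stored in the DCS
    work_mem: 16MB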
Change PostgreSQL parameters using patronictl
1. Change parameters that do not need a restart
If you want to change one or more parameters for the whole cluster, you should use patronictl. If you want to change the initial configuration as well, you should also adjust patroni.yml.
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1
All parameters that are already set are shown and can be edited like any other file using vi commands:
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config PG1
loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    archive_command: /bin/true
    archive_mode: 'on'
    autovacuum_max_workers: '6'
    autovacuum_vacuum_scale_factor: '0.1'
    autovacuum_vacuum_threshold: '50'
    client_min_messages: WARNING
    effective_cache_size: 512MB
    hot_standby: 'on'
    hot_standby_feedback: 'on'
    listen_addresses: '*'
    log_autovacuum_min_duration: 60s
    log_checkpoints: 'on'
    log_connections: 'on'
    log_directory: pg_log
    log_disconnections: 'on'
    log_duration: 'on'
    log_filename: postgresql-%a.log
    log_line_prefix: '%m - %l - %p - %h - %u@%d - %x'
    log_lock_waits: 'on'
    log_min_duration_statement: 30s
    log_min_error_statement: NOTICE
    log_min_messages: WARNING
    log_rotation_age: '1440'
    log_statement: ddl
    log_temp_files: '0'
    log_timezone: Europe/Zurich
    log_truncate_on_rotation: 'on'
    logging_collector: 'on'
    maintenance_work_mem: 64MB
    max_replication_slots: 10
    max_wal_senders: '20'
    port: 5432
    shared_buffers: 128MB
    shared_preload_libraries: pg_stat_statements
    wal_compression: 'off'
    wal_keep_segments: 8
    wal_level: replica
    wal_log_hints: 'on'
    work_mem: 8MB
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 30
Once saved, you get the following:
---
+++
@@ -2,7 +2,8 @@
 maximum_lag_on_failover: 1048576
 postgresql:
   parameters:
-    archive_command: /bin/true
+    archive_command: 'test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f'
     archive_mode: 'on'
     autovacuum_max_workers: '6'
     autovacuum_vacuum_scale_factor: '0.1'

Apply these changes? [y/N]: y
Configuration changed
When you connect to the database you will see that the parameter has been changed. It is changed on all the other nodes as well.
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_command;
                                   archive_command
------------------------------------------------------------------------------------
 test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f
(1 row)
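If you only want to check what is currently stored in the DCS, without opening an editor, you can also use patronictl show-config (output shortened here as a sketch):

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl show-config PG1
loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    archive_command: test ! -f /u99/pgdata/PG1/archived_wal/%f && cp %p /u99/pgdata/PG1/archived_wal/%f
    archive_mode: 'on'
    ...
retry_timeout: 10
ttl: 30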
2. Change parameters that need a restart
How can parameters that need a restart be changed, especially when we want to keep the downtime of the cluster to a minimum?
First of all, the parameter is changed the same way as parameters that do not need a restart, using patronictl edit-config. Once the parameter is changed, the status overview of the cluster gets a new column showing which nodes need a restart.
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
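The same information is also exposed through the Patroni REST API: a node that still needs a restart reports a pending_restart flag. This is just a sketch, assuming the REST API listens on the default port 8008 (output shortened):

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] curl -s http://192.168.22.111:8008/patroni
{"state": "running", "role": "master", "pending_restart": true, ...}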
Afterwards there are two possibilities.
2.1 Restart node by node
If you do not want to restart the whole cluster at once, you can restart each node separately. Keep in mind that you have to restart the leader node first, otherwise the change does not take effect. It is also possible to schedule the restart of a node.
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni1
When should the restart take place (e.g. 2019-10-08T15:33)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni2
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni2? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni2

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 patroni3
When should the restart take place (e.g. 2019-10-08T15:34)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |                 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |                 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni3

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |
+---------+----------+----------------+--------+---------+----+-----------+
2.2 Restart the whole cluster
In case you do not want to restart node by node and a downtime is acceptable, it is also possible to restart the whole cluster (scheduled or immediately):
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1
When should the restart take place (e.g. 2019-10-08T15:37)  [now]:
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB | Pending restart |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |        *        |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |        *        |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |        *        |
+---------+----------+----------------+--------+---------+----+-----------+-----------------+
Are you sure you want to restart members patroni1, patroni2, patroni3? [y/N]: y
Restart if the PostgreSQL version is less than provided (e.g. 9.5.2)  []:
Success: restart on member patroni1
Success: restart on member patroni2
Success: restart on member patroni3

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  4 |    0.0    |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  4 |    0.0    |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  4 |    0.0    |
+---------+----------+----------------+--------+---------+----+-----------+
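If you want to avoid the interactive prompts, e.g. when scripting the restart, patronictl restart can also be driven with command-line options. The following is only a sketch; check patronictl restart --help for the options available in your version:

# restart only the members flagged with a pending restart, without asking for confirmation
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 --pending --force

# schedule the restart of the whole cluster for a maintenance window
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl restart PG1 --scheduled 2019-10-08T22:00 --force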
Change PostgreSQL parameters using “alter system”
Of course, you can also change a parameter on only one node using “alter system”.
postgres@patroni1:/home/postgres/ [PG1] sq
psql (11.5)
Type "help" for help.

postgres=# show archive_command;
 archive_command
-----------------
 /bin/false
(1 row)

postgres=# alter system set archive_command='/bin/true';
ALTER SYSTEM

postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show archive_command;
 archive_command
-----------------
 /bin/true
(1 row)
Such a parameter change is, of course, not automatically applied to the replicas; it only changes the value on that one node. All the other nodes keep the value from the DCS. So you can change a parameter either with “patronictl edit-config” or with an “alter system” command on each node. But you also have to keep in mind the order in which the parameters are applied: a change made with “alter system” is written to postgresql.auto.conf and takes precedence over the value set with “patronictl edit-config”.
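If you are not sure where the value on a node comes from, pg_settings shows the source of each parameter (for an “alter system” change, the sourcefile column points to postgresql.auto.conf), and the local override can be removed again with alter system reset. A sketch with example output:

postgres=# select name, setting, source from pg_settings where name = 'archive_command';
      name       |  setting  |       source
-----------------+-----------+--------------------
 archive_command | /bin/true | configuration file
(1 row)

postgres=# alter system reset archive_command;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

After the reset and the reload, the value maintained by Patroni applies again (for parameters that do not need a restart).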
Conclusion
As long as you keep these specifics of a Patroni cluster in mind, it is quite easy to change a parameter. There are some parameters that need the same value on all nodes, e.g. max_connections, max_worker_processes and wal_level, and there are some parameters that are controlled by Patroni, e.g. listen_addresses and port. For more details, check the Patroni documentation. And last but not least: if you change the configuration with patronictl and one node still shows a different value, look for a postgresql.auto.conf in the PGDATA directory. There you may find the reason for differing parameters on your nodes.
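A quick way to check that on the node in question (the PGDATA path depends on your setup, so this is just an example):

postgres@patroni2:/home/postgres/ [PG1] cat $PGDATA/postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
archive_command = '/bin/true'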
If you are interested in more “Patroni Operations” posts, also check this one: Patroni operations: Switchover and Failover.