Being on site with customers is always the best way to learn. Today, while discussing that pg_dump always produces a consistent dump because it uses the “repeatable read” isolation level, this question came up: what happens when we dump a database and, while the dump is running, we truncate a table in that database? Does that block? Well, the answer is in the documentation: “pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers).”

What the documentation does not mention is that pg_dump uses the “repeatable read” isolation level, but this is documented in the source code:

postgres@pgbox:/home/postgres/postgresql-10.4/ [PG10] vi src/bin/pg_dump/pg_dump.c
...
 *      Note that pg_dump runs in a transaction-snapshot mode transaction,
 *      so it sees a consistent snapshot of the database including system
 *      catalogs. However, it relies in part on various specialized backend
 *      functions like pg_get_indexdef(), and those things tend to look at
 *      the currently committed state.  So it is possible to get 'cache
 *      lookup failed' error if someone performs DDL changes while a dump is
 *      happening. The window for this sort of thing is from the acquisition
 *      of the transaction snapshot to getSchemaData() (when pg_dump acquires
 *      AccessShareLock on every table it intends to dump). It isn't very large,
 *      but it can happen.
...
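
Based on that comment (you can verify it yourself by setting log_statement to 'all' and watching the server log while a dump runs), the transaction setup pg_dump does when it connects boils down to something like this. Take it as a simplified sketch; the exact statements depend on the server version and on the options passed to pg_dump:

begin;
set transaction isolation level repeatable read, read only;
-- pg_dump then reads the system catalogs, takes an ACCESS SHARE lock on every
-- table it is going to dump and exports all data within this single snapshot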

For the moment let's ignore the rest of that paragraph and focus on the original question. For that let's create some sample data we can dump:

postgres=# create database dump;
CREATE DATABASE
postgres=# \c dump
You are now connected to database "dump" as user "postgres".
dump=# create table t_dump as 
       select a.*, md5(a::text) 
         from generate_series ( 1, 3000000 ) a;
SELECT 3000000
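
To get a feeling for how much data the dump has to read, we can check the size of the table we just created (the exact number will differ slightly on your system):

dump=# select pg_size_pretty(pg_relation_size('t_dump'));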

As we need two sessions for this demo, we increase the time the dump takes by compressing at the highest level:

postgres@pgbox:/home/postgres/ [PG10] pg_dump --compress=9 dump > test.dump

In a second session, while the dump is running, we truncate the table:

dump=# truncate table t_dump;
TRUNCATE TABLE
Time: 9411.574 ms (00:09.412)

And surprise: yes, the running pg_dump blocks the truncate (you can see that from the time it took; usually a truncate is instant). So the documentation is not quite accurate. Before going further: does the same happen when we modify the table while the dump is running? Same test as above for the dump, and in the second session:

dump=# alter table t_dump add c text;
ALTER TABLE
Time: 11093.535 ms (00:11.094)

Same here, it blocks (otherwise adding the column would have been instant). So when you run DDL against a table while a dump is running, that DDL has to wait until the dump completes.
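
If you want to see who is blocking whom while the DDL sits there, pg_blocking_pids() (available since PostgreSQL 9.6) comes in handy. A small sketch you could run from a third session while the ALTER TABLE is waiting (pids and query text will of course differ on your system):

dump=# select pid,
              pg_blocking_pids(pid) as blocked_by,
              wait_event_type,
              left(query,40) as query
         from pg_stat_activity
        where cardinality(pg_blocking_pids(pid)) > 0;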

Coming back to the remaining sentences of the paragraph from the source code: pg_dump acquires an AccessShareLock on the tables it dumps, and we can verify this in the second session while the dump is running:

dump=# select database, relation::regclass, mode from pg_locks where relation = 't_dump'::regclass;
 database | relation |      mode       
----------+----------+-----------------
    33985 | t_dump   | AccessShareLock
(1 row)
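
As a side note, the database column in pg_locks is the OID of the database; if you prefer the name, you can join against pg_database (a quick sketch, the OID will differ on your system):

dump=# select d.datname, l.relation::regclass, l.mode
         from pg_locks l
         join pg_database d on d.oid = l.database
        where l.relation = 't_dump'::regclass;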

This AccessShareLock does not block reads or writes on the table, but it does block DDL that needs an ACCESS EXCLUSIVE lock (such as TRUNCATE or ALTER TABLE). We can confirm the first part when we do a select and an insert in the second session while the dump is running in the first session:

dump=# insert into t_dump (a,md5,c) values (-1,'aaa','bbb');
INSERT 0 1
Time: 8.131 ms
dump=# select * from t_dump where a = -1;
 a  | md5 |  c  
----+-----+-----
 -1 | aaa | bbb
(1 row)

No issues here. When we manually lock the table in ACCESS SHARE mode in the first session, we will not be able to alter it in the second session.
Session 1:

dump=# begin;
BEGIN
dump=# lock table t_dump IN ACCESS SHARE MODE;
LOCK TABLE
dump=# 

… and in the second session try some DDL:

dump=# alter table t_dump alter COLUMN c set default 'a';
-- blocks

Creating an index on that table while it is locked in that mode works (CREATE INDEX only takes a SHARE lock on the table, which does not conflict with ACCESS SHARE):

dump=# create index i1 on t_dump (c);
CREATE INDEX

… while dropping an index while the table is locked in that mode blocks as well (DROP INDEX needs an ACCESS EXCLUSIVE lock on the table, which does conflict with ACCESS SHARE):

dump=# drop index i1;
-- blocks
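
While the DROP INDEX is waiting, pg_locks should show the ungranted ACCESS EXCLUSIVE request next to the granted ACCESS SHARE lock; the granted column tells you which request is still waiting. A quick check from a third session:

dump=# select relation::regclass, mode, granted
         from pg_locks
        where relation = 't_dump'::regclass;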

So the final advice: Plan to do your dumps when there is no DDL activity.
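
And if you cannot completely avoid DDL during such a window, one option worth considering is lock_timeout, so a DDL statement gives up with an error instead of queuing behind the dump (and, in turn, blocking everything that queues behind the DDL). A small sketch of what that could look like; the timeout value and the column name c2 are just examples:

dump=# set lock_timeout = '10s';
SET
dump=# alter table t_dump add c2 text;
-- waits for up to 10 seconds, then fails with:
-- ERROR:  canceling statement due to lock timeout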