Obviously we need a table containing some large objects to start with:
postgres=# create table t1 ( a int, b oid );
CREATE TABLE
Before inserting any data, let's create a publication for that table right now:
postgres=# create publication my_pub for table t1;
CREATE PUBLICATION
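To double-check which tables are actually part of the publication, one can query the pg_publication_tables view. A small sketch (the output is illustrative and assumes the setup above):

```sql
-- list the tables included in the publication my_pub
select * from pg_publication_tables where pubname = 'my_pub';
```

With only t1 published, this should return a single row for the public.t1 table.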
Ok, that works. Now we need a subscription for that, so on a second instance:
postgres=# create table t1 ( a int, b oid );
CREATE TABLE
postgres=# create subscription my_sub connection 'host=localhost port=6000 dbname=postgres user=postgres' publication my_pub;
CREATE SUBSCRIPTION
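Before going on, it is worth verifying that the subscription's apply worker is actually running. A sketch, using the pg_stat_subscription view on the subscriber side (the output depends on the actual setup):

```sql
-- on the subscriber: a running apply worker shows a pid and a received_lsn
select subname, pid, received_lsn from pg_stat_subscription;
```

If pid is NULL here, the subscription is not streaming and nothing will be replicated.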
So far, so good. Let's insert some data on the publishing instance and see what happens:
postgres=# \! which cp
/usr/bin/cp
postgres=# insert into t1 (a,b) values (1, lo_import('/usr/bin/cp'));
INSERT 0 1
That worked. What do we see on the subscription side?
postgres=# select * from t1;
 a |   b
---+-------
 1 | 16418
(1 row)

postgres=# select * from pg_size_pretty ( pg_relation_size ( 't1' ) );
 pg_size_pretty
----------------
 8192 bytes
(1 row)
So, at least “something” is there. Let's prove it:
postgres=# select lo_export(b,'/tmp/cp') from t1;
ERROR:  large object 16418 does not exist
postgres=#
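The error makes sense once you remember where large objects actually live: not in the user table (t1 only stores the OID), but in the system catalogs pg_largeobject and pg_largeobject_metadata, and logical replication only replicates user tables. A sketch to confirm this on the subscriber (the OID 16418 is the one from the session above):

```sql
-- on the subscriber: no large object exists for the replicated OID
select count(*) from pg_largeobject_metadata where oid = 16418;
-- on the publisher, the same query returns 1
```

On the subscriber this returns 0, which is exactly why lo_export fails there.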
Hmm, this is not what we expected, right? Doing the same on the publishing side works:
postgres=# select lo_export(b,'/tmp/cp') from t1;
 lo_export
-----------
         1
(1 row)

postgres=# \! chmod +x /tmp/cp
postgres=# \! /tmp/cp --help | head -1
Usage: /tmp/cp [OPTION]... [-T] SOURCE DEST
This means the OID is replicated, but not the large object itself: large objects are stored in the system catalog pg_largeobject, and logical replication only covers user tables. So the answer is: no, large objects cannot be used with PostgreSQL 10 logical replication.
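If binary data has to be replicated, one possible workaround is to store it in a bytea column instead of a large object, since regular column data is part of the row image and therefore replicated. A sketch, not tested here (pg_read_binary_file reads a server-side file and requires superuser rights; the table name t2 is made up for the example):

```sql
-- on both instances: a table using bytea instead of an oid reference
create table t2 ( a int, b bytea );

-- on the publisher: add the table to the publication and insert the file content
alter publication my_pub add table t2;
insert into t2 (a,b) values (1, pg_read_binary_file('/usr/bin/cp'));
```

On the subscriber, the subscription would additionally need an `alter subscription my_sub refresh publication;` to pick up the new table. Keep in mind that bytea values travel through the normal WAL stream, so very large files will make the change records correspondingly large.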