pg_dump issues

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: pg_dump issues
Date: 2011-10-01 21:08:02
Message-ID: 4E878132.4080903@dunslane.net
Lists: pgsql-hackers

While investigating a client problem I just observed that pg_dump takes
a surprisingly large amount of time to dump a schema with a large number
of views. The client's hardware is quite spiffy, and yet pg_dump is
taking many minutes to dump a schema with some 35,000 views. Here's a
simple test case:

create schema views;

do $do$
begin
    for i in 1 .. 10000 loop
        execute $$create view views.v_$$ || i ||
                $$ as select current_date as d, current_timestamp as ts,
                          $_$a$_$::text || n as t, n
                     from generate_series(1,5) as n$$;
    end loop;
end;
$do$;
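
To reproduce the timing, something like a schema-only dump of that
schema should do (the database name "testdb" is just a placeholder):

time pg_dump -s -n views testdb > /dev/null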

On my modest hardware, pg_dump took 4m18.864s to run against this
database. Should we be looking at replacing the retail operations which
consume most of this time (pg_dump currently issues a separate
pg_get_viewdef() query for every single view) with something that runs
faster?
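
For instance, one set-returning query could fetch every view definition
in a single round trip instead of 35,000. A rough sketch of the shape I
mean, not what pg_dump actually does today:

SELECT c.oid, c.relname,
       pg_catalog.pg_get_viewdef(c.oid) AS viewdef
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace ns ON ns.oid = c.relnamespace
WHERE c.relkind = 'v' AND ns.nspname = 'views';

The per-view deparsing still happens server-side, of course, so how
much this buys depends on where the time is actually going.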

There is also this gem of behaviour, which is where I started:

p1                              p2
--------------------            --------------------
begin;
drop view foo;
                                pg_dump starts
commit;
                                boom.

with this error:

2011-10-01 16:38:20 EDT [27084] 30063 ERROR: could not open relation with OID 133640
2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT: SELECT pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef

Of course, this isn't caused by having a large catalog, but it's
terrible nevertheless: pg_dump's snapshot still sees the view, but
pg_get_viewdef() does its catalog lookups with SnapshotNow, so once the
concurrent DROP commits the relation is simply gone out from under it.
I'm not sure what to do about it.
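
Spelled out as raw psql sessions, with a throwaway view standing in for
the client's schema (the view name and the OID are placeholders):

-- p2, playing the part of pg_dump
begin transaction isolation level serializable;
select oid from pg_class where relname = 'foo';  -- say it returns 133640

-- p1, concurrently
drop view foo;

-- p2: its snapshot still lists the view ...
select count(*) from pg_class where relname = 'foo';  -- still 1
-- ... but the deparse call consults the live catalogs and fails:
select pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) as viewdef;
-- ERROR:  could not open relation with OID 133640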

cheers

andrew
