Slow pgdump

Lists: pgsql-general
From: Patrick Hatcher <PHatcher(at)macys(dot)com>
To: pgsql-general(at)postgresql(dot)org
Cc: Aron Bartling <abartling(at)macys(dot)com>
Subject: Slow pgdump
Date: 2005-11-23 01:13:44
Message-ID: OF60486770.8EC48273-ON882570C2.0004E59F-882570C2.0006BE3D@FDS.com


OS - RH3
Pg - 7.4.9
RAM - 8G
Disk - 709G RAID 0+1

We are having a pgdump issue that we can't seem to find an answer for.

Background:
Production server contains 11 databases, of which 1 database comprises 85%
of the 194G used on the drive. This one large db contains 12 schemas.
Within the schemas of the large db, there may be 1 or 2 views that span
2 schemas.

If we do a backup using pgdump against the entire database, it takes
upwards of 8 hours to complete.

If we split the backup up to do a pgdump for the first 10 dbs and then do a
pgdump by schema on the 1 large db, the backup takes only 3.5 hrs.

Other than using the schema switch, there is no compression happening
on either dump.
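For reference, the split-backup approach described above might look roughly
like this (a sketch only; the database and schema names are placeholders, and
it assumes 7.4's single-schema `-n` switch):

```shell
#!/bin/sh
# Hypothetical sketch of the split backup. db1..db10 and the schema
# names below are placeholders, not the actual names from this server.

# Dump each of the ten smaller databases in full:
for db in db1 db2 db3; do
    pg_dump "$db" > "/backups/$db.sql"
done

# Dump the one large database a schema at a time using -n/--schema:
for schema in schema_a schema_b schema_c; do
    pg_dump -n "$schema" big_db > "/backups/big_db.$schema.sql"
done
```

Note that a per-schema dump of the large db means each `pg_dump` runs in its
own, shorter transaction, rather than one transaction held for the whole 8
hours.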

Any ideas why this might be happening or where we can check for issues?

TIA
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com


From: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
To: Patrick Hatcher <PHatcher(at)macys(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, Aron Bartling <abartling(at)macys(dot)com>
Subject: Re: Slow pgdump
Date: 2005-11-28 21:05:51
Message-ID: 20051128210551.GX78939@pervasive.com
Lists: pgsql-general

I'm making a bit of a guess here, but I suspect the issue is that a
single large dump will hold a transaction open for the entire time. That
will affect vacuums at a minimum; not sure what else could be affected.
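That guess can be checked while the dump is running. A minimal sketch
(assuming 7.4's `pg_stat_activity` columns, with `stats_command_string`
enabled so `current_query` is populated; `big_db` is a placeholder name):

```shell
# Hypothetical check: list backends ordered by how long their current
# query has been running. A pg_dump holding a transaction open for hours
# should show up with an old query_start.
psql -d big_db -c "
SELECT procpid, usename, query_start, current_query
FROM   pg_stat_activity
ORDER  BY query_start;"
```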

On Tue, Nov 22, 2005 at 05:13:44PM -0800, Patrick Hatcher wrote:
>
> OS - RH3
> Pg - 7.4.9
> Ram - 8G
> Disk-709G Raid 0+1
>
> We are having a pgdump issue that we can't seem to find an answer for
>
> Background:
> Production server contains 11 databases of which 1 database comprises 85%
> of the 194G used on the drive. This one large db contains 12 schemas.
> Within the schemas of the large db, there may be 1 or 2 views that span
> 2 schemas.
>
> If we do a backup using pgdump against the entire database, it will take
> upwards of 8+ hours for the backup to complete.
>
> If we split the backup up to do a pgdump for the first 10 dbs and then do a
> pgdump by schema on the 1 large db, the backup takes only 3.5 hrs.
>
> Other than using the schema switch, there is no compression happening
> on either dump.
>
> Any ideas why this might be happening or where we can check for issues?
>
> TIA
> Patrick Hatcher
> Development Manager Analytics/MIO
> Macys.com
>

--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461