Re: out of memory in backup and restore

From: Thomas Markus <t(dot)markus(at)proventis(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: out of memory in backup and restore
Date: 2006-12-15 09:28:55
Message-ID: 45826AD7.7050005@proventis.net
Lists: pgsql-admin

Hi,

I'm running PG 8.1.0 on a Debian Linux (64-bit) box (dual Xeon, 8 GB RAM).
pg_dump fails with an error when exporting a large table with blobs
(the largest blob is 180 MB).

The error is:
pg_dump: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: SQL command to dump the contents of table "downloads" failed:
PQendcopy() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.downloads ... TO stdout;

If I try pg_dump with -d, the dump runs in all formats (c, t, p), but I can't
restore it (out of memory error, or "corrupt tar header at ...").

How can I back up (and restore) such a db?

kr
Thomas
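
(One way to see whether a single oversized row is behind an error like this is
to ask the server for the stored sizes directly. A rough sketch, assuming the
blob column is a bytea called "data"; that name is only a placeholder, adjust
table, column, and database names to the actual schema:)

  # ten largest rows in public.downloads, sizes in bytes
  psql -d mydb -c "SELECT ctid, octet_length(data) AS bytes
                     FROM public.downloads
                    ORDER BY octet_length(data) DESC LIMIT 10;"

Note that the text form COPY writes out can be several times larger than what
octet_length() reports, because bytea values get escaped on output, so a
180 MB blob can plausibly turn into a request for roughly half a gigabyte of
buffer space.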


From: "Shoaib Mir" <shoaibmir(at)gmail(dot)com>
To: "Thomas Markus" <t(dot)markus(at)proventis(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 11:19:16
Message-ID: bf54be870612150319x90ea963hf2b0c3443139ad3e@mail.gmail.com
Lists: pgsql-admin

Can you please show the db server log and the syslog from the time it goes
out of memory?

Also, how much RAM is available, and what is SHMMAX set to?
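
For reference, those numbers can be collected on Linux roughly like this (a
sketch; the server log location depends on the installation):

  free -m                          # RAM and swap usage, in MB
  cat /proc/sys/kernel/shmmax      # max size of a single shared memory segment
  ulimit -a                        # per-process limits of the shell that starts postmaster
  tail -n 100 /var/log/postgresql/postgresql-8.1-main.log    # recent server log (Debian default path)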

------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)



From: Thomas Markus <t(dot)markus(at)proventis(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:02:30
Message-ID: 45828ED6.9020501@proventis.net
Lists: pgsql-admin

Hi,

Logfile content: see http://www.rafb.net/paste/results/cvD7uk33.html
- cat /proc/sys/kernel/shmmax gives 2013265920
- ulimit is unlimited
The kernel is 2.6.15-1-em64t-p4-smp; the PG version is 8.1.0, 32-bit.
The postmaster process is using 1.8 GB RAM at the moment.

thx
Thomas


--
Thomas Markus

Tel: +49 30 29 36 399 - 22
Fax: +49 30 29 36 399 - 50
Mail: t(dot)markus(at)proventis(dot)net
Web: www.proventis.net
Web: www.blue-ant.de

proventis GmbH
Zimmerstraße 79-80
10117 Berlin

"proventis: Wir bewegen Organisationen."


From: "Marcelo Costa" <marcelojscosta(at)gmail(dot)com>
To: "Thomas Markus" <t(dot)markus(at)proventis(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:09:50
Message-ID: c13f2d590612150409k76950984h5817211c24518f7b@mail.gmail.com
Lists: pgsql-admin

Try checking the /tmp directory on your server; it may be that you have run
out of space on the system disk.
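
Checking that is a one-liner (the data directory path shown is the Debian
default, adjust as needed):

  df -h /tmp /var/lib/postgresql    # free space where temp files and the data directory live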

[],s

Marcelo.


--
Marcelo Costa


From: "Shoaib Mir" <shoaibmir(at)gmail(dot)com>
To: "Thomas Markus" <t(dot)markus(at)proventis(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:17:11
Message-ID: bf54be870612150417y6ff7eb69r383cfaff455db22a@mail.gmail.com
Lists: pgsql-admin

It looks like with 1.8 GB already in use there is not much left for the dump
to get the required chunk of memory. Not sure if it will help, but try
increasing the swap space...
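
If you do try that, a minimal sketch for adding a temporary swap file (size
and path are only examples, run as root):

  dd if=/dev/zero of=/swapfile bs=1M count=4096   # create a 4 GB file
  chmod 600 /swapfile
  mkswap /swapfile                                # format it as swap
  swapon /swapfile                                # activate it
  swapon -s                                       # verify it is in use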

-------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)



From: Thomas Markus <t(dot)markus(at)proventis(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:24:55
Message-ID: 45829417.3030109@proventis.net
Lists: pgsql-admin

Hi,

Free disk space is 34 GB (underlying XFS); the complete db dump is 9 GB.
free -tm says 6 GB free RAM and 6 GB unused swap space.
Can I decrease shared_buffers without a PG restart?

thx
Thomas



From: "Marcelo Costa" <marcelojscosta(at)gmail(dot)com>
To: "Thomas Markus" <t(dot)markus(at)proventis(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:42:42
Message-ID: c13f2d590612150442k11be0a63m9731a2afb86d7e10@mail.gmail.com
Lists: pgsql-admin

To decrease shared_buffers you need to restart your PostgreSQL.

Also, please run df -h and send the result.
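
Roughly like this (a sketch using the Debian default paths; note that in 8.1
shared_buffers is given as a number of 8 kB buffers, not as a size in MB):

  # edit /etc/postgresql/8.1/main/postgresql.conf, e.g.
  #   shared_buffers = 16384      # 16384 * 8 kB = 128 MB
  /etc/init.d/postgresql-8.1 restart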


--
Marcelo Costa


From: Thomas Markus <t(dot)markus(at)proventis(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 12:53:10
Message-ID: 45829AB6.7050800@proventis.net
Lists: pgsql-admin

df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda5       132G   99G   34G  75% /
tmpfs           4.0G     0  4.0G   0% /dev/shm
/dev/sda1        74M   16M   54M  23% /boot

Is there another dump tool that dumps blobs (or everything) as binary content
(not as INSERT statements; maybe even directly as db blocks)?
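
(Plain COPY can produce binary output by itself; a sketch. The file path here
is written by the server process, so this form needs superuser rights and a
path the server can reach:)

  psql -d mydb -c "COPY public.downloads TO '/backup/downloads.bin' WITH BINARY;"
  psql -d mydb -c "COPY public.downloads FROM '/backup/downloads.bin' WITH BINARY;"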



From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Markus <t(dot)markus(at)proventis(dot)net>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-15 15:44:34
Message-ID: 9065.1166197474@sss.pgh.pa.us
Lists: pgsql-admin

Thomas Markus <t(dot)markus(at)proventis(dot)net> writes:
> logfile content see http://www.rafb.net/paste/results/cvD7uk33.html

It looks to me like you must have individual rows whose COPY
representation requires more than half a gigabyte (maybe much more,
but at least that) and the system cannot allocate enough buffer space.

It could be that this is a symptom of corrupted data, if you're certain
that there shouldn't be any such rows in the table.

> kernel is 2.6.15-1-em64t-p4-smp, pg version is 8.1.0 32bit

You really need a 64-bit PG build if you want to push
multi-hundred-megabyte field values around --- otherwise there's just
not enough headroom in the process address space. (Something newer than
8.1.0 would be a good idea too.)
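
(A quick way to check which kind of build is actually installed; the paths
are the usual Debian ones, adjust as needed:)

  file /usr/lib/postgresql/8.1/bin/postmaster    # reports "ELF 32-bit" or "ELF 64-bit"
  psql -c "SELECT version();"                    # the build target usually shows up here too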

regards, tom lane


From: Thomas Markus <t(dot)markus(at)proventis(dot)net>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory in backup and restore
Date: 2006-12-18 09:07:49
Message-ID: 45865A65.3050207@proventis.net
Lists: pgsql-admin

Hi,

I tried various ways to back up that db.
If I use a separate COPY table TO 'file' WITH BINARY, I can export the
problematic table and restore it without problems. The resulting output file
is much smaller than the default output, and the runtime is much shorter.
Is there any way to tell pg_dump to use a COPY command with the WITH BINARY
option? It should be possible with the custom or tar format. I searched the
docs and the manpage and can't find anything.

thx
Thomas
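
(As far as I can tell pg_dump has no such switch in any of its formats. The
closest client-side equivalent is psql's \copy, which does the same binary
export/import but writes the file on the client instead of the server, so it
does not need superuser rights. A sketch, with hypothetical database names:)

  psql -d mydb -c "\copy public.downloads to '/backup/downloads.bin' with binary"
  psql -d mydb_restored -c "\copy public.downloads from '/backup/downloads.bin' with binary"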

Tom Lane wrote:
> Thomas Markus <t(dot)markus(at)proventis(dot)net> writes:
>
>> logfile content see http://www.rafb.net/paste/results/cvD7uk33.html
>>
>
> It looks to me like you must have individual rows whose COPY
> representation requires more than half a gigabyte (maybe much more,
> but at least that) and the system cannot allocate enough buffer space.
>
Yes, the message is: DETAIL: Failed on request of size 546321213. (about 521 MB)
> It could be that this is a symptom of corrupted data, if you're certain
> that there shouldn't be any such rows in the table.
>
no
>
>> kernel is 2.6.15-1-em64t-p4-smp, pg version is 8.1.0 32bit
>>
>
> You really need a 64-bit PG build if you want to push
> multi-hundred-megabyte field values around --- otherwise there's just
> not enough headroom in the process address space. (Something newer than
> 8.1.0 would be a good idea too.)
>
I can't change the db installation, but that's another problem.
> regards, tom lane
>