Lists: pgsql-bugs
From: "Nick Fankhauser" <nickf(at)ontko(dot)com>
To: <pgsql-bugs(at)postgresql(dot)org>
Subject: pg_dump failure in tar format.
Date: 2003-07-25 20:12:06
Message-ID: NEBBLAAHGLEEPCGOBHDGAEENHNAA.nickf@ontko.com
I've posted the information below twice to the admin list without getting a
solution, so I'm promoting it to a bug.
I'm not subscribed to the bug list, so please cc me on responses and I'll
try to supply information as needed.
I'm getting the following error message:
pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
166)
Here are the particulars:
-I'm running this command: "pg_dump -Ft prod > prod.dump.tar" (the database
is named prod).
-The dump gets about 1/4 of the way through, and then gives me the error
message and dies.
-I'm running PostgreSQL version 7.3.2.
-There is plenty of disk space available.
-The same command on the same database and server with same specs worked
last week when I was on V7.2.1.
-Since upgrading, more data has been added, but the structure of the
database is unchanged.
-Using the -v switch shows me that it always quits on the same table, but
otherwise adds no new information.
-The part of the error message in parentheses changes on each run. For
instance, on the last run, I got "(wrote 64, attempted 174)". The rest of
the message remains consistent.
-The table it quits on is fairly large, about 2.6 GB. It is both "wide",
because it contains a text field that is usually a few sentences long,
and "long", containing 9,137,808 records. It is also the only table in our
database that is split into multiple files.
-A text dump using this command works fine and exports the entire database
without a problem: "pg_dump prod > prod.dump.text"
-I have set up an identical system (same hardware, same software, same data
in the DB) for testing, and confirmed that I get the same error on that
system, so it appears not to be a hardware error or just a bad copy of the
software.
-Several folks suggested that I was hitting the 2GB file size limit. This is
not the case. Here is a snip from the console log on the second machine that
I'm using for diagnosis:
nickf(at)morgai:~$ pg_dump -Ft alpha > dump.tar
pg_dump: [tar archiver] could not write to tar member (wrote 110, attempted
398)
nickf(at)morgai:~$
nickf(at)morgai:~$
nickf(at)morgai:~$ ls -al dump.tar
-rw-r--r-- 1 nickf nickf 1388367872 Jul 21 14:49 dump.tar
nickf(at)morgai:~$
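For what it's worth, the truncated file above is nowhere near the old 2 GiB
large-file limit. A quick arithmetic check (nothing more than that) makes the
point:

```python
# Size taken from the `ls -al dump.tar` output above.
size = 1_388_367_872                 # bytes
gib = round(size / 2**30, 2)         # convert to GiB
print(gib)                           # 1.29 -- well under the 2 GiB mark
```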
Thanks for looking this over. Please let me know if anyone has any ideas
about this.
-Regards,
-Nick
---------------------------------------------------------------------
Nick Fankhauser
nickf(at)doxpop(dot)com Phone 1.765.965.7363 Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/
From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: <nickf(at)ontko(dot)com>, <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: pg_dump failure in tar format.
Date: 2003-08-01 23:37:06
Message-ID: 5.1.0.14.0.20030802093535.070b0008@mail.rhyme.com.au
At 02:47 PM 1/08/2003 -0500, Nick Fankhauser - Doxpop wrote:
>pg_dump: [tar archiver] could not write to tar member (wrote 39, attempted
>166)
One of the nasty features of TAR format is that it needs to know the file
size before adding it to the archive. As a result, pg_dump stores the file
in the /tmp directory before moving it to the actual output file. For huge
files, this means /tmp must be able to cope with the uncompressed size of
the largest table. It's horrible, I know, which is why I use -Fc, but I'd
guess this is the cause of your error.
It uses tmpfile() to get a temp file, so I can't see a simple way to test
this, unless you can free up 2+GB in /tmp?
Please let me know if this is the cause. If you cannot test it, I will
try to send a patch to (temporarily) avoid using tmpfile(). Ideally, I
suppose pg_dump should support the ability to override the tmpfile() location.
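The check Philip describes can be sketched as a quick script (not part of
pg_dump; the page count here would come from pg_class.relpages, which is only
an estimate, and the /tmp path and 340,000-page figure are assumptions based
on the 2.6 GB table mentioned earlier in the thread):

```python
import shutil

PAGE_SIZE = 8192  # PostgreSQL's default block size

def tmp_can_hold(relpages: int, tmpdir: str = "/tmp") -> bool:
    """True if tmpdir has room for an uncompressed table of `relpages` pages.

    This models tar-format pg_dump staging each member in the temp
    directory before copying it into the archive.
    """
    needed = relpages * PAGE_SIZE
    free = shutil.disk_usage(tmpdir).free
    return free > needed

# A ~2.6 GB table is roughly 340,000 pages of 8 kB each:
print(tmp_can_hold(340_000))
```

If /tmp cannot hold the largest table, the custom format (-Fc) sidesteps the
problem entirely, since it does not need to know member sizes up front.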
Bye for now,
Philip
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 03 5330 3172 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/
From: "Nick Fankhauser" <nickf(at)ontko(dot)com>
To: "Philip Warner" <pjw(at)rhyme(dot)com(dot)au>, <pgsql-bugs(at)postgresql(dot)org>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: pg_dump failure in tar format.
Date: 2003-08-02 17:16:43
Message-ID: NEBBLAAHGLEEPCGOBHDGOEEDHOAA.nickf@ontko.com
Philip-
Thanks for the explanation of how this works. I'll check it out and let you
know if that is the source of the problem in my case. If it is, perhaps I
can submit a short paragraph for the documentation that will clue people in
if they run into this in the future. The code could handle it more
gracefully by letting users specify a tempfile location, as you suggest, but
since it isn't really broken, I'd vote that documenting the potential
constraint is a valid and time-saving "fix" that lets you put off
messing with the code until you have a more compelling reason to work with
it.
-NF