Re: Cygwin-Postgres-IpcMemoryCreate

From: Barry Pederson <bp(at)barryp(dot)org>
To: pgsql-cygwin(at)postgresql(dot)org
Subject: Re: Cygwin-Postgres-IpcMemoryCreate
Date: 2002-04-29 17:02:04
Message-ID: 3CCD7C8C.8030902@barryp.org
Lists: pgsql-cygwin

I've been fighting with moving a PostgreSQL database from 7.1.3 to 7.2.1 on a
Win2k SP2 box with NTFS, cygwin 1.3.10, and cygipc, and found that the backend
would stop on me every time pg_restore tried to restore a very large large
object (like 100+ megabytes). The error displayed would be:

-------
FATAL 2: link from /d/postgresql_data/pg_xlog/0000000000000047 to
/d/postgresql_data/pg_xlog/000000000000004F (initialization of log file 0,
segment 79) failed: Permission denied
-------

After poking around in the source, I came up with basically the same patch I
now see you guys were discussing just recently in the pgsql-cygwin archives,
where src/backend/access/transam/xlog.c is altered to use rename() instead of
link()/unlink() - as BeOS does.
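
To make the idea concrete, here's a rough sketch of the two approaches - this
is illustration only, not the actual patch, and the helper and file names are
just made up:

-------
/* Illustration only, not the actual xlog.c change.  Paths are made up. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* Old way: add a second hard link to the prepared segment, then drop the
 * temporary name.  On Cygwin/NTFS the link() can fail with "Permission
 * denied" if something else still has its hands on the target. */
static int
install_with_link(const char *tmppath, const char *path)
{
	if (link(tmppath, path) < 0)
	{
		fprintf(stderr, "link %s -> %s failed: %s\n",
				tmppath, path, strerror(errno));
		return -1;
	}
	unlink(tmppath);
	return 0;
}

/* Proposed way (what the BeOS branch already does): a single rename(),
 * which moves the file into place in one step. */
static int
install_with_rename(const char *tmppath, const char *path)
{
	if (rename(tmppath, path) < 0)
	{
		fprintf(stderr, "rename %s -> %s failed: %s\n",
				tmppath, path, strerror(errno));
		return -1;
	}
	return 0;
}

int
main(void)
{
	FILE	   *f = fopen("xlogtemp.1234", "w");

	if (f == NULL)
		return 1;
	fputs("dummy segment\n", f);
	fclose(f);

	/* Swap in install_with_link() to compare the two behaviours. */
	if (install_with_rename("xlogtemp.1234", "000000000000004F") != 0)
		return 1;
	(void) install_with_link;
	return 0;
}
-------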

So if people need another way to reproduce the problem, one way that works at
least for me is to create a new database, import a single extremely large
(100+ MB) object, pg_dump it with something like:

    pg_dump -b -Ft -f foo.pgdump.tar mydbname

and then try to restore it with something like:

    pg_restore -Ft -d mydbname foo.pgdump.tar

It'll only take one or two tries for the backend to bail out.

So maybe there is something funky going on with link/unlink, and rename would
be a better choice on cygwin.

Barry


From: Jason Tishler <jason(at)tishler(dot)net>
To: Barry Pederson <bp(at)barryp(dot)org>
Cc: pgsql-cygwin(at)postgresql(dot)org
Subject: Re: Cygwin-Postgres-IpcMemoryCreate
Date: 2002-04-29 19:27:16
Message-ID: 20020429192716.GK1152@tishler.net
Lists: pgsql-cygwin

Barry,

On Mon, Apr 29, 2002 at 12:02:04PM -0500, Barry Pederson wrote:
> So if people need another way to reproduce the problem, one way at least
> for me is to create a new database, import a single extremely large
> (100+mb) object, pg_dump it with something like: pg_dump -b -Ft -f
> foo.pgdump.tar mydbname, and then try to restore it with something like:
> pg_restore -Ft -d mydbname foo.pgdump.tar

Thanks for the test case.

> It'll only take one or two tries for the backend to bail out.

So, it doesn't happen every time? Sounds like a timing issue that
triggers the infamous MS Windows "feature" that prevents two processes
from opening the same file.

> So maybe there is something funky going on with link/unlink, and rename
> would be a better choice on cygwin.

Would someone be willing to take the lead on this one and strace or
debug (i.e., via gdb) this problem? If so, I would be willing to take
the results to pgsql-patches or cygwin-developers as appropriate.

Thanks,
Jason