Re: Missing pg_clog files

From: Carol Walter <walterc(at)indiana(dot)edu>
To: pgsql-admin(at)postgresql(dot)org
Subject: Missing pg_clog files
Date: 2008-09-23 21:31:18
Message-ID: CA701605-BE56-416C-9034-789B20180281@indiana.edu
Lists: pgsql-admin

I'm missing about 30 pg_clog files. How do I recover from this?

Carol


From: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>
To: "Carol Walter" <walterc(at)indiana(dot)edu>, <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Missing pg_clog files
Date: 2008-09-23 22:13:11
Message-ID: FE44E0D7EAD2ED4BB2165071DB8E328C0378F509@egcrc-ex01.egcrc.org
Lists: pgsql-admin

Hi Carol,

I don't have an answer...

I have had a similar problem, and whatever I
did didn't work. As I recall, the files in the
pg_clog directory started from 0000, the file
names increased by 1 (hex), and there was a gap
between the most recent file and the second most
recent one. (Is that how you know you are missing
about 30 files?) I suspect the reason it didn't
work for me was that the last file was incomplete.
I wound up upgrading to the latest Postgres and
restoring from a backup file I had made with
pg_dumpall. My platform is Linux (do I recall
yours is Solaris?) and the Postgres version I
was running was 8.3.0.

The lesson I learned was to keep an eye on the
pg_clog directory. Since installing 8.3.3 there
has been only one file (0000), though its size
keeps growing (I think in 8 kB increments).
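
Even a crude daily cron entry that appends the directory
listing to a log file would make a vanishing segment obvious,
something like (the paths are only examples):

  ls -l /usr/local/pgsql/data/pg_clog >> /var/tmp/pg_clog.history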

My two bits.

Tena Sakai
tsakai(at)gallo(dot)ucsf(dot)edu

-----Original Message-----
From: pgsql-admin-owner(at)postgresql(dot)org on behalf of Carol Walter
Sent: Tue 9/23/2008 2:31 PM
To: pgsql-admin(at)postgresql(dot)org
Subject: [ADMIN] Missing pg_clog files

I'm missing about 30 pg_clog files. How do I recover from this?

Carol


From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Missing pg_clog files
Date: 2008-09-23 22:35:01
Message-ID: 1B5F9FC4-578A-44FA-8264-FC674CF5A0FD@indiana.edu
Lists: pgsql-admin

Hi, Tena,

Yes, you do recall correctly. It is Solaris 10 and Postgres 8.2.3.

I'm going to run a pg_dumpall and keep my fingers crossed. I'm
getting similar errors when I try to do other things like vacuumdb,
so maybe pg_dumpall won't run either.

Carol
On Sep 23, 2008, at 6:13 PM, Tena Sakai wrote:

> Hi Carol,
>
> I don't have answer...
>
> I have had a similar problem and whatever I
> did didn't work. As I recall, the files in
> pg_clog directory started from 0000 and the
> number got bigger by 1 (hex) and there was
> a gap between the most recent one and the
> second recent one. (Is that how you know
> that you are missing about 30 files?) I
> suspect the reason why it didn't work for me
> was because the last file was incomplete. I
> wound up upgrading to the latest postgres and
> restored from a backup file I made by pg_dumpall.
> My platform is linux (do I recall yours is solaris?)
> and the postgres version I was running was 8.3.0.
>
> The lesson I learned was to keep an eye on the
> pg_clog directory. Since the installation of
> 8.3.3 there has been only one file (0000), though
> the size keeps growing (I think by 8k bytes).
>
> My two bits.
>
> Tena Sakai
> tsakai(at)gallo(dot)ucsf(dot)edu
>
>
> -----Original Message-----
> From: pgsql-admin-owner(at)postgresql(dot)org on behalf of Carol Walter
> Sent: Tue 9/23/2008 2:31 PM
> To: pgsql-admin(at)postgresql(dot)org
> Subject: [ADMIN] Missing pg_clog files
>
> I'm missing about 30 pg_clog files. How do I recover from this?
>
> Carol
>


From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Carol Walter" <walterc(at)indiana(dot)edu>
Cc: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-23 23:56:09
Message-ID: dcc563d10809231656xe8d199eh3093f959e88cc7de@mail.gmail.com
Lists: pgsql-admin

On Tue, Sep 23, 2008 at 4:35 PM, Carol Walter <walterc(at)indiana(dot)edu> wrote:
> Hi, Tena,
> Yes, you do recall correctly. It is Solaris 10 and Postgres 8.2.3.
> I'm going to run a pg_dumpall and keep my fingers crossed. I'm getting
> similar errors when I try to do other things like vacuumdb, so maybe
> pg_dumpall won't run either.

You do know that version has known data-eating bugs that were fixed
in later releases, right?


From: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>, "Carol Walter" <walterc(at)indiana(dot)edu>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Missing pg_clog files
Date: 2008-09-24 00:08:43
Message-ID: FE44E0D7EAD2ED4BB2165071DB8E328C0378F50E@egcrc-ex01.egcrc.org
Lists: pgsql-admin

Hi Carol,

Unless it is impossibly difficult, I would try
to move to the latest (8.3.4) Postgres before
the situation worsens. I have followed Scott's
advice in the past and never regretted it.

My two bits, again.

Tena Sakai
tsakai(at)gallo(dot)ucsf(dot)edu

-----Original Message-----
From: Scott Marlowe [mailto:scott(dot)marlowe(at)gmail(dot)com]
Sent: Tue 9/23/2008 4:56 PM
To: Carol Walter
Cc: Tena Sakai; pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] Missing pg_clog files

On Tue, Sep 23, 2008 at 4:35 PM, Carol Walter <walterc(at)indiana(dot)edu> wrote:
> Hi, Tena,
> Yes, you do recall correctly. It is Solaris 10 and Postgres 8.2.3.
> I'm going to run a pg_dumpall and keep my fingers crossed. I'm getting
> similar errors when I try to do other things like vacuumdb, so maybe
> pg_dumpall won't run either.

You do know that version has known, fixed in later versions, data
eating bugs, right?


From: "Walter, Carol Williams" <walterc(at)indiana(dot)edu>
To: Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 01:59:48
Message-ID: 20080923215948.m743u3mtcwg88s4o@webmail.iu.edu
Lists: pgsql-admin

Hi, Tena,

I actually downloaded 8.3.3 last week. The upgrade has been in the
plan. I didn't realize the latest was 8.3.4. I am concerned, though:
we have so many dependencies that upgrades never go smoothly.

Carol

Quoting Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>:

> Hi Carol,
>
> Unless it is impossibly difficult, I would try
> to move onto the latest (8.3.4) postgres before
> the situation worsens. I had followed Scott's
> advice in the past and never regretted.
>
> My two bits, again.
>
> Tena Sakai
> tsakai(at)gallo(dot)ucsf(dot)edu
>
>
> -----Original Message-----
> From: Scott Marlowe [mailto:scott(dot)marlowe(at)gmail(dot)com]
> Sent: Tue 9/23/2008 4:56 PM
> To: Carol Walter
> Cc: Tena Sakai; pgsql-admin(at)postgresql(dot)org
> Subject: Re: [ADMIN] Missing pg_clog files
>
> On Tue, Sep 23, 2008 at 4:35 PM, Carol Walter <walterc(at)indiana(dot)edu> wrote:
>> Hi, Tena,
>> Yes, you do recall correctly. It is Solaris 10 and Postgres 8.2.3.
>> I'm going to run a pg_dumpall and keep my fingers crossed. I'm getting
>> similar errors when I try to do other things like vacuumdb, so maybe
>> pg_dumpall won't run either.
>
> You do know that version has known, fixed in later versions, data
> eating bugs, right?
>
>


From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Walter, Carol Williams" <walterc(at)indiana(dot)edu>
Cc: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 03:05:06
Message-ID: dcc563d10809232005i533433d9xde8f515f0f382c26@mail.gmail.com
Lists: pgsql-admin

On Tue, Sep 23, 2008 at 7:59 PM, Walter, Carol Williams
<walterc(at)indiana(dot)edu> wrote:
> Hi, Tena,
>
> I actually downloaded 8.3.3 last week. The upgrade has been in the plan. I
> didn't realize the latest was 8.3.4. I am concerned though. We have so many
> dependencies that they never go smoothly.

The real priority is getting the latest bug-fix release of 8.2
installed. That only takes a few minutes and is quite easy: just
update the pgsql package.

Going to 8.3 requires a dump and restore and testing your app for
compatibility, something you don't have to do going to 8.2.10 or
whatever the latest 8.2 release is.
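
For what it's worth, a minor update on a source-built install is
roughly this (a sketch only; with a packaged install you just swap
the package instead):

  pg_ctl -D $PGDATA stop
  # install the new 8.2.x binaries over the old ones
  pg_ctl -D $PGDATA start

No initdb and no dump/restore is needed between 8.2.x minor
releases; the on-disk format doesn't change.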


From: Carol Walter <walterc(at)indiana(dot)edu>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 03:35:42
Message-ID: E22168D9-DDAD-4EE0-8CB9-8CA79FB26721@indiana.edu
Lists: pgsql-admin

Hello,

Here are the pg_clog files I still have.

-rw------- 1 postgres postgres 262144 Jun 4 18:43 0030
-rw------- 1 postgres postgres 262144 Jun 6 02:06 0031
-rw------- 1 postgres postgres 262144 Jun 7 01:26 0032
-rw------- 1 postgres postgres 262144 Jun 7 21:26 0033
-rw------- 1 postgres postgres 262144 Jun 8 23:16 0034
-rw------- 1 postgres postgres 262144 Jun 9 21:46 0035
-rw------- 1 postgres postgres 262144 Jun 11 06:33 0036
-rw------- 1 postgres postgres 262144 Jun 13 08:33 0037
-rw------- 1 postgres postgres 262144 Jun 16 04:20 0038
-rw------- 1 postgres postgres 262144 Jun 18 11:36 0039
-rw------- 1 postgres postgres 262144 Jun 21 17:06 003A
-rw------- 1 postgres postgres 262144 Jun 24 18:43 003B
-rw------- 1 postgres postgres 262144 Jun 26 05:26 003C
-rw------- 1 postgres postgres 262144 Jun 27 18:40 003D
-rw------- 1 postgres postgres 262144 Jun 28 19:06 003E
-rw------- 1 postgres postgres 262144 Jun 30 09:10 003F
-rw------- 1 postgres postgres 262144 Jul 2 23:40 0040
-rw------- 1 postgres postgres 262144 Jul 4 17:03 0041
-rw------- 1 postgres postgres 262144 Jul 6 03:30 0042
-rw------- 1 postgres postgres 262144 Jul 7 23:06 0043
-rw------- 1 postgres postgres 262144 Jul 9 12:53 0044
-rw------- 1 postgres postgres 262144 Jul 12 19:13 0045
-rw------- 1 postgres postgres 262144 Jul 17 08:06 0046
-rw------- 1 postgres postgres 262144 Jul 21 07:56 0047
-rw------- 1 postgres postgres 262144 Jul 26 22:06 0048
-rw------- 1 postgres postgres 262144 Aug 2 02:10 0049
-rw------- 1 postgres postgres 262144 Aug 5 21:50 004A
-rw------- 1 postgres postgres 262144 Aug 7 11:20 004B
-rw------- 1 postgres postgres 262144 Aug 10 04:58 004C
-rw------- 1 postgres postgres 262144 Aug 11 15:41 004D
-rw------- 1 postgres postgres 262144 Aug 12 22:31 004E
-rw------- 1 postgres postgres 262144 Aug 14 19:31 004F
-rw------- 1 postgres postgres 262144 Aug 23 15:11 0050
-rw------- 1 postgres postgres 262144 Sep 7 16:01 0051
-rw------- 1 postgres postgres 262144 Sep 18 03:41 0052
-rw------- 1 postgres postgres 262144 Sep 23 01:27 0053
-rw------- 1 postgres postgres 40960 Sep 23 23:22 0054

Carol

On Sep 23, 2008, at 11:05 PM, Scott Marlowe wrote:

> On Tue, Sep 23, 2008 at 7:59 PM, Walter, Carol Williams
> <walterc(at)indiana(dot)edu> wrote:
>> Hi, Tena,
>>
>> I actually downloaded 8.3.3 last week. The upgrade has been in
>> the plan. I
>> didn't realize the latest was 8.3.4. I am concerned though. We
>> have so many
>> dependencies that they never go smoothly.
>
> The real priority is getting the latest bug fix release of 8.2
> installed. that only takes a few minutes and is quite easy, just
> update the package for pgsql.
>
> Going to 8.3 requires dump restore and testing your app for
> compatibility, something you don't have to do going to 8.2.10 or
> whatever version is the latest 8.2 release.


From: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>
To: "Carol Walter" <walterc(at)indiana(dot)edu>, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Missing pg_clog files
Date: 2008-09-24 04:16:27
Message-ID: FE44E0D7EAD2ED4BB2165071DB8E328C0378F512@egcrc-ex01.egcrc.org
Lists: pgsql-admin

Hi Carol,

I detect some apprehension on your part that pg_dumpall
won't run or complete. Why is that? Have you already
tried it and it didn't work? If that's not the case,
why not run pg_dumpall at a quiet hour and see?

I think Scott is right about installing the latest
8.2 on top. It won't be a time-consuming task.
Why not give it a whirl? It would be good to
find out one way or the other.

Scott: Are files 0000 through 002F (which are
not there) absolutely necessary for recovering data?

Regards,

Tena Sakai
tsakai(at)gallo(dot)ucsf(dot)edu

-----Original Message-----
From: pgsql-admin-owner(at)postgresql(dot)org on behalf of Carol Walter
Sent: Tue 9/23/2008 8:35 PM
To: Scott Marlowe
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] Missing pg_clog files

Hello,

Here are the pg_clog files I still have.

-rw------- 1 postgres postgres 262144 Jun 4 18:43 0030
-rw------- 1 postgres postgres 262144 Jun 6 02:06 0031
-rw------- 1 postgres postgres 262144 Jun 7 01:26 0032
-rw------- 1 postgres postgres 262144 Jun 7 21:26 0033
-rw------- 1 postgres postgres 262144 Jun 8 23:16 0034
-rw------- 1 postgres postgres 262144 Jun 9 21:46 0035
-rw------- 1 postgres postgres 262144 Jun 11 06:33 0036
-rw------- 1 postgres postgres 262144 Jun 13 08:33 0037
-rw------- 1 postgres postgres 262144 Jun 16 04:20 0038
-rw------- 1 postgres postgres 262144 Jun 18 11:36 0039
-rw------- 1 postgres postgres 262144 Jun 21 17:06 003A
-rw------- 1 postgres postgres 262144 Jun 24 18:43 003B
-rw------- 1 postgres postgres 262144 Jun 26 05:26 003C
-rw------- 1 postgres postgres 262144 Jun 27 18:40 003D
-rw------- 1 postgres postgres 262144 Jun 28 19:06 003E
-rw------- 1 postgres postgres 262144 Jun 30 09:10 003F
-rw------- 1 postgres postgres 262144 Jul 2 23:40 0040
-rw------- 1 postgres postgres 262144 Jul 4 17:03 0041
-rw------- 1 postgres postgres 262144 Jul 6 03:30 0042
-rw------- 1 postgres postgres 262144 Jul 7 23:06 0043
-rw------- 1 postgres postgres 262144 Jul 9 12:53 0044
-rw------- 1 postgres postgres 262144 Jul 12 19:13 0045
-rw------- 1 postgres postgres 262144 Jul 17 08:06 0046
-rw------- 1 postgres postgres 262144 Jul 21 07:56 0047
-rw------- 1 postgres postgres 262144 Jul 26 22:06 0048
-rw------- 1 postgres postgres 262144 Aug 2 02:10 0049
-rw------- 1 postgres postgres 262144 Aug 5 21:50 004A
-rw------- 1 postgres postgres 262144 Aug 7 11:20 004B
-rw------- 1 postgres postgres 262144 Aug 10 04:58 004C
-rw------- 1 postgres postgres 262144 Aug 11 15:41 004D
-rw------- 1 postgres postgres 262144 Aug 12 22:31 004E
-rw------- 1 postgres postgres 262144 Aug 14 19:31 004F
-rw------- 1 postgres postgres 262144 Aug 23 15:11 0050
-rw------- 1 postgres postgres 262144 Sep 7 16:01 0051
-rw------- 1 postgres postgres 262144 Sep 18 03:41 0052
-rw------- 1 postgres postgres 262144 Sep 23 01:27 0053
-rw------- 1 postgres postgres 40960 Sep 23 23:22 0054

Carol

On Sep 23, 2008, at 11:05 PM, Scott Marlowe wrote:

> On Tue, Sep 23, 2008 at 7:59 PM, Walter, Carol Williams
> <walterc(at)indiana(dot)edu> wrote:
>> Hi, Tena,
>>
>> I actually downloaded 8.3.3 last week. The upgrade has been in
>> the plan. I
>> didn't realize the latest was 8.3.4. I am concerned though. We
>> have so many
>> dependencies that they never go smoothly.
>
> The real priority is getting the latest bug fix release of 8.2
> installed. that only takes a few minutes and is quite easy, just
> update the package for pgsql.
>
> Going to 8.3 requires dump restore and testing your app for
> compatibility, something you don't have to do going to 8.2.10 or
> whatever version is the latest 8.2 release.


From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>
Cc: "Carol Walter" <walterc(at)indiana(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 04:21:56
Message-ID: dcc563d10809232121y47e27c32u94effea9809e494d@mail.gmail.com
Lists: pgsql-admin

On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu> wrote:
> Hi Carol,
>
> I detect in you some apprehension as to pg_dumpall
> won't run or complete. Why is that? Have you already
> done it and it didn't work? If that's not the case,
> why not run pg_dumpall at a quiet hour and see?
>
> I think Scott is right as to install the latest
> 8.2 on top. It won't be time consuming task.
> Why not give it a wheel? It would be good to
> find out one way or the other.
>
>
> Scott: Are files 0000 through 002F (which are
> not there) absolutely necessary for recovering data?

Most likely not. If the db won't start up without them, it might be
possible to create new clog files that are nothing but zeroes. I've
never been in this position myself, though...


From: "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: "Carol Walter" <walterc(at)indiana(dot)edu>, <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Missing pg_clog files
Date: 2008-09-24 04:36:26
Message-ID: FE44E0D7EAD2ED4BB2165071DB8E328C0378F514@egcrc-ex01.egcrc.org
Lists: pgsql-admin

Hi,

As a clueless, optimistic traveller, I'd interpret
what Scott's saying as, "maybe slightly less to
worry about."

Regards,

Tena Sakai
tsakai(at)gallo(dot)ucsf(dot)edu

-----Original Message-----
From: Scott Marlowe [mailto:scott(dot)marlowe(at)gmail(dot)com]
Sent: Tue 9/23/2008 9:21 PM
To: Tena Sakai
Cc: Carol Walter; pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] Missing pg_clog files

On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu> wrote:
> Hi Carol,
>
> I detect in you some apprehension as to pg_dumpall
> won't run or complete. Why is that? Have you already
> done it and it didn't work? If that's not the case,
> why not run pg_dumpall at a quiet hour and see?
>
> I think Scott is right as to install the latest
> 8.2 on top. It won't be time consuming task.
> Why not give it a wheel? It would be good to
> find out one way or the other.
>
>
> Scott: Are files 0000 through 002F (which are
> not there) absolutely necessary for recovering data?

Most likely not. If the db won't start up without them, it might be
possible to create new clog files that are nothing but zeroes. Never
been in this position though...


From: "Walter, Carol Williams" <walterc(at)indiana(dot)edu>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 11:13:08
Message-ID: 20080924071308.8lficujqsck8w44w@webmail.iu.edu
Lists: pgsql-admin

Hello,

I will try this. Is there a particular size these files need to be?
Then I can try the pg_dumpall. That's, of course, my main concern: I
need to get the data out.

By the way, before I went into panic mode, I tried pg_dumpall and
reindexing the database. Everything gets the same error, perhaps with
a different file name.

Carol

Quoting Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:

> On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu> wrote:
>> Hi Carol,
>>
>> I detect in you some apprehension as to pg_dumpall
>> won't run or complete. Why is that? Have you already
>> done it and it didn't work? If that's not the case,
>> why not run pg_dumpall at a quiet hour and see?
>>
>> I think Scott is right as to install the latest
>> 8.2 on top. It won't be time consuming task.
>> Why not give it a wheel? It would be good to
>> find out one way or the other.
>>
>>
>> Scott: Are files 0000 through 002F (which are
>> not there) absolutely necessary for recovering data?
>
> Most likely not. If the db won't start up without them, it might be
> possible to create new clog files that are nothing but zeroes. Never
> been in this position though...
>


From: Carol Walter <walterc(at)indiana(dot)edu>
To: "Walter, Carol Williams" <walterc(at)indiana(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 12:42:24
Message-ID: 052D4170-A065-4DDB-9F9D-6318E42C23BA@indiana.edu
Lists: pgsql-admin

Hello,

I tried creating the files 0000 through 002F. Pg_dump still will not
run. The error was as follows:

-bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: could not access status
of transaction 20080015
DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
Error 0.
pg_dump: The command was: COPY ebizd.products_categories
(category_id, product_id) TO stdout;

Carol

On Sep 24, 2008, at 7:13 AM, Walter, Carol Williams wrote:

> Hello,
>
> I will try this. Is there a particular size these files need to
> be? Then I can try the pg_dumpall. That's, of course, my main
> concern. I need to get the data out.
>
> By the way, before I went to panic mode, I tried pg_dumpall and
> reindexing the database. Everything gets the same error, perhaps
> with a different file name.
>
> Carol
>
>
> Quoting Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
>
>> On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai
>> <tsakai(at)gallo(dot)ucsf(dot)edu> wrote:
>>> Hi Carol,
>>>
>>> I detect in you some apprehension as to pg_dumpall
>>> won't run or complete. Why is that? Have you already
>>> done it and it didn't work? If that's not the case,
>>> why not run pg_dumpall at a quiet hour and see?
>>>
>>> I think Scott is right as to install the latest
>>> 8.2 on top. It won't be time consuming task.
>>> Why not give it a wheel? It would be good to
>>> find out one way or the other.
>>>
>>>
>>> Scott: Are files 0000 through 002F (which are
>>> not there) absolutely necessary for recovering data?
>>
>> Most likely not. If the db won't start up without them, it might be
>> possible to create new clog files that are nothing but zeroes. Never
>> been in this position though...
>>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Carol Walter <walterc(at)indiana(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 13:10:21
Message-ID: 29313.1222261821@sss.pgh.pa.us
Lists: pgsql-admin

Carol Walter <walterc(at)indiana(dot)edu> writes:
> I tried creating the files 0000 through 002F. Pg_dump still will not
> run. The error was as follows:

> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: could not access status
> of transaction 20080015
> DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
> Error 0.
> pg_dump: The command was: COPY ebizd.products_categories
> (category_id, product_id) TO stdout;

You need to make the files the right size (256K of zeroes).
A suitable "dd" from /dev/zero will accomplish this on modern
Unixen (ie, anything that has /dev/zero).
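
For instance, something along these lines (a sketch only; adjust
PGDATA to your own data directory and run it as the postgres user)
recreates the missing segments at the right size:

  cd $PGDATA/pg_clog
  i=0
  while [ $i -lt 48 ]; do      # 0x2F = 47, so 48 files: 0000 .. 002F
    f=`printf '%04X' $i`
    dd if=/dev/zero of=$f bs=256k count=1
    i=`expr $i + 1`
  done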

Note that this is by no means a fix, it simply allows pg_dump to
complete. What you are really doing by filling those files with
zeroes is saying "assume all these old transactions aborted".
You *will* have data loss. It will only affect rows that haven't
been accessed in a very long time (since at least June, looks like)
but gone is gone.

Another possibility that might be better is to fill the files with
0x55, though this is harder since /dev/zero won't help. That would
force all the old transactions to be considered committed rather than
aborted. This isn't really better from a consistency standpoint, but
if you feel that most of your data-altering commands succeed then
this might give you a closer approximation to the state you want.
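
Since there is no /dev/0x55 to copy from, one way to get such a file
is to run the zeroes through tr, for example (same caveats as above;
the file name is just an example):

  dd if=/dev/zero bs=256k count=1 | tr '\000' '\125' > 0000

(0x55 is octal 125, i.e. the ASCII letter 'U'.)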

The whole thing is pretty troubling because 8.2.x is supposed to
contain defenses against this type of problem. Could we see
the contents of "select datname, datfrozenxid from pg_database"?
Also, have you tried dumping individual databases instead of
pg_dumpall? (It would be good to experiment with that before
you start making bogus pg_clog files; once you do that there's
no going back in terms of recovering the true state of your data.)

regards, tom lane

PS: Since you've evidently got a lot of rows that haven't been
accessed in months, I conclude that you have not been running
routine backups. Tut tut. I trust you'll remedy that oversight
as soon as you get out of the immediate problem.


From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 13:41:28
Message-ID: F6EE2346-B706-43CC-947D-B3E27F34B7E4@indiana.edu
Lists: pgsql-admin

Yes, as an experienced dba, there is no excuse for not having
routine backups. I do them for the other instances but not this
one. It's a student instance and contains student databases. The
students are required to do their own backups. The problem is, the
database got used for some non-student data. I should have
anticipated that since the resource is there, someone would use
it. =) There are three small databases that I need.

I'm going to create the files again with "0x55". I need 256K of
hex characters?

I did try to dump individual databases. Since I need these three,
that was the first thing I did.

If I upgrade to the latest version of 8.2, do I still have to do
the dump and restore? That may be a dumb question, but I've seen
some emails that seem to say that I don't.

Here are the results of the query you asked for.

datname | datfrozenxid
------------------+--------------
postgres | 524
aapike | 524
acmweb | 524
aeprice | 524
sgadkari | 524
template0 | 524
ahanders | 524
ajkurtz | 524
akeebaug | 524
aloehrle | 524
amfalcon | 524
amihnen | 524
amlmonro | 524
andmcilw | 524
arhoda | 524
arsteven | 524
asist | 524
askschoo | 524
atawfik | 524
awead | 524
barb | 524
benpeck | 524
bepnelso | 524
berry3 | 524
bflesher | 524
biblio | 524
bjc2 | 524
blbeals | 524
blfay | 524
brichwin | 524
brog | 524
burtont | 524
cartermt | 524
cdwhitlo | 524
cgoodbee | 524
chbishop | 524
clschwie | 524
cmdablog | 524
cmfriend | 524
cwestbro | 524
daltenho | 524
datnguye | 524
davisjs | 524
dlafemin | 524
dlgriggs | 524
dotsonm | 524
dpierz | 524
dsa | 524
dtdo | 524
wke | 524
l548s07c | 524
jm | 524
dbicknel | 524
dwray | 524
eaodonne | 524
jeejacks | 524
edewert | 524
eeich | 524
efhardy | 524
ellwrigh | 524
emerya | 524
emlcoope | 524
emudave | 524
eschramm | 524
bkjacob | 524
jkulneva | 524
kuepeter | 524
ys3 | 524
cepynes | 524
flashb | 524
fullera | 524
gabwong | 524
hbusch | 524
hcapocci | 524
hiteaw | 524
hjtolber | 524
ingschne | 524
iplanton | 524
jajcdb | 524
jfieber | 524
jiwan | 524
jku | 524
josreyes | 524
jowarren | 524
jplong | 524
jschuenz | 524
jtweedy | 524
kacates | 524
karpaden | 524
kbivcsi | 524
kcentann | 524
kcfreder | 524
kcostin | 524
hrosenba | 524
stjmarsh | 524
rvarick | 524
prasadm | 524
kdlib | 524
khenrich | 524
kiyang | 524
kmane | 524
kmauer | 524
knbayles | 524
knoubani | 524
kseki | 524
l546f06a | 524
l548s06a | 524
lair_medinfer | 524
lbikoff | 524
lee55 | 524
leemchri | 524
jacksonj | 524
ageorges | 524
austroud | 524
bmoriari | 524
broos | 524
ceich | 524
edawidow | 524
ljlangnet | 524
ljohnsto | 524
lkaiser2 | 524
lkhooper | 524
lmolefi | 524
ltian | 524
lucas_dictionary | 524
lucas_genedb | 524
lucas_proteindb | 524
macci | 524
magpeter | 524
epoirier | 524
hnethert | 524
jgaley | 524
jtwelty | 524
jwalrath | 524
mamablogs | 524
mapfinder | 524
markane | 524
mcglass | 524
meho | 524
mfr | 524
mmsommer | 524
mnapier | 524
moore35 | 524
morrisjm | 524
mosse | 524
msohl | 524
mtl554 | 524
nachase | 524
ngarrett | 524
nirobins | 524
nlgeorge | 524
nsfitwf | 524
jwoomer | 524
kekbia | 524
koulikom | 524
ksd | 524
lsisler | 524
mwourms | 524
nucleus | 524
omthomas | 524
naalsham | 524
nansuwan | 524
nfcapps | 524
nwahrman | 524
oescue | 524
plpierso | 524
ppatil | 524
psbright | 524
oncosifter | 524
otdelong | 524
paolillo | 524
penwang | 524
perezh | 524
phppgadmin | 524
places | 524
pldillon | 524
prodes | 524
pwelsch | 524
qadrupal | 524
rduhon | 524
rdwillis | 524
repotter | 524
rgao | 524
rkcsi | 524
rklusman | 524
rmukkama | 524
rosea | 524
rosenbsj | 524
rpherwan | 524
rtolnay | 524
sagoodwi | 524
sakram | 524
sambre | 524
scott6 | 524
sestumpf | 524
sghurd | 524
shawd | 524
sjt | 524
sjunk | 524
skashwan | 524
skonkiel | 524
slisprot | 524
slsingle | 524
slspangl | 524
smercure | 524
sp23 | 524
spencers | 524
sprao | 524
spraocal | 524
spraoit | 524
stritt | 524
switzers | 524
tbjacobs | 524
rbrubach | 524
saaalshe | 524
template1 | 524
tigan | 524
tlcamero | 524
tlennis | 524
tlmiles | 524
tneirync | 524
trec | 524
tvdwyer | 524
upriss | 524
l548s07b | 524
videob | 524
vkluehrs | 524
wemigh | 524
wsams | 524
xyao | 524
yasun | 524
yufu | 524
yuwang2 | 524
yz12 | 524
rdurrer | 524
rbain | 524
jgottwig | 524
gallantm | 524
ajwei | 524
rpvander | 524
l548s07a | 524
sbluemle | 524
sstrahl | 524
stevecox | 524
vcsingh | 524
huangb | 524
mpraskav | 524
lvanleer | 524
mmillard | 524
linshedd | 524
mgunkel | 524
aeathava | 524
rbiars | 524
krblackw | 524
boltonb | 524
jcornn | 524
cdethlof | 524
reells | 524
lorhardi | 524
thommey | 524
ckhull | 524
bjules | 524
lklake | 524
rootk | 524
whmcmill | 524
eoverhau | 524
mrome | 524
as37 | 524
krlthoma | 524
jltyner | 524
mavest | 524
lcwelhan | 524
awismer | 524
confluence | 524
jawalsh | 524
hshewale | 524
polavara | 524
s517f07a | 524
ebiz | 524
lalfi | 524
vcob | 524
s602s07f | 524
yangfund | 524
tdbowman | 524
ofabilol | 524
s517s08a | 524
slis_assets | 524
clhoneyc | 524
bzflag | 524
caroltest | 524
citesrch | 524
vgangal | 524
skhowaji | 524
ofeda | 524
jatterbu | 524
s517s08b | 524
emakki | 524
test | 524
dingying | 524
walterc | 524
msinghi | 524
(301 rows)

Thank you for all your help.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc(at)indiana(dot)edu> writes:
>> I tried creating the files 0000 through 002F. Pg_dump still will not
>> run. The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: could not access status
>> of transaction 20080015
>> DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete. What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss. It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help. That would
> force all the old transactions to be considered committed rather than
> aborted. This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem. Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall? (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
>
> regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups. Tut tut. I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.


From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 14:31:05
Message-ID: FE8FD34C-4516-425A-B7AD-1917F134320F@indiana.edu
Lists: pgsql-admin

Hi, Tom,

Are the files that contain the hex characters supposed to contain a
single string and no control characters? I used Excel to create the
files, but I'll have to edit them when I get them to the Solaris box
to take out any bad characters.

I'm also wondering whether, after I create the dummy files and
pg_dump works, I could restore an old pg_dumpall file and then
insert any missing data from the new pg_dumps.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc(at)indiana(dot)edu> writes:
>> I tried creating the files 0000 through 002F. Pg_dump still will not
>> run. The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: could not access status
>> of transaction 20080015
>> DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete. What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss. It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help. That would
> force all the old transactions to be considered committed rather than
> aborted. This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem. Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall? (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
>
> regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups. Tut tut. I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Carol Walter <walterc(at)indiana(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 15:44:22
Message-ID: 4913.1222271062@sss.pgh.pa.us
Lists: pgsql-admin

Carol Walter <walterc(at)indiana(dot)edu> writes:
> Are the files that contain the hex characters supposed to contain a
> single string and no control characters?

Yes, you want 256K occurrences of the byte value 0x55 and nothing else.
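
Rather than fighting Excel and stray control characters, it is
probably easier to generate the file directly on the Solaris box,
e.g. with a perl one-liner (the file name is just an example):

  perl -e 'print "U" x (256*1024)' > 0013

which writes exactly 262144 'U' (0x55) bytes and nothing else.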

> I'm also wondering if, after I create the dummy files, and pg_dump
> works, I could restore an old pg_dumpall file and then insert any
> data that aren't there from the pg_dumps.

It'd be a good idea to do as much cross-checking as you can, since
it's highly probable that the dumped data will be at least partly wrong.

No, I'm afraid updating to 8.2.latest won't get you out of this.
It might possibly prevent a recurrence.

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Carol Walter <walterc(at)indiana(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 15:53:37
Message-ID: 5071.1222271617@sss.pgh.pa.us
Lists: pgsql-admin

Carol Walter <walterc(at)indiana(dot)edu> writes:
> Here is the results of the query you asked for.

> datname | datfrozenxid
> ------------------+--------------
> postgres | 524
> ...

Huh. Postgres should never have removed any of those files with
datfrozenxid this low. So it seems we have two possible explanations:
* there's something subtly wrong with the TruncateCLOG logic
* the files were removed by something else, ie, human error or
operating-system failure

regards, tom lane


From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 18:09:42
Message-ID: BEF66DB1-CD86-413E-9794-3A87FE1C092C@indiana.edu
Lists: pgsql-admin

To use the hex value 0x55, do I need to enclose it in single quotes?

Carol

On Sep 24, 2008, at 11:44 AM, Tom Lane wrote:

> Carol Walter <walterc(at)indiana(dot)edu> writes:
>> Are the files that contain the hex characters supposed to contain a
>> single string and no control characters?
>
> Yes, you want 256K occurrences of the byte value 0x55 and nothing
> else.
>
>> I'm also wondering if, after I create the dummy files, and pg_dump
>> works, I could restore an old pg_dumpall file and then insert any
>> data that aren't there from the pg_dumps.
>
> It'd be a good idea to do as much cross-checking as you can, since
> it's highly probable that the dumped data will be at least partly
> wrong.
>
> No, I'm afraid updating to 8.2.latest won't get you out of this.
> It might possibly prevent a recurrence.
>
> regards, tom lane
>


From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Carol Walter" <walterc(at)indiana(dot)edu>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Tena Sakai" <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 20:19:28
Message-ID: dcc563d10809241319r12d27397s6f241019e7f34d93@mail.gmail.com
Lists: pgsql-admin

No, the file needs to be nothing but hex value 55 throughout. The
attachment I sent earlier is just that:

smarlowe(at)sawyer:~$ hexdump 55
0000000 5555 5555 5555 5555 5555 5555 5555 5555
*
0040000

i.e. all 55s, all the time; not the ASCII digits 5 and 5, but the hex
value. The actual file is the ASCII letter U over and over:

smarlowe(at)sawyer:~$ head -c 100 55
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU

and so on.
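
If hexdump isn't installed on the Solaris box, od does the same
check (the path is just an example):

  od -c pg_clog/0013 | head
  ls -l pg_clog/0013     # should be exactly 262144 bytes

A correct file shows nothing but the letter U in the od output.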

On Wed, Sep 24, 2008 at 12:09 PM, Carol Walter <walterc(at)indiana(dot)edu> wrote:
> To use the hex value 0x55, do I need to enclose it in single quotes?
>
> Carol
>
> On Sep 24, 2008, at 11:44 AM, Tom Lane wrote:
>
>> Carol Walter <walterc(at)indiana(dot)edu> writes:
>>>
>>> Are the files that contain the hex characters supposed to contain a
>>> single string and no control characters?
>>
>> Yes, you want 256K occurrences of the byte value 0x55 and nothing else.
>>
>>> I'm also wondering if, after I create the dummy files, and pg_dump
>>> works, I could restore an old pg_dumpall file and then insert any
>>> data that aren't there from the pg_dumps.
>>
>> It'd be a good idea to do as much cross-checking as you can, since
>> it's highly probable that the dumped data will be at least partly wrong.
>>
>> No, I'm afraid updating to 8.2.latest won't get you out of this.
>> It might possibly prevent a recurrence.
>>
>> regards, tom lane
>>