Re: database corruption help

From: "John Lister" <john(dot)lister-ps(at)kickstone(dot)com>
To: <pgsql-admin(at)postgresql(dot)org>
Subject: database corruption help
Date: 2009-02-09 16:37:11
Message-ID: 507152DDF8AB4DF093BD01A8BEC7C659@squarepi.com
Lists: pgsql-admin pgsql-hackers

Hi, my WAL archiving broke and PostgreSQL filled up the local disk with transaction logs, which I foolishly deleted in a moment of madness. After resetting the transaction log, a few of my tables are damaged but repairable. However, the system tables also seem to have suffered. My main problem is duplicate rows which violate the primary key, and if I do a REINDEX SYSTEM I get similar errors, for example:

ERROR: could not create unique index "pg_class_oid_index"

A quick inspection of the pg_class table doesn't show any duplicates. Is there any way I can find out which row(s) are duplicated and remove them without a full db restore?

Also, doing something like this doesn't return anything, although it worked for my other tables:

select oid from pg_class where oid in(select oid from pg_class group by oid having count(oid)>1 )

Many thanks
--

Got needs? Get Goblin'! - http://www.pricegoblin.co.uk/


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "John Lister" <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-09 17:31:33
Message-ID: 17205.1234200693@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

"John Lister" <john(dot)lister-ps(at)kickstone(dot)com> writes:
> ERROR: could not create unique index "pg_class_oid_index"

> a quick inspection of the pg_class table doesn't show any duplicates, is there anyway i can find out which row(s) are duplicated and remove them without a full db restore?

> also doing something like this doesn't return anything which worked for my other tables

> select oid from pg_class where oid in(select oid from pg_class group by oid having count(oid)>1 )

Try doing it with enable_indexscan = off and enable_bitmapscan = off.
The system might be relying on the old, bogus index to do the group by
oid.
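
For example, something along these lines (just a sketch; the extra ctid column is only there to show each duplicate's physical location so you can pick which copy to delete):

set enable_indexscan = off;
set enable_bitmapscan = off;
select oid, ctid from pg_class
where oid in (select oid from pg_class group by oid having count(*) > 1);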

regards, tom lane


From: "John Lister" <john(dot)lister-ps(at)kickstone(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-admin(at)postgresql(dot)org>
Subject: Re: database corruption help
Date: 2009-02-09 19:20:40
Message-ID: AB15E89CD7FB4EBD9D76CC5F8D8B3D4E@squarepi.com
Lists: pgsql-admin pgsql-hackers

Cheers Tom, that did it - I've removed the duplicates and am seeing what else
is broken.

> "John Lister" <john(dot)lister-ps(at)kickstone(dot)com> writes:
>> ERROR: could not create unique index "pg_class_oid_index"
>
>> a quick inspection of the pg_class table doesn't show any duplicates, is
>> there anyway i can find out which row(s) are duplicated and remove them
>> without a full db restore?
>
>> also doing something like this doesn't return anything which worked for
>> my other tables
>
>> select oid from pg_class where oid in(select oid from pg_class group by
>> oid having count(oid)>1 )
>
> Try doing it with enable_indexscan = off and enable_bitmapscan = off.
> The system might be relying on the old, bogus index to do the group by
> oid.
>
> regards, tom lane
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister(at)kickstone(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-09 19:34:04
Message-ID: 20151.1234208044@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister(at)kickstone(dot)com> writes:
> Although saying that..
> reindex now works, but doing a vacuum verbose complains that the index
> is out of step with the table and i should reindex..
> would i be better shutting the db down, restarting in standalone mode
> (and also using the -P option) before reindexing?

Yeah, if you have reason not to trust the system indexes then -P is
a good idea until they are fixed. Standalone mode per se isn't that
important --- you could do this from a regular session with -P specified
via PGOPTIONS.

regards, tom lane


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-09 20:09:22
Message-ID: 49908D72.7040904@kickstone.com
Lists: pgsql-admin pgsql-hackers


>Yeah, if you have reason not to trust the system indexes then -P is
>a good idea until they are fixed. Standalone mode per se isn't that
>important --- you could do this from a regular session with -P specified
>via PGOPTIONS.

Still getting the same problem.

> PGOPTIONS="-P"
> psql backend
>> reindex system backend;
>> NOTICE: table "pg_class" was reindexed
>> NOTICE: table "pg_statistic" was reindexed
>> NOTICE: table "pg_ts_config" was reindexed ...

>> vacuum full verbose pg_class

>> INFO: vacuuming "pg_catalog.pg_class"
>> INFO: "pg_class": found 7 removable, 1839 nonremovable row versions
in 18812 pages
>> DETAIL: 27 dead row versions cannot be removed yet.
>> Nonremovable row versions range from 160 to 229 bytes long.
>> There were 929452 unused item pointers.
>> Total free space (including removable row versions) is 149628172 bytes.
>> 18749 pages are or will become empty, including 0 at the end of the
table.
>> 0 pages containing 0 free bytes are potential move destinations.
>> CPU 0.01s/0.07u sec elapsed 0.17 sec.
>> INFO: index "pg_class_oid_index" now contains 1812 row versions in
7 pages
>> DETAIL: 0 index row versions were removed.
>> 0 index pages have been deleted, 0 are currently reusable.
>> CPU 0.00s/0.01u sec elapsed 0.09 sec.
>> WARNING: index "pg_class_oid_index" contains 1812 row versions, but
table contains 1807 row versions
>> HINT: Rebuild the index with REINDEX.
>> INFO: index "pg_class_relname_nsp_index" now contains 1812 row
versions in 23 pages
>> DETAIL: 0 index row versions were removed.
>> 0 index pages have been deleted, 0 are currently reusable.
>> CPU 0.00s/0.01u sec elapsed 0.09 sec.
>> WARNING: index "pg_class_relname_nsp_index" contains 1812 row
versions, but table contains 1807 row versions
>> HINT: Rebuild the index with REINDEX.

Just noticed the 7 dead rows above - are they likely to be causing a problem?
BTW this is PostgreSQL 8.3.5, if that makes a difference...

I'm guessing it is fairly important to get rid of these warnings....

Thanks


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-10 00:01:30
Message-ID: 25753.1234224090@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
> still getting the same problem.

>>> PGOPTIONS="-P"
>>> psql backend

I think you need export PGOPTIONS="-P" to make that work. Whether
it's related to your problem isn't clear though.
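
For instance (a sketch, bash-style shell):

export PGOPTIONS="-P"   # exported, so psql passes -P down to the backend it starts
psql backend            # then run the REINDEX SYSTEM from this session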

regards, tom lane


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-10 07:58:03
Message-ID: 4991338B.704@kickstone.com
Lists: pgsql-admin pgsql-hackers

Sorry, I had exported it - bad cut/pasting...
I even tried it in single-user mode, passing -P directly, but got the same
result.
I'm also confused/concerned by the 7 dead rows in pg_class. As I've
restarted the server, all transactions should have finished, so I'd like to
reclaim these - especially as vacuum reports the table is using 80 MB+.

Thanks

>> still getting the same problem.
>>
>
>
>>>> PGOPTIONS="-P"
>>>> psql backend
>>>>
>
> I think you need export PGOPTIONS="-P" to make that work. Whether
> it's related to your problem isn't clear though.
>
> regards, tom lane
>


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-12 21:11:07
Message-ID: 4994906B.6090408@kickstone.com
Lists: pgsql-admin pgsql-hackers

I seem to have more dead rows now..

Doing a vacuum full on pg_class gives me:

INFO: vacuuming "pg_catalog.pg_class"INFO: "pg_class": found 37
removable, 1845 nonremovable row versions in 18905 pages
DETAIL: 27 dead row versions cannot be removed yet.
Nonremovable row versions range from 160 to 229 bytes long.
There were 933834 unused item pointers.
Total free space (including removable row versions) is 150368692 bytes.
18839 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.01s/0.05u sec elapsed 0.17 sec.INFO: index "pg_class_oid_index"
now contains 1813 row versions in 7 pages
DETAIL: 56 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index
"pg_class_relname_nsp_index" now contains 1818 row versions in 24 pages
DETAIL: 63 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.WARNING: index
"pg_class_relname_nsp_index" contains 1818 row versions, but table
contains 1813 row versions
HINT: Rebuild the index with REINDEX.INFO: analyzing
"pg_catalog.pg_class"INFO: "pg_class": scanned 3000 of 18905 pages,
containing 227 live rows and 6 dead rows; 227 rows in sample, 1430
estimated total rows
Total query runtime: 4469 ms.

I've restarted the server, which should have got rid of any
outstanding/long-running transactions - which I thought were the major
cause of lingering dead rows.

Any ideas how to fix this? I'd like to shrink the table from 150+ MB, which
can't be good for performance....

Thanks


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-12 21:39:44
Message-ID: 17204.1234474784@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
> I seem to have more dead rows now..
> doing a vacuum full on pg_class gives me

> INFO: vacuuming "pg_catalog.pg_class"INFO: "pg_class": found 37
> removable, 1845 nonremovable row versions in 18905 pages
> DETAIL: 27 dead row versions cannot be removed yet.
> Nonremovable row versions range from 160 to 229 bytes long.
> There were 933834 unused item pointers.
> Total free space (including removable row versions) is 150368692 bytes.
> 18839 pages are or will become empty, including 0 at the end of the table.
> 0 pages containing 0 free bytes are potential move destinations.

Hmm, that last seems to indicate that do_shrinking has been reset.
It looks like the cases where that happens are reported with elog(LOG)
which means they probably only went to the postmaster log (maybe we
should fix things so that vacuum verbose reports those). What have
you got in the postmaster log?

regards, tom lane


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: database corruption help
Date: 2009-02-12 21:45:35
Message-ID: 4994987F.1030204@kickstone.com
Lists: pgsql-admin pgsql-hackers

I'm running Ubuntu and can see references to only one log file, which I'm
assuming is the postmaster log.

Anyway the only relevant bits are:

GMT LOG: relation "pg_class" TID 15538/4: dead HOT-updated tuple ---
cannot shrink relation
2009-02-12 21:06:40 GMT STATEMENT: VACUUM FULL VERBOSE ANALYZE pg_class
2009-02-12 21:06:40 GMT WARNING: index "pg_class_relname_nsp_index"
contains 1818 row versions, but table contains 1813 row versions

If that helps...

Thanks
> John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
>
>> I seem to have more dead rows now..
>> doing a vacuum full on pg_class gives me
>>
>
>
>> INFO: vacuuming "pg_catalog.pg_class"INFO: "pg_class": found 37
>> removable, 1845 nonremovable row versions in 18905 pages
>> DETAIL: 27 dead row versions cannot be removed yet.
>> Nonremovable row versions range from 160 to 229 bytes long.
>> There were 933834 unused item pointers.
>> Total free space (including removable row versions) is 150368692 bytes.
>> 18839 pages are or will become empty, including 0 at the end of the table.
>> 0 pages containing 0 free bytes are potential move destinations.
>>
>
> Hmm, that last seems to indicate that do_shrinking has been reset.
> It looks like the cases where that happens are reported with elog(LOG)
> which means they probably only went to the postmaster log (maybe we
> should fix things so that vacuum verbose reports those). What have
> you got in the postmaster log?
>
> regards, tom lane
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [ADMIN] database corruption help
Date: 2009-02-12 22:13:03
Message-ID: 17764.1234476783@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
> GMT LOG: relation "pg_class" TID 15538/4: dead HOT-updated tuple ---
> cannot shrink relation

Hmm. The comments in vacuum.c about this case suppose that it could
only be a transient condition, ie the tuple became dead while we were
looking at the page. Evidently it's persistent for you, which means
that for some reason heap_page_prune() is failing to remove an
already-dead tuple. I suspect this implies a corrupt HOT chain, but
maybe the data is okay and what you've got is really a bug in
heap_page_prune.

Could you send a dump of page 15538 of pg_class, using pg_filedump?
The source code for it is here:
http://sources.redhat.com/rhdb/utilities.html
Best would be -i -f format, ie
pg_filedump -i -f -R 15538 $PGDATA/base/something/1259

regards, tom lane


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Database corruption help
Date: 2009-02-13 08:20:03
Message-ID: 49952D33.3000707@kickstone.com
Lists: pgsql-admin pgsql-hackers

Originally in pgsql-admin, but copied here at the request of Tom.

Story so far: transaction log archiving went wrong, causing the
transaction log disk to fill up. Foolishly, I deleted the unarchived
transaction logs (early Monday morning), which required a pg_resetxlog to
get the db up and running again. Since then we've had some minor db
corruption, which has been fixed (mainly duplicate primary keys) except
for the pg_class table.

If I do a vacuum full on pg_class I get something like this:

INFO: vacuuming "pg_catalog.pg_class"INFO: "pg_class": found 37
removable, 1845 nonremovable row versions in 18905 pages
DETAIL: 27 dead row versions cannot be removed yet.
Nonremovable row versions range from 160 to 229 bytes long.
There were 933834 unused item pointers.
Total free space (including removable row versions) is 150368692 bytes.
18839 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.01s/0.05u sec elapsed 0.17 sec.INFO: index "pg_class_oid_index"
now contains 1813 row versions in 7 pages
DETAIL: 56 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index
"pg_class_relname_nsp_index" now contains 1818 row versions in 24 pages
DETAIL: 63 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.WARNING: index
"pg_class_relname_nsp_index" contains 1818 row versions, but table
contains 1813 row versions
HINT: Rebuild the index with REINDEX.INFO: analyzing
"pg_catalog.pg_class"INFO: "pg_class": scanned 3000 of 18905 pages,
containing 227 live rows and 6 dead rows; 227 rows in sample, 1430
estimated total rows
Total query runtime: 4469 ms.

As you can see there are non-removable dead rows (whose number slowly grows)
and the table is also increasing in size. A reindex on the indexes
mentioned also succeeds, but another vacuum reports the same thing...

In the log files the relevant bits are:

GMT LOG: relation "pg_class" TID 15538/4: dead HOT-updated tuple ---
cannot shrink relation
2009-02-12 21:06:40 GMT STATEMENT: VACUUM FULL VERBOSE ANALYZE pg_class
2009-02-12 21:06:40 GMT WARNING: index "pg_class_relname_nsp_index"
contains 1818 row versions, but table contains 1813 row versions

And finally, the last message in the pgsql-admin thread suggested dumping
the above page, which is attached below.

>Hmm. The comments in vacuum.c about this case suppose that it could
>only be a transient condition, ie the tuple became dead while we were
>looking at the page. Evidently it's persistent for you, which means
>that for some reason heap_page_prune() is failing to remove an
>already-dead tuple. I suspect this implies a corrupt HOT chain, but
>maybe the data is okay and what you've got is really a bug in
>heap_page_prune.

>Could you send a dump of page 15538 of pg_class, using pg_filedump?
>The source code for it is here:
>http://sources.redhat.com/rhdb/utilities.html
>Best would be -i -f format, ie
> pg_filedump -i -f -R 15538 $PGDATA/base/something/1259
>
> regards, tom lane

Any help would be appreciated, as the pg_class table is constantly
growing, which I'm guessing is going to start to affect performance
fairly soon. I'd like to avoid a full restore from backup if possible.

Thanks

JOHN

Attachment Content-Type Size
dump.txt text/plain 18.6 KB

From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-13 09:17:39
Message-ID: 49953AB3.50304@kickstone.com
Lists: pgsql-admin pgsql-hackers

>Please send to pgsql-hackers --- I'd like to get more eyeballs on this.
>There's no personally identifiable information here except that you've
>got a table named temp_queue that you've repeatedly TRUNCATEd or
>CLUSTERed or some such (likely the former since the reltuples counts
>are all zero). It'd be useful to know exactly what you were doing
>with it, though.
>
> regards, tom lane

More info as requested....

TEMP_QUEUE is used continuously throughout the day (for some reason I
chose not to use a normal temp table): data is grouped and inserted
from other tables before being processed, and you are right, the table
is then truncated before repeating - this happens at least once a minute
throughout the day.
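
Roughly this sort of cycle (the table and column names below are made up for illustration):

insert into temp_queue (product_id, total_qty)
    select product_id, sum(qty) from incoming_data group by product_id;
-- ... the rows are processed ...
truncate temp_queue;   -- repeated at least once a minute

(Presumably it's the new relfilenode each TRUNCATE assigns that leaves all those transient versions of the temp_queue row behind in pg_class.)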

After restarting following the pg_resetxlog, I noticed that the
temp_queue table was missing (it had been created a day ago). I couldn't
select from it and had problems trying to recreate it (bits seemed to
exist already). I couldn't see it in pg_class or the other catalogs, but
after choosing to ignore the presumably corrupt indexes it appeared, so
I manually removed the references (I've since learned about the -P
option to the server, which may have helped here). The corruption in
pg_class manifested itself as multiple rows (for this and another table)
with the same OIDs.

So I'm not sure at what point (or whether) anything got corrupted, and what
my subsequent actions did to compound this.

Thanks

John Lister wrote:
> Originally in psql-admin, but copied here at the request of Tom to..
>
> Story so far, transaction log archiving went wrong causing the
> transaction log disk to fill up. Foolishly i deleted the unarchived
> transaction logs (early monday morning) which required a pg_resetxlog
> to get the db up and running again. Since then we've had some minor db
> corruption which has been fixed (mainly duplicate primary keys) except
> for the pg_class table.
> If i do a vacuum full on pg_class i get something like this:
>
> INFO: vacuuming "pg_catalog.pg_class"INFO: "pg_class": found 37
> removable, 1845 nonremovable row versions in 18905 pages
> DETAIL: 27 dead row versions cannot be removed yet.
> Nonremovable row versions range from 160 to 229 bytes long.
> There were 933834 unused item pointers.
> Total free space (including removable row versions) is 150368692 bytes.
> 18839 pages are or will become empty, including 0 at the end of the
> table.
> 0 pages containing 0 free bytes are potential move destinations.
> CPU 0.01s/0.05u sec elapsed 0.17 sec.INFO: index "pg_class_oid_index"
> now contains 1813 row versions in 7 pages
> DETAIL: 56 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO: index
> "pg_class_relname_nsp_index" now contains 1818 row versions in 24 pages
> DETAIL: 63 index row versions were removed.
> 0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.WARNING: index
> "pg_class_relname_nsp_index" contains 1818 row versions, but table
> contains 1813 row versions
> HINT: Rebuild the index with REINDEX.INFO: analyzing
> "pg_catalog.pg_class"INFO: "pg_class": scanned 3000 of 18905 pages,
> containing 227 live rows and 6 dead rows; 227 rows in sample, 1430
> estimated total rows
> Total query runtime: 4469 ms.
>
> As you can see there are non-removable dead rows (which slowly grows)
> and the table size is also increasing in size.. A reindex on the
> indexes mentions also succeeds but another vacuum reports the same
> thing...
>
> In the log files the relevant bits are:
>
> GMT LOG: relation "pg_class" TID 15538/4: dead HOT-updated tuple ---
> cannot shrink relation
> 2009-02-12 21:06:40 GMT STATEMENT: VACUUM FULL VERBOSE ANALYZE pg_class
> 2009-02-12 21:06:40 GMT WARNING: index "pg_class_relname_nsp_index"
> contains 1818 row versions, but table contains 1813 row versions
>
>
> and finally the last message in the psql-admin thread suggested
> dumping the above page, which is attached before.
>
>> Hmm. The comments in vacuum.c about this case suppose that it could
>> only be a transient condition, ie the tuple became dead while we were
>> looking at the page. Evidently it's persistent for you, which means
>> that for some reason heap_page_prune() is failing to remove an
>> already-dead tuple. I suspect this implies a corrupt HOT chain, but
>> maybe the data is okay and what you've got is really a bug in
>> heap_page_prune.
>
>> Could you send a dump of page 15538 of pg_class, using pg_filedump?
>> The source code for it is here:
>> http://sources.redhat.com/rhdb/utilities.html
>> Best would be -i -f format, ie
>> pg_filedump -i -f -R 15538 $PGDATA/base/something/1259
>>
>> regards, tom lane
>
>
>
> Any help would be appreciated as the pg_class table is constantly
> growing which i'm guessing is going to start to affect performance
> fairly soon. I'd like to avoid a full restore from backup if possible.
>
> Thanks
>
> JOHN
> ------------------------------------------------------------------------
>
>
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-13 16:19:00
Message-ID: 3161.1234541940@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
> Originally in psql-admin, but copied here at the request of Tom to..

Thanks for forwarding this. The reason I wanted to call it to the
attention of pgsql-hackers is that the page contents seem a bit odd,
and I'm not sure that we should just write it off as "pilot error".
What we've got here is a page full of transient states of the pg_class
row for temp_queue, which as you explained is constantly being
re-TRUNCATEd by your application. So the data contents of each state
of the row vary only in relfilenode, as expected. One would also expect
that all the copies on a particular page of pg_class would form a single
HOT chain (the database is 8.3.5). What we've actually got is a chain
that is broken in two places and lacks a root tuple(!).

How could it have got that way? John's ill-advised removal of the
transaction logs should have resulted only in the page being a lot older
than it should be, not in a logically corrupt page.

The only other corruption mechanism I can think of is that pg_clog might
contain commit bits for some logically inconsistent set of transaction
numbers, due to some pages of pg_clog having made it to disk and others
not. That could result in some of the intermediate tuples in the chain
not being seen as dead --- but that's not what we see here either.

Aside from the "how did this happen" puzzle, the real point of any
investigation of course ought to be whether we can make heap_page_prune
more robust. At the very least it's undesirable to be leaving the page
in a state where VACUUM FULL will decide it can't shrink.

Ideas anyone?

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-13 16:28:28
Message-ID: 3304.1234542508@sss.pgh.pa.us
Lists: pgsql-admin pgsql-hackers

John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
> Any help would be appreciated as the pg_class table is constantly
> growing which i'm guessing is going to start to affect performance
> fairly soon. I'd like to avoid a full restore from backup if possible.

BTW, what I would recommend as a recovery action is to zero out that
page of pg_class while the postmaster is stopped. We know that none
of those rows are useful to you, and there shouldn't be any index
entries pointing at them (since they're all HOT tuples), so at least
in theory that won't cause any damage. Then you can try another
VACUUM FULL and see if there are any more pages with, er, issues.

If you're on a machine that has /dev/zero then something like this
should work:

dd bs=8k count=1 seek=15538 conv=notrunc if=/dev/zero of=$PGDATA/base/16392/1259

but it'd be a good idea to save a copy of the target file so you can try
again if you mess up.
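
For example (same file as in the dd command above):

cp $PGDATA/base/16392/1259 /somewhere/safe/pg_class.1259.backup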

Also, it'd really be prudent to do a dump, initdb, reload once you
get to a point where pg_dump succeeds without complaints. We don't
have any good way to know what other corruption might be lurking
undetected.
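
In outline, something like this (a sketch only; paths and names are placeholders):

pg_dumpall > /backup/everything.sql    # once pg_dump/pg_dumpall run without complaints
pg_ctl stop -D $PGDATA
mv $PGDATA $PGDATA.corrupt             # keep the damaged cluster around, just in case
initdb -D $PGDATA
pg_ctl start -D $PGDATA
psql -f /backup/everything.sql postgres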

regards, tom lane


From: John Lister <john(dot)lister-ps(at)kickstone(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-13 17:57:11
Message-ID: 4995B477.7090805@kickstone.com
Lists: pgsql-admin pgsql-hackers

Cheers, I'll give it a go. I'm probably going to do a full restore over
the weekend while I can shut things down without too many complaints...

I can save any of the files if you are interested in them later on...

JOHN

Tom Lane wrote:
> John Lister <john(dot)lister-ps(at)kickstone(dot)com> writes:
>
>> Any help would be appreciated as the pg_class table is constantly
>> growing which i'm guessing is going to start to affect performance
>> fairly soon. I'd like to avoid a full restore from backup if possible.
>>
>
> BTW, what I would recommend as a recovery action is to zero out that
> page of pg_class while the postmaster is stopped. We know that none
> of those rows are useful to you, and there shouldn't be any index
> entries pointing at them (since they're all HOT tuples), so at least
> in theory that won't cause any damage. Then you can try another
> VACUUM FULL and see if there are any more pages with, er, issues.
>
> If you're on a machine that has /dev/zero then something like this
> should work:
>
> dd bs=8k count=1 seek=15538 conv=notrunc if=/dev/zero of=$PGDATA/base/16392/1259
>
> but it'd be a good idea to save a copy of the target file so you can try
> again if you mess up.
>
> Also, it'd really be prudent to do a dump, initdb, reload once you
> get to a point where pg_dump succeeds without complaints. We don't
> have any good way to know what other corruption might be lurking
> undetected.
>
> regards, tom lane
>


From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: John Lister <john(dot)lister-ps(at)kickstone(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-14 08:26:11
Message-ID: 49968023.8030404@enterprisedb.com
Lists: pgsql-admin pgsql-hackers

Tom Lane wrote:
> Aside from the "how did this happen" puzzle, the real point of any
> investigation of course ought to be whether we can make heap_page_prune
> more robust. At the very least it's undesirable to be leaving the page
> in a state where VACUUM FULL will decide it can't shrink.

I'm as puzzled as you are on how it happened.

The fundamental problem here is that we have HOT-updated tuples that are
missing the root tuple. heap_prune_chain doesn't know how to remove the
dead tuples from such chains, and neither does vacuum. What's worse is
that there's no index pointer to the live tuple in the chain, and even
VACUUM FULL doesn't fix that.

Could we modify heap_page_prune so that it detects such orphaned HOT
tuples, and clears the HOT_UPDATED flag for them? That would at least
let you recover. I'm pretty wary of trying to "fix" things after
corruption in general since you can't be sure what's correct and what's
not, but clearing the HOT_UPDATED flag seems safe and more likely to
help than hurt. Clearing the flag would let the tuple become indexed
again in a VACUUM FULL or REINDEX, so it would fix the inconsistency
that otherwise a sequential scan can see the tuple but an index scan can
not. It would also allow the next heap_page_prune operation to remove
the remaining dead tuples in the chain.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: John Lister <john(dot)lister-ps(at)kickstone(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-15 10:18:10
Message-ID: 1234693090.4500.1142.camel@ebony.2ndQuadrant
Lists: pgsql-admin pgsql-hackers


On Fri, 2009-02-13 at 11:19 -0500, Tom Lane wrote:

> Aside from the "how did this happen" puzzle, the real point of any
> investigation of course ought to be whether we can make heap_page_prune
> more robust. At the very least it's undesirable to be leaving the page
> in a state where VACUUM FULL will decide it can't shrink.

I think it would be useful to have a function that can scan a table to
see if this issue exists. ISTM if it has happened once it can have
happened many times. It would be useful to have some more trouble
reports so we can assess the size and scope of this problem.
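
As a very rough sketch of what such a check could look for (assuming the contrib/pageinspect module is available and using the page from this report; in t_infomask2, 0x8000 is HEAP_ONLY_TUPLE and 0x4000 is HEAP_HOT_UPDATED):

select p.lp
from heap_page_items(get_raw_page('pg_class', 15538)) p
where (p.t_infomask2 & 32768) <> 0            -- heap-only tuple (HOT chain member)
  and not exists (
    select 1
    from heap_page_items(get_raw_page('pg_class', 15538)) prev
    where ((prev.t_infomask2 & 16384) <> 0    -- a HOT-updated tuple pointing at it
           and prev.t_ctid = ('(15538,' || p.lp || ')')::tid)
       or (prev.lp_flags = 2 and prev.lp_off = p.lp)   -- or a redirect line pointer
  );

Any line pointers it returns would be heap-only tuples with no parent on the page, i.e. candidates for the orphaned-chain problem seen here.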

--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support


From: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: John Lister <john(dot)lister-ps(at)kickstone(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database corruption help
Date: 2009-02-25 17:42:31
Message-ID: 2e78013d0902250942u7d6103fev96e3b6d4eb20e5b8@mail.gmail.com
Lists: pgsql-admin pgsql-hackers

On Fri, Feb 13, 2009 at 9:49 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> The only other corruption mechanism I can think of is that pg_clog might
> contain commit bits for some logically inconsistent set of transaction
> numbers, due to some pages of pg_clog having made it to disk and others
> not.  That could result in some of the intermediate tuples in the chain
> not being seen as dead --- but that's not what we see here either.
>

Or can it be the other way round, where some transactions which in fact
committed are marked as aborted because of clog corruption? In that
case, some of the intermediate tuples in the HOT chain may get removed
(because we handle aborted heap-only tuples separately), breaking the
HOT chain.

I am also looking at the pruning logic to see if I can spot something unusual.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com