From: Jamie Fox <jfox(at)directcommerce(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: large object does not exist after pg_migrator
Date: 2009-07-13 21:13:37
Message-ID: de22a6520907131413w574c5bc8g5fed79be163fdc1a@mail.gmail.com
Lists: pgsql-general pgsql-hackers
Hi -
After what seemed to be a normal successful pg_migrator migration from 8.3.7
to 8.4.0, in either link or copy mode, vacuumlo fails on both our production
and qa databases:
Jul 1 11:17:03 db2 postgres[9321]: [14-1] LOG: duration: 175.563 ms
statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM
"public"."xml_user")
Jul 1 11:17:03 db2 postgres[9321]: [15-1] ERROR: large object 17919608
does not exist
Jul 1 11:17:03 db2 postgres[9321]: [16-1] ERROR: current transaction is
aborted, commands ignored until end of transaction block
I migrated our qa database using pg_dump/pg_restore and vacuumlo has no
problem with it. When I try querying the two databases for large objects
manually I see the same error in the one that was migrated with pg_migrator:
select loread(lo_open(xml_data,262144),1073741819) from xml_user where id =
'10837246';
ERROR: large object 24696063 does not exist
SQL state: 42704
I can also see that the pg_largeobject table is different, in the pg_restore
version the Rows (estimated) is 316286 and Rows (counted) is the same, in
the pg_migrator version the Rows (counted) is only 180507.
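For anyone reproducing this comparison, the estimated vs. counted numbers can be pulled directly in SQL (a sketch; note that reltuples is only the planner's estimate, maintained by VACUUM/ANALYZE, so some drift between the two is normal even on a healthy table):

```sql
-- Planner's row estimate for pg_largeobject:
SELECT reltuples::bigint AS rows_estimated
FROM pg_class
WHERE relname = 'pg_largeobject';

-- Actual row count; run on both clusters and compare:
SELECT count(*) AS rows_counted FROM pg_largeobject;
```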
Any advice on what I might look for to try and track down this problem?
pg_restore on our production database takes too long so it would be really
nice to use pg_migrator instead.
Thanks,
Jamie
From: Jamie Fox <jfox(at)directcommerce(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: large object does not exist after pg_migrator
Date: 2009-07-13 22:21:39
Message-ID: de22a6520907131521g41a3c7bdg7d9f083be54a819a@mail.gmail.com
Lists: pgsql-general pgsql-hackers
Hi -
This is probably more helpful - the pg_largeobject table only changed after
vacuumlo, not before. When comparing pre- and post- pg_migrator databases
(no vacuum or vacuumlo):
select * from pg_largeobject where loid = '24696063';
in the pre- there are three rows, having pageno 0 through 3, in the post-
database there are no results.
Thanks for any advice,
Jamie
On Mon, Jul 13, 2009 at 2:13 PM, Jamie Fox <jfox(at)directcommerce(dot)com> wrote:
> Hi -
> After what seemed to be a normal successful pg_migrator migration from
> 8.3.7 to 8.4.0, in either link or copy mode, vacuumlo fails on both our
> production and qa databases:
>
> Jul 1 11:17:03 db2 postgres[9321]: [14-1] LOG: duration: 175.563 ms
> statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM
> "public"."xml_user")
> Jul 1 11:17:03 db2 postgres[9321]: [15-1] ERROR: large object 17919608
> does not exist
> Jul 1 11:17:03 db2 postgres[9321]: [16-1] ERROR: current transaction is
> aborted, commands ignored until end of transaction block
>
> I migrated our qa database using pg_dump/pg_restore and vacuumlo has no
> problem with it. When I try querying the two databases for large objects
> manually I see the same error in the one that was migrated with pg_migrator:
>
> select loread(lo_open(xml_data,262144),1073741819) from xml_user where id =
> '10837246';
> ERROR: large object 24696063 does not exist
> SQL state: 42704
>
> I can also see that the pg_largeobject table is different, in the
> pg_restore version the Rows (estimated) is 316286 and Rows (counted) is the
> same, in the pg_migrator version the Rows (counted) is only 180507.
>
> Any advice on what I might look for to try and track down this problem?
> pg_restore on our production database takes too long so it would be really
> nice to use pg_migrator instead.
>
> Thanks,
>
> Jamie
>
>
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Jamie Fox <jfox(at)directcommerce(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-13 22:27:35
Message-ID: 200907132227.n6DMRZ507475@momjian.us
Lists: pgsql-general pgsql-hackers
Jamie Fox wrote:
> Hi -
> After what seemed to be a normal successful pg_migrator migration from 8.3.7
> to 8.4.0, in either link or copy mode, vacuumlo fails on both our production
> and qa databases:
>
> Jul 1 11:17:03 db2 postgres[9321]: [14-1] LOG: duration: 175.563 ms
> statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM
> "public"."xml_user")
> Jul 1 11:17:03 db2 postgres[9321]: [15-1] ERROR: large object 17919608
> does not exist
> Jul 1 11:17:03 db2 postgres[9321]: [16-1] ERROR: current transaction is
> aborted, commands ignored until end of transaction block
>
> I migrated our qa database using pg_dump/pg_restore and vacuumlo has no
> problem with it. When I try querying the two databases for large objects
> manually I see the same error in the one that was migrated with pg_migrator:
>
> select loread(lo_open(xml_data,262144),1073741819) from xml_user where id =
> '10837246';
> ERROR: large object 24696063 does not exist
> SQL state: 42704
>
> I can also see that the pg_largeobject table is different, in the pg_restore
> version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> the pg_migrator version the Rows (counted) is only 180507.
>
> Any advice on what I might look for to try and track down this problem?
> pg_restore on our production database takes too long so it would be really
> nice to use pg_migrator instead.
[ Email moved to hackers list.]
Wow, I didn't test large objects specifically, and I am confused why
there would be a count discrepancy. I will need to do some research
unless someone else can guess about the cause.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Jamie Fox <jfox(at)directcommerce(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-13 22:28:10
Message-ID: 200907132228.n6DMSAO07561@momjian.us
Lists: pgsql-general pgsql-hackers
Forwarded to hackers.
---------------------------------------------------------------------------
Jamie Fox wrote:
> Hi -
> This is probably more helpful - the pg_largeobject table only changed after
> vacuumlo, not before. When comparing pre- and post- pg_migrator databases
> (no vacuum or vacuumlo):
>
> select * from pg_largeobject where loid = '24696063';
>
> in the pre- there are three rows, having pageno 0 through 3, in the post-
> database there are no results.
>
> Thanks for any advice,
>
> Jamie
>
>
> On Mon, Jul 13, 2009 at 2:13 PM, Jamie Fox <jfox(at)directcommerce(dot)com> wrote:
>
> > Hi -
> > After what seemed to be a normal successful pg_migrator migration from
> > 8.3.7 to 8.4.0, in either link or copy mode, vacuumlo fails on both our
> > production and qa databases:
> >
> > Jul 1 11:17:03 db2 postgres[9321]: [14-1] LOG: duration: 175.563 ms
> > statement: DELETE FROM vacuum_l WHERE lo IN (SELECT "xml_data" FROM
> > "public"."xml_user")
> > Jul 1 11:17:03 db2 postgres[9321]: [15-1] ERROR: large object 17919608
> > does not exist
> > Jul 1 11:17:03 db2 postgres[9321]: [16-1] ERROR: current transaction is
> > aborted, commands ignored until end of transaction block
> >
> > I migrated our qa database using pg_dump/pg_restore and vacuumlo has no
> > problem with it. When I try querying the two databases for large objects
> > manually I see the same error in the one that was migrated with pg_migrator:
> >
> > select loread(lo_open(xml_data,262144),1073741819) from xml_user where id =
> > '10837246';
> > ERROR: large object 24696063 does not exist
> > SQL state: 42704
> >
> > I can also see that the pg_largeobject table is different, in the
> > pg_restore version the Rows (estimated) is 316286 and Rows (counted) is the
> > same, in the pg_migrator version the Rows (counted) is only 180507.
> >
> > Any advice on what I might look for to try and track down this problem?
> > pg_restore on our production database takes too long so it would be really
> > nice to use pg_migrator instead.
> >
> > Thanks,
> >
> > Jamie
> >
> >
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-13 22:35:45
Message-ID: 20090713223545.GP4930@alvh.no-ip.org
Lists: pgsql-general pgsql-hackers
Bruce Momjian wrote:
> Jamie Fox wrote:
> > I can also see that the pg_largeobject table is different, in the pg_restore
> > version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> > the pg_migrator version the Rows (counted) is only 180507.
> Wow, I didn't test large objects specifically, and I am confused why
> there would be a count discrepancy. I will need to do some research
> unless someone else can guess about the cause.
Maybe pg_largeobject is not getting frozen?
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-13 22:38:13
Message-ID: 200907132238.n6DMcDZ08620@momjian.us
Lists: pgsql-general pgsql-hackers
Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Jamie Fox wrote:
>
> > > I can also see that the pg_largeobject table is different, in the pg_restore
> > > version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> > > the pg_migrator version the Rows (counted) is only 180507.
>
> > Wow, I didn't test large objects specifically, and I am confused why
> > there would be a count discrepancy. I will need to do some research
> > unless someone else can guess about the cause.
>
> Maybe pg_largeobject is not getting frozen?
That would explain the change in count, but I thought we froze
_everything_, and had to.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 00:28:58
Message-ID: 20090714002858.GR4930@alvh.no-ip.org
Lists: pgsql-general pgsql-hackers
Bruce Momjian wrote:
> Alvaro Herrera wrote:
> > Bruce Momjian wrote:
> > > Jamie Fox wrote:
> >
> > > > I can also see that the pg_largeobject table is different, in the pg_restore
> > > > version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> > > > the pg_migrator version the Rows (counted) is only 180507.
> >
> > > Wow, I didn't test large objects specifically, and I am confused why
> > > there would be a count discrepancy. I will need to do some research
> > > unless someone else can guess about the cause.
> >
> > Maybe pg_largeobject is not getting frozen?
>
> That would explain the change in count, but I thought we froze
> _everything_, and had to.
After a quick chat with Bruce it was determined that we don't freeze
anything (it would be horrid for downtime if we did so in pg_migrator;
and it would be useless if run anywhere else). What we do is migrate
pg_clog from the old cluster to the new. So never mind that hypothesis.
Bruce noticed that the pg_dump/pg_migrator combo is failing to restore
pg_largeobject's relfrozenxid. We're not sure how this is causing the
errors Jamie is seeing, because what I think should happen is that scans
of the table should fail with failures to open pg_clog files
such-and-such, but not missing tuples ...
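The relfrozenxid observation above can be checked from SQL on each cluster (a sketch; the actual xid values will of course differ per installation). After a correct binary migration, the new cluster's pg_largeobject should carry the old cluster's value rather than a freshly initialized one:

```sql
-- Run on both the old and the migrated cluster and compare:
SELECT relname, relfrozenxid
FROM pg_class
WHERE relname = 'pg_largeobject';

-- The per-database horizon, for context:
SELECT datname, datfrozenxid
FROM pg_database
WHERE datname = current_database();
```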
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 00:36:22
Message-ID: 200907140036.n6E0aMK00263@momjian.us
Lists: pgsql-general pgsql-hackers
Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Alvaro Herrera wrote:
> > > Bruce Momjian wrote:
> > > > Jamie Fox wrote:
> > >
> > > > > I can also see that the pg_largeobject table is different, in the pg_restore
> > > > > version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> > > > > the pg_migrator version the Rows (counted) is only 180507.
> > >
> > > > Wow, I didn't test large objects specifically, and I am confused why
> > > > there would be a count discrepancy. I will need to do some research
> > > > unless someone else can guess about the cause.
> > >
> > > Maybe pg_largeobject is not getting frozen?
> >
> > That would explain the change in count, but I thought we froze
> > _everything_, and had to.
>
> After a quick chat with Bruce it was determined that we don't freeze
> anything (it would be horrid for downtime if we did so in pg_migrator;
> and it would be useless if ran anywhere else). What we do is migrate
> pg_clog from the old cluster to the new. So never mind that hypothesis.
FYI, we do freeze the new cluster that has only schema definitions, no
data.
> Bruce noticed that the pg_dump/pg_migrator combo is failing to restore
> pg_largeobject's relfrozenxid. We're not sure how this is causing the
> errors Jamie is seeing, because what I think should happen is that scans
> of the table should fail with failures to open pg_clog files
> such-and-such, but not missing tuples ...
Yea, I can fix that in PG 8.4.1, but that doesn't seem like the cause of
the missing rows. Alvaro and I are still investigating.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 03:03:04
Message-ID: 200907140303.n6E334h02217@momjian.us
Lists: pgsql-general pgsql-hackers
Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Alvaro Herrera wrote:
> > > Bruce Momjian wrote:
> > > > Jamie Fox wrote:
> > >
> > > > > I can also see that the pg_largeobject table is different, in the pg_restore
> > > > > version the Rows (estimated) is 316286 and Rows (counted) is the same, in
> > > > > the pg_migrator version the Rows (counted) is only 180507.
> > >
> > > > Wow, I didn't test large objects specifically, and I am confused why
> > > > there would be a count discrepancy. I will need to do some research
> > > > unless someone else can guess about the cause.
> > >
> > > Maybe pg_largeobject is not getting frozen?
> >
> > That would explain the change in count, but I thought we froze
> > _everything_, and had to.
>
> After a quick chat with Bruce it was determined that we don't freeze
> anything (it would be horrid for downtime if we did so in pg_migrator;
> and it would be useless if ran anywhere else). What we do is migrate
> pg_clog from the old cluster to the new. So never mind that hypothesis.
>
> Bruce noticed that the pg_dump/pg_migrator combo is failing to restore
> pg_largeobject's relfrozenxid. We're not sure how this is causing the
> errors Jamie is seeing, because what I think should happen is that scans
> of the table should fail with failures to open pg_clog files
> such-and-such, but not missing tuples ...
Jamie, is it possible for you to apply the attached patch to the 8.4
server, install the new pg_dump, and run the test again to see if
pg_largeobject is fixed? This patch properly sets the relfrozenxid in
the system tables for each database.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Attachment: /pgpatches/largeobject (text/x-diff, 2.4 KB)
From: Jamie Fox <jfox(at)directcommerce(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 18:59:39
Message-ID: de22a6520907141159i7dcf7e0cye28e3905d21a18a8@mail.gmail.com
Lists: pgsql-general pgsql-hackers
On Mon, Jul 13, 2009 at 8:03 PM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> Alvaro Herrera wrote:
> > Bruce Momjian wrote:
> > > Alvaro Herrera wrote:
> > > > Bruce Momjian wrote:
> > > > > Jamie Fox wrote:
> > > >
> > > > > > I can also see that the pg_largeobject table is different, in the
> > > > > > pg_restore version the Rows (estimated) is 316286 and Rows (counted)
> > > > > > is the same, in the pg_migrator version the Rows (counted) is only 180507.
> > > >
> > > > > Wow, I didn't test large objects specifically, and I am confused why
> > > > > there would be a count discrepancy. I will need to do some research
> > > > > unless someone else can guess about the cause.
> > > >
> > > > Maybe pg_largeobject is not getting frozen?
> > >
> > > That would explain the change in count, but I thought we froze
> > > _everything_, and had to.
> >
> > After a quick chat with Bruce it was determined that we don't freeze
> > anything (it would be horrid for downtime if we did so in pg_migrator;
> > and it would be useless if ran anywhere else). What we do is migrate
> > pg_clog from the old cluster to the new. So never mind that hypothesis.
> >
> > Bruce noticed that the pg_dump/pg_migrator combo is failing to restore
> > pg_largeobject's relfrozenxid. We're not sure how this is causing the
> > errors Jamie is seeing, because what I think should happen is that scans
> > of the table should fail with failures to open pg_clog files
> > such-and-such, but not missing tuples ...
>
> Jamie, is it possible for you to apply the attached patch to the 8.4
> server, install the new pg_dump, and run the test again to see if
> pg_largeobject is fixed? This patch properly sets the relfrozenxid in
> the system tables for each database.
>
Sorry for the confusion, an addendum meant to be helpful fell out of this
thread during the move from -general. I will try this patch now, but to be
clear - the number of rows in pg_largeobject changed after I ran vacuumlo
(that eventually failed).
Here's what I have found that got broken during pg_migrate: In two side by
side databases (an 8.3.7 copy and 8.4.0 migrated with pg_migrator) the
pg_largeobject table has the same number of rows. However, in the 8.4
database any select for an loid in pg_largeobject returns zero rows. If I
select all loids to a file, and compare to select all loids from 8.3.7
they're the same. When I select != an loid it seems to exclude the one and
return the rest, but all other comparisons <, > or = return zero rows. Or
I'm completely batty. Dereferencing via lo_open of blob_data (an oid) in
other tables fails in the 8.4 database with 'large object xxxxid does not
exist'.
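The symptom described here (equality and range lookups return nothing, while a != scan sees every row) is the classic signature of a broken index: equality probes go through the index, while the inequality falls back to a heap scan. A hedged way to confirm this, forcing the planner away from the index (the loid below is the example one from earlier in the thread):

```sql
-- With index scans allowed, this probes pg_largeobject_loid_pn_index:
SELECT count(*) FROM pg_largeobject WHERE loid = 24696063;

-- Force a sequential scan; if this finds rows where the query above
-- found none, the index (not the heap data) is corrupt.
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT count(*) FROM pg_largeobject WHERE loid = 24696063;
RESET enable_indexscan;
RESET enable_bitmapscan;
```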
Thanks again,
Jamie
From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Jamie Fox <jfox(at)directcommerce(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 19:03:04
Message-ID: 20090714190304.GL4799@alvh.no-ip.org
Lists: pgsql-general pgsql-hackers
Jamie Fox wrote:
> Here's what I have found that got broken during pg_migrate: In two side by
> side databases (an 8.3.7 copy and 8.4.0 migrated with pg_migrator) the
> pg_largeobject table has the same number of rows. However, in the 8.4
> database any select for an loid in pg_largeobject returns zero rows. If I
> select all loids to a file, and compare to select all loids from 8.3.7
> they're the same. When I select != an loid it seems to exclude the one and
> return the rest, but all other comparisons <, > or = return zero rows. Or
> I'm completely batty. Dereferencing via lo_open of blob_data (an oid) in
> other tables fails in the 8.4 database with 'large object xxxxid does not
> exist'.
Oh, so maybe it's pg_largeobject's index that's borked ... Did you try
reindexing it?
How are we transferring pg_largeobject, and are we transferring its
index too?
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
From: Jamie Fox <jfox(at)directcommerce(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-14 19:54:15
Message-ID: de22a6520907141254m2d46e374w8e0dbb7af91aba42@mail.gmail.com
Lists: pgsql-general pgsql-hackers
> > Here's what I have found that got broken during pg_migrate: In two side by
> > side databases (an 8.3.7 copy and 8.4.0 migrated with pg_migrator) the
> > pg_largeobject table has the same number of rows. However, in the 8.4
> > database any select for an loid in pg_largeobject returns zero rows. If I
> > select all loids to a file, and compare to select all loids from 8.3.7
> > they're the same. When I select != an loid it seems to exclude the one and
> > return the rest, but all other comparisons <, > or = return zero rows. Or
> > I'm completely batty. Dereferencing via lo_open of blob_data (an oid) in
> > other tables fails in the 8.4 database with 'large object xxxxid does not
> > exist'.
>
> Oh, so maybe it's pg_largeobject's index that's borked ... Did you try
> reindexing it?
>
> How are we transferring pg_largeobject, and are we transferring its
> index too?
Hi -
REINDEX INDEX pg_largeobject_loid_pn_index;
This seems to have fixed the problem, lo_open of lob data is working again -
now to see how vacuumlo likes it.
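For reference, the fix as applied plus a verification step (a sketch; rebuilding a system catalog index requires superuser, and the loid is the example one from earlier in the thread):

```sql
-- Rebuild the large-object index that the migration left broken:
REINDEX INDEX pg_largeobject_loid_pn_index;

-- Verify: an equality lookup should return the object's pages again.
SELECT loid, pageno FROM pg_largeobject WHERE loid = 24696063;
```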
Thanks,
Jamie
From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Jamie Fox <jfox(at)directcommerce(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-15 16:28:57
Message-ID: 20090715162857.GJ4551@alvh.no-ip.org
Lists: pgsql-general pgsql-hackers
Jamie Fox wrote:
> Hi -
> REINDEX INDEX pg_largeobject_loid_pn_index;
>
> This seems to have fixed the problem, lo_open of lob data is working again -
> now to see how vacuumlo likes it.
So did it work?
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
From: Jamie Fox <jfox(at)directcommerce(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-15 21:22:49
Message-ID: de22a6520907151422h525abc48qce68618592754aad@mail.gmail.com
Lists: pgsql-general pgsql-hackers
Worked great, vacuumlo finished, a vacuum -full finished amazingly quickly,
very exciting. We're pointing qa apps at it now for testing.
For some reason though, that index has to be rebuilt after running
pg_migrator.
I'll be testing on our 100GB+ prod copy shortly and will let you know if you
want.
Thanks,
Jamie
On Wed, Jul 15, 2009 at 9:28 AM, Alvaro Herrera <alvherre(at)commandprompt(dot)com> wrote:
> Jamie Fox wrote:
>
> > Hi -
> > REINDEX INDEX pg_largeobject_loid_pn_index;
> >
> > This seems to have fixed the problem, lo_open of lob data is working again -
> > now to see how vacuumlo likes it.
>
> So did it work?
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
>
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: pg_migrator 8.4.1 alpha 1 released with bug mention
Date: 2009-07-18 00:19:13
Message-ID: 200907180019.n6I0JDb22104@momjian.us
Lists: pgsql-general pgsql-hackers
To more clearly identify that pg_migrator now has known bugs, I have
released pg_migrator 8.4.1 alpha1, and mentioned in the README that
there are known bugs related to migrating sequences and large objects.
I have removed the 8.4 source file from pgfoundry.
---------------------------------------------------------------------------
Alvaro Herrera wrote:
> Jamie Fox wrote:
>
> > Hi -
> > REINDEX INDEX pg_largeobject_loid_pn_index;
> >
> > This seems to have fixed the problem, lo_open of lob data is working again -
> > now to see how vacuumlo likes it.
>
> So did it work?
>
> --
> Alvaro Herrera http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Jamie Fox <jfox(at)directcommerce(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-20 20:55:07
Message-ID: 200907202055.n6KKt7026899@momjian.us
Lists: pgsql-general pgsql-hackers
Bruce Momjian wrote:
> > After a quick chat with Bruce it was determined that we don't freeze
> > anything (it would be horrid for downtime if we did so in pg_migrator;
> > and it would be useless if ran anywhere else). What we do is migrate
> > pg_clog from the old cluster to the new. So never mind that hypothesis.
> >
> > Bruce noticed that the pg_dump/pg_migrator combo is failing to restore
> > pg_largeobject's relfrozenxid. We're not sure how this is causing the
> > errors Jamie is seeing, because what I think should happen is that scans
> > of the table should fail with failures to open pg_clog files
> > such-and-such, but not missing tuples ...
>
> Jamie, is it possible for you to apply the attached patch to the 8.4
> server, install the new pg_dump, and run the test again to see if
> pg_largeobject is fixed? This patch properly sets the relfrozenxid in
> the system tables for each database.
I have applied the attached patch to have pg_dump restore
pg_largeobject.relfrozenxid in binary upgrade mode; backpatched to
8.4.X.
This doesn't fix the reported problem, but it is still a bug.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Attachment: /rtmp/diff (text/x-diff, 2.4 KB)
From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Jamie Fox <jfox(at)directcommerce(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GENERAL] large object does not exist after pg_migrator
Date: 2009-07-20 22:06:57
Message-ID: 200907202206.n6KM6vA08805@momjian.us
Lists: pgsql-general pgsql-hackers
Jamie Fox wrote:
> > > Here's what I have found that got broken during pg_migrate: In two side
> > > by side databases (an 8.3.7 copy and 8.4.0 migrated with pg_migrator) the
> > > pg_largeobject table has the same number of rows. However, in the 8.4
> > > database any select for an loid in pg_largeobject returns zero rows. If I
> > > select all loids to a file, and compare to select all loids from 8.3.7
> > > they're the same. When I select != an loid it seems to exclude the one and
> > > return the rest, but all other comparisons <, > or = return zero rows. Or
> > > I'm completely batty. Dereferencing via lo_open of blob_data (an oid) in
> > > other tables fails in the 8.4 database with 'large object xxxxid does not
> > > exist'.
> >
> > Oh, so maybe it's pg_largeobject's index that's borked ... Did you try
> > reindexing it?
> >
> > How are we transferring pg_largeobject, and are we transferring its
> > index too?
>
>
> Hi -
> REINDEX INDEX pg_largeobject_loid_pn_index;
>
> This seems to have fixed the problem, lo_open of lob data is working again -
> now to see how vacuumlo likes it.
I have applied the attached patch to pg_migrator to properly migrate the
pg_largeobject index. I have added large object comment migration as a
TODO item.
This eliminates the last known bug in pg_migrator.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Attachment: /rtmp/diff (text/x-diff, 6.3 KB)