From: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: vacuumlo issue
Date: 2012-03-20 09:34:13
Message-ID: BAY164-W265A089BD32F8901A686C9FF430@phx.gbl
Lists: pgsql-hackers
Hi,
We have noticed the following issue when running vacuumlo against a database that has millions of records in pg_largeobject:

WARNING: out of shared memory
Failed to remove lo 155987: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.

Why do we need to increase max_locks_per_transaction/shared memory for a cleanup operation? If there is a huge number of records, how can we tackle this situation with limited memory? It is reproducible on postgresql-9.1.2. The steps are as follows (PFA vacuumlo-test_data.sql, which generates dummy data):
Steps:
1. ./bin/initdb -D data-vacuumlo_test1
2. ./bin/pg_ctl -D data-vacuumlo_test1 -l logfile_data-vacuumlo_test1 start
3. ./bin/createdb vacuumlo_test
4. bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
5. bin/vacuumlo vacuumlo_test
~/work/pg/postgresql-9.1.2/inst$ bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
CREATE FUNCTION
CREATE FUNCTION
 create_manylargeobjects
-------------------------

(1 row)

 count
-------
 13001
(1 row)
~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
WARNING: out of shared memory
Failed to remove lo 36726: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Failed to remove lo 36727: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36728: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36729: ERROR: current transaction is aborted, commands ignored until end of transaction block
....
....
Best Regards,
Muhammad Asif Naeem
Attachment: vacuumlo-test_data.sql (application/octet-stream, 775 bytes)
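The attachment's contents are not preserved in the archive. A minimal sketch of a script producing the same setup (the function name follows the transcript above, while the loop bound and the final count query are assumptions) might be:

-- vacuumlo-test_data.sql (hypothetical reconstruction)
CREATE OR REPLACE FUNCTION create_manylargeobjects() RETURNS void AS $$
BEGIN
    -- lo_create(0) allocates a fresh OID for each large object;
    -- nothing ever references them, so they are all orphans
    FOR i IN 1..13000 LOOP
        PERFORM lo_create(0);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT create_manylargeobjects();
SELECT count(*) FROM pg_largeobject_metadata;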
From: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 10:07:39
Message-ID: BAY164-W3619249102551CCA2617BCFF430@phx.gbl
Lists: pgsql-hackers
I have reformatted the mail, sorry for the inconvenience. Thanks.
We have noticed the following issue when running vacuumlo against a database that has millions of records in pg_largeobject:

WARNING: out of shared memory
Failed to remove lo 155987: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.

Why do we need to increase max_locks_per_transaction/shared memory for a cleanup operation? If there is a huge number of records, how can we tackle this situation with limited memory? It is reproducible on postgresql-9.1.2. The steps are as follows (PFA vacuumlo-test_data.sql, which generates dummy data):
Steps:
1. ./bin/initdb -D data-vacuumlo_test1
2. ./bin/pg_ctl -D data-vacuumlo_test1 -l logfile_data-vacuumlo_test1 start
3. ./bin/createdb vacuumlo_test
4. bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
5. bin/vacuumlo vacuumlo_test
~/work/pg/postgresql-9.1.2/inst$ bin/psql -d vacuumlo_test -f vacuumlo-test_data.sql
CREATE FUNCTION
CREATE FUNCTION
 create_manylargeobjects
-------------------------

(1 row)

 count
-------
 13001
(1 row)
~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
WARNING: out of shared memory
Failed to remove lo 36726: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Failed to remove lo 36727: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36728: ERROR: current transaction is aborted, commands ignored until end of transaction block
Failed to remove lo 36729: ERROR: current transaction is aborted, commands ignored until end of transaction block
....
Best Regards,
Asif Naeem
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 14:53:07
Message-ID: 25231.1332255187@sss.pgh.pa.us
Lists: pgsql-hackers
MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com> writes:
> We have noticed the following issue when running vacuumlo against a database
> that has millions of records in pg_largeobject:
> WARNING: out of shared memory
> Failed to remove lo 155987: ERROR: out of shared memory
> HINT: You might need to increase max_locks_per_transaction.
> Why do we need to increase max_locks_per_transaction/shared memory for
> clean up operation,
This seems to be a consequence of the 9.0-era decision to fold large
objects into the standard dependency-deletion algorithm and hence
take out locks on them individually.
I'm not entirely convinced that that was a good idea. However, so far
as vacuumlo is concerned, the only reason this is a problem is that
vacuumlo goes out of its way to do all the large-object deletions in a
single transaction. What's the point of that? It'd be useful to batch
them, probably, rather than commit each deletion individually. But the
objects being deleted are by assumption unreferenced, so I see no
correctness argument why they should need to go away all at once.
regards, tom lane
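To make the failure mode concrete, here is a rough psql sketch of what goes wrong, and of the batch-commit alternative Tom describes. It is illustrative only: the real vacuumlo unlinks only OIDs that no table column references, whereas this sketch assumes every large object is an orphan.

BEGIN;
-- one lock per object: with millions of rows this overflows the
-- shared lock table and fails with "out of shared memory"
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
COMMIT;

-- batched alternative: at most N locks per transaction;
-- rerun the block until it returns zero rows
BEGIN;
SELECT lo_unlink(oid) FROM pg_largeobject_metadata LIMIT 1000;
COMMIT;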
From: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: "MUHAMMAD ASIF *EXTERN*" <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 14:53:37
Message-ID: D960CB61B694CF459DCFB4B0128514C207A2B200@exadv11.host.magwien.gv.at
Lists: pgsql-hackers
MUHAMMAD ASIF wrote:
> We have noticed the following issue when running vacuumlo against a database
> that has millions of records in pg_largeobject:
[...]
> ~/work/pg/postgresql-9.1.2/inst$ bin/vacuumlo vacuumlo_test
>
> WARNING: out of shared memory
> Failed to remove lo 36726: ERROR: out of shared memory
>
> HINT: You might need to increase max_locks_per_transaction.
This is not a question for the hackers list.
vacuumlo handles all deletes in a single transaction, so
max_locks_per_transaction can be exhausted.
Yours,
Laurenz Albe
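For those who just need the unmodified vacuumlo to complete, the server-side workaround the HINT points at is to enlarge the shared lock table; a sketch of the setting (the value 512 is arbitrary, and changing it requires a server restart):

# postgresql.conf: the lock table holds roughly
# max_locks_per_transaction * (max_connections + max_prepared_transactions) entries
max_locks_per_transaction = 512        # default is 64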
From: Josh Kupershmidt <schmiddy(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 15:24:00
Message-ID: CAK3UJRH7Z+WV64RAuKxqP3iqusQ7W=41nLmF_fz-V1T=7jBvLw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I'm not entirely convinced that that was a good idea. However, so far
> as vacuumlo is concerned, the only reason this is a problem is that
> vacuumlo goes out of its way to do all the large-object deletions in a
> single transaction. What's the point of that? It'd be useful to batch
> them, probably, rather than commit each deletion individually. But the
> objects being deleted are by assumption unreferenced, so I see no
> correctness argument why they should need to go away all at once.
I think you are asking for this option:
-l LIMIT stop after removing LIMIT large objects
which was added in b69f2e36402aaa.
Josh
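For reference, the flag as added in that commit would be invoked like so (an illustrative run against the test database from upthread; under that commit it stops after removing 1000 objects):

bin/vacuumlo -l 1000 vacuumlo_test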
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Josh Kupershmidt <schmiddy(at)gmail(dot)com>
Cc: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 15:50:53
Message-ID: 26230.1332258653@sss.pgh.pa.us
Lists: pgsql-hackers
Josh Kupershmidt <schmiddy(at)gmail(dot)com> writes:
> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I'm not entirely convinced that that was a good idea. However, so far
>> as vacuumlo is concerned, the only reason this is a problem is that
>> vacuumlo goes out of its way to do all the large-object deletions in a
>> single transaction. What's the point of that? It'd be useful to batch
>> them, probably, rather than commit each deletion individually. But the
>> objects being deleted are by assumption unreferenced, so I see no
>> correctness argument why they should need to go away all at once.
> I think you are asking for this option:
> -l LIMIT stop after removing LIMIT large objects
> which was added in b69f2e36402aaa.
Uh, no, actually that flag seems utterly brain-dead. Who'd want to
abandon the run after removing some arbitrary subset of the
known-unreferenced large objects? You'd just have to do all the search
work over again. What I'm thinking about is doing a COMMIT after every
N large objects.
I see that patch has not made it to any released versions yet.
Is it too late to rethink the design? I propose (a) redefining it
as committing after every N objects, and (b) having a limit of 1000
or so objects by default.
regards, tom lane
From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Josh Kupershmidt <schmiddy(at)gmail(dot)com>, MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 15:57:40
Message-ID: CA+TgmoZo+KE9esmDqXuJVXgCEKa6mfoBfvdn9JFqNqBitmGq-g@mail.gmail.com
Lists: pgsql-hackers
On Tue, Mar 20, 2012 at 11:50 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Josh Kupershmidt <schmiddy(at)gmail(dot)com> writes:
>> On Tue, Mar 20, 2012 at 7:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> I'm not entirely convinced that that was a good idea. However, so far
>>> as vacuumlo is concerned, the only reason this is a problem is that
>>> vacuumlo goes out of its way to do all the large-object deletions in a
>>> single transaction. What's the point of that? It'd be useful to batch
>>> them, probably, rather than commit each deletion individually. But the
>>> objects being deleted are by assumption unreferenced, so I see no
>>> correctness argument why they should need to go away all at once.
>
>> I think you are asking for this option:
>> -l LIMIT stop after removing LIMIT large objects
>> which was added in b69f2e36402aaa.
>
> Uh, no, actually that flag seems utterly brain-dead. Who'd want to
> abandon the run after removing some arbitrary subset of the
> known-unreferenced large objects? You'd just have to do all the search
> work over again. What I'm thinking about is doing a COMMIT after every
> N large objects.
>
> I see that patch has not made it to any released versions yet.
> Is it too late to rethink the design? I propose (a) redefining it
> as committing after every N objects, and (b) having a limit of 1000
> or so objects by default.
I'll dispute the characterization of "utterly brain-dead"; it's better
than what we had before, which was nothing. However, I think your
proposal might be better still.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From: MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us, schmiddy(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 20:15:20
Message-ID: BAY164-W492BF295A482CD2CB60EA8FF430@phx.gbl
Lists: pgsql-hackers
> > I think you are asking for this option:
> > -l LIMIT stop after removing LIMIT large objects
> > which was added in b69f2e36402aaa.
Thank you for informing me about the -l option in 9.2. Can I build/use this contrib with older pg versions, i.e. pg 9.1? Thanks.
> Uh, no, actually that flag seems utterly brain-dead. Who'd want to
> abandon the run after removing some arbitrary subset of the
> known-unreferenced large objects? You'd just have to do all the search
> work over again. What I'm thinking about is doing a COMMIT after every
> N large objects.
>
> I see that patch has not made it to any released versions yet.
> Is it too late to rethink the design? I propose (a) redefining it
> as committing after every N objects, and (b) having a limit of 1000
> or so objects by default.
>
It will be really nice and helpful if it automatically cleans up all of the orphaned large objects. Thanks.
> regards, tom lane
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Josh Kupershmidt <schmiddy(at)gmail(dot)com>, MUHAMMAD ASIF <anaeem(dot)it(at)hotmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuumlo issue
Date: 2012-03-20 21:10:44
Message-ID: 1827.1332277844@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Tue, Mar 20, 2012 at 11:50 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I see that patch has not made it to any released versions yet.
>> Is it too late to rethink the design? I propose (a) redefining it
>> as committing after every N objects, and (b) having a limit of 1000
>> or so objects by default.
> I'll dispute the characterization of "utterly brain-dead"; it's better
> than what we had before, which was nothing. However, I think your
> proposal might be better still.
Not hearing any objections, I will go make that happen.
regards, tom lane