Re: Out of memory on vacuum analyze

From: John Cole <john(dot)cole(at)uai(dot)com>
To: "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Out of memory on vacuum analyze
Date: 2007-02-19 18:47:01
Message-ID: 76758090F8686C47A44B6FF52514A1D307909AE1@hermes.uai.int
Lists: pgsql-general

I have a large table (~55 million rows) and I'm trying to create an index
and vacuum analyze it. The index has now been created, but the vacuum
analyze is failing with the following error:

ERROR: out of memory
DETAIL: Failed on request of size 943718400.

I've played with several settings, but I'm not sure what I need to change to
get this to work. I'm running on a dual quad-core system with 4GB of
memory, with PostgreSQL 8.2.3 on Windows Server 2003 R2 (32-bit).

maintenance_work_mem is 900MB
max_stack_depth is 3MB
shared_buffers is 900MB
temp_buffers is 32MB
work_mem is 16MB
max_fsm_pages is 204800
max_connections is 50
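
For reference, the values the running server actually has in effect can be
checked from psql with a query against pg_settings; a minimal sketch,
assuming access to the affected database:

  -- Show the live values of the settings listed above.
  SELECT name, setting
  FROM pg_settings
  WHERE name IN ('maintenance_work_mem', 'shared_buffers', 'temp_buffers',
                 'work_mem', 'max_fsm_pages', 'max_connections');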

Any help would be greatly appreciated.

Thanks,

John Cole



From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: John Cole <john(dot)cole(at)uai(dot)com>
Cc: "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on vacuum analyze
Date: 2007-02-19 19:19:41
Message-ID: 1171912781.10824.192.camel@dogma.v10.wvs
Lists: pgsql-general

On Mon, 2007-02-19 at 12:47 -0600, John Cole wrote:
> I have a large table (~55 million rows) and I'm trying to create an index
> and vacuum analyze it. The index has now been created, but the vacuum
> analyze is failing with the following error:
>
> ERROR: out of memory
> DETAIL: Failed on request of size 943718400.
>
> I've played with several settings, but I'm not sure what I need to set to
> get this to operate. I'm running on a dual Quad core system with 4GB of
> memory and Postgresql 8.2.3 on W2K3 Server R2 32bit.
>
> Maintenance_work_mem is 900MB
> Max_stack_depth is 3MB
> Shared_buffers is 900MB
> Temp_buffers is 32MB
> Work_mem is 16MB
> Max_fsm_pages is 204800
> Max_connections is 50
>

You told PostgreSQL that you have 900MB available for
maintenance_work_mem, but your OS is denying the request (943718400 bytes
is exactly 900MB). Try *lowering* that setting to something that your OS
will allow. That seems like an awfully high setting to me.
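
One low-impact way to do that is to lower it only for the session that runs
the maintenance command, rather than editing postgresql.conf; a minimal
sketch (the table name and the 256MB value are just placeholders):

  -- Lower the per-operation memory limit for this session only,
  -- then retry the command that failed.
  SET maintenance_work_mem = '256MB';
  VACUUM ANALYZE my_big_table;  -- placeholder table name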

Regards,
Jeff Davis


From: Jim Nasby <decibel(at)decibel(dot)org>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: John Cole <john(dot)cole(at)uai(dot)com>, "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on vacuum analyze
Date: 2007-02-21 02:51:19
Message-ID: 51C436F6-C357-4462-9BD2-BFCEF93DE18E@decibel.org
Lists: pgsql-general

On Feb 19, 2007, at 1:19 PM, Jeff Davis wrote:
> You told PostgreSQL that you have 900MB available for
> maintenance_work_mem, but your OS is denying the request. Try
> *lowering*
> that setting to something that your OS will allow. That seems like an
> awfully high setting to me.

900MB isn't that unreasonable if you're building indexes on a restore
or something similar. I have run into issues when trying to set it
much over 1G, though... on various OSes and platforms.
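
When the big value really is only wanted for an index build after a bulk
load, one option is to raise it just in that session and reset it
afterwards; a sketch with placeholder names:

  -- Raise the limit only in the session doing the index build.
  SET maintenance_work_mem = '900MB';
  CREATE INDEX my_big_table_idx ON my_big_table (my_column);
  RESET maintenance_work_mem;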
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Jim Nasby <decibel(at)decibel(dot)org>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, John Cole <john(dot)cole(at)uai(dot)com>, "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on vacuum analyze
Date: 2007-02-21 06:58:02
Message-ID: 45DBED7A.3010404@kaltenbrunner.cc
Lists: pgsql-general

Jim Nasby wrote:
> On Feb 19, 2007, at 1:19 PM, Jeff Davis wrote:
>> You told PostgreSQL that you have 900MB available for
>> maintenance_work_mem, but your OS is denying the request. Try *lowering*
>> that setting to something that your OS will allow. That seems like an
>> awfully high setting to me.
>
> 900MB isn't that unreasonable if you're building indexes on a restore or
> something similar. I have run into issues when trying to set it much
> over 1G, though... on various OSes and platforms.

Versions before 8.2 have some issues (mostly reporting bogus errors) with
very large settings for maintenance_work_mem. 8.2 and up behave more
sanely, but I don't think they actually gain anything from values in the GB
range.
Have you actually measured a performance improvement going beyond
250-350MB (that seemed to be the sweet spot last time I tested) or so for
index creation and friends?
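
A rough way to compare on your own data (table, column and index names
below are placeholders) is to time the same index build from psql at a
couple of settings:

  -- \timing reports the elapsed time of each statement in psql.
  -- Run each build more than once; the first one may be slower
  -- simply because the table isn't cached yet.
  \timing
  SET maintenance_work_mem = '256MB';
  CREATE INDEX sweet_spot_test_1 ON my_big_table (my_column);
  DROP INDEX sweet_spot_test_1;
  SET maintenance_work_mem = '900MB';
  CREATE INDEX sweet_spot_test_2 ON my_big_table (my_column);
  DROP INDEX sweet_spot_test_2;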

Stefan


From: Jim Nasby <decibel(at)decibel(dot)org>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, John Cole <john(dot)cole(at)uai(dot)com>, "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on vacuum analyze
Date: 2007-02-22 23:32:45
Message-ID: 5982FDC4-A0A8-4A3B-98C1-D0FE9E55485D@decibel.org
Lists: pgsql-general

On Feb 21, 2007, at 12:58 AM, Stefan Kaltenbrunner wrote:
> Have you actually measured a performance improvment going beyond
> 250-350MB(that seemed about to be the sweet spot last I tested) or so
> for index creation and friends ?

To be honest, no; I just set it high to be on the safe side. But I have
seen reports of large in-memory sorts actually being slower than tape sorts
in some cases, so I'm probably leaving some performance on the table.
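
One way to see which strategy a particular sort actually used is the
trace_sort developer option (this assumes a server built with TRACE_SORT,
which is the default); the query below is only a placeholder:

  -- Logs whether each sort ran in memory or spilled to an external
  -- ("tape") sort, along with timing information.
  SET trace_sort = on;
  SET client_min_messages = log;  -- show the LOG messages in this session too
  SELECT count(*)
  FROM (SELECT my_column FROM my_big_table ORDER BY my_column) AS s;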
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)