Re: Huge amount of memory consumed during transaction

From: Erik Jones <erik(at)myemma(dot)com>
To: henk de wit <henk53602(at)hotmail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Huge amount of memory consumed during transaction
Date: 2007-10-12 22:13:14
Message-ID: C604C8A9-C05F-41CE-AD21-F612809EF7AB@myemma.com
Lists: pgsql-performance

On Oct 12, 2007, at 4:48 PM, henk de wit wrote:

> > > I have work_mem set to 256MB.
> > Wow. That's inordinately high. I'd recommend dropping that to
> 32-43MB.
>
> Ok, it seems I was totally wrong with the work_mem setting. I'll
> adjust it to a saner level. Thanks a lot for the advice, everyone!
>
> > Explain is your friend in that respect.
>
> It shows all the operators, but it doesn't really say that these
> will all actually run in parallel, right? Of course, I guess it
> would give a good idea of what the upper bound is.

You can determine what runs in parallel based on the indentation of
the output. Items at the same indentation level under the same
"parent" line will run in parallel.
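For example, in a plan like this (a made-up query; the table names
and cost figures are hypothetical, just to show the shape):

```sql
EXPLAIN SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;

                              QUERY PLAN
----------------------------------------------------------------------
 Hash Join  (cost=35.50..152.00 rows=1000 width=72)
   Hash Cond: (o.customer_id = c.id)
   ->  Seq Scan on orders o  (cost=0.00..100.00 rows=1000 width=36)
   ->  Hash  (cost=25.00..25.00 rows=840 width=36)
         ->  Seq Scan on customers c  (cost=0.00..25.00 rows=840 width=36)
```

The Seq Scan on orders and the Hash sit at the same indentation under
the Hash Join, so both can be active at the same time, and each
sort/hash node can claim up to work_mem on its own. That's why a very
high work_mem can multiply out to far more memory than you'd expect
from a single setting.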

Erik Jones

Software Developer | Emma®
erik(at)myemma(dot)com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
