Avoid memory leaks during ANALYZE's compute_index_stats() ?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Cc: Jakub Ouhrabka <kuba(at)comgate(dot)cz>
Subject: Avoid memory leaks during ANALYZE's compute_index_stats() ?
Date: 2010-11-09 01:04:19
Message-ID: 2903.1289264659@sss.pgh.pa.us
Lists: pgsql-hackers

I looked into the out-of-memory problem reported by Jakub Ouhrabka here:
http://archives.postgresql.org/pgsql-general/2010-11/msg00353.php

It's pretty simple to reproduce, even in HEAD; what you need is an index
expression that computes a bulky intermediate result. His example is

md5(array_to_string(f1, ''::text))

where f1 is a bytea array occupying typically 15kB per row. Even
though the final result of md5() is only 32 bytes, evaluation of this
expression will eat about 15kB for the detoasted value of f1, roughly
double that for the results of the per-element output function calls
done inside array_to_string, and another 30k for the final result string
of array_to_string. And *none of that gets freed* until
compute_index_stats() is all done. In my testing, with the default
stats target of 100, this gets repeated for 30k sample rows, requiring
something in excess of 2GB in transient space. Jakub was using stats
target 500 so it'd be closer to 10GB for him.
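
To reproduce it yourself, something along these lines should show the
blowup (this is my approximation of Jakub's schema, not his actual DDL;
the table and index names are made up):

    CREATE TABLE leaky (f1 bytea[]);
    -- ten 1.5kB elements per row, so each array is about 15kB
    INSERT INTO leaky
        SELECT array_fill(decode(repeat('ff', 1500), 'hex'), ARRAY[10])
        FROM generate_series(1, 100000);
    CREATE INDEX leaky_md5 ON leaky (md5(array_to_string(f1, ''::text)));
    -- with the default stats target of 100 this samples 30000 rows,
    -- and the backend's transient memory use climbs into the gigabytes
    ANALYZE leaky;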

AFAICS the only practical fix for this is to have the inner loop of
compute_index_stats() copy each index expression value out of the
per-tuple memory context and into the per-index "Analyze Index" context.
That would allow it to reset the per-tuple memory context after each
FormIndexDatum call and thus clean up whatever intermediate result trash
the evaluation left behind. The extra copying is a bit annoying, since
it would add cycles while accomplishing nothing useful for index
expressions with no intermediate results, but I'm thinking this is a
must-fix.
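
To illustrate the shape of the change, the inner loop would do roughly
this (just a sketch, not the actual patch; I'm writing ind_context for
the "Analyze Index" context and eliding the surrounding bookkeeping):

    for (rowno = 0; rowno < numrows; rowno++)
    {
        /* toss intermediate trash left behind by the previous row */
        ResetExprContext(econtext);

        /* evaluate the index expressions in the per-tuple context */
        FormIndexDatum(indexInfo, slot, estate, values, isnull);

        /*
         * Copy the values we actually need into the longer-lived
         * per-index context, so the per-tuple context can safely be
         * reset next time through the loop.
         */
        oldcxt = MemoryContextSwitchTo(ind_context);
        for (i = 0; i < attr_cnt; i++)
        {
            if (!isnull[i])
                exprvals[i] = datumCopy(values[i],
                                        stats[i]->attrtype->typbyval,
                                        stats[i]->attrtype->typlen);
        }
        MemoryContextSwitchTo(oldcxt);
    }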

Comments?

regards, tom lane


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?
Date: 2010-11-09 01:09:55
Message-ID: 4CD89F63.6010800@agliodbs.com
Lists: pgsql-hackers

On 11/8/10 5:04 PM, Tom Lane wrote:
> The extra copying is a bit annoying, since
> it would add cycles while accomplishing nothing useful for index
> expressions with no intermediate results, but I'm thinking this is a
> must-fix.

I'd say that performance numbers are what to check on this. How much
does the copying affect low-memory expressions?

--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?
Date: 2010-11-09 04:03:06
Message-ID: 9003.1289275386@sss.pgh.pa.us
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> writes:
> On 11/8/10 5:04 PM, Tom Lane wrote:
>> The extra copying is a bit annoying, since
>> it would add cycles while accomplishing nothing useful for index
>> expressions with no intermediate results, but I'm thinking this is a
>> must-fix.

> I'd say that performance numbers are what to check on this. How much
> does the copying affect low-memory expressions?

It's noticeable but not horrible. I tried this test case:

regression=# \d tst
        Table "public.tst"
 Column |       Type       | Modifiers
--------+------------------+-----------
 f1     | double precision |
Indexes:
    "tsti" btree ((f1 + 1.0::double precision))

with 100000 rows on a 32-bit machine (so that float8 is pass-by-ref).
The ANALYZE time went from about 625 msec to about 685 msec. I believe that
this is pretty much the worst case percentage-wise: the table is small
enough to fit in RAM, so no I/O is involved, and the index expression is
about as simple and cheap to evaluate as it could possibly be, and the
amount of work done analyzing the main table is about as small as it
could possibly be. In any other situation those other components of
the ANALYZE cost would grow proportionally more than the copying cost.
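
For reference, the test setup was essentially this (the INSERT shown is
just a representative way to populate the table, not necessarily what I
ran):

    CREATE TABLE tst (f1 double precision);
    INSERT INTO tst SELECT random() FROM generate_series(1, 100000);
    CREATE INDEX tsti ON tst ((f1 + 1.0::double precision));
    \timing on
    ANALYZE tst;    -- compare timings with and without the patch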

Not-too-well-tested-yet patch attached.

regards, tom lane

Attachment: analyze-leak.patch (text/x-patch, 1.8 KB)

From: Jakub Ouhrabka <kuba(at)comgate(dot)cz>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Avoid memory leaks during ANALYZE's compute_index_stats() ?
Date: 2010-11-09 11:16:31
Message-ID: 4CD92D8F.6050305@comgate.cz
Lists: pgsql-hackers

Hi Tom,

thanks for the brilliant analysis - now we know how to avoid the problem.

As a side note: from the user's point of view it would be really nice to
know that the error was caused by auto-ANALYZE - at least on 8.2 that is
not obvious from the server log. It was the first message with the given
backend PID, so it looked to me like a problem during backend startup -
we have log_connections set to on...

Thanks,

Kuba

On 9.11.2010 2:04, Tom Lane wrote:
> I looked into the out-of-memory problem reported by Jakub Ouhrabka here:
> http://archives.postgresql.org/pgsql-general/2010-11/msg00353.php
>
> It's pretty simple to reproduce, even in HEAD; what you need is an index
> expression that computes a bulky intermediate result. His example is
>
> md5(array_to_string(f1, ''::text))
>
> where f1 is a bytea array occupying typically 15kB per row. Even
> though the final result of md5() is only 32 bytes, evaluation of this
> expression will eat about 15kB for the detoasted value of f1, roughly
> double that for the results of the per-element output function calls
> done inside array_to_string, and another 30k for the final result string
> of array_to_string. And *none of that gets freed* until
> compute_index_stats() is all done. In my testing, with the default
> stats target of 100, this gets repeated for 30k sample rows, requiring
> something in excess of 2GB in transient space. Jakub was using stats
> target 500 so it'd be closer to 10GB for him.
>
> AFAICS the only practical fix for this is to have the inner loop of
> compute_index_stats() copy each index expression value out of the
> per-tuple memory context and into the per-index "Analyze Index" context.
> That would allow it to reset the per-tuple memory context after each
> FormIndexDatum call and thus clean up whatever intermediate result trash
> the evaluation left behind. The extra copying is a bit annoying, since
> it would add cycles while accomplishing nothing useful for index
> expressions with no intermediate results, but I'm thinking this is a
> must-fix.
>
> Comments?
>
> regards, tom lane