Re: Curing plpgsql's memory leaks for statement-lifespan values

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Curing plpgsql's memory leaks for statement-lifespan values
Date: 2016-08-10 13:35:03
Message-ID: CAFj8pRBLHaNSr=tZo7DXACTwSoOm=jPKQFN4z+kPMxepUGMicA@mail.gmail.com
Lists: pgsql-hackers

2016-08-10 11:25 GMT+02:00 Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>:

> Hi
>
> 2016-07-27 16:49 GMT+02:00 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
>
>> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> > On Mon, Jul 25, 2016 at 6:04 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> >> I suppose that a fix based on putting PG_TRY blocks into all the
>> affected
>> >> functions might be simple enough that we'd risk back-patching it, but
>> >> I don't really want to go that way.
>>
>> > try/catch blocks aren't completely free, either, and PL/pgsql is not
>> > suffering from a deplorable excess of execution speed.
>>
>> BTW, just to annotate that a bit: I did some measurements and found out
>> that on my Linux box, creating/deleting a memory context
>> (AllocSetContextCreate + MemoryContextDelete) is somewhere around 10x
>> more expensive than a PG_TRY block. This means that the PG_TRY approach
>> would actually be faster for cases involving only a small number of
>> statements-needing-local-storage within a single plpgsql function
>> execution. However, the memory context creation cost is amortized across
>> repeated executions of a statement, whereas of course PG_TRY won't be.
>> We can roughly estimate that PG_TRY would lose any time we loop through
>> the statement in question more than circa ten times. So I believe the
>> way I want to do it will win speed-wise in cases where it matters, but
>> it's not entirely an open-and-shut conclusion.
>>
>> Anyway, there are enough other reasons not to go the PG_TRY route.
>>
>
> I did some synthetic benchmarks of plpgsql speed (a bubble sort, and a
> loop that raises and handles errors) and I don't see any slowdown.
>
> Handling exceptions is a little bit faster with your patch:
>
> CREATE OR REPLACE FUNCTION public.loop_test(a integer)
>  RETURNS void
>  LANGUAGE plpgsql
> AS $function$
> declare
>   x int;
> begin
>   for i in 1..a
>   loop
>     declare
>       s text;
>     begin
>       s := 'AHOJ';
>       x := (random()*1000)::int;
>       raise exception '*****';
>     exception when others then
>       x := 0; --raise notice 'handled';
>     end;
>   end loop;
> end;
> $function$
>
> head: 100K loops in 640 ms; patched: 610 ms
>
> Regards
>

Hi

I am sending a review of this patch:

1. There were no problems with patching or compilation.
2. All tests passed without any problem.
3. The advantages and disadvantages were discussed in detail in this
thread; the selected approach is good.

+ the code is a little bit reduced and cleaned up
+ the special context can be reused in the future

4. No new regression tests or documentation are necessary for this patch.
5. I didn't find a performance slowdown in targeted tests; the impact on
performance in real code should be insignificant.

I'll mark this patch as ready for committer.
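
As a side note, Tom's break-even estimate earlier in the thread (a context
create/delete costs roughly 10x a PG_TRY block, but is amortized across
repeated executions) can be sketched numerically. This is only an
illustrative model; the cost ratio is taken from his stated measurement,
not from new profiling:

```python
# Illustrative model of the trade-off Tom describes: PG_TRY is paid on
# every pass through the statement, while the statement's memory context
# is created once and reused. The 10x ratio is his rough measurement.

PG_TRY_COST = 1.0     # relative cost of one PG_TRY entry/exit
CONTEXT_COST = 10.0   # relative cost of AllocSetContextCreate + delete

def pg_try_total(loops: int) -> float:
    """PG_TRY overhead is incurred on every execution of the statement."""
    return PG_TRY_COST * loops

def context_total(loops: int) -> float:
    """Context creation is a one-time cost, amortized over all loops."""
    return CONTEXT_COST

# Break-even: PG_TRY loses once loops * PG_TRY_COST > CONTEXT_COST,
# i.e. past roughly ten iterations, matching the estimate in the thread.
break_even = CONTEXT_COST / PG_TRY_COST
print(break_even)                              # 10.0
print(pg_try_total(5) < context_total(5))      # True: few loops favor PG_TRY
print(pg_try_total(100) > context_total(100))  # True: many loops favor context
```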

Regards

Pavel

>
> Pavel
>
>
>
>> regards, tom lane
>>
>>
>> --
>> Sent via pgsql-hackers mailing list (pgsql-hackers(at)postgresql(dot)org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-hackers
>>
>
>
