Re: why can't plpgsql return a row-expression?

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: why can't plpgsql return a row-expression?
Date: 2012-12-05 08:06:22
Message-ID: CAFj8pRAYvrE8i8+Kc9rx8crkvMpLAHa0JVMj1cbgTqB1R0snow@mail.gmail.com
Lists: pgsql-hackers

2012/12/4 Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>:
> Hi,
>
> Here is the updated patch. I had overlooked the check in the loop for
> freeing the conversion map. Here are the results now.
>
> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral foo(i)
> as (a numeric, b numeric);
> sum
> ----------
> 303000.0
> (1 row)
>
> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral foo(i)
> as (a float, b numeric);
> sum
> ------------------
> 303000.000000012
>
> Regards,
> --Asif

Yes, it is fixed.

OK, I have no objection.

Regards

Pavel Stehule

>
>
> On Wed, Dec 5, 2012 at 12:40 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
> wrote:
>>
>> Hello
>>
>> I fully agree with Asif's arguments
>>
>> The previous tupconvert implementation was really strict, so the
>> enhanced tupconvert has very minimal impact on existing users - and I
>> measured the same speed for a plpgsql function on patched and unpatched
>> PostgreSQL.
>>
>> I tested:
>>
>> CREATE OR REPLACE FUNCTION public.foo(i integer)
>> RETURNS SETOF record
>> LANGUAGE plpgsql
>> AS $function$
>> declare r record;
>> begin
>> r := (10.1,20.1); for i in 1..10 loop return next r; end loop; return;
>> end;
>> $function$
>>
>> select sum(a) from generate_series(1,3000) g(i), lateral foo(i) as (a
>> numeric, b numeric);
>>
>> More: everywhere we use tupconvert internally we use a cached
>> conversion map, so I am sure the speed of ANALYZE cannot be impacted
>> by this patch.
>>
>> There are two other issues:
>>
>> First, it allows writing new kinds of slow code - an I/O cast is about
>> 5-20% slower, and with this patch anybody has new possibilities for such
>> "bad" code. But that is not a problem of this patch; it is related to
>> plpgsql's design, and we should probably move some conversions outside
>> plpgsql so the conversion map can be reused and plpgsql enhanced. I
>> have a simple partial solution: plpgsql_check_function can detect this
>> situation in the most typical use cases and raise a performance
>> warning. So this issue is not a stopper for me, because it is not a new
>> issue in plpgsql.
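>>
>> Just to illustrate the kind of pattern I mean (foo_text is a hypothetical
>> example, not from the patch): the record holds text values while the caller
>> asks for numeric columns, so every returned row has to go through the I/O
>> cast:
>>
>> CREATE OR REPLACE FUNCTION public.foo_text(i integer)
>> RETURNS SETOF record
>> LANGUAGE plpgsql
>> AS $function$
>> declare r record;
>> begin
>>   -- text fields here, numeric columns in the caller's column list,
>>   -- so each returned row pays the per-row I/O conversion
>>   r := ('10.1'::text, '20.1'::text);
>>   for j in 1..10 loop return next r; end loop;
>>   return;
>> end;
>> $function$;
>>
>> select sum(a) from generate_series(1,3000) g(i), lateral foo_text(i)
>> as (a numeric, b numeric);
>>
>> This is exactly the kind of case plpgsql_check_function could flag with a
>> performance warning.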
>>
>> The second issue is more significant:
>>
>> there is a bug:
>>
>> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral
>> foo(i) as (a numeric, b numeric);
>> sum
>> ----------
>> 303000.0
>> (1 row)
>>
>> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral
>> foo(i) as (a float, b numeric);
>> sum
>> -----------------------
>> 7.33675699577682e-232
>> (1 row)
>>
>> It produces a wrong result.
>>
>> And with a minimal change it kills the session:
>>
>> create or replace function foo(i integer)
>> returns setof record as $$
>> declare r record;
>> begin
>> r := (10,20); for i in 1..10 loop return next r; end loop; return;
>> end;
>> $$ language plpgsql;
>>
>> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral
>> foo(i) as (a numeric, b numeric);
>> The connection to the server was lost. Attempting reset: Failed.
>>
>> create or replace function fo(i integer)
>> returns record as $$
>> declare r record;
>> begin
>> r := (10,20); return r;
>> end;
>> $$ language plpgsql;
>>
>> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral
>> fo(i) as (a int, b numeric);
>> sum
>> -------
>> 30000
>> (1 row)
>>
>> postgres=# select sum(a) from generate_series(1,3000) g(i), lateral
>> fo(i) as (a numeric, b numeric);
>> The connection to the server was lost. Attempting reset: Failed.
>>
>> Regards
>>
>> Pavel Stehule
>>
>>
>> 2012/12/3 Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>:
>> > Hi,
>> >
>> > Thanks for the review. Please see the updated patch.
>> >
>> >> Hmm. We're running an I/O cast during do_tup_convert() now, and look
>> >> up the required functions for each tuple. I think if we're going to
>> >> support this operation at that level, we need to look up the necessary
>> >> functions at convert_tuples_by_foo level, and then apply
>> >> unconditionally
>> >> if they've been set up.
>> >
>> > Done. The TupleConversionMap struct now keeps an array of cast function
>> > OIDs to be applied, but only for those cases where the attribute type
>> > IDs do not match. This way no unnecessary casting will happen.
>> >
>> >>
>> >> Also, what are the implications for existing users of tupconvert? Do we
>> >> want to apply casting during ANALYZE, for example? AFAICS the patch
>> >> shouldn't break any case that works today, but I don't see that there
>> >> has been any analysis of this.
>> >
>> > I believe this part of the code should not impact existing users of
>> > tupconvert, as it relaxes a restriction that would previously have
>> > raised an error. It could have a small impact on performance, but only
>> > for cases where the attribute type IDs are different and implicitly
>> > castable.
>> >
>> > Besides, the existing users involve tupconvert in the case of
>> > inheritance, and in that case an exact match is expected.
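>> >
>> > Just as an illustration of that inheritance case (parent/child are
>> > hypothetical table names): column order may differ between parent and
>> > child, but every column type has to match exactly, so the conversion
>> > map never needs any cast functions:
>> >
>> > create table parent (a int, b numeric);
>> > create table child (b numeric, a int);
>> > alter table child inherit parent;
>> > insert into child select g, g from generate_series(1,1000) g;
>> >
>> > -- ANALYZE on the parent samples the child's rows and converts them to
>> > -- the parent's row type; the types are identical, only the column
>> > -- order differs, so no casting is involved
>> > analyze parent;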
>> >
>> > Regards,
>> > --Asif
>
>
