Re: Performance on Bulk Insert to Partitioned Table

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Charles Gomes <charlesrg(at)outlook(dot)com>, Ondrej Ivanič <ondrej(dot)ivanic(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance on Bulk Insert to Partitioned Table
Date: 2012-12-27 18:46:12
Message-ID: CAFj8pRAkL2L0-Dvq8q6gavz2k+eoBjAPawZiRemq3N8JMvV=aA@mail.gmail.com
Lists: pgsql-performance

2012/12/27 Jeff Janes <jeff(dot)janes(at)gmail(dot)com>:
> On Wednesday, December 26, 2012, Pavel Stehule wrote:
>>
>> 2012/12/27 Jeff Janes <jeff(dot)janes(at)gmail(dot)com>:
>> >
>> > More automated would be nice (i.e. one operation to make both the check
>> > constraints and the trigger, so they can't get out of sync), but would
>> > not
>> > necessarily mean faster.
>>
>
> <snip some benchmarking>
>
>> A native implementation should evaluate expressions significantly
>> more efficiently, mainly simple expressions (this matters for a
>> large number of partitions), and it can probably forward tuples
>> faster than a heavyweight INSERT statement (it is an open question
>> whether some of the overhead could be reduced with more
>> sophisticated syntax, e.g. by removing the record expansion).
>
>
> If the main goal is to make it faster, I'd rather see all of plpgsql get
> faster, rather than just a special case of partitioning triggers. For
> example, right now a CASE <expression> statement with 100 branches is about
> the same speed as an equivalent list of 100 elsif. So it seems to be doing
> a linear search, when it could be doing a hash that should be a lot faster.

The bottleneck is not in PL/pgSQL directly; it is in the PostgreSQL
expression executor. Personally I don't see any simple optimization -
maybe some variant of JIT (for the expression executor) could improve
performance.
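
For illustration, a typical plpgsql routing trigger looks roughly like
this (the table and column names here are only hypothetical). Every
branch condition is a separate expression that the executor has to
evaluate, one after another, for every inserted row:

CREATE OR REPLACE FUNCTION measurements_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- each condition below goes through the generic expression
    -- executor; with N partitions this is a linear scan of N checks
    IF NEW.logdate >= DATE '2012-01-01' AND NEW.logdate < DATE '2012-02-01' THEN
        INSERT INTO measurements_2012_01 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2012-02-01' AND NEW.logdate < DATE '2012-03-01' THEN
        INSERT INTO measurements_2012_02 VALUES (NEW.*);
    -- ... one ELSIF per partition ...
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- row already routed; skip insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();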

Any other optimization would require a significant redesign of
PL/pgSQL, which is not work I would start myself now - using plpgsql
triggers for partitioning is a poor use of plpgsql, and I believe that
once a native implementation exists, all of that work would be wasted.
Designing a generic C trigger, or a real full native implementation,
is a better investment of the effort.

Moreover, there is still the expensive INSERT statement - forwarding
the tuple at the C level should be significantly faster, because it
does not have to be generic.
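
A sketch of the dynamic variant (again with purely hypothetical names)
makes the per-row cost visible: even when the long ELSIF chain is
replaced by computing the partition name, every row still fires a full
INSERT statement through EXECUTE:

CREATE OR REPLACE FUNCTION measurements_insert_dynamic()
RETURNS trigger AS $$
DECLARE
    part text := 'measurements_' || to_char(NEW.logdate, 'YYYY_MM');
BEGIN
    -- every row still runs a complete INSERT statement; a native
    -- implementation could route the already-formed tuple into the
    -- right partition without this generic statement machinery
    EXECUTE format('INSERT INTO %I SELECT ($1).*', part) USING NEW;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;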

>
>
>>
>>
>> So a native implementation can bring a significant speed-up -
>> mainly if we can distribute tuples without expression evaluation
>> (in the executor).
>
>
> Making partitioning inserts native does open up other opportunities to make
> it faster, and also to make it administratively easier; but do we want to
> try to tackle both of those goals simultaneously? I think the
> administrative aspects would come first. (But I doubt I will be the one to
> implement either, so my vote doesn't count for much here.)

Anybody who starts work on a native implementation will have my
support (it is a feature that a lot of customers need). I have
customers who can support the development, and I believe there are
others. It really needs only one tenacious person, because it is about
two years of work.

Regards

Pavel

>
>
> Cheers,
>
> Jeff
