Re: In progress INSERT wrecks plans on table

From: Vitalii Tymchyshyn <tivv00(at)gmail(dot)com>
To: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
Cc: pgsql-performance(at)postgresql(dot)org, Gavin Flower <gavinflower(at)archidevsys(dot)co(dot)nz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Simon Riggs <simon(at)2ndquadrant(dot)com>
Subject: Re: In progress INSERT wrecks plans on table
Date: 2013-05-10 10:48:49
Message-ID: CABWW-d3A3r6sQW71phLJqGkZBdbWOxo+WsrA3qwpx3F5KYRM1g@mail.gmail.com
Lists: pgsql-hackers pgsql-performance

Well, could you write a trigger that would do what you need? As far as I
recall, analyze data is stored regardless of transaction boundaries. You
could keep counters in session variables and issue an explicit ANALYZE
once enough rows have been added.
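A minimal sketch of that idea (table name, variable name, and PL/pgSQL details are illustrative, not from the thread; the two-argument form of current_setting assumes PostgreSQL 9.6 or later):

```sql
-- Count inserted rows in a session variable; the application checks the
-- counter and runs an explicit ANALYZE when it passes some threshold.
CREATE OR REPLACE FUNCTION bump_insert_counter() RETURNS trigger AS $$
DECLARE
    n bigint;
BEGIN
    -- current_setting(..., true) returns NULL instead of raising an error
    -- when the variable has not been set yet in this session.
    n := coalesce(current_setting('myapp.rows_inserted', true), '0')::bigint + 1;
    PERFORM set_config('myapp.rows_inserted', n::text, false);  -- session scope
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER count_inserts
    AFTER INSERT ON my_table
    FOR EACH ROW EXECUTE PROCEDURE bump_insert_counter();
```

The client would then read `current_setting('myapp.rows_inserted')` periodically and run `ANALYZE my_table;` when the counter exceeds, say, autovacuum_analyze_scale_factor times the table's row count. Note that a per-row trigger adds overhead to every insert; a statement-level trigger would be cheaper for bulk loads.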
On 7 May 2013 08:33, "Mark Kirkwood" <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz> wrote:

> On 07/05/13 18:10, Simon Riggs wrote:
>
>> On 7 May 2013 01:23, <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz> wrote:
>>
>>> I'm thinking that a variant of (2) might be simpler to implement:
>>>
>>> (I think Matt C essentially beat me to this suggestion - he originally
>>> discovered this issue). It is probably good enough for only *new* plans
>>> to react to the increased/increasing number of in-progress rows. So this
>>> would require backends doing significant numbers of row changes to either
>>> directly update pg_statistic or report their in-progress numbers to the
>>> stats collector. The key change here is that the partial execution numbers
>>> would need to be sent. Clearly one would need to avoid doing this too
>>> often (!) - possibly only when the number of changed rows >
>>> autovacuum_analyze_scale_factor proportion of the relation concerned, or
>>> similar.
>>>
>>
>> Are you loading using COPY? Why not break down the load into chunks?
>>
>>
> INSERT - but we could maybe work around it by chunking the INSERT. However
> that *really* breaks the idea that in SQL you just say what you want, not
> how the database engine should do it! And more practically, it means that
> the most obvious and clear way to add your new data has nasty side effects,
> and you have to tiptoe around muttering secret incantations to make things
> work well :-)
>
> I'm still thinking that making postgres smarter about having current stats
> for getting the actual optimal plan is the best solution.
>
> Cheers
>
> Mark
>
>
>
>
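For reference, the chunking workaround Mark describes (and objects to) would look something like the following; table names, the staging source, and batch boundaries are all hypothetical:

```sql
-- Illustrative only: split one large INSERT into batches and ANALYZE
-- between them, so concurrent queries plan against fresh statistics.
INSERT INTO my_table SELECT * FROM staging WHERE id BETWEEN 1 AND 100000;
ANALYZE my_table;

INSERT INTO my_table SELECT * FROM staging WHERE id BETWEEN 100001 AND 200000;
ANALYZE my_table;

-- ...and so on, one ANALYZE per chunk.
```

This is exactly the "tell the engine how to do it" step the thread argues the planner should make unnecessary.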
