Re: Add min and max execute statement time in pg_stat_statement

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Mitsumasa KONDO <kondo(dot)mitsumasa(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>, Rajeev rastogi <rajeev(dot)rastogi(at)huawei(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Add min and max execute statement time in pg_stat_statement
Date: 2014-01-31 21:00:13
Message-ID: 21566.1391202013@sss.pgh.pa.us
Lists: pgsql-hackers

Peter Geoghegan <pg(at)heroku(dot)com> writes:
> On Fri, Jan 31, 2014 at 5:07 AM, Mitsumasa KONDO
> <kondo(dot)mitsumasa(at)gmail(dot)com> wrote:
>> It accelerates the effect on update (delete and insert) cost in the
>> pg_stat_statements table. So you proposed a new default of 10k for the
>> max setting. But that is not an essential solution, because it would
>> also give the old pg_stat_statements good performance.

> I was merely pointing out that that would totally change the outcome
> of your very artificial test case. Decreasing the number of statements
> to 5,000 would too. I don't think we should assign much weight to any
> test case where most or all of the statistics are wrong afterwards,
> due to there being so much churn.

To expand a bit:

(1) It's really not very interesting to consider pg_stat_statements'
behavior when the table size is too small to avoid 100% turnover;
that's not a regime where the module will provide useful functionality.

(2) It's completely unfair to pretend that a given hashtable size has the
same cost in old and new code; there's more than a 7X difference in the
shared-memory space eaten per hashtable entry. That does have an impact
on whether people can set the table size large enough to avoid churn.
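
To put a rough number on that (the per-entry sizes here are assumptions
for illustration, not measurements): the old code kept the normalized
query text inline in each shared hash entry, while the patched code keeps
only the key and counters in shared memory and spills query texts to a
file. A minimal back-of-envelope in C, assuming ~1 kB of inline text (the
track_activity_query_size default) and ~168 bytes of key plus counters:

    /* Illustrative shared-memory cost per hashtable entry; the byte
     * counts are assumptions, not measured values. */
    #include <stdio.h>

    int main(void)
    {
        const size_t key_and_counters = 168;   /* assumed: hash key + Counters struct */
        const size_t inline_query_text = 1024; /* assumed: track_activity_query_size default */

        size_t old_entry = key_and_counters + inline_query_text; /* text kept in shmem */
        size_t new_entry = key_and_counters;                      /* text moved to a file */

        printf("old: %zu bytes/entry, new: %zu bytes/entry, ratio ~%.1fx\n",
               old_entry, new_entry, (double) old_entry / new_entry);
        return 0;
    }

Under those assumptions the same shared-memory budget holds roughly seven
times as many entries.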

The theory behind this patch is that for the same shared-memory
consumption you can make the table size enough larger to move the cache
hit rate up significantly, and that that will more than compensate
performance-wise for the increased time needed to make a new table entry.
Now that theory is capable of being disproven, but an artificial example
with a flat query frequency distribution isn't an interesting test case.
We don't care about optimizing performance for such cases, because they
have nothing to do with real-world usage.
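
As a sketch of that trade-off (all numbers below are hypothetical, chosen
only to show the shape of the argument): the per-execution bookkeeping
cost is roughly hit_rate * hit_cost + (1 - hit_rate) * miss_cost, so a
table large enough to push the hit rate close to 100% can win even if
creating an entry, which now includes writing the query text to a file,
got more expensive.

    /* Expected bookkeeping cost per statement execution under assumed
     * hit rates and per-event costs (microseconds); purely illustrative. */
    #include <stdio.h>

    static double expected_cost(double hit_rate, double hit_cost, double miss_cost)
    {
        return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost;
    }

    int main(void)
    {
        /* Smaller table at the same shmem budget: cheaper misses, more of them. */
        printf("old: %.2f us/exec\n", expected_cost(0.90, 1.0, 50.0));

        /* Larger table: misses cost more (query text written to a file),
         * but there are far fewer of them. */
        printf("new: %.2f us/exec\n", expected_cost(0.99, 1.0, 200.0));
        return 0;
    }

With a flat query distribution and a table far smaller than the number of
distinct queries, the hit rate is near zero either way, which is why such
a benchmark says little about the regime the module is meant for.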

regards, tom lane
