contrib/pg_stat_statements 1212

From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: pgsql-hackers(at)postgresql(dot)org
Subject: contrib/pg_stat_statements 1202
Date: 2008-12-02 09:35:30
Message-ID: 20081202170106.A0EC.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers

Here is an updated version of contrib/pg_stat_statements.

New modifications from the last version are:
1. New counters in struct Instrumentation.
2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section.
3. Buffer statistics counters are not reset to zero anymore.

----
1. New counters in struct Instrumentation.
[in include/executor/instrument.h, backend/executor/instrument.c]

The following fields are added. They are used by pg_stat_statements
and EXPLAIN ANALYZE VERBOSE. getrusage() is called for each node.
Write-counters are not included because I think they are not so useful.

buffer_gets; /* # of buffer gets */
buffer_hits; /* # of buffer hits */
buffile_reads; /* # of buffile reads */
utime; /* user time in sec */
stime; /* sys time in sec */
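
For illustration, here is a minimal sketch of how the listed fields might sit in
struct Instrumentation and how the getrusage() deltas could be accumulated; the
types, the helper name, and its placement are assumptions, not the actual patch:

#include <sys/resource.h>

/* Sketch only: field types and the helper below are assumptions. */
typedef struct Instrumentation
{
    /* ... existing timing fields (running, starttime, counter, ntuples, ...) */

    long        buffer_gets;    /* # of buffer gets */
    long        buffer_hits;    /* # of buffer hits */
    long        buffile_reads;  /* # of buffile reads */
    double      utime;          /* user time in sec */
    double      stime;          /* sys time in sec */
} Instrumentation;

/* Accumulate the CPU time used between two getrusage() snapshots. */
static void
accumulate_cpu_usage(Instrumentation *instr,
                     const struct rusage *start, const struct rusage *end)
{
    instr->utime += (end->ru_utime.tv_sec - start->ru_utime.tv_sec) +
                    (end->ru_utime.tv_usec - start->ru_utime.tv_usec) / 1000000.0;
    instr->stime += (end->ru_stime.tv_sec - start->ru_stime.tv_sec) +
                    (end->ru_stime.tv_usec - start->ru_stime.tv_usec) / 1000000.0;
}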

----
2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section.
[in backend/commands/explain.c]

I borrowed the idea from Vladimir,
"Buffer pool statistics in Explain Analyze".
http://archives.postgresql.org/message-id/1d709ecc0810111624i7419b179v1789b2ca681987c0@mail.gmail.com

Here is a sample output. We'd better add a line break before
the 'actual' section because the line is too wide to display.

=# EXPLAIN ANALYZE VERBOSE SELECT * FROM accounts;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Seq Scan on accounts (cost=0.00..2688.29 rows=101829 width=97) (actual time=0.072..119.633 rows=100000 loops=1 gets=1670 reads=1638 local_reads=0 CPU=0.06/0.03 sec)
Output: aid, bid, abalance, filler
Total runtime: 209.556 ms
(3 rows)
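
For readers following along, a minimal sketch of how the counters could be
appended to the 'actual' section in explain.c, assuming the Instrumentation
fields sketched under item 1; the helper name is made up, and the labels simply
mirror the field names rather than the exact 'reads'/'local_reads' labels in the
sample above:

#include "postgres.h"
#include "lib/stringinfo.h"

/* Sketch: append the proposed counters after "rows=... loops=...". */
static void
append_buffer_stats(StringInfo str, const Instrumentation *instr)
{
    appendStringInfo(str,
                     " gets=%ld hits=%ld buffile_reads=%ld CPU=%.2f/%.2f sec",
                     instr->buffer_gets,
                     instr->buffer_hits,
                     instr->buffile_reads,
                     instr->utime,
                     instr->stime);
}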

----
3. Buffer statistics counters are not reset to zero anymore.
[in storage/buffer/bufmgr.c]

ResetBufferUsage() now saves the current counters in global variables as
a baseline, and buffer statistics are measured as the difference from it,
because the counters are used by struct Instrumentation.
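
Roughly, the described behaviour could look like the sketch below; the counter
and helper names are illustrative, and only two counters are shown:

/* Running totals maintained by bufmgr.c; never reset to zero anymore. */
extern long ReadBufferCount;
extern long BufferHitCount;

/* Baselines remembered at the last ResetBufferUsage() call. */
static long ReadBufferBase = 0;
static long BufferHitBase = 0;

void
ResetBufferUsage(void)
{
    /*
     * Instead of zeroing the counters (which would confuse struct
     * Instrumentation, which reads the raw totals), remember the current
     * values as the new baseline.
     */
    ReadBufferBase = ReadBufferCount;
    BufferHitBase = BufferHitCount;
}

/* Statistics reporting then shows the difference from the baseline. */
long
ReadBufferCountSinceReset(void)
{
    return ReadBufferCount - ReadBufferBase;
}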

----

Comments welcome.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

Attachment Content-Type Size
pg_stat_statements.1202.tar.gz application/octet-stream 14.2 KB

From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-04 21:26:03
Message-ID: 1d709ecc0812041326t6885813at457a73796076c433@mail.gmail.com
Lists: pgsql-hackers

> 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section.
>
I do not get the point of "VERBOSE".
As far as I understand, explain analyze (without verbose) will anyway add
overhead for calculation of "gets/hits/cpu". Why discard that information in
"non verbose" mode? Just to make the investigation more complex?

> Write-counters are not included because I think they are not so useful.

Never say never. I guess they (or just one counter for accumulated writes)
could be useful for monitoring operations that spill to disk. For
instance, PostgreSQL does not show the amount of temp space used for a join.

>
> buffer_gets; /* # of buffer gets */
> buffer_hits; /* # of buffer hits */
> buffile_reads; /* # of buffile reads */

I guess it makes sense to expand "buffile reads" into "buffer file reads" or
just "file reads".

> Here is a sample output. We'd better add a line break before
> the 'actual' section because the line is too wide to display.

I wish there was a way to get the results of explain into some table. I wish
it was the default output "format". That would make life of pgAdmin easier,
and improve readability even in psql. Don't you think there is something
wrong with having "cost=... rows=... loops=..." in each and every row?

> ResetBufferUsage() now saves the current counters in global variables as
> a baseline, and buffer statistics are measured as the difference from it,
> because the counters are used by struct Instrumentation.

That would definitely work well for Instrumentation (it will not notice
resetBufferUsage any more); however, that will not isolate the callers that do
the reset. I am afraid the idea of having an API for "reset" is broken and it
makes sense to remove that function. However, it looks like that is out of the
scope of this patch.

Regards,
Vladimir Sitnikov


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-04 22:11:51
Message-ID: 873ah3zj2w.fsf@oxford.xeocode.com
Lists: pgsql-hackers

"Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com> writes:

> I wish there was a way to get the results of explain into some table. I wish
> it was the default output "format". That would make life of pgAdmin easier,
> and improve readability even in psql. Don't you think there is something
> wrong with having "cost=... rows=... loops=..." in each and every row?

A number of people have suggested we switch to XML.

An alternative would be to build up a tuplestore of data and then send that to
the client in a separate result set. That's kind of nice because it would give
us a way to send both the real results and the explain results. And at least
we already have an API for accessing result sets.

Oracle's approach is to have the explain command stuff the results into a
table. That has advantages for tools, especially if you want to be able to
look at plans generated by other sessions. But it's pretty awkward for the
simple case.

I'm sure there are dozens of ways to skin this cat. Anyone have any more?
We probably just have to pick one and run with it.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!


From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-05 00:32:30
Message-ID: 1d709ecc0812041632u512d8a9cq641cd14eaa0e85c1@mail.gmail.com
Lists: pgsql-hackers

> "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com> writes:
>
> > I wish there was a way to get the results of explain into some table. I wish
> > it was the default output "format". That would make life of pgAdmin easier,
> > and improve readability even in psql. Don't you think there is something
> > wrong with having "cost=... rows=... loops=..." in each and every row?
>
> A number of people have suggested we switch to XML.

I do not see much benefit of XML:
* XML is not human-readable
* Plain old result set is even easier to process since it is the main
PostgreSQL interface at this point

The only benefit of XML I could imagine is it could provide a nicer markup
for sort/hash/etc nodes. It is not that nice to have a column "sort method"
that would be empty for nearly all the rows. At the same time it looks fine
to have a column with XML inside for any additional information an execution
node wants to provide (like topN-allrows sort / number of batches in a hash
join or whatever).

>
> An alternative would be to build up a tuplestore of data and then send that to
> the client in a separate result set. That's kind of nice because it would give
> us a way to send both the real results and the explain results. And at least
> we already have an API for accessing result sets.

Sounds good. As for me, the current output of explain is not very easy to read:
it suits only the "find timings for a particular node" workflow (I
mean, the source is a particular node, the result is
timings/rows/buffers/etc). However, from my point of view, when it comes to
query tuning the main workflow is "find the node by suspicious timings". If all
the relevant data were displayed in the same column it would be easier to
read. Consider all the row counts in the very first column.

>
>
> Oracle's approach is to have the explain command stuff the results into a
> table. That has advantages for tools, especially if you want to be able to
> look at plans generated by other sessions.

I do not believe that workflow makes sense. I have never ever thought of it.

An external table makes sense if you have several output formats (say, create a
formatting function for psql and let pgAdmin format the plan on its own).

Regards,
Vladimir Sitnikov


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
Cc: Gregory Stark <stark(at)enterprisedb(dot)com>, ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-05 08:39:48
Message-ID: Pine.GSO.4.64.0812050330520.22750@westnet.com
Lists: pgsql-hackers

On Fri, 5 Dec 2008, Vladimir Sitnikov wrote:

>> Oracle's approach is to have the explain command stuff the results into a
>> table. That has advantages for tools, especially if you want to be able to
>> look at plans generated by other sessions.
>
> I do not believe that workflow makes sense. I have never ever thought of it.

The main benefit is that you can track how EXPLAIN plans change over time.
Archive a snapshot of what the plans for all your common queries look like
every day, and the day one of them blows up and starts doing something
wrong you've got a *lot* more information to work with for figuring out
what happened--whether it was a minor query change, some stats that got
slowly out of whack, etc. I wouldn't just immediately dismiss that
workflow as senseless without thinking about it a bit first; there are
some good advantages to it.

Greg Sabino Mullane had a neat blog entry on saving plans to tables in
PostgreSQL; unfortunately, the Planet PostgreSQL outage seems to have eaten
it.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: "Gregory Stark" <stark(at)enterprisedb(dot)com>, "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-05 14:14:19
Message-ID: 1d709ecc0812050614o7d5ffd77o81c9c1b01bbe9bf1@mail.gmail.com
Lists: pgsql-hackers

>
> The main benefit is that you can track how EXPLAIN plans change over time.

It is not required to output the plan *into* some table to be able to track it
over time. If EXPLAIN returns a table, it is up to you to perform "insert into
history select * from explain(...)".

The workflow that does not make sense to me is "look at plans generated _into
some plan_table_ by other sessions in Oracle".
I am 100% sure it really makes sense to have some view like pg_execute_plan
that will reveal execution plans for currently running queries (see
v$sql_plan in Oracle). However, I would stress once again that I have no idea
what the sense could be in "one session explained into plan_table, while the
other reads that plan".

Does that make sense?

Regards,
Vladimir Sitnikov


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-08 02:13:35
Message-ID: 34d269d40812071813v281326c8s47fd40b0dfddd2f4@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 2, 2008 at 02:35, ITAGAKI Takahiro
<itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
> Here is an update version of contrib/pg_stat_statements.

Hello again!

I was assigned to review this.

Submission review:
Is the patch in standard form? Yes
Does it apply cleanly to current HEAD? Yes (with fuzz)
Does it include reasonable tests, docs, etc? Yes

Usability review:
Does the patch actually implement that? Yes
Do we want that? I think so
Do we already have it? No
Does it follow SQL spec, or the community-agreed behavior? Sure
Are there dangers? No
Have all the bases been covered? Yes

Feature test:
Does the feature work as advertised? Yes
Are there corner cases the author has failed to consider? No

Performance review
Does the patch slow down simple tests?

Does not seem to...

(test.sql)
select * from tenk1 a join tenk1 b using (unique1);

(dual core machine, --enable-debug, --enable-cassert build)
pgbench -c 2 -T60 -n -f test.sql

HEAD: tps = 9.674423
PATCH: tps = 9.695784

If it claims to improve performance, does it?
Does it slow down other things?

Coding review:
Does it follow the project coding guidelines? Yes
Are there portability issues? No
Will it work on Windows/BSD etc? Think so
Are the comments sufficient and accurate? I think so
Does it do what it says, correctly? Yes
Does it produce compiler warnings? No
Can you make it crash? No

I'm not sure about the new counters in struct Instrumentation or the
hooks (but did not see anything obviously wrong with them)... A
committer can better comment on those. Also find attached some very
minor verbiage changes. If there is nothing else on your todo list
for this I'll mark it as Ready for Committer on the wiki.

Attachment Content-Type Size
patch.patch text/x-patch 895 bytes

From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 06:28:46
Message-ID: 20081209151642.8818.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


"Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:

> I was assigned to review this.

Thanks for your review.
I assume that the basic concepts are ok and the focus of discussion is on:
- New counters in struct Instrumentation.
(buffer usage and CPU usage)
- Should EXPLAIN ANALYZE show those counters.

> Performance review
> HEAD: tps = 9.674423
> PATCH: tps = 9.695784
>
> If it claims to improve performance, does it?
> Does it slow down other things?

The patch should not slow down normal use if you don't use
pg_stat_statements module, but it might slow down EXPLAIN ANALYZE
because some fields are added in struct Instrumentation and
they are counted up per tuple in EXPLAIN ANALYZE.

> Also find attached some very minor verbiage changes.

Thanks. I'll apply your fixes.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 07:16:37
Message-ID: 20081209154601.881B.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


"Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com> wrote:

> > 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section.
> >
> I do not get the point of "VERBOSE".
> As far as I understand, explain analyze (without verbose) will anyway add
> overhead for calculation of "gets/hits/cpu". Why discard that information in
> "non verbose" mode? Just to make the investigation more complex?

I thought that the output of the new counters is too wide and it breaks
compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to
think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
added in 8.4. However, overheads should be avoided. We could have
two kinds of instrumentation, time-only or all-stats.

Another idea is to have buffer and cpu counters not in Instrumentation
but in QueryDesc (i.e., only one per query). A new field for the counters
will be initialized in the pg_stat_statements module. EXPLAIN ANALYZE could
also use the field, but the result is printed only at the end of the query plan,
as follows. We can avoid the too-wide-line problem with this approach.

=# EXPLAIN ANALYZE SELECT ...;
QUERY PLAN
---------------------------
...
Total runtime: 24.273 ms
CPU usage: user=20.2 sys=2.4 #
Buffer usage: gets=100 reads=10 writes=20 # added lines
Tempfile i/o: reads=50 writes=50 #
(10 rows)
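
A rough sketch of what such a per-query accumulator and its end-of-plan printout
might look like; the struct, field names, and helper here are guesses for
illustration, not the actual patch:

#include "postgres.h"
#include "lib/stringinfo.h"

/* One accumulator per query; a pointer to it could live in QueryDesc. */
typedef struct QueryResourceUsage
{
    long        gets;           /* shared buffer gets */
    long        reads;          /* shared buffer reads */
    long        writes;         /* shared buffer writes */
    long        temp_reads;     /* temp file (BufFile) reads */
    long        temp_writes;    /* temp file (BufFile) writes */
    double      cpu_user;       /* user CPU seconds */
    double      cpu_sys;        /* system CPU seconds */
} QueryResourceUsage;

/* EXPLAIN ANALYZE would print the summary lines from it after the plan. */
static void
show_query_resource_usage(StringInfo str, const QueryResourceUsage *u)
{
    appendStringInfo(str, "CPU usage: user=%.1f sys=%.1f\n",
                     u->cpu_user, u->cpu_sys);
    appendStringInfo(str, "Buffer usage: gets=%ld reads=%ld writes=%ld\n",
                     u->gets, u->reads, u->writes);
    appendStringInfo(str, "Tempfile i/o: reads=%ld writes=%ld\n",
                     u->temp_reads, u->temp_writes);
}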

> I wish there was a way to get the results of explain into some table. I wish
> it was the default output "format". That would make life of pgAdmin easier,
> and improve readability even in psql. Don't you think there is something
> wrong with having "cost=... rows=... loops=..." in each and every row?

I absolutely agree that XML-explain and TABLE-explain are useful,
but they are features independent of pg_stat_statements. So I'd
like to discuss them in separate threads (and they will come in 8.5).

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 07:56:46
Message-ID: 1d709ecc0812082356n6f7d0c95k180deeaa39f47dd@mail.gmail.com
Lists: pgsql-hackers

>
> > > 2. EXPLAIN ANALYZE VERBOSE shows buffer statistics in 'actual' section.
> > >
> > I do not get the point of "VERBOSE".
> > As far as I understand, explain analyze (without verbose) will anyway add
> > overhead for calculation of "gets/hits/cpu". Why discard that information in
> > "non verbose" mode? Just to make the investigation more complex?
>
> I thought that the output of the new counters is too wide and it breaks
> compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4. However, overheads should be avoided. We could have
> two kinds of instrumentation, time-only or all-stats.

I am not sure if this impact (in terms of compatibility) is really that big.
As far as I could understand, pgAdmin parses modified explain analyze output
well. However, pgAdmin does not support "verbose".

The impact in terms of "computational" overhead could be measured. I guess
it depends only on the number of calls to the instrumentation (that is,
either rows or loops). Thus, after explaining some basic select from
generate_series, one could adjust the timings. We could even incorporate
this into EXPLAIN ANALYZE, so it would display "cpu is 0.5s, while
instrumentation cpu is 0.3s".
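
The correction being suggested is simple arithmetic; a sketch, with all names
illustrative: estimate the per-call instrumentation cost from a calibration run,
then subtract it from each measured node time.

/* Sketch of the suggested correction; all names are illustrative. */
static double
adjusted_node_time(double measured_seconds,
                   double instrumented_calls,   /* roughly rows * loops */
                   double overhead_per_call)    /* from a calibration run */
{
    double      adjusted = measured_seconds - instrumented_calls * overhead_per_call;

    return (adjusted > 0.0) ? adjusted : 0.0;
}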

> Another idea is to have buffer and cpu counters not in Instrumentation
> but in QueryDesc (i.e., only one per query). A new field for the counters
> will be initialized in the pg_stat_statements module. EXPLAIN ANALYZE could
> also use the field, but the result is printed only at the end of the query plan,
> as follows. We can avoid the too-wide-line problem with this approach.

A single number per query is sufficient only for pg_stat_statements purposes.
That will give an insight into what the top consuming queries are (by cpu
time, by gets, etc).
However, a single "gets=... reads=..." is not sufficient to pinpoint the
problem, especially in the case of a complex query (that is comparable to "the
query returned N rows" vs "this plan node returned N rows").

> =# EXPLAIN ANALYZE SELECT ...;
> QUERY PLAN
> ---------------------------
> ...
> Total runtime: 24.273 ms
> CPU usage: user=20.2 sys=2.4 #
> Buffer usage: gets=100 reads=10 writes=20 # added lines
> Tempfile i/o: reads=50 writes=50 #
> (10 rows)

I wish pgAdmin (or whatever client) had an option to fetch those counters for
each and every SQL query and display the consumed resources on a separate
tab. I mean, even before/after a plain "select" (without any explain). That
would show you how the query would behave without any instrumentation.

Regards,
Vladimir Sitnikov


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Alex Hunsaker <badalex(at)gmail(dot)com>
Cc: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 08:20:46
Message-ID: Pine.GSO.4.64.0812090307490.13024@westnet.com
Lists: pgsql-hackers

On Sun, 7 Dec 2008, Alex Hunsaker wrote:

> (dual core machine, --enable-debug, --enable-cassert build)
> pgbench -c 2 -T60 -n -f test.sql
>
> HEAD: tps = 9.674423
> PATCH: tps = 9.695784

Two general suggestions here, not specific to this patch:

While it's good to do most testing with debug and cassert turned on, you
shouldn't report performance results with those two flags enabled. What
if the patch has some large amount of overhead that only shows up when
compiled with debug or asserts on? You'd end up reporting a performance
loss that doesn't actually exist in a real build. Unfortunately, the only
way to get good performance results is to have a parallel build done with
those off, in addition to the debug/assert one used to catch bugs.

The above pgbench is executing less than 600 actual tests (60 seconds @
9.7TPS). That seems a bit short to me. If you sorted out the above and
run this again, it would be good to let pgbench run for a lot longer than
1 minute, to see if the results show some more significant difference.
With this few TPS, it would be nice to let that run for 30 minutes or more
if you can find some time to schedule that.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 13:59:51
Message-ID: 13696.1228831191@sss.pgh.pa.us
Lists: pgsql-hackers

ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> writes:
> I thought that the output of the new counters is too wide and it breaks
> compatibility of EXPLAIN ANALYZE. On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4. However, overheads should be avoided. We could have
> two kinds of instrumentation, time-only or all-stats.

I've got a serious problem with the way that this patch is being
presented. It's being named and described as though it's "just another
contrib module", but in fact it makes invasive, undocumented changes to
the behavior of the core EXPLAIN functionality --- changes that
certainly cannot be claimed to have been agreed to by the community,
since most of us probably weren't aware that there was any such thing
going on inside this patch.

Please split this into two separate patches that can be separately
evaluated.

regards, tom lane


From: "Robert Haas" <robertmhaas(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 15:26:04
Message-ID: 603c8f070812090726h50f41e3em68c219dab4f02b2@mail.gmail.com
Lists: pgsql-hackers

> On the other hand, we don't have to
> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
> added in 8.4.

Uh, it exists for me in 8.2.9.

Welcome to psql 8.2.9, the PostgreSQL interactive terminal.

Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit

portal=# explain analyze verbose select 1;
QUERY PLAN

--------------------------------------------------------------------------------
----
{RESULT
:startup_cost 0.00
:total_cost 0.01
:plan_rows 1
:plan_width 0
:targetlist (
{TARGETENTRY
:expr
{CONST
:consttype 23
:constlen 4
:constbyval true
:constisnull false
:constvalue 4 [ 1 0 0 0 ]
}
:resno 1
:resname ?column?
:ressortgroupref 0
:resorigtbl 0
:resorigcol 0
:resjunk false
}
)
:qual <>
:lefttree <>
:righttree <>
:initPlan <>
:extParam (b)
:allParam (b)
:nParamExec 0
:resconstantqual <>
}

Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.010..0.012 rows=1 loops
=1)
Total runtime: 0.244 ms
(35 rows)

...Robert


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 16:12:55
Message-ID: 87bpvliayg.fsf@oxford.xeocode.com
Lists: pgsql-hackers

"Robert Haas" <robertmhaas(at)gmail(dot)com> writes:

>> On the other hand, we don't have to
>> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
>> added in 8.4.
>
> Uh, it exists for me in 8.2.9.

The current behaviour is newly added in 8.4. In 8.2 it meant something
completely different and quite useless for end-users in any case, so backwards
compatibility isn't important.

What strikes me as a convenient approach is basically using EXPLAIN VERBOSE as
a playground where we feel free to add everything we think of. If people run a
command marked "VERBOSE" and complain it prints too much...

As stuff matures and becomes indispensable we could consider moving it to the
regular EXPLAIN or implement some way to specify precisely which data the user
wants. Or just say XML/table data/whatever will solve the problem for us.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 16:22:54
Message-ID: 16198.1228839774@sss.pgh.pa.us
Lists: pgsql-hackers

"Robert Haas" <robertmhaas(at)gmail(dot)com> writes:
>> On the other hand, we don't have to
>> think of compatibility in EXPLAIN ANALYZE VERBOSE because it is newly
>> added in 8.4.

> Uh, it exists for me in 8.2.9.

EXPLAIN VERBOSE has existed at least back to 7.0, probably further.
However, we've felt free to whack around what it outputs, so maybe
the backwards-compatibility issue isn't very strong.

A possibly stronger complaint is that ANALYZE and VERBOSE have always
been orthogonal options to EXPLAIN, and now there'd be some interaction
between them.

regards, tom lane


From: "Robert Haas" <robertmhaas(at)gmail(dot)com>
To: "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 16:35:04
Message-ID: 603c8f070812090835l3d800550gb249e21d67220704@mail.gmail.com
Lists: pgsql-hackers

> As stuff matures and becomes indispensable we could consider moving it to the
> regular EXPLAIN or implement some way to specify precisely which data the user
> wants. Or just say XML/table data/whatever will solve the problem for us.

I think some way to specify precisely which data the user wants is the
way to go. The amount of data that there is to be printed is only
going to continue to increase. If the only toggle is a boolean flag
to display ALL or NONE of it, then every time someone proposes a new
type of output, we're going to argue about whether it's useful enough
to be worth the display real estate.

I'm not sure what the best way is though. I don't think continuing to
add keywords between EXPLAIN and the start of the query is very
scalable. Putting parentheses around the option list seems like it
might eliminate a lot of grammar headaches:

EXPLAIN (option, option, option...) SELECT ...

...Robert


From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Gregory Stark" <stark(at)enterprisedb(dot)com>, "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 16:51:44
Message-ID: 1d709ecc0812090851p2d6fa1a7uee3908ccc4775d8@mail.gmail.com
Lists: pgsql-hackers

>
> I'm not sure what the best way is though. I don't think continuing to
> add keywords between EXPLAIN and the start of the query is very
> scalable. Putting parentheses around the option list seems like it
> might eliminate a lot of grammar headaches:

Do you think it is required to invent a special grammar just for presentation
purposes?

I guess the database should not deal with presentation. Provided "explain"
returns a table, it is up to the client to do the formatting. I do not believe
it makes sense to create several different explain outputs and redo all the
work in 8.5.

It still could make sense to have several options for "explain" if that would
result in *different instrumentation* (e.g. "explain" vs "explain analyze").

Regards,
Vladimir Sitnikov


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 17:07:04
Message-ID: 34d269d40812090907h764879f1k6f18962e9bc69041@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 8, 2008 at 23:28, ITAGAKI Takahiro
<itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
>
> "Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:
>
>> I was assigned to review this.
>
> Thanks for your review.

> I assume that the basic concepts are ok and the focus of discussion is on:
> - New counters in struct Instrumentation.
> (buffer usage and CPU usage)
> - Should EXPLAIN ANALYZE show those counters.

Right, I would split your next patch into 3 parts: the hooks you
need, the contrib module, and the new counters. I think I saw older
versions of the patch that did this... did it just get lost in this version?

>
>> Performance review
>> HEAD: tps = 9.674423
>> PATCH: tps = 9.695784
>>
>> If it claims to improve performance, does it?
>> Does it slow down other things?

> The patch should not slow down normal use if you don't use
> pg_stat_statements module, but it might slow down EXPLAIN ANALYZE
> because some fields are added in struct Instrumentation and
> they are counted up per tuple in EXPLAIN ANALYZE.

Err, yes, sorry, I was just following
http://wiki.postgresql.org/wiki/Reviewing_a_Patch; those two did not
seem pertinent so I did not answer them.


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 17:11:20
Message-ID: 34d269d40812090911k76d3789fmc41ff0d40c8dc226@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 9, 2008 at 01:20, Greg Smith <gsmith(at)gregsmith(dot)com> wrote:
> On Sun, 7 Dec 2008, Alex Hunsaker wrote:
>
>> (dual core machine, --enable-debug, --enable-cassert build)
>> pgbench -c 2 -T60 -n -f test.sql
>>
>> HEAD: tps = 9.674423
>> PATCH: tps = 9.695784
>
> Two general suggestions here, not specific to this patch:
>
> While it's good to do most testing with debug and cassert turned on, you
> shouldn't report performance results with those two flags enabled. What if
> the patch has some large amount of overhead that only shows up when compiled
> with debug or asserts on? You'd end up reporting a performance loss that
> doesn't actually exist in a real build. Unfortunately, the only way to get
> good performance results is to have a parallel build done with those off, in
> addition to the debug/assert one used to catch bugs.

Right, which is part of the reason I noted it was a cassert build.

> The above pgbench is executing less than 600 actual tests (60 seconds @
> 9.7TPS). That seems a bit short to me. If you sorted out the above and run
> this again, it would be good to let pgbench run for a lot longer than 1
> minute, to see if the results show some more significant difference. With
> this few TPS, it would be nice to let that run for 30 minutes or more if you
> can find some time to schedule that.

Ok, that's useful to know as well, thanks! (I'll go re-run them)

> --
> * Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD
>


From: Greg Stark <greg(dot)stark(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Gregory Stark <stark(at)enterprisedb(dot)com>, ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 17:44:51
Message-ID: EBE2E804-23D5-4862-AFE8-15AE2EA81AD7@enterprisedb.com
Lists: pgsql-hackers

Yes this is one reasonable option, as is the idea of using XML or a
table and making it the client's problem. Neither are going to happen
for this release I think.

And in any case it will always be useful to have an option to print
all the available information anyway, so we may as well do that with
"verbose".

--
Greg

On 9 Dec 2008, at 16:35, "Robert Haas" <robertmhaas(at)gmail(dot)com> wrote:

>> As stuff matures and becomes indispensable we could consider moving
>> it to the
>> regular EXPLAIN or implement some way to specify precisely which
>> data the user
>> wants. Or just say XML/table data/whatever will solve the problem
>> for us.
>
> I think some way to specify precisely which data the user wants is the
> way to go. The amount of data that there is to be printed is only
> going to continue to increase. If the only toggle is a boolean flag
> to display ALL or NONE of it, then every time someone proposes a new
> type of output, we're going to argue about whether it's useful enough
> to be worth the display real estate.
>
> I'm not sure what the best way is though. I don't think continuing to
> add keywords between EXPLAIN and the start of the query is very
> scalable. Putting parentheses around the option list seems like it
> might eliminate a lot of grammar headaches:
>
> EXPLAIN (option, option, option...) SELECT ...
>
> ...Robert


From: "Robert Haas" <robertmhaas(at)gmail(dot)com>
To: "Greg Stark" <greg(dot)stark(at)enterprisedb(dot)com>
Cc: "Gregory Stark" <stark(at)enterprisedb(dot)com>, "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 17:53:57
Message-ID: 603c8f070812090953w2f6a65d1gaad92de9cbd4534e@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 9, 2008 at 12:44 PM, Greg Stark <greg(dot)stark(at)enterprisedb(dot)com> wrote:
> Yes this is one reasonable option, as is the idea of using XML or a table
> and making it the client's problem. Neither are going to happen for this
> release I think.

Agreed.

> And in any case it will always be useful to have an option to print all the
> available information anyway, so we may as well do that with "verbose".

Sounds very nice.

...Robert


From: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Greg Stark" <greg(dot)stark(at)enterprisedb(dot)com>, "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 18:01:18
Message-ID: 1d709ecc0812091001jb9e362emd28b656bf6831f7e@mail.gmail.com
Lists: pgsql-hackers

On Tue, Dec 9, 2008 at 8:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Tue, Dec 9, 2008 at 12:44 PM, Greg Stark <greg(dot)stark(at)enterprisedb(dot)com>
> wrote:
> > Yes this is one reasonable option, as is the idea of using XML or a table
> > and making it the client's problem. Neither are going to happen for this
> > release I think.
>
> Agreed.

I 100% agree with that point. Thus I suggest outputting the additional
information in "explain analyze" since:
1) it will require minimal code change
2) it will be consistent with previous behaviour
3) it looks like a natural improvement of EXPLAIN's features
4) it will be changed anyway when the "table" output for explain comes

> > And in any case it will always be useful to have an option to print all the
> > available information anyway, so we may as well do that with "verbose".
>
> Sounds very nice.

Can I ask my question once again?
Why don't you want to make "print all the info" the default output format?
When it comes to "pgsql-performance", the usual recommendation is: "please,
provide EXPLAIN ANALYZE, and not just EXPLAIN".
If the default output format is not changed in 8.4, this will transform into
"please, provide EXPLAIN ANALYZE VERBOSE, not just EXPLAIN ANALYZE or
EXPLAIN". Do you really want that?

Regards,
Vladimir Sitnikov


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-09 19:32:35
Message-ID: 34d269d40812091132n669f0d91jb85f87e86f14f359@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 7, 2008 at 19:13, Alex Hunsaker <badalex(at)gmail(dot)com> wrote:
> On Tue, Dec 2, 2008 at 02:35, ITAGAKI Takahiro
> <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
>> Here is an update version of contrib/pg_stat_statements.
>
> Hello again!
>
> I was assigned to review this.

... Some other things I accidentally left out.

#define GUCNAME(name) ("statistics." name)

Why statistics?
Would not something like stat_statements make more sense? Statistics
seems fairly arbitrary...

Also per the
/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */

Maybe it should be configurable, personally I would want something
like # of calls / time. Mainly because I don't for instance really
care that my backups get tracked but would be more interested in the
things that get called most often that also take the longest. (aka
the most bang for the buck, as far as optimizing those goes...)

?


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-10 02:53:15
Message-ID: 20081210110946.85FB.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Please split this into two separate patches that can be separately
> evaluated.

Sure. I want to discuss only where to add the counters for buffer usage
and cpu usage, or whether they should not be added at all. However, it seems to
affect the future of EXPLAIN ANALYZE, so we might also need to discuss
EXPLAIN.

I assume we have 3 choices here:

1. Add those counters to struct Instrumentation.
We can get statistics for each line in EXPLAIN ANALYZE,
but it might add overhead to update the counters.

2. Add those counters only to the top-level instrumentation (one per query).
We can get accumulated statistics for each query.
It might be insufficient for complex queries.

3. Do not add any counters.
No changes to core, but the usability of the pg_stat_statements module
would be very poor...

Which should we take? Or is there another idea?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1202
Date: 2008-12-10 03:17:11
Message-ID: 20081210115946.8601.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


"Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:

> #define GUCNAME(name) ("statistics." name)
>
> Why statistics?
> Would not something like stat_statements make more sense? Statistics
> seems fairly arbitrary...

That was to avoid repeating the word "statements"; the
variable names contain "statements" already.
- stat_statements.max_statements
- stat_statements.track_statements
seem ugly to me, but avoiding arbitrariness might be more
important. If there is agreement, I will change the prefix.

> Also per the
> /* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
>
> Maybe it should be configurable, personally I would want something
> like # of calls / time. Mainly because I don't for instance really
> care that my backups get tracked but would be more interested in the
> things that get called most often that also take the longest. (aka
> the most bang for the buck, as far as optimizing those goes...)

Configurability is better, but we need documentation of how to
configure it and I have no clear idea for it. Also, we already have
means for logging slow queries. We could use the logging for slow
queries that are executed rarely and use the module for queries executed many
times.

Excluding backup scripts is easy; you can create a database role for
backups and disable statement-tracking for that user using ALTER ROLE.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: pgsql-hackers(at)postgresql(dot)org
Cc: "Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com>
Subject: contrib/pg_stat_statements 1212
Date: 2008-12-12 09:51:38
Message-ID: 20081212182754.BBA1.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers

Here is an updated version of pg_stat_statements.

[Changes]
- A new GUC variable 'explain_analyze_format' is added.
- Statistics counters are converted into a variable 'BufferStats'.

"Vladimir Sitnikov" <sitnikov(dot)vladimir(at)gmail(dot)com> wrote:

> Can I ask my question once again?
> Why don't you want to make "print all the info" the default output format?

I added a new GUC variable 'explain_analyze_format' to print all
statistics counters in EXPLAIN ANALYZE. We can put special variables
into it to get non-default counters. I think not everyone always
requires all the info. Of course TABLE-explain and XML-explain could
handle the counters better, but this patch should not include them.

The variable 'BufferStats' is there to clean up the calculations over many
separate counters. It is exported (marked as PGDLLIMPORT) so that users can add
a user-defined view like pg_stat_session_buffer_usage if needed.

Comments welcome.

[Sample output of explain_analyze_format]
=# SET explain_analyze_format = ' gets=%g reads=%r temp=%t cpu=%p';
=# EXPLAIN ANALYZE SELECT * FROM accounts;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------
Seq Scan on accounts (cost=0.00..2640.00 rows=100000 width=97) (actual time=0.054..117.046 rows=100000 loops=1 gets=1640 reads=1608 temp=0 cpu=125.00)
Total runtime: 208.167 ms
(2 rows)

The following special variables are available:
%g : # of shared buffer gets (= %h + %r)
%h : # of shared buffer hits
%r : # of shared buffer reads
%w : # of shared buffer writes
%G : # of local buffer gets (= %H + %R)
%H : # of local buffer hits
%R : # of local buffer reads
%W : # of local buffer writes
%t : # of buffile reads
%T : # of buffile writes
%u : user cpu time
%s : sys cpu time
%p : total cpu times (= %u + %s)
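
To illustrate how such a format string might be expanded, here is a minimal
sketch; the snapshot struct, field names, and the helper itself are assumptions,
and only a subset of the specifiers above is handled:

#include "postgres.h"
#include "lib/stringinfo.h"

/* Assumed snapshot of the counters accumulated for the current query. */
typedef struct StatsSnapshot
{
    long        shared_hits;
    long        shared_reads;
    long        buffile_reads;
    double      cpu_user;
    double      cpu_sys;
} StatsSnapshot;

static void
append_explain_stats(StringInfo buf, const char *fmt, const StatsSnapshot *s)
{
    const char *p;

    for (p = fmt; *p != '\0'; p++)
    {
        if (*p != '%' || p[1] == '\0')
        {
            appendStringInfoChar(buf, *p);
            continue;
        }

        switch (*++p)
        {
            case 'g':   /* shared buffer gets = hits + reads */
                appendStringInfo(buf, "%ld", s->shared_hits + s->shared_reads);
                break;
            case 'h':   /* shared buffer hits */
                appendStringInfo(buf, "%ld", s->shared_hits);
                break;
            case 'r':   /* shared buffer reads */
                appendStringInfo(buf, "%ld", s->shared_reads);
                break;
            case 't':   /* buffile reads */
                appendStringInfo(buf, "%ld", s->buffile_reads);
                break;
            case 'u':   /* user cpu time */
                appendStringInfo(buf, "%.2f", s->cpu_user);
                break;
            case 's':   /* sys cpu time */
                appendStringInfo(buf, "%.2f", s->cpu_sys);
                break;
            case 'p':   /* total cpu time */
                appendStringInfo(buf, "%.2f", s->cpu_user + s->cpu_sys);
                break;
            default:    /* unknown specifier: emit it literally */
                appendStringInfoChar(buf, '%');
                appendStringInfoChar(buf, *p);
                break;
        }
    }
}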

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

Attachment Content-Type Size
pg_stat_statements-1212.tar.gz application/octet-stream 15.9 KB

From: Euler Taveira de Oliveira <euler(at)timbira(dot)com>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org, Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1212
Date: 2008-12-13 20:55:20
Message-ID: 49442138.1040001@timbira.com
Lists: pgsql-hackers

ITAGAKI Takahiro wrote:

> - A new GUC variable 'explain_analyze_format' is added.
I'm afraid that this variable name doesn't tell what it means. What about
'explain_analyze_stats_format' or even 'explain_stats_format'?

--
Euler Taveira de Oliveira
http://www.timbira.com/


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1212
Date: 2008-12-22 05:37:50
Message-ID: 34d269d40812212137p48117b0v5a8b6824791630c9@mail.gmail.com
Lists: pgsql-hackers

On Fri, Dec 12, 2008 at 02:51, ITAGAKI Takahiro
<itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
> Here is an updated version of pg_stat_statements.

I took the liberty of splitting this out into 3 patches for easier review:

patch: 001-hooks.patch
-adds 2 new hooks (startup_hook and shutdown_hook).
-fixes missing process_shared_preload_libraries on EXEC_BACKEND
-changes int pgstat_track_activity_query_size to PGDLLIMPORT int
pgstat_track_activity_query_size
-adds sourceText to QueryDesc and modifies callers to populate it

patch: 002-explain-fmt.patch
-adds new guc explain_analyze_format to print out statistic counters
-adds rtime/utime to explain analyze printout
-moves statistic counters (ReadBufferCount etc...) into struct
Instrumentation via a new struct BufferCounter (allows modules like
the proposed one easy access)

patch: pg_stat_statements-1212.patch
the contrib module

A few comments:

Is there a reason you add sourceText to QueryDesc? AFAICT you can do
ActivePortal->sourceText and it will always be populated correctly.

I think the explain_analyze_format guc is a clever way of getting
around the explain analyze verbose you proposed earlier. But I don't
see any doc updates for it.

I'm still not overly fond of the "statistics." custom guc name, but
what can you do...

Other than that it looks good, though I admit I have not had the time
to sit down and thoroughly test it yet...

Attachment Content-Type Size
001-hooks.patch text/x-patch 15.4 KB
002-explain-fmt.patch text/x-patch 19.2 KB
pg_stat_statements-1212.patch text/x-patch 32.9 KB

From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1212
Date: 2008-12-22 07:44:01
Message-ID: 20081222162154.8485.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers


"Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:

> A few comments:
>
> Is there a reason you add sourceText to QueryDesc? AFAICT you can do
> ActivePortal->sourceText and it will always be populated correctly.

That's for nested statements (SQLs called in stored functions).
ActivePortal->sourceText shows the text of only the topmost query.

> I think the explain_analyze_format guc is a clever way of getting
> around the explain analyze verbose you proposed earlier. But I dont
> see any doc updates for it.

Sure, no docs for now. Is the guc approach acceptable?
(I'm not sure whether it is the best way...)
If ok, I'll write docs for it.

> Im still not overly fond of the "statistics." custom guc name, but
> what can you do...

I am not attached to the name. The "pg_stat_statements.*" prefix might
be better to avoid prefix conflicts. If so, we'd better rename the
variables to remove the duplicated "statements" from the names.

Ex.
statistics.max_statements -> pg_stat_statements.limit
statistics.track_statements -> pg_stat_statements.target
statistics.saved_file -> pg_stat_statements.saved_file

> Other than that it looks good, though I admit I have not had the time
> to sit down and thoroughly test it yet...

I found another bug in my patch.

[pg_stat_statements-1212.patch # pg_stat_statements()]
SpinLockAcquire(&entry->mutex);
values[i++] = Int64GetDatumFast(entry->counters.calls);
values[i++] = Float8GetDatumFast(entry->counters.total_time);
values[i++] = Float8GetDatumFast(entry->counters.cpu_user);
values[i++] = Float8GetDatumFast(entry->counters.cpu_sys);
values[i++] = Int64GetDatumFast(entry->counters.gets);
values[i++] = Int64GetDatumFast(entry->counters.reads);
values[i++] = Int64GetDatumFast(entry->counters.lreads);
values[i++] = Int64GetDatumFast(entry->counters.rows);
SpinLockRelease(&entry->mutex);

The variables are not actually protected by the spinlock when float64 and
int64 are passed by reference (especially on 32-bit platforms).
It'd be better to copy the values:

Counters tmp;
/* copy the actual values in spinlock */
SpinLockAcquire(&entry->mutex);
tmp = entry->counters;
SpinLockRelease(&entry->mutex);
/* create a tuple after lock is released. */
values[i++] = Int64GetDatumFast(tmp.calls);
values[i++] = Float8GetDatumFast(tmp.total_time);
...

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1212
Date: 2008-12-23 03:33:01
Message-ID: 34d269d40812221933q553796a7s19a285ed6294caf7@mail.gmail.com
Lists: pgsql-hackers

On Mon, Dec 22, 2008 at 00:44, ITAGAKI Takahiro
<itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
>
> "Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:
>
>> A few comments:
>>
>> Is there a reason you add sourceText to QueryDesc? AFAICT you can do
>> ActivePortal->sourceText and it will always be populated correctly.
>
> That's for nested statements (SQLs called in stored functions).
> ActivePortal->sourceText shows the text of only the topmost query.
>
>> I think the explain_analyze_format guc is a clever way of getting
>> around the explain analyze verbose you proposed earlier. But I don't
>> see any doc updates for it.
>
> Sure, no docs for now. Is the guc approach acceptable?
> (I'm not sure whether it is the best way...)
> If ok, I'll write docs for it.

I dunno, I'm hoping that with the patches split up and the change made
more visible, some more people might chime in :)

>> I'm still not overly fond of the "statistics." custom guc name, but
>> what can you do...
>
> I am not attached to the name. The "pg_stat_statements.*" prefix might
> be better to avoid prefix conflicts. If so, we'd better rename the
> variables to remove the duplicated "statements" from the names.
>
> Ex.
> statistics.max_statements -> pg_stat_statements.limit
> statistics.track_statements -> pg_stat_statements.target

How about just pg_stat_statements.track ?

> statistics.saved_file -> pg_stat_statements.saved_file

I do like the consistency of having the custom gucs be the same as the
module name, easy to grep or google for.

>> Other than that it looks good, though I admit I have not had the time
>> to sit down and thoroughly test it yet...
>
> I found another bug in my patch.
>
> [pg_stat_statements-1212.patch # pg_stat_statements()]
> SpinLockAcquire(&entry->mutex);
> values[i++] = Int64GetDatumFast(entry->counters.calls);
> values[i++] = Float8GetDatumFast(entry->counters.total_time);
> values[i++] = Float8GetDatumFast(entry->counters.cpu_user);
> values[i++] = Float8GetDatumFast(entry->counters.cpu_sys);
> values[i++] = Int64GetDatumFast(entry->counters.gets);
> values[i++] = Int64GetDatumFast(entry->counters.reads);
> values[i++] = Int64GetDatumFast(entry->counters.lreads);
> values[i++] = Int64GetDatumFast(entry->counters.rows);
> SpinLockRelease(&entry->mutex);
>
> The variables are not actually protected by the spinlock when float64 and
> int64 are passed by reference (especially on 32-bit platforms).
> It'd be better to copy the values:
>
> Counters tmp;
> /* copy the actual values in spinlock */
> SpinLockAcquire(&entry->mutex);
> tmp = entry->counters;
> SpinLockRelease(&entry->mutex);
> /* create a tuple after lock is released. */
> values[i++] = Int64GetDatumFast(tmp.calls);
> values[i++] = Float8GetDatumFast(tmp.total_time);
> ...

I've only been testing on 64-bit... maybe that's why I never ran into this.


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: contrib/pg_stat_statements 1226
Date: 2008-12-26 06:04:30
Message-ID: 20081226145707.AF00.52131E4D@oss.ntt.co.jp
Lists: pgsql-hackers

Here is an updated version of contrib/pg_stat_statements patch.

"Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:

> >> I think the explain_analyze_format guc is a clever way of getting
> >> around the explain analyze verbose you proposed earlier. But I dont
> >> see any doc updates for it.

Documentation is added.

> How about just pg_stat_statements.track ?

I renamed the variables to:
- pg_stat_statements.limit
- pg_stat_statements.track
- pg_stat_statements.saved_file

I also modified assign_custom_variable_classes()
to accept '_' as a prefix character, not only 0-9A-Za-z.

> I do like the consistency of having the custom gucs be the same as the
> module name, easy to grep or google for.

Should I also rename variables used in auto_explain module?
It uses 'explain.*' now.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

Attachment Content-Type Size
pg_stat_statements.1226.tar.gz application/octet-stream 17.2 KB

From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2008-12-29 22:46:36
Message-ID: 34d269d40812291446i1295b65fl7f1791f66041a4d1@mail.gmail.com
Lists: pgsql-hackers

On Thu, Dec 25, 2008 at 23:04, ITAGAKI Takahiro
<itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
> Here is an updated version of contrib/pg_stat_statements patch.

I'm going to do some more testing, give the typedef Chunk stuff another
look (you did fix the race/not-protected-by-a-spinlock issue you found
earlier, right?). And if all looks good, mark it as ready for
committer. (Assuming I find time tonight :))

I think the other questions a committer needs to respond to are:
- explain_analyze_format guc, do we want it?

- queryDesc->sourceText changes
(look good to me though, and I don't see any other obvious way to do it)

- rtime/utime to explain analyze printout

- moves statistic counters (ReadBufferCount etc...) into struct
Instrumentation via a new struct BufferCounter
(looks like a good cleanup regardless...)

> Should I also rename variables used in auto_explain module?
> It uses 'explain.*' now.

Well, on second thought, you *do* have to manually define
them in custom_variable_classes, so maybe it's fine. Personally, though,
I would like them to be auto_explain.*... it seems harder to mistake
them later as having something to do with EXPLAIN.


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2008-12-30 16:59:09
Message-ID: 34d269d40812300859x49c1e40fmfb834d67f06cfd66@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Dec 29, 2008 at 15:46, Alex Hunsaker <badalex(at)gmail(dot)com> wrote:
> On Thu, Dec 25, 2008 at 23:04, ITAGAKI Takahiro
> <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> wrote:
>> Here is an updated version of contrib/pg_stat_statements patch.
>
> I'm going to do some more testing and give the typedef Chunk stuff another
> look (you did fix the race / not-protected-by-a-spinlock issue you found
> earlier, right?). And if all looks good, mark it as ready for
> committer. (Assuming I find time tonight :))

OK, I ran a portion of TPC-H with 4 simultaneous clients doing the same
query on a 32-bit dual-core machine about a hundred times (let's call
this A). I then compared the # of calls, # of gets and # of rows from
the pg_stat_statements view to a single run (let's call this B) to make sure
they matched (i.e. made sure A.gets = A.calls * B.gets), and that part all
looks good. So I think if there was a race you squashed it. My only
nit this time around was patch complaining about (Stripping trailing
CRs from patch.) but that's no big deal. So I'm going to mark it as
ready for committer.
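
For what it's worth, the comparison boils down to a query like this against
the view (a sketch; calls/gets/rows are the counters quoted earlier in the
thread, and I'm assuming the view exposes them under those names):

    -- after the concurrent run (A), per-call gets should equal the
    -- single-run baseline (B), i.e. A.gets / A.calls = B.gets
    SELECT calls,
           gets,
           rows,
           gets / calls AS gets_per_call
      FROM pg_stat_statements
     ORDER BY gets DESC;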


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-02 00:28:34
Message-ID: 15791.1230856114@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

"Alex Hunsaker" <badalex(at)gmail(dot)com> writes:
> ... So I'm going to mark it as
> ready for committer.

Has this patch been tested on Windows? (Or more generally, with EXEC_BACKEND?)

The reason I ask is that eyeballing the code suggests a couple of major
problems in that area:

* the startup/shutdown hooks will be installed in the postmaster
process, but the patch expects them to be executed in a child process.
I think nothing will happen.

* in an EXEC_BACKEND situation, we re-execute
process_shared_preload_libraries() when starting a fresh backend
(but not in other kinds of child processes, which is why the other
problem is a problem). This means re-executing the _PG_init function,
which will try to redefine the custom GUC variables, which will fail.
I don't think this is really a bug in this patch per se, it's a bug
in the custom-GUC support; but nonetheless it looks like a problem.

regards, tom lane


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-02 02:45:43
Message-ID: 34d269d40901011845l686638bx9c6ba040f1b5d987@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Jan 1, 2009 at 17:28, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Alex Hunsaker" <badalex(at)gmail(dot)com> writes:
>> ... So I'm going to mark it as
>> ready for committer.
>
> Has this patch been tested on Windows? (Or more generally, with EXEC_BACKEND?)

I was under the impression that's where Itagaki-san develops. You'll
note some other Windows-specific changes:

- pgstat_track_activity_query_size gains PGDLLIMPORT

- process_shared_preload_libraries() also seems of interest:
http://archives.postgresql.org/pgsql-hackers/2008-12/msg01416.php

- various carriage returns in the patch...

I could be wrong though.

> The reason I ask is that eyeballing the code suggests a couple of major
> problems in that area:
>
> * the startup/shutdown hooks will be installed in the postmaster
> process, but the patch expects them to be executed in a child process.
> I think nothing will happen.

I dunno about this one; I'm not very familiar with EXEC_BACKEND.

> * in an EXEC_BACKEND situation, we re-execute
> process_shared_preload_libraries() when starting a fresh backend
> (but not in other kinds of child processes, which is why the other
> problem is a problem). This means re-executing the _PG_init function,
> which will try to redefine the custom GUC variables, which will fail.
> I don't think this is really a bug in this patch per se, it's a bug
> in the custom-GUC support; but nonetheless it looks like a problem.

I see 3 options:
- add a GUC_CUSTOM_REDEFINE flag

- ignore redefines of custom gucs

- change the define_custom_variable() to return a bool (or something):
  true if it got added, false if it was already there

Seems to me we could probably just ignore any redefines of custom gucs
outright. I'm not too worried about some other module picking the same
custom guc. And frankly the operator should notice, especially when they
go to add it to custom_variable_classes.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>, "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-02 02:59:04
Message-ID: 18013.1230865144@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

I wrote:
> * in an EXEC_BACKEND situation, we re-execute
> process_shared_preload_libraries() when starting a fresh backend
> (but not in other kinds of child processes, which is why the other
> problem is a problem). This means re-executing the _PG_init function,
> which will try to redefine the custom GUC variables, which will fail.
> I don't think this is really a bug in this patch per se, it's a bug
> in the custom-GUC support; but nonetheless it looks like a problem.

Oh, never mind that part. I was thinking that the child process would
already know the real definition of the custom variable at the time it
tries to load the shared library, but actually the mechanism for pushing
GUC values into EXEC_BACKEND child processes doesn't transfer the whole
variable definition. It causes any such values to be loaded as
placeholders, and then the later load of the shared library converts the
placeholder to a regular variable. So it all works, or nearly anyway:
the code fails on a custom variable class whose name alphabetically
precedes "custom_variable_class".

http://archives.postgresql.org/pgsql-committers/2009-01/msg00008.php

regards, tom lane


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-02 03:05:00
Message-ID: 34d269d40901011905g10aacad9j6be405e6b2be922b@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Thu, Jan 1, 2009 at 19:59, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I wrote:
>> * in an EXEC_BACKEND situation, we re-execute
>> process_shared_preload_libraries() when starting a fresh backend
>> (but not in other kinds of child processes, which is why the other
>> problem is a problem). This means re-executing the _PG_init function,
>> which will try to redefine the custom GUC variables, which will fail.
>> I don't think this is really a bug in this patch per se, it's a bug
>> in the custom-GUC support; but nonetheless it looks like a problem.
>
> Oh, never mind that part. I was thinking that the child process would
> already know the real definition of the custom variable at the time it
> tries to load the shared library, but actually the mechanism for pushing
> GUC values into EXEC_BACKEND child processes doesn't transfer the whole
> variable definition. It causes any such values to be loaded as
> placeholders, and then the later load of the shared library converts the
> placeholder to a regular variable.

> So it all works, or nearly anyway:
> the code fails on a custom variable class whose name alphabetically
> precedes "custom_variable_class".

Cool! Err interesting...

> http://archives.postgresql.org/pgsql-committers/2009-01/msg00008.php

Yeah I saw your commits just shortly after hitting send on my reply :)


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Alex Hunsaker" <badalex(at)gmail(dot)com>, "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-03 03:20:34
Message-ID: 15933.1230952834@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

I wrote:
> * the startup/shutdown hooks will be installed in the postmaster
> process, but the patch expects them to be executed in a child process.
> I think nothing will happen.

OK, I figured out what is happening there: the patch makes it work by
means of this expedient:

diff -cprN HEAD/src/backend/storage/lmgr/proc.c pg_stat_statements/src/backend/storage/lmgr/proc.c
*** HEAD/src/backend/storage/lmgr/proc.c 2008-12-10 02:03:02.366590000 +0900
--- pg_stat_statements/src/backend/storage/lmgr/proc.c 2008-12-26 14:51:58.062500000 +0900
*************** InitAuxiliaryProcess(void)
*** 381,386 ****
--- 381,390 ----
if (MyProc != NULL)
elog(ERROR, "you already exist");

+ #ifdef EXEC_BACKEND
+ process_shared_preload_libraries();
+ #endif
+
/*
* We use the ProcStructLock to protect assignment and releasing of
* AuxiliaryProcs entries.

I find this mighty Rube Goldbergian. We have a startup hook which is
declared in include/storage/ipc.h, but defined and called in bootstrap.c
(whence it will actually be executed down inside the startup process).
We have a shutdown hook which is also declared in include/storage/ipc.h,
but defined and called in bgwriter.c (executed in the bgwriter process,
of course). And to make those hooks work in the EXEC_BACKEND case, we
have a kluge inserted in proc.c, miles away from where the existing
process_shared_preload_libraries() calls are (in postmaster.c), and with
extremely high probability of someday resulting in duplicate preload
operations if the postmaster.c code gets shuffled.

I don't really care for the idea of initializing the pg_stat_statements
shared memory down inside the startup process. All the rest of shared
memory is initialized by the postmaster process itself, and I think
pg_stat_statements should probably work the same way. Normally I am
against having more functionality in the postmaster than absolutely
necessary, but I don't see any robustness loss in this situation: if we
have the pg_stat_statements init code in the startup subprocess, and it
dies, we'll abort startup anyway. So we might as well run it directly
in the postmaster. I think the right place is probably at the bottom of
CreateSharedMemoryAndSemaphores(). This will serve two purposes: to
create the pg_stat_statements shared memory when run in the postmaster,
or to re-attach to it when starting an EXEC_BACKEND child. The existing
coding in the patch that will try to create or attach to the shared
memory at any old random instant ought to go away --- if one of these
functions is called and doesn't find the pgss shared memory pointer
ready-to-go, it should error out or return quietly as appropriate.
That would indicate that pg_stat_statements.so got loaded into an
already-running backend, which means that the shared memory situation
can't possibly be right.
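
In other words, something along these lines (a sketch of the proposed
arrangement, not what the patch currently does; pgss_shmem_init,
pgssSharedState and the mutex field are names made up for illustration):

    /* called at the bottom of CreateSharedMemoryAndSemaphores() */
    static void
    pgss_shmem_init(void)
    {
        bool        found;

        /*
         * Creates the struct when run in the postmaster; re-attaches to the
         * existing one when an EXEC_BACKEND child runs through the same code.
         */
        pgss = ShmemInitStruct("pg_stat_statements",
                               sizeof(pgssSharedState), &found);
        if (!found)
        {
            memset(pgss, 0, sizeof(pgssSharedState));
            SpinLockInit(&pgss->mutex);
        }
    }

    /* any entry point that finds the pointer unset simply errors out: the
     * library was loaded into an already-running backend */
    if (!pgss)
        ereport(ERROR,
                (errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));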

As for the shutdown hook, I don't think we need it at all in this
design. When loaded into the postmaster process, pg_stat_statements can
insert itself into the on_proc_exit or on_shmem_exit hook lists ... it
doesn't need a private hook.

BTW, as for the question of where to call
process_shared_preload_libraries: we currently have postmaster.c doing
this for itself, and when spawning a regular backend child in the
EXEC_BACKEND case. We don't do it for other child process types;
which is the omission that Itagaki-san tried to work around with
the above diff hunk. You could argue that this is a case of premature
optimization (a/k/a the root of all evil). I think the idea was that
we'd not possibly need any loadable libraries installed in special child
processes --- but that seems more than a little bit dubious if you
think about a loadable library whose goal is to monitor system activity,
as opposed to implementing some SQL-callable functionality. Moreover
it fails to duplicate what will happen in the non-EXEC_BACKEND case;
all child processes will inherit loadable libraries then. So I'm
thinking that Itagaki-san had the right idea in wanting to make
auxiliary processes load libraries, just the wrong implementation.
The right way to make that happen is to rearrange the coding in
SubPostmasterMain() so that process_shared_preload_libraries is
done in all cases, just after the read_nondefault_variables call.

Comments?

regards, tom lane


From: "Alex Hunsaker" <badalex(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-03 20:00:35
Message-ID: 34d269d40901031200p71b7974en25abb85faab5acfc@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Jan 2, 2009 at 20:20, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I wrote:
>> * the startup/shutdown hooks will be installed in the postmaster
>> process, but the patch expects them to be executed in a child process.
>> I think nothing will happen.
>
> OK, I figured out what is happening there: the patch makes it work by
> means of this expedient:

<snip>

> I find this mighty Rube Goldbergian. We have a startup hook which is
> declared in include/storage/ipc.h, but defined and called in bootstrap.c
> (whence it will actually be executed down inside the startup process).
> We have a shutdown hook which is also declared in include/storage/ipc.h,
> but defined and called in bgwriter.c (executed in the bgwriter process,
> of course). And to make those hooks work in the EXEC_BACKEND case, we
> have a kluge inserted in proc.c, miles away from where the existing
> process_shared_preload_libraries() calls are (in postmaster.c), and with
> extremely high probability of someday resulting in duplicate preload
> operations if the postmaster.c code gets shuffled.

Kudos to Itagaki-san for getting that to work?

> As for the shutdown hook, I don't think we need it at all in this
> design. When loaded into the postmaster process, pg_stat_statements can
> insert itself into the on_proc_exit or on_shmem_exit hook lists ... it
> doesn't need a private hook.

Ok cool.

> The right way to make that happen is to rearrange the coding in
> SubPostmasterMain() so that process_shared_preload_libraries is
> done in all cases, just after the read_nondefault_variables call.

This should also fix the rmgr hooks patch on EXEC_BACKEND.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-03 20:14:25
Message-ID: 9092.1231013665@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> writes:
> "Alex Hunsaker" <badalex(at)gmail(dot)com> wrote:
>> How about just pg_stat_statements.track ?

> I renamed the variables to:
> - pg_stat_statements.limit
> - pg_stat_statements.track
> - pg_stat_statements.saved_file

Hmm, this has got a problem:

regression=# show pg_stat_statements.limit ;
ERROR: syntax error at or near "limit"
LINE 1: show pg_stat_statements.limit ;
^

LIMIT is a reserved word ...

I thought max_statements was fine, but if people think that's too long
I guess we could go with just "max".

I'm not enamored of "saved_file" either, it seems like the wrong
part of speech somehow. Maybe "save_in_file"?

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-04 21:57:59
Message-ID: 29692.1231106279@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

I wrote:
> I'm not enamored of "saved_file" either, it seems like the wrong
> part of speech somehow. Maybe "save_in_file"?

Actually ... what is the point of letting users control the filename at
all? It seems like the only useful nondefault value is the empty string
(to suppress storing the stats). So why not simplify this to a boolean?
"pg_stat_statements.save" = on or off.

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
Cc: pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-04 22:41:00
Message-ID: 572.1231108860@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp> writes:
> Here is an updated version of contrib/pg_stat_statements patch.

I've committed this with significant revisions. Other than the points
already mentioned in previous messages:

* I removed the proposed changes to the behavior of the core EXPLAIN
code. I think that that should be submitted and discussed as a separate
patch, not slide in under the misleading title of being a "contrib
module". I'm personally against those changes anyway, on two grounds:

1. The proposed change to track system/user CPU time presents an
enormous cost, and no argument has been made to show that there is any
comparable benefit. The change causes each EXPLAIN ANALYZE tracking
call to invoke getrusage() as well as gettimeofday(). I did a little
bit of testing and found that this is over seven times slower on Fedora
9 on x86_64 (Xeon hardware) and over twenty-seven times slower on Darwin
(on Core 2 Duo hardware). Considering that EXPLAIN ANALYZE overhead is
already higher than anyone would like, you would need a pretty darn
convincing argument to persuade us to accept that kind of slowdown.
At the very least the code would need to be modified so that it doesn't
execute getrusage() unless the user is actually going to look at the
results.
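
The relative cost is easy to reproduce with a trivial standalone loop along
these lines (a sketch; absolute numbers will of course vary with platform and
kernel, consistent with the Linux vs. Darwin difference above):

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int
    main(void)
    {
        struct timeval  tv, start, stop;
        struct rusage   ru;
        const int       n = 1000000;

        /* time n gettimeofday() calls */
        gettimeofday(&start, NULL);
        for (int i = 0; i < n; i++)
            gettimeofday(&tv, NULL);
        gettimeofday(&stop, NULL);
        printf("gettimeofday: %.3f sec\n",
               (stop.tv_sec - start.tv_sec) + (stop.tv_usec - start.tv_usec) / 1e6);

        /* time n getrusage() calls */
        gettimeofday(&start, NULL);
        for (int i = 0; i < n; i++)
            getrusage(RUSAGE_SELF, &ru);
        gettimeofday(&stop, NULL);
        printf("getrusage:    %.3f sec\n",
               (stop.tv_sec - start.tv_sec) + (stop.tv_usec - start.tv_usec) / 1e6);

        return 0;
    }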

2. I'm unconvinced by the proposed changes to accumulate backend-local
I/O counters, too. The fact of the matter is that those counters are
left over from Berkeley days, a time when PG hackers tended to do their
performance measurements in standalone backends (!). They're obviously
not the full story now on write measurements, and I don't have any
confidence in them as read measurements either, particularly seeing that
the wave of the future is likely to be asynchronous read operations
(with the posix_fadvise patch and foreseeable follow-on work). I think
those counters should more likely be done away with than
institutionalized in EXPLAIN ANALYZE output. You can get more reliable
information about what's happening from the existing pgstats system-wide
I/O counts.

* I changed the default track setting to "top". I don't see the
likelihood that someone would load this module into their server
and not want it turned on.

* I'm not entirely seeing the point of a server-wide tracking facility
that only counts SELECT/INSERT/UPDATE/DELETE. ISTM this should be
modified to count utility commands too; which probably means adding some
hooks around ProcessUtility (and what about autovacuum?). I left that
work for someone else to do, though.

* As already mentioned I find the entry_dealloc logic pretty darn
dubious; but I didn't touch that either in this go-round. If we do
keep it in its current form, ISTM that usage ought to be proportional
to total execution time not total call count.

regards, tom lane


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-05 02:52:00
Message-ID: 20090105104408.31DC.52131E4D@oss.ntt.co.jp
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thank you for all your work.

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> 1. The proposed change to track system/user CPU time presents an
> enormous cost, and no argument has been made to show that there is any
> comparable benefit.

> 2. I'm unconvinced by the proposed changes to accumulate backend-local
> I/O counters, too. The fact of the matter is that those counters are
> left over from Berkeley days, a time when PG hackers tended to do their
> performance measurements in standalone backends (!).

OK, I need to reconsider the performance and design of the I/O counters.
I think that information is still useful because we can determine
not only which query is bad, but also why it is bad
*without repeating the query on production servers*.
But the reworking will have to be done in 8.5...

> * I changed the default track setting to "top". I don't see the
> likelihood that someone would load this module into their server
> and not want it turned on.

Looks good. Setting shared_preload_libraries is enough for normal use.

> * I'm not entirely seeing the point of a server-wide tracking facility
> that only counts SELECT/INSERT/UPDATE/DELETE. ISTM this should be
> modified to count utility commands too; which probably means adding some
> hooks around ProcessUtility (and what about autovacuum?). I left that
> work for someone else to do, though.

Sure, but we should also consider that utility commands should not kick out
DML queries, because maintenance commands typically take a long time.

> * As already mentioned I find the entry_dealloc logic pretty darn
> dubious; but I didn't touch that either in this go-round. If we do
> keep it in its current form, ISTM that usage ought to be proportional
> to total execution time not total call count.

I think the current implementation is not ideal, either.
I also don't like using fixed-size shared memory. I'm thinking
that shared memory used by extension modules could be allocated
from the shared buffer pool in the future. If we could do that,
memory management would be more flexible and we could load libraries
after the server starts.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org, "Alex Hunsaker" <badalex(at)gmail(dot)com>
Subject: Re: contrib/pg_stat_statements 1226
Date: 2009-01-05 06:06:57
Message-ID: 871vvithfi.fsf@oxford.xeocode.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> 2. I'm unconvinced by the proposed changes to accumulate backend-local
> I/O counters, too. The fact of the matter is that those counters are
> left over from Berkeley days, a time when PG hackers tended to do their
> performance measurements in standalone backends (!). They're obviously
> not the full story now on write measurements, and I don't have any
> confidence in them as read measurements either, particularly seeing that
> the wave of the future is likely to be asynchronous read operations
> (with the posix_fadvise patch and foreseeable follow-on work). I think
> those counters should more likely be done away with than
> institutionalized in EXPLAIN ANALYZE output. You can get more reliable
> information about what's happening from the existing pgstats system-wide
> I/O counts.

It's clear that these counters are not the whole picture. But I think that
just means we need more counters with more sources of data, not that we need
to get rid of the ones we have. There's no reason we shouldn't have counters
for advised buffers along with reads.

I'm also picturing using dtrace to find out, for example, how many reads were
satisfied from cache and how many required physical reads.

The system-wide stats satisfy a very different need from this. The sysadmin or
DBA might want to know about system-wide stats but a programmer or a DBA
analyzing a specific query needs to know what that query is doing.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!