Re: Instrument checkpoint sync calls

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Instrument checkpoint sync calls
Date: 2010-12-07 17:09:59
Message-ID: AANLkTikTRcnUgN1AhzRwDXTqO=9-MtDQLMzmXG-nw0kH@mail.gmail.com
Lists: pgsql-hackers

On Sun, Dec 5, 2010 at 1:23 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> Jeff Janes wrote:
>>
>> I've attached a tiny patch to apply over yours, to deal with this and
>> with the case where no files are synced.
>>
>
> Thanks for that.  That obvious error eluded me because in most of the patch
> update testing I was doing (on ext3), the longest sync was always about the
> same length as the total sync time.
>
> Attached patch (in correct diff form this time!) collects up all changes.
>  That includes elimination of a potential race condition if someone changes
> log_checkpoints while a long sync phase is executing.  I don't know whether
> that can happen, and it obviously won't give accurate stats going back to
> the beginning of the checkpoint in that case, but it tries to defend against
> producing complete garbage if that value changes out from under it.

It looks to me that the checkpoint writer only reloads settings
between checkpoints, so log_checkpoints cannot change under it.
But if it can, the behavior of the patch is about as reasonable as one
can expect, I think.

>
> This is the first version of this patch I feel fairly good about; no open
> concerns left on my side.  Jeff, since you're now the de-facto credited
> reviewer of this one by virtue of suggesting bug fixes, could you take this
> update out for a spin too?

It applies and builds cleanly, and passes make check with --enable-cassert.
The details of log_checkpoints output are not documented, so no
documentation patches are needed.
I cannot think of any reasonable way to add a check to the make check
framework, so I don't think one is needed.
It does what it says and what we want, and I don't see any dangers.

I haven't noticed any slow-downs. It is listed under performance, but
it is intended to diagnose, not fix, performance problems.

It seems to meet the coding style. It uses existing macros, so should
be as portable as the existing PG code is.

Comments seem sufficient. The use of INSTR_TIME_SET_ZERO as a flag
for detecting changing log_checkpoints maybe could use a comment, but
it doesn't take much thought to figure out that that is what is going
on.

I could not make it crash.

I'm marking it as ready for committer.

>
>> Combining this instrumentation patch with the backend sync one already
>> committed, the net result under log_min_messages=debug1 is somewhat
>> undesirable in that I can now see the individual sync times for the
>> syncs done by the checkpoint writer, but I do not get times for the
>> syncs done by the backend (I only get informed of their existence).
>>
>
> On a properly working system, backend syncs shouldn't be happening.  So if
> you see them, I think the question shouldn't be "how long are they taking?",
> it's "how do I get rid of them?"  From that perspective, knowing of their
> existence is sufficient to suggest the necessary tuning changes, such as
> dropping bgwriter_delay.

OK. I was trying to figure out if the syncs were entangled (all ended
at about the same time, regardless of when they started). But I can
see that that is basically a hacker thing to do, not production, so I
can add in my own logging at the end of the backend sync for my
purposes.
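For reference, the sort of tuning Greg suggests would look something like this in postgresql.conf (illustrative values, not recommendations from this thread):

```
log_checkpoints = on    # report checkpoint timing, including the new sync stats
bgwriter_delay = 50ms   # default 200ms; wake the bgwriter more often so it
                        # absorbs fsync requests before backends must sync
```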

>
> When you get into a situation where they are showing up, a whole lot of them
> can happen in a very brief period;

In my test cases, the syncs that the backends were doing were almost
always to the same file that the checkpoint writer was already choking
on (so they are entangled simply by virtue of that). So very quickly
all the backends hit the same wall and ground to a halt. This is
probably a consequence of using pgbench as the basis for testing,
which gives a very artificial model. I need to find a way to get
dozens or hundreds of files going. Maybe pgbench could be hacked so
that each connection used a different schema.
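One way to approximate that without patching pgbench might be to generate a custom script per client, each hitting its own table. A sketch (the pgbench_accounts_N table names and counts are hypothetical, and the tables would have to be created and populated separately):

```shell
# Generate one custom pgbench script per simulated client, each
# updating a distinct table so checkpoints must fsync many files.
NCLIENTS=8
mkdir -p pgb_scripts
for i in $(seq 1 "$NCLIENTS"); do
  cat > "pgb_scripts/client_$i.sql" <<EOF
\\setrandom aid 1 100000
UPDATE pgbench_accounts_$i SET abalance = abalance + 1 WHERE aid = :aid;
EOF
done
```

Each script would then be run as its own `pgbench -n -f pgb_scripts/client_N.sql` process in parallel, since pgbench takes one script file per invocation.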

Cheers,

Jeff
