Re: four minor proposals for 9.5

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com>, Vik Fearing <vik(dot)fearing(at)dalibo(dot)com>
Subject: Re: four minor proposals for 9.5
Date: 2014-04-08 15:37:14
Message-ID: CAFj8pRDJGfqNvwNFRhvY3QK2voSQYwzE108Y4X-qRM2wGG-i5g@mail.gmail.com
Lists: pgsql-hackers

2014-04-08 6:27 GMT+02:00 Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>:

> On Mon, Apr 7, 2014 at 12:16 AM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
> wrote:
> > 2014-04-04 6:51 GMT+02:00 Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>:
> >> On Tue, Apr 1, 2014 at 11:42 PM, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
> >> wrote:
> >> > 2014-03-27 17:56 GMT+01:00 Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>:
> >> >> So I'll prepare some prototypes in the next month for
> >> >>
> >> >> 1. logging the execution time for cancelled queries,
> >> >> 2. tracking query lock time
> > Yes. Initially I thought only about cancelled queries, but now I am
> > thinking that some wider solution could be better. Sometimes long queries
> > are stopped by Postgres, or by the system - and info about their duration
> > can be just as useful.
>
> Right.
>
> >>
> >> One more thing: I think currently we can also find that in a crude way
> >> (pg_stat_activity has the query_start time and log_line_prefix has the
> >> end time), but I think having it in one place, the log, will be more
> >> useful.
> > ??
>
> I just wanted to say that if someone wants to calculate the duration
> of a cancelled query (or other error'd query), they can do that by checking
> the start time from pg_stat_activity and the end time from the log (using
> log_line_prefix), but this is of course cumbersome.
>
> >> > I would use the same technique
> >> > for printing the lock time - it can be printed in place of the symbol %L.
> >>
> >> Above you have mentioned that you are planning to have three different
> >> lock times (Table, Tuple and others), so will this one symbol (%L)
> >> enable all three lock times?
> >
> >
> > My idea is to start with %L as the total lock time, which is useful for a
> > wide group of users, and then (or at the same time) we can enhance it with
> > two-character escape symbols
>
> So do you want to print the lock time just for error'd statements? Wouldn't
> it be better to do it for non-error'd statements as well - or rather, I feel
> it can be more useful for non-error'd statements. Do we already have some
> easy way to get the wait time for non-error'd statements?
>

There are two points:

a) We have no infrastructure other than log_line_prefix for attaching this
kind of per-query info, and I have no better idea of how or what to
implement. I also don't think any new infrastructure (mechanism) is
necessary. log_line_prefix increases the log size, but it is very well
parseable - Splunk and similar software have no problem with it (see the
sketch below).

b) Lock time can be interesting for error'd statements too - for example,
you can cancel locked queries, so you would want to see the lock time and
duration for cancelled queries. So this implementation makes sense as well.
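
For reference, a minimal sketch of the "crude" way Amit describes above,
assuming log_line_prefix contains %m (timestamp) and %p (pid):

-- while the statement is still running, capture its start time
SELECT pid, query_start
FROM pg_stat_activity
WHERE state = 'active';

-- after the cancellation, the end time has to be taken from the server log;
-- with log_line_prefix = '%m [%p] ' the relevant line looks roughly like:
--   2014-04-08 17:00:00.123 CEST [12345] ERROR:  canceling statement due to user request
-- duration = log timestamp - query_start for the matching pid

So it is possible today, but cumbersome - which is why a single log entry
with the duration would be nicer.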

>
> > so
> >
> > %L .. total lock time
> > %Lr .. lock relation
> > %Lt .. lock tuples
> > %Lo .. lock others
> >
> > L = Lr + Lt + Lo
> >
> >>
> >> Are you planning to add new log messages for logging this or planning to
> >> use the existing infrastructure?
> >
> >
> > I don't have a prototype yet, so I don't know what will be necessary
>
> Okay, I think then it's better to discuss this after your initial analysis/
> prototype, but I think you might need to add some infrastructure code to
> make it possible, as locks on database objects (relation, tuple, ..) and
> other locks (buffers, ..) have different locking strategies, so to get the
> total wait time for a statement due to the different kinds of locks you
> need to accumulate the different wait times.
>

Yes, we can wait until the prototype is ready.
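
Purely as an illustration of how the proposed escapes might be used once
implemented (note: %L, %Lr, %Lt and %Lo are only proposed here, they do not
exist yet; ALTER SYSTEM needs 9.4+):

ALTER SYSTEM SET log_line_prefix = '%m [%p] L=%L (rel=%Lr tup=%Lt other=%Lo) ';
SELECT pg_reload_conf();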

>
> >> One general point is that won't it be a bit difficult to parse a log line
> >> having so many times; should we consider logging with some special marker
> >> for each time (for example: tup_lock_wait_time:1000ms)?
> >
> >
> > We should optimize the log_line_prefix processing instead.
> >
> > The proposed options are interesting for "enterprise" use, when you have
> > some smarter tools for log entry processing, and when you need a complex
> > view of the performance of billions of queries - where cancel time and
> > lock time are important pieces in the mosaic of the server's fitness.
>
> Exactly - though this might not be directly related to this patch, having
> it would be useful.
>

I am not afraid of the performance impact (surely, it should be tested
first). My previous implementation at GoodData was based on a mechanism
already in active use - it didn't introduce any new overhead.

But it should be verified on the prototype.
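
As for the parsing concern above: even a marker-style field is easy to
extract. A minimal sketch, assuming the message carries something like
tup_lock_wait_time:1000ms and the log is queryable as a table (for example
csvlog loaded via file_fdw; the table name postgres_log is hypothetical):

-- pull the tuple-lock wait time (in ms) out of each matching log message
SELECT (regexp_matches(message, 'tup_lock_wait_time:(\d+)ms'))[1]::int
         AS tup_lock_wait_ms
FROM postgres_log
WHERE message ~ 'tup_lock_wait_time:';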

regards

Pavel

>
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>
