Re: Proposal: Log inability to lock pages during vacuum

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Greg Stark <stark(at)mit(dot)edu>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Log inability to lock pages during vacuum
Date: 2014-11-11 08:01:58
Message-ID: 20141111080158.GA18565@alap3.anarazel.de
Lists: pgsql-hackers

On 2014-11-10 19:36:18 -0600, Jim Nasby wrote:
> On 11/10/14, 12:56 PM, Andres Freund wrote:
> >On 2014-11-10 12:37:29 -0600, Jim Nasby wrote:
> >>On 11/10/14, 12:15 PM, Andres Freund wrote:
> >>>>>If what we want is to quantify the extent of the issue, would it be more
> >>>>>convenient to save counters to pgstat? Vacuum already sends pgstat
> >>>>>messages, so there's no additional traffic there.
> >>>I'm pretty strongly against that one in isolation. They'd need to be
> >>>stored somewhere and they'd need to be queryable somewhere with enough
> >>>context to make sense. To actually make sense of the numbers we'd also
> >>>need to report all the other datapoints of vacuum in some form. That's
> >>>quite a worthwhile project imo - but *much* *much* more work than this.
> >>
> >>We already report statistics on vacuums
> >>(lazy_vacuum_rel()/pgstat_report_vacuum), so this would just be adding
> >>1 or 2 counters to that. Should we add the other counters from vacuum?
> >>That would be significantly more data.
> >
> >At the very least it'd require:
> >* The number of buffers skipped due to the vm
> >* The number of buffers actually scanned
> >* The number of full-table vacuums in contrast to partial vacuums
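A minimal sketch of how the counters listed above might hang together. All field names here are hypothetical illustrations; none of them exist in PostgreSQL's actual pgstat structures:

```python
# Hypothetical model of the extra per-table vacuum counters under
# discussion; the field names are illustrative, not PostgreSQL's.
from dataclasses import dataclass

@dataclass
class VacuumReport:
    pages_skipped_vm: int    # skipped because the visibility map said all-visible
    pages_scanned: int       # pages actually read by this vacuum
    full_table_scan: bool    # whole-table vacuum vs. partial (vm-assisted) one
    lock_failures: int       # pages where the cleanup lock wasn't immediately free

    def lock_failure_rate(self) -> float:
        """Failures are only meaningful relative to pages actually scanned."""
        return self.lock_failures / self.pages_scanned if self.pages_scanned else 0.0

report = VacuumReport(pages_skipped_vm=950_000, pages_scanned=50_000,
                      full_table_scan=False, lock_failures=25)
print(f"{report.lock_failure_rate():.3%}")  # prints 0.050%
```

The point of carrying all four fields is that no single counter is interpretable on its own, which is why reporting just one or two of them is of limited use.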
>
> If we're going to track full scan vacuums separately, I think we'd
> need two separate scan counters.

Well, we already have the entire number of vacuums, so we'd have that.

> I think (for pgstats) it'd make more sense to just count initial
> failure to acquire the lock in a full scan in the 'skipped page'
> counter. In terms of answering the question "how common is it not to
> get the lock", it's really the same event.

It's absolutely not. You need to correlate the number of skipped pages
with the number of vacuumed pages. 100k pages skipped out of 10 billion
total scanned is something entirely different from 100k skipped out of
200k.
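The arithmetic behind that point, spelled out with the two page counts used above:

```python
# Same absolute number of skipped pages, wildly different severity
# depending on how many pages the vacuum actually scanned.
skipped = 100_000

rate_huge = skipped / 10_000_000_000   # 10 billion pages scanned
rate_small = skipped / 200_000         # 200k pages scanned

print(f"{rate_huge:.4%}")   # 0.0010% -- noise
print(f"{rate_small:.0%}")  # 50% -- half the scanned pages couldn't be locked
```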

> Honestly, my desire at this point is just to see if there's actually a
> problem. Many people are asserting that this should be a very rare
> occurrence, but there's no way to know.

Ok.

> Towards that simple end, I'm a bit torn. My preference would be to
> simply log, and throw a warning if it's over some threshold. I believe
> that would give the best odds of getting feedback from users if this
> isn't as uncommon as we think.

I'm strongly against a warning. We have absolutely no sane way of tuning
that. We'll just create a pointless warning that people will get
confused about and that they'll have to live with till the next release.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
