From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: really lazy vacuums?
Date: 2011-03-15 03:13:13
Message-ID: AANLkTi=yfcFznhmdM+iM_T4pFNKxaYfy2yuSkMnwaSqG@mail.gmail.com
Lists: pgsql-hackers

On Mon, Mar 14, 2011 at 7:40 PM, Greg Stark <gsstark(at)mit(dot)edu> wrote:
> On Mon, Mar 14, 2011 at 8:33 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I'm not sure about that either, although I'm not sure of the reverse
>> either. But before I invest any time in it, do you have any other
>> good ideas for addressing the "it stinks to scan the entire index
>> every time we vacuum" problem? Or for generally making vacuum
>> cheaper?
>
> You could imagine an index am that instead of scanning the index just
> accumulated all the dead tuples in a hash table and checked it before
> following any index link. Whenever the hash table gets too big it
> could do a sequential scan and prune any pointers to those tuples and
> start a new hash table.
Hmm. For something like a btree, you could also remove each TID from
the hash table when you kill the corresponding index tuple.
> That would work well if there are frequent vacuums finding a few
> tuples per vacuum. It might even allow us to absorb dead tuples from
> "retail" vacuums so we could get rid of line pointers earlier. But it
> would involve more WAL-logged operations and incur an extra overhead
> on each index lookup.
Yeah, that seems deeply unfortunate. It's hard to imagine us wanting
to go there.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company