From: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
To: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
Cc: "ITAGAKI Takahiro" <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, "Sherry Moore" <sherry(dot)moore(at)sun(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Mark Kirkwood" <markir(at)paradise(dot)net(dot)nz>, "Pavan Deolasee" <pavan(at)enterprisedb(dot)com>, "Gavin Sherry" <swm(at)alcove(dot)com(dot)au>, "PGSQL Hackers" <pgsql-hackers(at)postgresql(dot)org>, "Doug Rady" <drady(at)greenplum(dot)com>
Subject: Re: Bug: Buffer cache is not scan resistant
Date: 2007-03-13 09:37:20
Message-ID: 1173778641.3641.793.camel@silverbirch.site
Lists: pgsql-hackers
On Mon, 2007-03-12 at 22:16 -0700, Luke Lonergan wrote:
> You may know we've built something similar and have seen similar gains.
Cool
> We're planning a modification that I think you should consider: when there
> is a sequential scan of a table larger than the size of shared_buffers, we
> are allowing the scan to write through the shared_buffers cache.
Write? For which operations?
I was thinking of doing this for bulk writes also, but it would require
changes to bgwriter's cleaning sequence. Are you saying to write, say, ~32
buffers and then fsync them, rather than letting bgwriter do that? Then
allow those buffers to be reused?
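To make the question concrete, here is a minimal sketch of the batching idea being asked about: the bulk-writing backend fills a small fixed ring of buffers, flushes the whole batch itself (write + fsync), and then reuses the same slots, instead of leaving the dirty buffers for bgwriter. All names here are illustrative, not PostgreSQL's actual API; the ring size of 32 is just the "~32 buffers" from the discussion.

```python
RING_SIZE = 32  # "~32 buffers" from the discussion; tuning is an open question


class BulkWriteRing:
    """Toy model of a backend-local ring of buffers for bulk writes."""

    def __init__(self, size=RING_SIZE):
        self.size = size
        self.slots = [None] * size    # buffer contents
        self.dirty = [False] * size   # which slots still need flushing
        self.next = 0                 # next slot to fill
        self.flushes = 0              # number of write+fsync batches issued

    def put(self, page):
        """Place one page in the ring, flushing the batch when the ring is full."""
        if self.next == self.size:
            self.flush()
        self.slots[self.next] = page
        self.dirty[self.next] = True
        self.next += 1

    def flush(self):
        """Stand-in for writing the dirty batch and fsyncing it in one go."""
        self.flushes += 1
        self.dirty = [False] * self.size
        self.next = 0  # slots can now be reused by the same scan
```

With a ring of 4 slots, writing 10 pages triggers two batch flushes and leaves two dirty pages awaiting the next flush; the point is that the write working set stays bounded at `size` buffers no matter how large the relation is.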
> The hypothesis is that if a relation is of a size equal to or less than the
> size of shared_buffers, it is "cacheable" and should use the standard LRU
> approach to provide for reuse.
Sounds reasonable. Please say more.
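The quoted hypothesis can be sketched as a simple policy switch: a relation no larger than shared_buffers is treated as cacheable and scanned through the normal LRU path, while a larger relation is confined to a small ring so the scan cannot evict the whole cache. This is an illustrative sketch of the heuristic as stated in the thread, not PostgreSQL's actual implementation; the function name and the 32-buffer ring default are assumptions.

```python
def choose_scan_strategy(rel_size_pages, shared_buffers_pages, ring_pages=32):
    """Return (strategy, max buffers the scan may occupy).

    'lru'  -> relation fits in shared_buffers: use normal replacement,
              so repeated scans can benefit from caching.
    'ring' -> relation exceeds shared_buffers: bound the scan to a small
              ring of buffers so it stays scan resistant.
    """
    if rel_size_pages <= shared_buffers_pages:
        return ("lru", shared_buffers_pages)
    return ("ring", ring_pages)
```

For example, a 1,000-page table against 4,096 pages of shared_buffers takes the LRU path, while a 100,000-page table is limited to the 32-buffer ring.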
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com