From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "Jeff Davis" <pgsql(at)j-davis(dot)com>, "Simon Riggs" <simon(at)enterprisedb(dot)com>, "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Sequential scans
Date: 2007-05-02 17:54:43
Message-ID: 87ejlz6rh8.fsf@oxford.xeocode.com
Lists: pgsql-hackers
"Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:
> Let's use a normal hash table instead, and use a lock to protect it. If we only
> update it every 10 pages or so, the overhead should be negligible. To further
> reduce contention, we could modify ReadBuffer to let the caller know if the
> read resulted in a physical read or not, and only update the entry when a page
> is physically read in. That way all the synchronized scanners wouldn't be
> updating the same value, just the one performing the I/O. And while we're at
> it, let's use the full relfilenode instead of just the table oid in the hash.
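For concreteness, here's a minimal standalone sketch of that scheme. It's plain C11 with a pthread mutex standing in for the backend's real locking primitives, a single uint32 standing in for the full relfilenode (which is really a triple), and made-up names throughout, so treat it as an illustration rather than a patch:

/* Sketch only, not actual PostgreSQL code: a shared hash table,
 * protected by one lock, mapping a relation's relfilenode to the
 * block number its scan last reported.  Callers report only when
 * ReadBuffer signalled a physical read, and only every tenth page,
 * so the lock is taken rarely. */
#include <pthread.h>
#include <stdint.h>

#define SCAN_HASH_SIZE  64      /* small open-addressed table */
#define REPORT_INTERVAL 10      /* only report every 10th page */

typedef struct ScanPosEntry
{
    uint32_t    relfilenode;    /* key; 0 means empty slot */
    uint32_t    blocknum;       /* last reported block */
} ScanPosEntry;

static ScanPosEntry scan_hash[SCAN_HASH_SIZE];
static pthread_mutex_t scan_hash_lock = PTHREAD_MUTEX_INITIALIZER;

void
report_scan_position(uint32_t relfilenode, uint32_t blocknum)
{
    uint32_t    i = relfilenode % SCAN_HASH_SIZE;
    uint32_t    n;

    if (blocknum % REPORT_INTERVAL != 0)
        return;                 /* cheap early exit, no lock taken */

    pthread_mutex_lock(&scan_hash_lock);
    for (n = 0; n < SCAN_HASH_SIZE; n++, i = (i + 1) % SCAN_HASH_SIZE)
    {
        /* linear probing: stop at our key or the first empty slot */
        if (scan_hash[i].relfilenode == relfilenode ||
            scan_hash[i].relfilenode == 0)
        {
            scan_hash[i].relfilenode = relfilenode;
            scan_hash[i].blocknum = blocknum;
            break;
        }
    }
    pthread_mutex_unlock(&scan_hash_lock);
}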
That's probably fine as-is. But if it ever shows up as a performance
bottleneck, we could still manage to avoid the lock except when actually
inserting a new hash element: if the hash stores only an index into an
array kept in shared memory, the array element itself can be updated
without taking any lock.
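Something along these lines, again just a sketch with made-up names (the
hash is reduced to a linear key array to keep it short, and slot reuse is
deliberately left out):

/* Sketch only: the lock is taken when mapping a relfilenode to a
 * slot, typically once per scan; the per-page update is a plain
 * atomic store into the slot, with no lock at all. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

#define MAX_TRACKED_SCANS 64

static _Atomic uint32_t scan_blocknum[MAX_TRACKED_SCANS];
static uint32_t slot_keys[MAX_TRACKED_SCANS];
static int      nslots = 0;
static pthread_mutex_t slot_lock = PTHREAD_MUTEX_INITIALIZER;

/* Look up (or insert) the slot for a relation; a scan calls this
 * once and caches the index.  Returns -1 if the array is full.
 * Reusing slots for dropped relations is omitted; see below. */
int
get_scan_slot(uint32_t relfilenode)
{
    int     idx = -1;
    int     i;

    pthread_mutex_lock(&slot_lock);
    for (i = 0; i < nslots; i++)
    {
        if (slot_keys[i] == relfilenode)
        {
            pthread_mutex_unlock(&slot_lock);
            return i;
        }
    }
    if (nslots < MAX_TRACKED_SCANS)
    {
        idx = nslots++;
        slot_keys[idx] = relfilenode;
    }
    pthread_mutex_unlock(&slot_lock);
    return idx;
}

/* Per-page update: lock-free. */
void
update_scan_position(int idx, uint32_t blocknum)
{
    if (idx >= 0)
        atomic_store(&scan_blocknum[idx], blocknum);
}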
It starts to become a fair amount of code once you think about how to
reuse elements of the array, though, which is why I'd suggest pursuing
this only if it turns out to be a bottleneck down the road.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com