From: Olleg <olleg_s(at)mail(dot)ru>
To: Ron <rjpeace(at)earthlink(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: BLCKSZ
Date: 2005-12-06 10:40:47
Message-ID: 43956AAF.7060108@mail.ru
Lists: pgsql-performance
Ron wrote:
> In general, and in a very fuzzy sense, "bigger is better". pg files are
> laid down in 1GB chunks, so there's probably one limitation.
Hm, let's wait for test results on other platforms, but since we're having a
theoretical dispute...

I can't understand why "bigger is better". Take an index lookup, for
instance: the index points to a page, and I must load that whole page to get
a single row. So I read 8 KB from disk for every row, and that page then
occupies cache. You recommend 64 KB. With your recommendation I'd get 8
times the I/O volume, 8 times the head seeks on disk, and 8 times as much
cache (both the OS cache and PostgreSQL's) tied up. I have small rows, 32
bytes each, in a frequently accessed table. The table is not clustered and
is reached through several indexes. So you recommend loading 64 KB when I
need only 32 bytes, don't you?
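The arithmetic above can be sketched in a few lines (a hypothetical illustration of the argument, not code from the original mail; it assumes each random index lookup reads exactly one block):

```python
# Read amplification for random single-row index lookups:
# each lookup pulls one whole block from disk, regardless of row size.

def read_amplification(block_size_bytes: int, row_size_bytes: int) -> float:
    """Bytes read from disk per byte of row data actually needed."""
    return block_size_bytes / row_size_bytes

ROW = 32                       # small rows, as in the table described above
for blcksz in (8 * 1024, 64 * 1024):
    amp = read_amplification(blcksz, ROW)
    print(f"BLCKSZ={blcksz // 1024}KB -> {amp:.0f}x amplification")
```

With 32-byte rows, an 8 KB block already reads 256 bytes for every byte needed; a 64 KB block multiplies that by eight, which is the factor the mail objects to. (Sequential scans are the opposite case, where larger blocks can help.)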
--
Olleg