From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Teodor Sigaev <teodor(at)sigaev(dot)ru>, pgsql-hackers(at)postgresql(dot)org, Marc Mamin <M(dot)Mamin(at)intershop(dot)de>
Subject: Re: Qual evaluation cost estimates for GIN indexes
Date: 2012-02-17 03:10:38
Message-ID: 10996.1329448238@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> This issue of detoasting costs comes up a lot, specifically in
> reference to @@. I wonder if we shouldn't try to apply some quick and
> dirty hack in time for 9.2, like maybe random_page_cost for every row
> or every attribute we think will require detoasting. That's obviously
> going to be an underestimate in many if not most cases, but it would
> probably still be an improvement over assuming that detoasting is
> free.
Well, you can't theorize without data, to misquote Sherlock. We'd need
to have some stats on which to base "we think this will require
detoasting". I guess we could teach ANALYZE to compute and store
fractions "percent of entries in this column that are compressed"
and "percent that are stored out-of-line", and then hope that those
percentages apply to the subset of entries that a given query will
visit, and thereby derive a number of operations to multiply by whatever
we think the cost-per-detoast-operation is.
It's probably all do-able, but it seems way too late to be thinking
about this for 9.2. We've already got a ton of new stuff that needs
to be polished and tuned...
regards, tom lane