Re: reducing random_page_cost from 4 to 2 to force index scan

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Merlin Moncure" <mmoncure(at)gmail(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Sok Ann Yap" <sokann(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-13 17:54:52
Message-ID: 4DCD2A1C020000250003D6E0@gw.wicourts.gov
Lists: pgsql-performance

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> We've talked in the past (and I still think it's a good idea, but
> haven't gotten around to doing anything about it) about adjusting
> the planner to attribute to each relation the percentage of its
> pages which we believe we'll find in cache. Although many
> complicated ideas for determining that percentage have been
> proposed, my favorite one is fairly simple: assume that small
> relations will be mostly or entirely cached, and that big ones
> won't be. Allow the administrator to override the result on a
> per-relation basis. It's difficult to imagine a situation where
> the planner should assume that a relation with only a handful of
> pages isn't going to be cached. Even if it isn't, as soon as
> someone begins accessing it, it will be.

Simple as the heuristic is, I bet it would be effective. While one
can easily construct a synthetic case where it falls down, the ones
I can think of aren't all that common, and you are suggesting an
override mechanism.
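
For illustration, a minimal sketch of the heuristic as I read it
(the cache-size constant, the linear taper, and the
est_cached_fraction name are my own assumptions, not anything in
the planner):

    /* Illustrative sketch only -- not actual planner code.
     * Estimate the fraction of a relation's pages assumed to be
     * cached: small relations are treated as fully cached, large
     * ones as mostly uncached, and an administrator-supplied
     * override wins whenever it is set.
     */
    #include <stdio.h>

    #define CACHE_PAGES 262144.0    /* assumed cache size in 8 kB pages (2 GB) */

    static double
    est_cached_fraction(double rel_pages, double admin_override)
    {
        if (admin_override >= 0.0)
            return admin_override;  /* per-relation override from the admin */
        if (rel_pages <= CACHE_PAGES * 0.05)
            return 1.0;             /* small relation: assume fully cached */
        if (rel_pages >= CACHE_PAGES)
            return 0.05;            /* larger than cache: assume mostly uncached */
        /* in between: taper linearly from 1.0 down to 0.05 */
        return 1.05 - rel_pages / CACHE_PAGES;
    }

    int
    main(void)
    {
        printf("tiny table: %.2f\n", est_cached_fraction(100.0, -1.0));  /* 1.00 */
        printf("huge table: %.2f\n", est_cached_fraction(1e6, -1.0));    /* 0.05 */
        printf("overridden: %.2f\n", est_cached_fraction(1e6, 0.90));    /* 0.90 */
        return 0;
    }

The one synthetic case that falls down here is a small, rarely
touched relation on a system whose cache is under heavy pressure,
which is exactly what the override is for.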

-Kevin
