Super Optimizing Postgres

Lists: pgsql-hackers
From: mlw <markw(at)mohawksoft(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Super Optimizing Postgres
Date: 2001-11-16 13:45:27
Message-ID: 3BF51877.AF63B1C@mohawksoft.com

It is sort of discouraging that Postgres' default configuration is so
conservative. A tweak here and there can make a big difference. It seems to me
that the default postgresql.conf should not be used for a dedicated server. In
fact, it can't, because TCP/IP is disabled.

In my projects I have done the default stuff: increased buffers, sort memory,
and so on. However, some of the tunable parameters seem a bit arcane, and it is
not completely clear what they do or what effect they may have. (Some have no
noticeable effect, even though it looks as if they should.) I think most users,
particularly those new to SQL databases in general, would find it difficult to
tune Postgres.

Does anyone think it is a good idea to make a postgresql.conf cookbook sort of
thing? Gather a number of tuned config files, annotated as to why the settings
are set the way they are, and the machine on which they run.

Particularly, I'd like to see if someone has been able to really understand and
manipulate the planner COST options successfully.

Alternatively, it should be possible to write a program that analyzes a target
system and asks questions like: "Is this a dedicated server?" "How much RAM do
you have?" "On which volume will the database be installed?" It could then
perform some tests that mimic the cost values and create a new postgresql.conf
with the options tuned.
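
To show the shape of what I am picturing, here is a rough, untested sketch. The
heuristics in it (25% of RAM as shared buffers on a dedicated box, and so on)
are completely made up and are exactly the part that would need input from
people who really understand these parameters:

/* pgtuneconf.c -- rough sketch only; the percentages below are invented
 * placeholders, not recommendations.
 */
#include <stdio.h>

int main(void)
{
    char answer[16];
    long ram_mb;
    int dedicated;
    long shared_buffers;        /* in 8 KB pages, as in postgresql.conf */
    long sort_mem_kb;

    printf("Is this a dedicated database server? (y/n) ");
    if (scanf("%15s", answer) != 1)
        return 1;
    dedicated = (answer[0] == 'y' || answer[0] == 'Y');

    printf("How much RAM does the machine have, in MB? ");
    if (scanf("%ld", &ram_mb) != 1)
        return 1;

    /* Made-up rules: 25% of RAM as shared buffers on a dedicated box,
     * 5% otherwise; a fixed slice for sorting.
     */
    shared_buffers = ram_mb * 1024 / 8 * (dedicated ? 25 : 5) / 100;
    sort_mem_kb = dedicated ? ram_mb * 1024 / 128 : 512;

    printf("\n# generated postgresql.conf fragment\n");
    printf("shared_buffers = %ld\n", shared_buffers);
    printf("sort_mem = %ld\n", sort_mem_kb);
    if (dedicated)
        printf("tcpip_socket = true\n");
    return 0;
}

The interesting part, of course, is replacing the made-up percentages with
rules that actually hold up.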


From: Alex Pilosov <alex(at)pilosoft(dot)com>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 16:26:11
Message-ID: Pine.BSO.4.10.10111161125050.31668-100000@spider.pilosoft.com

Sure, that'd be useful, especially since we can all discuss what works and
what doesn't.

On Fri, 16 Nov 2001, mlw wrote:

> It is sort of discouraging that Postgres' default configuration is so
> conservative. A tweak here and there can make a big difference. It seems to me
> that the default postgresql.conf should not be used for a dedicated server. In
> fact, it can't because TCP/IP is disabled.
>
> In my projects I have done the default stuff: increased buffers, sort
> memory, and so on. However, some of the tunable parameters seem a bit
> arcane, and it is not completely clear what they do or what effect they
> may have. (Some have no noticeable effect, even though it looks as if
> they should.) I think most users, particularly those new to SQL
> databases in general, would find it difficult to tune Postgres.
>
> Does anyone think it is a good idea, to make a postgresql.conf
> cookbook sort of thing? Gather a number of tuned config files,
> annotated as to why the settings are set the way they are, and the
> machine on which they run.
>
> Particularly, I'd like to see if someone has been able to really
> understand and manipulate the planner COST options successfully.
>
> Alternatively, it should be possible to write a program that analyzes
> a target system, asks questions like: "Is this a dedicated server?"
> "How much ram do you have?" "On which volume will the database be
> installed?" Then perform some tests that mimic the cost values, and
> create a new postgresql.conf with the options tuned.


From: matthew(at)zeut(dot)net
To: Alex Pilosov <alex(at)pilosoft(dot)com>, mlw <markw(at)mohawksoft(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 19:52:06
Message-ID: 200111161949.fAGJnMA30823@neuromancer.ctlno.com

Didn't Bruce do a document on this exact topic? I'm not sure it's everything
you are looking for, but it might help you get started.

> Sure, that'd be useful. Especially since we all can discuss what things
> work, what doesn't work.
>
> On Fri, 16 Nov 2001, mlw wrote:
> > It is sort of discouraging that Postgres' default configuration is so
> > conservative. A tweak here and there can make a big difference. It seems
> > to me that the default postgresql.conf should not be used for a dedicated
> > server. In fact, it can't because TCP/IP is disabled.
> >
> > In my projects I have done the default stuff: increased buffers, sort
> > memory, and so on. However, some of the tunable parameters seem a bit
> > arcane, and it is not completely clear what they do or what effect they
> > may have. (Some have no noticeable effect, even though it looks as if
> > they should.) I think most users, particularly those new to SQL
> > databases in general, would find it difficult to tune Postgres.
> >
> > Does anyone think it is a good idea, to make a postgresql.conf
> > cookbook sort of thing? Gather a number of tuned config files,
> > annotated as to why the settings are set the way they are, and the
> > machine on which they run.
> >
> > Particularly, I'd like to see if someone has been able to really
> > understand and manipulate the planner COST options successfully.
> >
> > Alternatively, it should be possible to write a program that analyzes
> > a target system, asks questions like: "Is this a dedicated server?"
> > "How much ram do you have?" "On which volume will the database be
> > installed?" Then perform some tests that mimic the cost values, and
> > create a new postgresql.conf with the options tuned.


From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: matthew(at)zeut(dot)net
Cc: Alex Pilosov <alex(at)pilosoft(dot)com>, mlw <markw(at)mohawksoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 20:43:46
Message-ID: 200111162043.fAGKhkV22333@candle.pha.pa.us

> Didn't Bruce do a document on this exact topic? I'm not sure it's everything
> you are looking for, but it might help you get started.

Sure:

http://techdocs.postgresql.org

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


From: "Serguei Mokhov" <sa_mokho(at)alcor(dot)concordia(dot)ca>
To: "mlw" <markw(at)mohawksoft(dot)com>, "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 21:03:53
Message-ID: 083201c16ee2$34ea0220$5dd9fea9@gunn

----- Original Message -----
From: mlw <markw(at)mohawksoft(dot)com>
Sent: Friday, November 16, 2001 8:45 AM

> Does anyone think it is a good idea, to make a postgresql.conf cookbook sort of
> thing? Gather a number of tuned config files, annotated as to why the settings
> are set the way they are, and the machine on which they run.

Sounds like a very good idea, but... who's gonna be in charge of it?

> Alternatively, it should be possible to write a program that analyzes a target
> system, asks questions like: "Is this a dedicated server?" "How much ram do you
> have?" "On which volume will the database be installed?" Then perform some
> tests that mimic the cost values, and create a new postgresql.conf with the
> options tuned.

This program sounds like a little rule-based expert system :)
"Tune your PostgreSQL server with the pgTune expert system!" the ad
would proclaim...

--
Serguei A. Mokhov


From: mlw <markw(at)mohawksoft(dot)com>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 22:06:38
Message-ID: 3BF58DEE.55C57C2A@mohawksoft.com

Bruce Momjian wrote:
>
> > Didn't Bruce do a document on this exact topic? I'm not sure it's everything
> > you are looking for, but it might help you get started.
>
> Sure:
>
> http://techdocs.postgresql.org

Question:

Does sort memory come out of shared? I don't think so (would it need to?), but
"Cache Size and Sort Size" seems to imply that it does.

Also, you don't go into the COST variables. If what is documented about them is
correct, they are woefully incorrect with a modern machine.

Would a 1.3 GHz Athlon really have a cpu_operator_cost of 0.0025? That would
imply that the computer could process 400 conditionals in the time it takes to
make a sequential page read. And if Postgres is run on a 10K RPM disk vs. a 5.4K
RPM disk on two different machines with the same processor and speed, these
numbers can't both be right; one should be about twice as high as the other.

That said, do these numbers really affect the planner all that much?


From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 22:13:56
Message-ID: 200111162213.fAGMDuA00496@candle.pha.pa.us

> Does sort memory come out of shared? I don't think so (would it
> need to?), but "Cache Size and Sort Size" seems to imply that
> it does.

Sort comes from per-backend memory, not shared. Of course, both
per-backend and shared memory come from the same pool of RAM, if that's
what you mean. Could it be made clearer?

> Also, you don't go into the COST variables. If what is documented
> about them is correct, they are woefully incorrect with a modern
> machine.

You mean:

#random_page_cost = 4
#cpu_tuple_cost = 0.01
#cpu_index_tuple_cost = 0.001
#cpu_operator_cost = 0.0025

Those are relative, of course. We are always looking for better numbers.

> Would a 1.3 GHz Athlon really have a cpu_operator_cost of 0.0025?
> That would imply that the computer could process 400 conditionals
> in the time it takes to make a sequential page read. And if Postgres
> is run on a 10K RPM disk vs. a 5.4K RPM disk on two different
> machines with the same processor and speed, these numbers can't
> both be right; one should be about twice as high as the other.

Again, they are correct relative to each other.

> That said, do these numbers really affect the planner all that
> much?

They sure do affect the planner. That is how index scan vs. sequential scan and
join type are determined.
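
To see why only the ratios matter, here is a grossly simplified
back-of-the-envelope version of the seqscan-vs-indexscan decision. This is not
the real costsize.c math; the table size and selectivities are just example
numbers:

/* Toy model: NOT the planner's real formulas, just enough to show that the
 * choice depends on how these parameters compare to each other.
 */
#include <stdio.h>

int main(void)
{
    const double random_page_cost = 4.0;
    const double cpu_tuple_cost = 0.01;
    const double cpu_index_tuple_cost = 0.001;
    const double cpu_operator_cost = 0.0025;

    const double pages = 10000.0;       /* example: heap pages in the table */
    const double tuples = 1000000.0;    /* example: rows in the table */
    const double sel[] = {0.0001, 0.001, 0.01, 0.1};
    int i;

    for (i = 0; i < 4; i++)
    {
        /* seq scan: read every page (cost 1.0 each), apply the qual to
         * every tuple */
        double seqscan = pages + tuples * (cpu_tuple_cost + cpu_operator_cost);

        /* index scan: fetch only the matching tuples, but each heap page is
         * a random read (ignores index pages, correlation, caching, ...) */
        double matched = sel[i] * tuples;
        double idxscan = matched * random_page_cost
                       + matched * (cpu_index_tuple_cost + cpu_tuple_cost
                                    + cpu_operator_cost);

        printf("selectivity %.4f: seqscan %.0f, indexscan %.0f -> %s\n",
               sel[i], seqscan, idxscan,
               idxscan < seqscan ? "index scan" : "seq scan");
    }
    return 0;
}

Halving cpu_operator_cost barely moves either total in this toy model, while
halving random_page_cost moves the crossover point a lot, which is part of why
the disk-related numbers tend to matter more.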

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


From: mlw <markw(at)mohawksoft(dot)com>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 23:36:11
Message-ID: 3BF5A2EB.801E1711@mohawksoft.com

Bruce Momjian wrote:
>
> > Does sort memory come out of shared? I don't think so (would it
> > need to?), but "Cache Size and Sort Size" seems to imply that
> > it does.
>
> Sort comes from per-backend memory, not shared. Of course, both
> per-backend and shared memory come from the same pool of RAM, if that's
> what you mean. Could it be made clearer?

Actually, in most cases RAM is RAM, and shared RAM is just the same RAM.
However, in some cluster environments, shared RAM is a different memory pool
than process RAM.

In your section "Cache Size and Sort Size" you talk about both, and about
shared memory, but make no distinction as to which uses what. I would suggest an
explicit sentence saying that cache comes from the shared memory pool and sort
comes from the process memory pool.

>
> > Also, you don't go into the COST variables. If what is documented
> > about them is correct, they are woefully incorrect with a modern
> > machine.
>
> You mean:
>
> #random_page_cost = 4
> #cpu_tuple_cost = 0.01
> #cpu_index_tuple_cost = 0.001
> #cpu_operator_cost = 0.0025
>
> Those are relative, of course. We are always looking for better numbers.
>
> > Would a 1.3 GHz Athlon really have a cpu_operator_cost of 0.0025?
> > That would imply that the computer could process 400 conditionals
> > in the time it takes to make a sequential page read. And if Postgres
> > is run on a 10K RPM disk vs. a 5.4K RPM disk on two different
> > machines with the same processor and speed, these numbers can't
> > both be right; one should be about twice as high as the other.
>
> Again, they are correct relative to each other.

They can't possibly be correct. Say we have two identical machines where the
only difference is the disk subsystem: one has a 10K RPM SCSI system, and the
other a 5.4K RPM IDE disk. There is no way the same settings can be accurate
for both.

>
> > That said, do these numbers really affect the planner all that
> > much?
>
> They sure do affect the planner. That is how index scan vs. sequential scan
> and join type are determined.

OK, then it should be fairly straightforward to make a profiler for Postgres
to set these parameters.

The sequential and random read tests are a no-brainer.

The CPU costs are not so easy. I don't have a very good idea of what they
"really" mean. I have a guess, but not enough to write a benchmark routine.

If someone who REALLY knows could detail a test routine for each of the CPU
cost types, I could write a program that spits out what the numbers should
be.

I envision:

pgprofile /u01/postgres/test.file

And that would output something like:

random_page_cost = 2
cpu_tuple_cost = 0.00344
cpu_index_tuple_cost = 0.00234
cpu_operator_cost = 0.00082
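
For cpu_operator_cost, my guess at a test routine would be something like the
following, though this is exactly the part I would want someone who knows the
planner to sanity-check before trusting the output:

/* Guesswork only: time a stand-in "operator" (an int4 comparison behind a
 * function call), so the result can later be divided by the measured time
 * for one sequential 8K page read from the disk test.
 */
#include <stdio.h>
#include <sys/time.h>

#define ITERATIONS 10000000

static double now_sec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* stand-in for an operator function such as int4lt() */
static int fake_int4lt(int a, int b)
{
    return a < b;
}

int main(void)
{
    volatile int matches = 0;
    double t0, per_op;
    int i;

    t0 = now_sec();
    for (i = 0; i < ITERATIONS; i++)
        matches += fake_int4lt(i, ITERATIONS / 2);
    per_op = (now_sec() - t0) / ITERATIONS;

    printf("seconds per operator call: %.9f (%d matches)\n", per_op, matches);
    printf("cpu_operator_cost would be this divided by the measured\n");
    printf("seconds per sequential page read.\n");
    return 0;
}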


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 23:45:44
Message-ID: 18056.1005954344@sss.pgh.pa.us

mlw <markw(at)mohawksoft(dot)com> writes:
> Also, you don't go into the COST variables. If what is documented
> about them is correct, they are woefully incorrect with a modern
> machine.

The numbers seemed in the right ballpark when I experimented with them
a year or two ago. Keep in mind that all these things are quite fuzzy,
given that we never know for sure whether a read() request to the kernel
is going to cause actual I/O or be satisfied from kernel cache. One
should not mistake "operator" for "addition instruction", either ---
at the very least, there are several levels of function call overhead
involved. And using one cost number for all Postgres operators is
obviously a simplification of reality anyhow.

> Would a 1.3 GHz Athlon really have a cpu_operator_cost of 0.0025? That
> would imply that the computer could process 400 conditionals in the
> time it takes to make a sequential page read. And if Postgres is run on
> a 10K RPM disk vs. a 5.4K RPM disk on two different machines with the
> same processor and speed, these numbers can't both be right; one
> should be about twice as high as the other.

We've talked in the past about autoconfiguring these numbers, but I have
not seen any proposals for automatically deriving trustworthy numbers
in a reasonable period of time. There's too much uncertainty and noise
in any simple test. (I spent literally weeks convincing myself that the
current numbers were reasonable.)

But having said all that, it's true that CPU speed has been increasing
much faster than disk speed over the last few years. If you feel like
reducing the CPU cost numbers, try it and see what happens.

regards, tom lane


From: Alex Pilosov <alex(at)pilosoft(dot)com>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-16 23:47:47
Message-ID: Pine.BSO.4.10.10111161846060.23456-100000@spider.pilosoft.com

On Fri, 16 Nov 2001, mlw wrote:

> The sequential and random read tests are a no-brainer.
>
> The CPU costs are not so easy. I don't have a very good idea of what they
> "really" mean. I have a guess, but not enough to write a benchmark routine.
>
> If someone who REALLY knows could detail a test routine for each of the CPU
> cost types, I could write a program that spits out what the numbers should
> be.
>
> I envision:
>
> pgprofile /u01/postgres/test.file
>
> And that would output something like:
>
> random_page_cost = 2
> cpu_tuple_cost = 0.00344
> cpu_index_tuple_cost = 0.00234
> cpu_operator_cost = 0.00082

Actually, it could be done if 'EXPLAIN EXACTLY' were implemented. Such a
command would give you the same output as EXPLAIN plus the precise timing each
step took. The idea was floated on the list a while ago. I think the problem
with it was properly separating the borders of queries, but still, it'd be
cool.
-alex


From: Justin Clift <justin(at)postgresql(dot)org>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: mlw <markw(at)mohawksoft(dot)com>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-17 04:12:39
Message-ID: 3BF5E3B7.87EFD086@postgresql.org

Bruce Momjian wrote:
>
<snip>
> You mean:
>
> #random_page_cost = 4
> #cpu_tuple_cost = 0.01
> #cpu_index_tuple_cost = 0.001
> #cpu_operator_cost = 0.0025
>
> Those are relative, of course. We are always looking for better numbers.
>
> > Would a 1.3 GHz Athlon really have a cpu_operator_cost of 0.0025?
> > That would imply that the computer could process 400 conditionals
> > in the time it takes to make a sequential page read. And if Postgres
> > is run on a 10K RPM disk vs. a 5.4K RPM disk on two different
> > machines with the same processor and speed, these numbers can't
> > both be right; one should be about twice as high as the other.
>
> Again, they are correct relative to each other.

I think it's an interesting thought of having a program which will test
a system and work out the Accurate and Correct values for this.

It could be of enormous help for tuning. Another thought is to have
PostgreSQL tune these parameters as it goes.

+ Justin

<snip>
> --
> Bruce Momjian | http://candle.pha.pa.us
> pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
> + If your life is a hard drive, | 830 Blythe Avenue
> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Justin Clift <justin(at)postgresql(dot)org>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, mlw <markw(at)mohawksoft(dot)com>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-17 05:14:34
Message-ID: 20174.1005974074@sss.pgh.pa.us

Justin Clift <justin(at)postgresql(dot)org> writes:
> I think it's an interesting thought of having a program which will test
> a system and work out the Accurate and Correct values for this.

I think if you start out with the notion that there is an Accurate
and Correct value for these parameters, you've already lost the game.
They're inherently fuzzy numbers because they are parameters of an
(over?) simplified model of reality.

It would be interesting to try to fit the model to reality on a wide
variety of queries, machines & operating environments, and see what
numbers we come up with. But there's always going to be a huge fuzz
factor involved. Because of that, I'd be *real* wary of any automated
tuning procedure. Without a good dollop of human judgement in the loop,
an automated parameter-setter will likely go off into never-never land
:-(

regards, tom lane


From: mlw <markw(at)mohawksoft(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Justin Clift <justin(at)postgresql(dot)org>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-17 14:24:58
Message-ID: 3BF6733A.1D8A48BF@mohawksoft.com

Tom Lane wrote:
>
> Justin Clift <justin(at)postgresql(dot)org> writes:
> > I think it's an interesting thought of having a program which will test
> > a system and work out the Accurate and Correct values for this.
>
> I think if you start out with the notion that there is an Accurate
> and Correct value for these parameters, you've already lost the game.

Pardon me, but this is a very scary statement. If you believe that this is
true, then the planner/optimizer is inherently flawed.

If the numbers are meaningless, they should not be used. If the numbers are not
meaningless, then they must be able to be tuned.

In my example, two computers with exactly the same hardware, except one has a
5400 RPM IDE drive and the other has a 10,000 RPM IDE drive. These machines should
not use the same settings; it is obvious that a sequential scan block read on
one will be faster than the other.

> They're inherently fuzzy numbers because they are parameters of an
> (over?) simplified model of reality.

Very true, and this also scares me. Relating processing time to disk I/O seems
like a very questionable approach these days. Granted, this strategy was
probably devised when computer systems were a lot simpler, but today, with
internal disk caching, CPU instruction caches, pipelining, L2 caching
techniques, clock multiplication, RAID controllers, and so on, the picture is
far more complicated.

That being said, a running system should have "consistent" performance, which
should be measurable.

>
> It would be interesting to try to fit the model to reality on a wide
> variety of queries, machines & operating environments, and see what
> numbers we come up with. But there's always going to be a huge fuzz
> factor involved. Because of that, I'd be *real* wary of any automated
> tuning procedure. Without a good dollop of human judgement in the loop,
> an automated parameter-setter will likely go off into never-never land
> :-(

It is an interesting problem:

It appears that "sequential scan" is the root measure for the system;
everything else is based on that. A good "usable" number for sequential scan
would need to be established.

Working on the assumption that a server will have multiple back-ends running,
we will start (n) threads (or forks). (n) will be tunable by the admin based on
the expected concurrency of their system. There would be two disk I/O test
routines: one which performs a sequential scan, and one which performs a series
of random page reads.

(n)/2 threads would perform sequential scans.
(n)/2 threads would perform random page reads.

(Perhaps we can even have the admin select the ratio between random and
sequential? Or let the admin choose how many of each?)

The result of the I/O profiling would be a reasonable average number of
microseconds it takes to do a sequential scan, and a ratio to random page reads.
This will be done on files whose size is larger than the available memory of
the machine, to ensure the files do not stay in the OS cache. Each routine will
make several iterations before quitting.

This should produce a reasonable picture of the user's system in action.
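
Something like this untested sketch is what I have in mind for the I/O half.
The thread count is a placeholder, error handling is minimal, and it needs to
be linked with -lpthread:

/* pgprofile.c -- sketch of the I/O half: half the threads read sequentially,
 * half read random pages, from a file much larger than RAM.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/stat.h>
#include <sys/time.h>

#define BLCKSZ 8192
#define BLOCKS_PER_PASS 4096            /* 32 MB per worker per pass */

static const char *testfile;
static off_t nblocks;

static double now_sec(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

static void *seq_reader(void *result)   /* stores seconds per block */
{
    char buf[BLCKSZ];
    int fd = open(testfile, O_RDONLY);
    double t0 = now_sec();
    long i;

    for (i = 0; i < BLOCKS_PER_PASS; i++)
        if (read(fd, buf, BLCKSZ) <= 0)
            lseek(fd, 0, SEEK_SET);     /* wrap at end of file */
    *(double *) result = (now_sec() - t0) / BLOCKS_PER_PASS;
    close(fd);
    return NULL;
}

static void *rand_reader(void *result)  /* stores seconds per block */
{
    char buf[BLCKSZ];
    int fd = open(testfile, O_RDONLY);
    double t0 = now_sec();
    long i;

    for (i = 0; i < BLOCKS_PER_PASS; i++)
    {
        off_t blk = rand() % nblocks;
        lseek(fd, blk * BLCKSZ, SEEK_SET);
        read(fd, buf, BLCKSZ);
    }
    *(double *) result = (now_sec() - t0) / BLOCKS_PER_PASS;
    close(fd);
    return NULL;
}

int main(int argc, char **argv)
{
    int n = (argc > 2) ? atoi(argv[2]) : 4;     /* total worker threads */
    pthread_t tid[64];
    double per_block[64];
    double seq = 0.0, rnd = 0.0;
    struct stat st;
    int i;

    if (argc < 2 || n < 2 || n > 64)
    {
        fprintf(stderr, "usage: pgprofile testfile [nthreads]\n");
        return 1;
    }
    testfile = argv[1];
    if (stat(testfile, &st) != 0 || st.st_size < BLCKSZ)
    {
        fprintf(stderr, "test file missing or too small\n");
        return 1;
    }
    nblocks = st.st_size / BLCKSZ;

    /* first half of the workers read sequentially, second half randomly */
    for (i = 0; i < n; i++)
        pthread_create(&tid[i], NULL,
                       i < n / 2 ? seq_reader : rand_reader, &per_block[i]);
    for (i = 0; i < n; i++)
        pthread_join(tid[i], NULL);

    for (i = 0; i < n / 2; i++) seq += per_block[i];
    for (i = n / 2; i < n; i++) rnd += per_block[i];
    seq /= n / 2;
    rnd /= n - n / 2;

    printf("sequential read: %.6f sec/block\n", seq);
    printf("random read:     %.6f sec/block\n", rnd);
    printf("random_page_cost = %.2f\n", rnd / seq);
    return 0;
}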

We could then apply standard code-profiling techniques to representative test
routines for each of the cpu_xxx costs and compare the profile of those
routines against the measured time of a sequential scan.

The real trick is the code profiling "test routines." What kind of processing
do the cpu_xxx settings represent?


From: Justin Clift <justin(at)postgresql(dot)org>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-17 15:26:48
Message-ID: 3BF681B8.21DCD91A@postgresql.org

mlw wrote:
>
> Tom Lane wrote:
> >
> > Justin Clift <justin(at)postgresql(dot)org> writes:
> > > I think it's an interesting thought of having a program which will test
> > > a system and work out the Accurate and Correct values for this.
> >
> > I think if you start out with the notion that there is an Accurate
> > and Correct value for these parameters, you've already lost the game.

I believe we can evolve and refine the model so it becomes more
accurate. We at least have to be willing to try; otherwise I think THAT
is where we lose the game. :)

<snip>
> In my example, two computers with exactly the same hardware, except one has a
> 5400 RPM IDE drive and the other has a 10,000 RPM IDE drive. These machines should
> not use the same settings; it is obvious that a sequential scan block read on
> one will be faster than the other.

If we're going to do this bit properly, then we'll have to take into
consideration that many database objects will need their own individual
statistics. For example, let's say we have a database with a bunch of
10k RPM SCSI drives that the tables are on, and the system also has one
or more 15k RPM SCSI drives (let's say Seagate Cheetah II drives) on
which the indices have been placed. With the 10k RPM drives, the tables
needing the fastest throughput or having the highest usage are put on
the outer edges of the disk media, and the rest of the tables are placed
in the available space.

On this theoretical system, we will be better off measuring the
performance of each table and index in turn and then generating and storing
costs for each one which are as "accurate as possible at this point in
time". A model like this would probably have these costs re-calculated
each time the ANALYZE command is run to ensure their accuracy through
database growth and changes.

I think this would be decently accurate, and RAID systems would be
accurately analysed. Don't know how to take into account large cache
sizes though. :)

Regards and best wishes,

Justin Clift

--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi


From: Hannu Krosing <hannu(at)sid(dot)tm(dot)ee>
To: Justin Clift <justin(at)postgresql(dot)org>
Cc: mlw <markw(at)mohawksoft(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, matthew(at)zeut(dot)net, Alex Pilosov <alex(at)pilosoft(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Super Optimizing Postgres
Date: 2001-11-18 06:39:11
Message-ID: 3BF7578F.7000108@sid.tm.ee

Justin Clift wrote:

>>In my example, two computers with exactly the same hardware, except one has a
>>5400 RPM IDE drive and the other has a 10,000 RPM IDE drive. These machines should
>>not use the same settings; it is obvious that a sequential scan block read on
>>one will be faster than the other.
>>
>
>If we're going to do this bit properly, then we'll have to take into
>consideration that many database objects will need their own individual
>statistics. For example, let's say we have a database with a bunch of
>10k RPM SCSI drives that the tables are on, and the system also has one
>or more 15k RPM SCSI drives (let's say Seagate Cheetah II drives) on
>which the indices have been placed. With the 10k RPM drives, the tables
>needing the fastest throughput or having the highest usage are put on
>the outer edges of the disk media, and the rest of the tables are placed
>in the available space.
>
>On this theoretical system, we will be better off measuring the
>performance of each table and index in turn and then generating and storing
>costs for each one which are as "accurate as possible at this point in
>time".
>
That would mean that these statistic values must be stored in pg_class and not
be SET variables at all. This will probably have the added benefit that some
caching effects of small/big tables will be accounted for automatically, so you
don't have to do that in the optimizer.

> A model like this would probably have these costs re-calculated
>each time the ANALYZE command is run to ensure their accuracy through
>database growth and changes.
>
Then the ANALYZE should be run on both tables and indexes. AFAIK we currently
analyze only real data.

>
>I think this would be decently accurate, and RAID systems would be
>accurately analysed. Don't know how to take into account large cache
>sizes though. :)
>
Maybe some volatile statistics on how much of a table "may be" cached in the
disk/fs caches, assuming that we currently know how much of each is in shared
memory.
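
Purely as a hypothetical sketch (none of these fields exist in pg_class today),
it could be something like extra per-relation columns that ANALYZE fills in:

/* Hypothetical only: per-relation cost statistics kept next to the existing
 * relpages/reltuples columns in pg_class.
 */
typedef float float4;               /* stand-in for the backend's float4 */

typedef struct PerRelationCosts
{
    float4      relseqcost;         /* measured sequential page read cost */
    float4      relrandcost;        /* measured random page read cost,
                                     * relative to relseqcost */
    float4      relcachedfrac;      /* volatile guess at the fraction of the
                                     * relation already in shared/kernel cache */
} PerRelationCosts;

ANALYZE (run on indexes too) would refresh these, and the planner would read
them instead of the global SET values.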

-----------
Hannu