From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Load Distributed Checkpoints test results
Date: 2007-06-13 11:09:02
Message-ID: 466FD04E.4050505@enterprisedb.com
Lists: pgsql-hackers

Here are results from a batch of test runs with LDC. This patch only
spreads out the writes; fsyncs work as before. This patch also includes
the optimization that we don't write buffers that were dirtied after
starting the checkpoint.

http://community.enterprisedb.com/ldc/

See tests 276-280. 280 is the baseline with no patch attached, the
others are with load distributed checkpoints with different values for
checkpoint_write_percent. But after running the tests I noticed that the
spreading was actually controlled by checkpoint_write_rate, which sets
the minimum rate for the writes, so all those tests with the patch
applied are effectively the same; the writes were spread over a period
of 1 minute. I'll fix that setting and run more tests.
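
To make the pacing mechanics concrete, here is a rough sketch of that kind of
write loop (illustrative Python, not the actual patch code; the GUC names
match the ones above, but the units and numbers are made up). It shows how a
minimum write rate can end up dominating the nominal spreading:

    import time

    def spread_checkpoint_writes(dirty_buffers, checkpoint_timeout=900,
                                 checkpoint_write_percent=0.5,
                                 min_buffers_per_round=100,  # the write-rate floor
                                 bgwriter_delay=0.2):        # seconds between rounds
        """Write dirty_buffers over checkpoint_timeout * checkpoint_write_percent
        seconds, but never slower than the per-round floor."""
        target_duration = checkpoint_timeout * checkpoint_write_percent
        total = len(dirty_buffers)
        written = 0
        start = time.monotonic()
        while written < total:
            elapsed = time.monotonic() - start
            # How many buffers should already be on disk to stay on schedule.
            on_schedule = total * min(elapsed / target_duration, 1.0)
            # The floor wins whenever it exceeds the paced amount, which is why
            # tests 276-279 all spread their writes over ~1 minute regardless of
            # checkpoint_write_percent.
            goal = max(on_schedule, written + min_buffers_per_round)
            while written < min(goal, total):
                flush_buffer(dirty_buffers[written])  # stand-in for the real write
                written += 1
            time.sleep(bgwriter_delay)

    def flush_buffer(buf):
        pass  # placeholder for writing one buffer out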

The response time graphs show that the patch reduces the max (new-order)
response times during checkpoints from ~40-60 s to ~15-20 s. The change
in the minute-by-minute average is even more significant.

The change in overall average response times is also very significant.
1.5s without patch, and ~0.3-0.4s with the patch for new-order
transactions. That also means that we pass the TPC-C requirement that
90th percentile of response times must be < average.

All that said, there are still significant checkpoint spikes present, even
though they're much less severe than without the patch. I'm willing to
settle for this in 8.3. Does anyone want to push for more testing and for
looking into spreading the fsyncs as well, and/or adding a delay between
the writes and the fsyncs?

Attached is the patch used in the tests. It still needs some love..

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

Attachment Content-Type Size
ldc-justwrites-1.patch text/x-diff 27.6 KB

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-13 14:28:29
Message-ID: 873b0wylma.fsf@oxford.xeocode.com
Lists: pgsql-hackers


"Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:

> The response time graphs show that the patch reduces the max (new-order)
> response times during checkpoints from ~40-60 s to ~15-20 s.

I think that's the headline number here. The worst-case response time is
reduced from about 60s to about 17s. That's pretty impressive on its own. It
would be worth knowing if that benefit goes away if we push the machine again
to the edge of its i/o bandwidth.

> The change in overall average response times is also very significant. 1.5s
> without patch, and ~0.3-0.4s with the patch for new-order transactions. That
> also means that we pass the TPC-C requirement that 90th percentile of response
> times must be < average.

Incidentally, this is backwards: the 90th percentile response time must be
greater than the average response time for that transaction.

This isn't actually a very stringent test given that most of the data points
in the 90th percentile are actually substantially below the maximum. It's
quite possible to achieve it even with maximum response times above 60s.

However TPC-E has even more stringent requirements:

During Steady State the throughput of the SUT must be sustainable for the
remainder of a Business Day started at the beginning of the Steady State.

Some aspects of the benchmark implementation can result in rather
insignificant but frequent variations in throughput when computed over
somewhat shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 2% of the Reported Throughput.

Comment 1: This requirement is met when the throughput computed over any
period of one hour, sliding over the Steady State by increments of ten
minutes, varies from the Reported Throughput by no more than 2%.

Some aspects of the benchmark implementation can result in rather
significant but sporadic variations in throughput when computed over some
much shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 20% of the Reported Throughput.

Comment 2: This requirement is met when the throughput level computed over
any period of ten minutes, sliding over the Steady State by increments of
one minute, varies from the Reported Throughput by no more than 20%.
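
For what it's worth, those two comments boil down to a sliding-window check
over the per-minute throughput numbers. A rough illustration (Python, not
taken from any TPC kit; assumes tpm holds the transactions completed in each
minute of Steady State and that reported_throughput is in the same per-minute
units):

    def meets_tpce_sustainability(tpm, reported_throughput):
        """tpm: transactions completed in each one-minute slice of Steady State."""
        def windows_ok(window, step, tolerance):
            for start in range(0, len(tpm) - window + 1, step):
                rate = sum(tpm[start:start + window]) / window
                if abs(rate - reported_throughput) > tolerance * reported_throughput:
                    return False
            return True
        # Comment 1: one-hour windows, sliding by ten minutes, within 2%.
        # Comment 2: ten-minute windows, sliding by one minute, within 20%.
        return windows_ok(60, 10, 0.02) and windows_ok(10, 1, 0.20)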

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-13 18:38:04
Message-ID: 200706131138.04097.josh@agliodbs.com
Lists: pgsql-hackers

Greg,

> However TPC-E has even more stringent requirements:

I'll see if I can get our TPCE people to test this, but I'd say that the
existing patch is already good enough to be worth accepting based on the TPCC
results.

However, I would like to see some community testing on oddball workloads (like
huge ELT operations and read-only workloads) to see if the patch imposes any
extra overhead on non-OLTP databases.

--
Josh Berkus
PostgreSQL @ Sun
San Francisco


From: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>
To: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-14 07:20:04
Message-ID: 20070614142205.6A5F.ITAGAKI.TAKAHIRO@oss.ntt.co.jp
Lists: pgsql-hackers


Heikki Linnakangas <heikki(at)enterprisedb(dot)com> wrote:

> Here are results from a batch of test runs with LDC. This patch only
> spreads out the writes; fsyncs work as before.

I saw similar results in my tests. Spreading only the writes is enough
for OLTP, at least on Linux with a mid- to high-grade storage system.
It also works well on a desktop-grade Windows machine.

However, I don't know how it works on other OSes, including Solaris
and FreeBSD, that have different I/O policies. Would anyone test it
in those environments?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 07:19:40
Message-ID: 46723D8C.2090009@enterprisedb.com
Lists: pgsql-hackers

Heikki Linnakangas wrote:
> Here are results from a batch of test runs with LDC. This patch only
> spreads out the writes; fsyncs work as before. This patch also includes
> the optimization that we don't write buffers that were dirtied after
> starting the checkpoint.
>
> http://community.enterprisedb.com/ldc/
>
> See tests 276-280. 280 is the baseline with no patch attached, the
> others are with load distributed checkpoints with different values for
> checkpoint_write_percent. But after running the tests I noticed that the
> spreading was actually controlled by checkpoint_write_rate, which sets
> the minimum rate for the writes, so all those tests with the patch
> applied are effectively the same; the writes were spread over a period
> of 1 minute. I'll fix that setting and run more tests.

I ran another series of tests, with a less aggressive bgwriter_delay
setting, which also affects the minimum rate of the writes in the WIP
patch I used.

Now that the checkpoints are spread out more, the response times are
very smooth.

With the 40% checkpoint_write_percent setting, the checkpoints last ~3
minutes. About 85% of the buffer cache is dirty at the beginning of
checkpoints, and thanks to the optimization of not writing pages dirtied
after checkpoint start, only ~47% of those are actually written by the
checkpoint. That explains why the checkpoints only last ~3 minutes, and
not checkpoint_timeout*checkpoint_write_percent, which would be 6
minutes. The estimation of how much progress has been made and how much
is left doesn't take the gain from that optimization into account.
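
Spelling that out as back-of-the-envelope arithmetic (the 15-minute
checkpoint_timeout is implied by the 6-minute figure above; the percentages
are the measured ones, and the exact pacing in the patch differs in detail):

    checkpoint_timeout = 15 * 60            # seconds
    checkpoint_write_percent = 0.40

    write_budget = checkpoint_timeout * checkpoint_write_percent  # 360 s = 6 min

    dirty_at_start = 0.85    # fraction of the buffer cache dirty at checkpoint start
    actually_written = 0.47  # fraction of those the checkpoint itself ends up writing

    # The pacing assumes every initially-dirty buffer still has to be written,
    # so the checkpoint runs out of work after roughly:
    actual_duration = write_budget * actually_written              # ~170 s, i.e. ~3 min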

The sync phase only takes ~5 seconds. I'm very happy with these results.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 12:26:26
Message-ID: 87ps3xcsjx.fsf@oxford.xeocode.com
Lists: pgsql-hackers

"Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:

> I ran another series of tests, with a less aggressive bgwriter_delay setting,
> which also affects the minimum rate of the writes in the WIP patch I used.
>
> Now that the checkpoints are spread out more, the response times are very
> smooth.

So obviously the reason the results are so dramatic is that the checkpoints
used to push the i/o bandwidth demand up over 100%. By spreading it out you
can see in the io charts that even during the checkpoint the i/o busy rate
stays just under 100% except for a few data points.

If I understand it right Greg Smith's concern is that in a busier system where
even *with* the load distributed checkpoint the i/o bandwidth demand during
the checkpoint was *still* being pushed over 100% then spreading out the load
would only exacerbate the problem by extending the outage.

To that end it seems like what would be useful is a pair of tests with and
without the patch with about 10% larger warehouse size (~ 115) which would
push the i/o bandwidth demand up to about that level.

It might even make sense to run a test with an outright overloaded system to see if
the patch doesn't exacerbate the condition. Something with a warehouse size of
maybe 150. I would expect it to fail the TPCC constraints either way but what
would be interesting to know is whether it fails by a larger margin with the
LDC behaviour or a smaller margin.

Even just the fact that we're passing at 105 warehouses -- and apparently with
quite a bit of headroom too -- whereas previously we were failing at that
level on this hardware is a positive result as far as the TPC-C benchmark
methodology goes.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 17:54:59
Message-ID: Pine.GSO.4.64.0706151317040.5871@westnet.com
Lists: pgsql-hackers

On Fri, 15 Jun 2007, Gregory Stark wrote:

> If I understand it right Greg Smith's concern is that in a busier system
> where even *with* the load distributed checkpoint the i/o bandwidth
> demand during the checkpoint was *still* being pushed over 100% then
> spreading out the load would only exacerbate the problem by extending
> the outage.

Thank you for that very concise summary; that's exactly what I've run
into. DBT2 creates a heavy write load, but it's not testing real burst
behavior where something is writing as fast as it possibly can.

I've been involved in applications that are more like a data logging
situation, where periodically you get some data source tossing
transactions in as fast as they will hit disk--during these periods the
upstream source temporarily generates data faster than the database
itself can absorb. Under normal conditions, the LDC smoothing would be a
win, as it would lower the number of times the entire flow of operations
got stuck. But at these peaks it will, as you say, extend the outage.

> It might even make sense to run a test with an outright overloaded system to
> see if the patch doesn't exacerbate the condition.

Exactly. I expect that it will make things worse, but I'd like to keep an
eye on making sure the knobs are available so that it's only slightly
worse.

I think it's important to at least recognize that someone who wants LDC
normally might occasionally have a period where they're completely
overloaded, and to make sure that this new feature doesn't break down
unexpectedly when that happens. I'm still struggling with creating a
simple test case to demonstrate what I'm concerned about. I'm not
familiar enough with the
TPC testing to say whether your suggestions for adjusting warehouse size
would accomplish that (because the flow is so different I had to abandon
working with that a while ago as not being representative of what I was
doing), but I'm glad you're thinking about it.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 19:14:18
Message-ID: 4672E50A.3090605@enterprisedb.com
Lists: pgsql-hackers

Gregory Stark wrote:
> "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:
>> Now that the checkpoints are spread out more, the response times are very
>> smooth.
>
> So obviously the reason the results are so dramatic is that the checkpoints
> used to push the i/o bandwidth demand up over 100%. By spreading it out you
> can see in the io charts that even during the checkpoint the i/o busy rate
> stays just under 100% except for a few data points.
>
> If I understand it right Greg Smith's concern is that in a busier system where
> even *with* the load distributed checkpoint the i/o bandwidth demand during
> the checkpoint was *still* being pushed over 100% then spreading out the load
> would only exacerbate the problem by extending the outage.
>
> To that end it seems like what would be useful is a pair of tests with and
> without the patch with about 10% larger warehouse size (~ 115) which would
> push the i/o bandwidth demand up to about that level.

I still don't see how spreading the writes could make things worse, but
running more tests is easy. I'll schedule tests with more warehouses
over the weekend.

> It might even make sense to run a test with an outright overloaded system to see if
> the patch doesn't exacerbate the condition. Something with a warehouse size of
> maybe 150. I would expect it to fail the TPCC constraints either way but what
> would be interesting to know is whether it fails by a larger margin with the
> LDC behaviour or a smaller margin.

I'll do that as well, though experiences with tests like that in the
past have been that it's hard to get repeatable results that way.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 20:13:01
Message-ID: 87myz1ou2a.fsf@oxford.xeocode.com
Lists: pgsql-hackers

"Greg Smith" <gsmith(at)gregsmith(dot)com> writes:

> On Fri, 15 Jun 2007, Gregory Stark wrote:
>
>> If I understand it right Greg Smith's concern is that in a busier system
>> where even *with* the load distributed checkpoint the i/o bandwidth demand
>> during t he checkpoint was *still* being pushed over 100% then spreading out
>> the load would only exacerbate the problem by extending the outage.
>
> Thank you for that very concise summary; that's exactly what I've run into.
> DBT2 creates a heavy write load, but it's not testing real burst behavior
> where something is writing as fast as it possibly can.

Ah, thanks, that's precisely the distinction that I was missing. It's funny,
something that was so counter-intuitive initially has become so ingrained in
my thinking that I didn't even notice I was assuming it any more.

DBT2 has "think times" which it uses to limit the flow of transactions. This
is critical to ensuring that you're forced to increase the scale of the
database if you want to report larger transaction rates which of course is
what everyone wants to brag about.

Essentially this is what makes it an OLTP benchmark. You're measuring how well
you can keep up with a flow of transactions which arrive at a fixed speed
independent of the database.
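
To illustrate the distinction with a toy sketch (Python, unrelated to the
actual DBT2 or BenchmarkSQL drivers): an OLTP-style driver paces itself with
think times and cares about response time, while a throughput-style driver
just loops as fast as the database lets it.

    import random, time

    def run_transaction():
        pass  # stand-in for submitting one transaction to the database

    def oltp_driver(duration, mean_think_time=1.0):
        """Think times fix the arrival rate independently of the database, so a
        checkpoint stall shows up as long response times, not lower throughput."""
        response_times = []
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            start = time.monotonic()
            run_transaction()
            response_times.append(time.monotonic() - start)
            time.sleep(random.expovariate(1.0 / mean_think_time))  # think time
        return response_times

    def throughput_driver(duration):
        """No think times: push as hard as the database allows; total
        throughput is the only figure of merit."""
        count = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            run_transaction()
            count += 1
        return count / duration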

But what you're concerned about is not OLTP performance at all. It's a kind of
DSS system -- perhaps there's another TLA that's more precise. But the point
is you're concerned with total throughput and not response time. You don't
have a fixed rate imposed by outside circumstances with which you have to keep
up all the time. You just want to have the highest throughput overall.

The good news is that this should be pretty easy to test though. The main
competitor for DBT2 is BenchmarkSQL whose main deficiency is precisely the
lack of support for the think times. We can run BenchmarkSQL runs to see if
the patch impacts performance when it's set to run as fast as possible with no
think times.

While in theory spreading out the writes could have a detrimental effect I
think we should wait until we see actual numbers. I have a pretty strong
suspicion that the effect would be pretty minimal. We're still doing the same
amount of i/o total, just with a slightly less chance for the elevator
algorithm to optimize the pattern.

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com


From: "Gregory Maxwell" <gmaxwell(at)gmail(dot)com>
To: "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "Greg Smith" <gsmith(at)gregsmith(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 20:28:34
Message-ID: e692861c0706151328g1b7a097j4817042760e9dcc0@mail.gmail.com
Lists: pgsql-hackers

On 6/15/07, Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> While in theory spreading out the writes could have a detrimental effect I
> think we should wait until we see actual numbers. I have a pretty strong
> suspicion that the effect would be pretty minimal. We're still doing the same
> amount of i/o total, just with a slightly less chance for the elevator
> algorithm to optimize the pattern.

..and the sort patching suggests that the OS's elevator isn't doing a
great job for large flushes in any case. I wouldn't be shocked to see
load distributed checkpoints cause an unconditional improvement since
they may do better at avoiding the huge burst behavior that is
overrunning the OS elevator in any case.


From: PFC <lists(at)peufeu(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-15 20:42:33
Message-ID: op.ttzc47iicigqcu@apollo13
Lists: pgsql-hackers

On Fri, 15 Jun 2007 22:28:34 +0200, Gregory Maxwell <gmaxwell(at)gmail(dot)com>
wrote:

> On 6/15/07, Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
>> While in theory spreading out the writes could have a detrimental
>> effect I
>> think we should wait until we see actual numbers. I have a pretty strong
>> suspicion that the effect would be pretty minimal. We're still doing
>> the same
>> amount of i/o total, just with a slightly less chance for the elevator
>> algorithm to optimize the pattern.
>
> ..and the sort patching suggests that the OS's elevator isn't doing a
> great job for large flushes in any case. I wouldn't be shocked to see
> load distributed checkpoints cause an unconditional improvement since
> they may do better at avoiding the huge burst behavior that is
> overrunning the OS elevator in any case.

...also consider that if someone uses RAID5, sorting the writes may
produce more full-stripe writes, which avoid the read-modify-write cycle
that kills RAID5 performance...
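
For reference, the sorted-writes idea under discussion amounts to something
like this (a sketch, not Itagaki's actual patch): order the dirty pages by
file and block before flushing them, so the kernel sees long sequential runs
and a RAID5 array sees more complete stripes.

    def sorted_checkpoint_writes(dirty_pages, flush):
        """dirty_pages: iterable of (relfilenode, blocknum) pairs for the pages
        a checkpoint must write; flush: callback that writes one page.

        Writing in (file, block) order turns a random scatter of 8 kB writes
        into sequential runs the OS elevator can merge, and gives RAID5 a
        better shot at full-stripe writes with no read-modify-write."""
        for relfilenode, blocknum in sorted(dirty_pages):
            flush(relfilenode, blocknum)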


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-16 19:17:03
Message-ID: 200706161217.04045.josh@agliodbs.com
Lists: pgsql-hackers

All,

Where is the most current version of this patch? I want to test it on TPCE,
but there seem to be 4-5 different versions floating around, and the patch
tracker hasn't been updated.

--
Josh Berkus
PostgreSQL @ Sun
San Francisco


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-17 05:36:33
Message-ID: Pine.GSO.4.64.0706160129370.10398@westnet.com
Lists: pgsql-hackers

On Fri, 15 Jun 2007, Gregory Stark wrote:

> But what you're concerned about is not OLTP performance at all.

It's an OLTP system most of the time that periodically gets unexpectedly
high volume. The TPC-E OLTP test suite actually has a MarketFeed
component in it that has similar properties to what I was fighting with.
In a real-world Market Feed, you spec the system to survive a very
high-volume day of trades. But every now and then there's some event that
causes volumes to spike way outside anything you could ever plan for, and
much data ends up getting lost as a result of systems not being able to
keep up. A look at the 1987 "Black Monday" crash is informative here:
http://en.wikipedia.org/wiki/Black_Monday_(1987)

> But the point is you're concerned with total throughput and not response
> time. You don't have a fixed rate imposed by outside circumstances with
> which you have to keep up all the time. You just want to be have the
> highest throughput overall.

Actually, I think I care about response time more than you do. In a
typical data logging situation, there is some normal rate at which you
expect transactions to arrive. There's usually something memory-based
upstream that can buffer a small amount of delay, so an occasional short
checkpoint blip can be tolerated. But if there's ever a really extended
one, you actually start losing data when the buffers overflow.

The last project I was working on, any checkpoint that caused a
transaction to slip for more than 5 seconds would cause a data loss. One
of the defenses against that happening is that you have a wicked fast
transaction rate to clear the buffer out when things are going well, but by
no means is that rate the important thing--never having the response time
halt for so long that transactions get lost is.
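
A toy model of that data-logging scenario, with made-up numbers (illustrative
Python): transactions arrive at a fixed rate into a bounded upstream buffer,
the database normally drains it with plenty of headroom, and any stall longer
than the buffer can absorb loses data.

    def lost_transactions(seconds, arrival_rate=2000, buffer_slots=8000,
                          db_rate=5000, stall=(300, 315)):
        """Returns how many transactions overflow the upstream buffer when the
        database processes nothing during the stall window (e.g. a checkpoint
        spike)."""
        buffered = 0
        lost = 0
        for t in range(seconds):
            buffered += arrival_rate                   # fixed incoming flow
            if buffered > buffer_slots:
                lost += buffered - buffer_slots        # upstream buffer overflows
                buffered = buffer_slots
            if not (stall[0] <= t < stall[1]):
                buffered = max(0, buffered - db_rate)  # DB keeps up easily otherwise
        return lost

    # With these numbers the buffer absorbs about a 4-second stall; the
    # 15-second stall above loses roughly 22,000 transactions.
    print(lost_transactions(600))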

> The good news is that this should be pretty easy to test though. The
> main competitor for DBT2 is BenchmarkSQL whose main deficiency is
> precisely the lack of support for the think times.

Maybe you can get something useful out of that one. I found that the
JDBC layer in the middle lowered overall throughput and distanced me
from what was happening enough that it blurred what was going on.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-17 06:43:43
Message-ID: 4674D81F.3080009@enterprisedb.com
Lists: pgsql-hackers

Josh Berkus wrote:
> Where is the most current version of this patch? I want to test it on TPCE,
> but there seem to be 4-5 different versions floating around, and the patch
> tracker hasn't been updated.

It would be the ldc-justwrites-2.patch:
http://archives.postgresql.org/pgsql-patches/2007-06/msg00149.php

Thanks in advance for the testing!

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-18 08:42:25
Message-ID: 1182156146.6855.88.camel@silverbirch.site
Lists: pgsql-hackers

On Sun, 2007-06-17 at 01:36 -0400, Greg Smith wrote:

> The last project I was working on, any checkpoint that caused a
> transaction to slip for more than 5 seconds would cause a data loss. One
> of the defenses against that happening is that you have a wicked fast
> transaction rate to clear the buffer out when things are going well, but by
> no means is that rate the important thing--never having the response time
> halt for so long that transactions get lost is.

You would want longer checkpoints in that case.

You're saying you don't want long checkpoints because they cause an
effective outage. The current situation is that checkpoints are so
severe that they cause an effective halt to processing, even though
processing is in principle allowed to continue during them. Checkpoints
don't hold any locks that prevent normal work from occurring, but they
did cause an unthrottled burst of work that raised expected service
times dramatically on an already busy server.

There were a number of effects contributing to the high impact of
checkpointing. Heikki's recent changes reduce the impact of checkpoints
so that they do *not* halt other processing. Longer checkpoints do *not*
mean longer halts in processing; they actually reduce the halt in
processing. Smoother checkpoints mean smaller resource queues when a
burst coincides with a checkpoint, so anybody with throughput-maximised
or bursty apps should want longer, smooth checkpoints.

You're right to ask for a minimum write rate, since this allows very
small checkpoints to complete in reduced times. There's no gain from
having long checkpoints per se, just from the reduction in peak write rate
they typically cause.

--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-18 13:38:47
Message-ID: Pine.GSO.4.64.0706180843360.4392@westnet.com
Lists: pgsql-hackers

On Mon, 18 Jun 2007, Simon Riggs wrote:

> Smoother checkpoints mean smaller resource queues when a burst coincides
> with a checkpoint, so anybody with throughput-maximised or bursty apps
> should want longer, smooth checkpoints.

True as long as two conditions hold:

1) Buffers needed to fill allocation requests are still being written fast
enough. The buffer allocation code starts burning a lot of CPU+lock
resources when many clients are all searching the pool looking for
buffers and there aren't many clean ones to be found. The way the current
checkpoint code starts at the LRU point and writes everything dirty, in
the order new buffers will be allocated, as fast as possible means it's
doing the optimal procedure to keep this from happening. It's being
presumed that making the LRU writer active will mitigate this issue; my
experience suggests that may not be as effective as hoped--unless it gets
changed so that it's allowed to decrement usage_count (there's a sketch
of that allocation loop below, after point 2).

To pick one example of a direction I'm a little concerned about related to
this, Itagaki's sorted writes results look very interesting. But as his
test system is such that the actual pgbench TPS numbers are 1/10 of the
ones I was seeing when I started having ugly buffer allocation issues, I'm
real sure the particular test he's running isn't sensitive to issues in
this area at all; there's just not enough buffer cache churn if you're
only doing a couple of hundred TPS for this to happen.

2) The checkpoint still finishes in time.

The thing you can't forget about when dealing with an overloaded system is
that there's no such thing as lowering the load of the checkpoint such
that it doesn't have a bad impact. Assume new transactions are being
generated by an upstream source such that the database itself is the
bottleneck, and you're always filling 100% of I/O capacity. All I'm
trying to get everyone to consider is that if you have a large pool of
dirty buffers to deal with in this situation, it's possible (albeit
difficult) to get into a state where, if the checkpoint doesn't write
out the dirty buffers fast enough, the client backends will evacuate them
instead in a way that makes the whole process less efficient than the
current behavior.
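
Here is the allocation-loop sketch referred to in point 1 (a simplified model,
not the real StrategyGetBuffer code): each backend that needs a buffer sweeps
the pool with a clock hand, decrementing usage_count as it goes and only
evicting a buffer once the count reaches zero; if the victim is dirty, the
backend has to write it out itself.

    def clock_sweep(buffers, state):
        """buffers: list of objects with pinned, usage_count and dirty attributes;
        state.hand: the shared clock-hand index. Returns a reusable buffer."""
        while True:
            buf = buffers[state.hand]
            state.hand = (state.hand + 1) % len(buffers)
            if buf.pinned:
                continue
            if buf.usage_count > 0:
                buf.usage_count -= 1   # only this sweep decrements it; the LRU
                continue               # bgwriter currently doesn't (point 1 above)
            if buf.dirty:
                flush_page(buf)        # the allocating backend pays for the write
            return buf

    def flush_page(buf):
        buf.dirty = False              # stand-in for writing the page out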

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Cc: Gregory Stark <stark(at)enterprisedb(dot)com>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 17:58:14
Message-ID: 46796AB6.8060009@enterprisedb.com
Lists: pgsql-hackers

I've uploaded the latest test results to the results page at
http://community.enterprisedb.com/ldc/

The test results on the index page are not in a completely logical
order, sorry about that.

I ran a series of tests with 115 warehouses, and no surprises there. LDC
smooths the checkpoints nicely.

Another series with 150 warehouses is more interesting. At that # of
warehouses, the data disks are 100% busy according to iostat. The 90th
percentile response times are somewhat higher with LDC, though the
variability in both the baseline and LDC test runs seems to be pretty
high. Looking at the response time graphs, even with LDC there are clear
checkpoint spikes, but they're much less severe than without.

Another series was with 90 warehouses, but without think times, driving
the system to full load. LDC seems to smooth the checkpoints very nicely
in these tests.

Heikki Linnakangas wrote:
> Gregory Stark wrote:
>> "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:
>>> Now that the checkpoints are spread out more, the response times are
>>> very
>>> smooth.
>>
>> So obviously the reason the results are so dramatic is that the
>> checkpoints
>> used to push the i/o bandwidth demand up over 100%. By spreading it
>> out you
>> can see in the io charts that even during the checkpoint the i/o busy
>> rate
>> stays just under 100% except for a few data points.
>>
>> If I understand it right Greg Smith's concern is that in a busier
>> system where
>> even *with* the load distributed checkpoint the i/o bandwidth demand
>> during the
>> checkpoint was *still* being pushed over 100% then spreading out
>> the load
>> would only exacerbate the problem by extending the outage.
>>
>> To that end it seems like what would be useful is a pair of tests with
>> and
>> without the patch with about 10% larger warehouse size (~ 115) which
>> would
>> push the i/o bandwidth demand up to about that level.
>
> I still don't see how spreading the writes could make things worse, but
> running more tests is easy. I'll schedule tests with more warehouses
> over the weekend.
>
>> It might even make sense to run a test with an outright overloaded system to
>> see if
>> the patch doesn't exacerbate the condition. Something with a warehouse
>> size of
>> maybe 150. I would expect it to fail the TPCC constraints either way
>> but what
>> would be interesting to know is whether it fails by a larger margin
>> with the
>> LDC behaviour or a smaller margin.
>
> I'll do that as well, though experiences with tests like that in the
> past have been that it's hard to get repeatable results that way.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 20:07:02
Message-ID: Pine.GSO.4.64.0706201512070.2198@westnet.com
Lists: pgsql-hackers

On Wed, 20 Jun 2007, Heikki Linnakangas wrote:

> Another series with 150 warehouses is more interesting. At that # of
> warehouses, the data disks are 100% busy according to iostat. The 90th
> percentile response times are somewhat higher with LDC, though the
> variability in both the baseline and LDC test runs seems to be pretty high.

Great, this is exactly the behavior I had observed and wanted someone
else to independently run into. When you're in 100% disk busy land, LDC
can shift the distribution of bad transactions around in a way that some
people may not be happy with, and that might represent a step backward
from the current code for them. I hope you can understand now why I've
been so vocal that it must be possible to pull this new behavior out so
the current form of checkpointing is still available.

While it shows up in the 90% figure, what happens is most obvious in the
response time distribution graphs. Someone who is currently getting a run
like #295 right now: http://community.enterprisedb.com/ldc/295/rt.html

Might be really unhappy if they turn on LDC expecting to smooth out
checkpoints and get the shift of #296 instead:
http://community.enterprisedb.com/ldc/296/rt.html

That is of course cherry-picking the most extreme examples. But it
illustrates my concern about the possibility for LDC making things worse
on a really overloaded system, which is kind of counter-intuitive because
you might expect that would be the best case for its improvements.

When I summarize the percentile behavior from your results with 150
warehouses in a table like this:

Test   checkpoint_write_percent   90th-percentile response time
295    none (baseline)            3.703
297    none (baseline)            4.432
292    10                         3.432
298    20                         5.925
296    30                         5.992
294    40                         4.132

I think it does a better job of showing how LDC can shift the top
percentile around under heavy load, even though there are runs where it's
a clear improvement. Since there is so much variability in results when
you get into this territory, you really need to run a lot of these tests
to get a feel for the spread of behavior. I spent about a week of
continuously running tests stalking this bugger before I felt I'd mapped
out the boundaries with my app. You've got your own priorities, but I'd
suggest you try to find enough time for a more exhaustive look at this
area before nailing down the final form for the patch.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 20:44:14
Message-ID: 200706202044.l5KKiER13680@momjian.us
Lists: pgsql-hackers

Greg Smith wrote:
> I think it does a better job of showing how LDC can shift the top
> percentile around under heavy load, even though there are runs where it's
> a clear improvement. Since there is so much variability in results when
> you get into this territory, you really need to run a lot of these tests
> to get a feel for the spread of behavior. I spent about a week of
> continuously running tests stalking this bugger before I felt I'd mapped
> out the boundaries with my app. You've got your own priorities, but I'd
> suggest you try to find enough time for a more exhaustive look at this
> area before nailing down the final form for the patch.

OK, I have hit my limit on people asking for more testing. I am not
against testing, but I don't want to get into a situation where we just
keep asking for more tests and not move forward. I am going to rely on
the patch submitters to suggest when enough testing has been done and
move on.

I don't expect this patch to be perfect when it is applied. I do expect
it to be a best effort, and it will get continual real-world testing during
beta and we can continue to improve this. Right now, we know we have a
serious issue with checkpoint I/O, and this patch is going to improve
that in most cases. I don't want to see us reject it or greatly delay
beta as we try to make it perfect.

My main point is that we should keep trying to make the patch better, but
the patch doesn't have to be perfect to get applied. I don't want us to
get into a death-by-testing spiral.

--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 20:55:34
Message-ID: 46799446.60301@enterprisedb.com
Lists: pgsql-hackers

Greg Smith wrote:
> While it shows up in the 90% figure, what happens is most obvious in the
> response time distribution graphs. Someone who is currently getting a
> run like #295 right now: http://community.enterprisedb.com/ldc/295/rt.html
>
> Might be really unhappy if they turn on LDC expecting to smooth out
> checkpoints and get the shift of #296 instead:
> http://community.enterprisedb.com/ldc/296/rt.html

You mean the shift and "flattening" of the graph to the right in the
delivery response time distribution graph? Looking at the other runs,
that graph looks sufficiently different between the two baseline runs
and the patched runs that I really wouldn't draw any conclusion from that.

In any case you *can* disable LDC if you want to.

> That is of course cherry-picking the most extreme examples. But it
> illustrates my concern about the possibility for LDC making things worse
> on a really overloaded system, which is kind of counter-intuitive
> because you might expect that would be the best case for its improvements.

Well, it is indeed cherry-picking, so I still don't see how LDC could
make things worse on a really overloaded system. I grant you there might
indeed be such a case, but I'd like to understand the underlying
mechanism, or at least see one.

> Since there is so much variability in results
> when you get into this territory, you really need to run a lot of these
> tests to get a feel for the spread of behavior.

I think that's the real lesson from this. In any case, at least LDC
doesn't seem to hurt much in any of the test configurations tested so
far, and smooths the checkpoints a lot in most configurations.

> I spent about a week of
> continuously running tests stalking this bugger before I felt I'd mapped
> out the boundaries with my app. You've got your own priorities, but I'd
> suggest you try to find enough time for a more exhaustive look at this
> area before nailing down the final form for the patch.

I don't have any good simple ideas on how to make it better in the 8.3
timeframe, so I don't think there's much to learn from repeating these
tests.

That said, running tests is easy and doesn't take much effort. If you
have suggestions for configurations or workloads to test, I'll be happy
to do that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 21:00:40
Message-ID: 46799578.1070207@commandprompt.com
Lists: pgsql-hackers

Bruce Momjian wrote:
> Greg Smith wrote:

> I don't expect this patch to be perfect when it is applied. I do expect
> it to be a best effort, and it will get continual real-world testing during
> beta and we can continue to improve this. Right now, we know we have a
> serious issue with checkpoint I/O, and this patch is going to improve
> that in most cases. I don't want to see us reject it or greatly delay
> beta as we try to make it perfect.
>
> My main point is that we should keep trying to make the patch better, but
> the patch doesn't have to be perfect to get applied. I don't want us to
> get into a death-by-testing spiral.

Death by testing? The only comment I have is that it could be useful to
be able to turn this feature off via GUC. Other than that, I think it is
great.

Joshua D. Drake

>

--

=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Greg Smith <gsmith(at)gregsmith(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 21:03:58
Message-ID: 4679963E.2050809@enterprisedb.com
Lists: pgsql-hackers

Joshua D. Drake wrote:
> The only comment I have is that it could be useful to
> be able to turn this feature off via GUC. Other than that, I think it is
> great.

Yeah, you can do that.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 21:38:18
Message-ID: Pine.GSO.4.64.0706201725190.5280@westnet.com
Lists: pgsql-hackers

On Wed, 20 Jun 2007, Heikki Linnakangas wrote:

> You mean the shift and "flattening" of the graph to the right in the delivery
> response time distribution graph?

Right, that's what ends up happening during the problematic cases. To
pick numbers out of the air, instead of 1% of the transactions getting
nailed really hard, by spreading things out you might have 5% of them get
slowed considerably but not awfully. For some applications, that might be
considered a step backwards.

> I'd like to understand the underlying mechanism

I had to capture regular snapshots of the buffer cache internals via
pg_buffercache to figure out where the breakdown was in my case.
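
For anyone who wants to watch the same thing, the snapshots are just periodic
polls of the pg_buffercache view. Something along these lines works (a sketch;
assumes the contrib module is installed, and uses psycopg2 purely for
convenience):

    import time
    import psycopg2

    def watch_dirty_buffers(dsn, interval=10, samples=360):
        """Prints how much of the shared buffer cache is dirty every 'interval'
        seconds, which makes it easy to see the pool filling up before a
        checkpoint and how fast it drains during one."""
        conn = psycopg2.connect(dsn)
        conn.autocommit = True
        cur = conn.cursor()
        for _ in range(samples):
            cur.execute("SELECT sum(CASE WHEN isdirty THEN 1 ELSE 0 END), "
                        "count(*) FROM pg_buffercache")
            dirty, total = cur.fetchone()
            print("%s  dirty buffers: %s / %s"
                  % (time.strftime("%H:%M:%S"), dirty, total))
            time.sleep(interval)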

> I don't have any good simple ideas on how to make it better in the 8.3 timeframe,
> so I don't think there's much to learn from repeating these tests.

Right now, it's not clear which of the runs represent normal behavior and
which might be anomalies. That's the thing you might learn if you had 10
at each configuration instead of just 1. The goal for the 8.3 timeframe
in my mind would be to perhaps have enough data to give better guidelines
for defaults and a range of useful settings in the documentation.

The only other configuration I'd be curious to see is pushing the number
of warehouses even more to see if the 90% numbers spread further from
current behavior.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD


From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-20 21:43:24
Message-ID: Pine.GSO.4.64.0706201738560.5280@westnet.com
Lists: pgsql-hackers

On Wed, 20 Jun 2007, Bruce Momjian wrote:

> I don't expect this patch to be perfect when it is applied. I do expect
> it to be a best effort, and it will get continual real-world testing during
> beta and we can continue to improve this.

This is completely fair. Consider my suggestions something that people
might want to look out for during beta rather than a task Heikki should worry
about before applying the patch.

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD