lazy vxid locks, v1

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: lazy vxid locks, v1
Date: 2011-06-12 21:39:35
Message-ID: BANLkTikp4EGbfw9xDx9bQ_vK8DQa11WbPg@mail.gmail.com
Lists: pgsql-hackers

Here is a patch that applies over the "reducing the overhead of
frequent table locks" (fastlock-v3) patch and allows heavyweight VXID
locks to spring into existence only when someone wants to wait on
them. I believe there is a large benefit to be had from this
optimization, because the combination of these two patches virtually
eliminates lock manager traffic on "pgbench -S" workloads. However,
there are several flies in the ointment.

1. It's a bit of a kludge. I leave it to readers of the patch to
determine exactly what about this patch they think is kludgey, but
it's likely not the empty set. I suspect that MyProc->fpLWLock needs
to be renamed to something a bit more generic if we're going to use it
like this, but I don't immediately know what to call it. Also, the
mechanism whereby we take SInvalWriteLock to work out the mapping from
BackendId to PGPROC * is not exactly awesome. I don't think it
matters from a performance point of view, because operations that need
VXID locks are sufficiently rare that the additional lwlock traffic
won't matter a bit. However, we could avoid this altogether if we
rejiggered the mechanism for allocating PGPROCs and backend IDs.
Right now, we allocate PGPROCs off of linked lists, except for
auxiliary procs which allocate them by scanning a three-element array
for an empty slot. Then, when the PGPROC subscribes to sinval, the
sinval mechanism allocates a backend ID by scanning for the lowest
unused backend ID in the ProcState array. If we changed the logic for
allocating PGPROCs to mimic what the sinval queue currently does, then
the backend ID could be defined as the offset into the PGPROC array.
Translating between a backend ID and a PGPROC * now becomes a matter
of pointer arithmetic. Not sure if this is worth doing.
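
Just to illustrate the idea - the following is only a sketch with invented
names (ProcArrayBase and the toy PGPROC are stand-ins, not code from the
patch) - the translation would then boil down to:

typedef struct PGPROC
{
    int pid;                        /* stand-in for the real fields */
} PGPROC;

extern PGPROC *ProcArrayBase;       /* hypothetical base of the shared PGPROC array */

static inline PGPROC *
BackendIdGetProc(int backendId)
{
    return ProcArrayBase + (backendId - 1);     /* backend IDs are 1-based */
}

static inline int
ProcGetBackendId(PGPROC *proc)
{
    return (int) (proc - ProcArrayBase) + 1;
}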

2. Bad things happen with large numbers of connections. This patch
increases peak performance, but as you increase the number of
concurrent connections beyond the number of CPU cores, performance
drops off faster with the patch than without it. For example, on the
32-core loaner from Nate Boley, using 80 pgbench -S clients, unpatched
HEAD runs at ~36K TPS; with fastlock, it jumps up to about ~99K TPS;
with this patch also applied, it drops down to about ~64K TPS, despite
the fact that nearly all the contention on the lock manager locks has
been eliminated. On Stefan Kaltenbrunner's 40-core box, he was
actually able to see performance drop down below unpatched HEAD with
this applied! This is immensely counterintuitive. What is going on?

Profiling reveals that the system spends enormous amounts of CPU time
in s_lock. LWLOCK_STATS reveals that the only lwlock with significant
amounts of blocking is the BufFreelistLock; but that doesn't explain
the high CPU utilization. In fact, it appears that the problem is
with the LWLocks that are frequently acquired in *shared* mode. There
is no actual lock conflict, but each LWLock is protected by a spinlock
which must be acquired and released to bump the shared locker counts.
In HEAD, everything bottlenecks on the lock manager locks and so it's
not really possible for enough traffic to build up on any single
spinlock to have a serious impact on performance. The locks being
sought there are exclusive, so when they are contended, processes just
get descheduled. But with the exclusive locks out of the way,
everyone very quickly lines up to acquire shared buffer manager locks,
buffer content locks, etc. and large pile-ups ensue, leading to
massive cache line contention and tons of CPU usage. My initial
thought was that this was contention over the root block of the index
on the pgbench_accounts table and the buf mapping lock protecting it,
but instrumentation showed otherwise. I hacked up the system to
report how often each lwlock spinlock exceeded spins_per_delay. The
following is the end of a report showing the locks with the greatest
amounts of excess spinning:

lwlock 0: shacq 0 exacq 191032 blk 42554 spin 272
lwlock 41: shacq 5982347 exacq 11937 blk 1825 spin 4217
lwlock 38: shacq 6443278 exacq 11960 blk 1726 spin 4440
lwlock 47: shacq 6106601 exacq 12096 blk 1555 spin 4497
lwlock 34: shacq 6423317 exacq 11896 blk 1863 spin 4776
lwlock 45: shacq 6455173 exacq 12052 blk 1825 spin 4926
lwlock 39: shacq 6867446 exacq 12067 blk 1899 spin 5071
lwlock 44: shacq 6824502 exacq 12040 blk 1655 spin 5153
lwlock 37: shacq 6727304 exacq 11935 blk 2077 spin 5252
lwlock 46: shacq 6862206 exacq 12017 blk 2046 spin 5352
lwlock 36: shacq 6854326 exacq 11920 blk 1914 spin 5441
lwlock 43: shacq 7184761 exacq 11874 blk 1863 spin 5625
lwlock 48: shacq 7612458 exacq 12109 blk 2029 spin 5780
lwlock 35: shacq 7150616 exacq 11916 blk 2026 spin 5782
lwlock 33: shacq 7536878 exacq 11985 blk 2105 spin 6273
lwlock 40: shacq 7199089 exacq 12068 blk 2305 spin 6290
lwlock 456: shacq 36258224 exacq 0 blk 0 spin 54264
lwlock 42: shacq 43012736 exacq 11851 blk 10675 spin 62017
lwlock 4: shacq 72516569 exacq 190 blk 196 spin 341914
lwlock 5: shacq 145042917 exacq 0 blk 0 spin 798891
grand total: shacq 544277977 exacq 181886079 blk 82371 spin 1338135

So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk is due to ProcArrayLock (25%). And
everything else is peanuts by comparison, though I am guessing the
third and fourth places (5% and 4%, respectively) are in fact the
buffer mapping lock that covers the pgbench_accounts_pkey root index
block, and the content lock on that buffer.
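
For anyone not following along in the source: in shared mode, LWLockAcquire
effectively does something like the sketch below (a toy model using GCC
builtins, not the real implementation), so even purely shared lockers all
write the same spinlock word:

typedef struct
{
    volatile int mutex;             /* spinlock protecting the fields below */
    int          exclusive;         /* nonzero if held exclusively */
    int          shared;            /* number of shared holders */
} ToyLWLock;

static void
toy_spin_lock(volatile int *mutex)
{
    while (__sync_lock_test_and_set(mutex, 1))
        ;                           /* every waiter hammers the same cache line here */
}

static void
toy_spin_unlock(volatile int *mutex)
{
    __sync_lock_release(mutex);
}

static int
toy_lwlock_acquire_shared(ToyLWLock *lock)
{
    toy_spin_lock(&lock->mutex);    /* even "uncontended" shared acquires write here */
    if (!lock->exclusive)
    {
        lock->shared++;
        toy_spin_unlock(&lock->mutex);
        return 1;                   /* got it */
    }
    toy_spin_unlock(&lock->mutex);
    return 0;                       /* real code would enqueue and sleep */
}

That one spinlock word is what everybody ends up spinning on and bouncing
between CPUs, even though nobody's lock request actually conflicts.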

What is to be done?

The SInvalReadLock acquisitions are all attributable, I believe, to
AcceptInvalidationMessages(), which is called in a number of places,
but in particular, after every heavyweight lock acquisition. I think
we need a quick way to short-circuit the lock acquisition there when
no work is to be done, which is to say, nearly always. Indeed, Noah
Misch just proposed something along these lines on another thread
("Make relation_openrv atomic wrt DDL"), though I think this data may
cast a new light on the details.
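
The shape I have in mind - purely a sketch, with invented names, memory
ordering waved away, and no claim that it matches Noah's proposal - is a
cheap unlocked test in front of the lock:

#include <stdint.h>

/* Sketch only: skip SInvalReadLock when no new messages have been queued. */
extern volatile uint64_t *sharedInvalCounter;   /* bumped whenever a message is queued */
static uint64_t localInvalCounter;              /* this backend's last-seen value */

static void
sketch_AcceptInvalidationMessages(void)
{
    uint64_t    current = *sharedInvalCounter;  /* single unlocked read */

    if (current == localInvalCounter)
        return;                                 /* nothing new: skip the lock */

    /* Slow path: take SInvalReadLock, drain the queue, process messages. */
    localInvalCounter = current;
}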

I haven't tracked down where the ProcArrayLock acquisitions are coming
from. The realistic possibilities appear to be
TransactionIdIsInProgress(), TransactionIdIsActive(), GetOldestXmin(),
and GetSnapshotData(). Nor do I have a clear idea what to do about
this.

The remaining candidates are mild by comparison, so I won't analyze
them further here for the moment.

Another way to attack this problem would be to come up with some more
general mechanism to make shared-lwlock acquisition cheaper, such as
having 3 or 4 shared-locker counts per lwlock, each with a separate
spinlock. Then, at least in the case where there's no real lwlock
contention, the spin-waiters can spread out across all of them. But
I'm not sure it's really worth it, considering that we have only a
handful of cases where this problem appears to be severe. But we
probably need to see what happens when we fix some of the current
cases where this is happening. If throughput goes up, then we're
good. If it just shifts the spin lock pile-up to someplace where it's
not so easily eliminated, then we might need to either eliminate all
the problem cases one by one, or else come up with some more general
mechanism.
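
To be concrete about what that might look like - again just a sketch, with
invented names and an assumed 64-byte cache line:

#define LW_SHARED_SLOTS 4
#define CACHE_LINE_SIZE 64              /* assumption, not a measured value */

typedef struct
{
    volatile int mutex;                 /* per-slot spinlock */
    int          count;                 /* shared holders registered via this slot */
    char         pad[CACHE_LINE_SIZE - 2 * sizeof(int)];   /* one slot per cache line */
} SharedCountSlot;

typedef struct
{
    /* exclusive-lock state and the wait queue would live here, as today */
    SharedCountSlot shared[LW_SHARED_SLOTS];
} PartitionedLWLock;

/* A backend picks a slot, e.g. by its PID, so concurrent shared lockers
 * spread their cache-line traffic over several spinlocks instead of one. */
static int
shared_slot_for(int pid)
{
    return pid % LW_SHARED_SLOTS;
}

Releasing a shared lock would decrement the same slot, and an exclusive
locker would have to check all LW_SHARED_SLOTS counts, which is the extra
cost of the scheme.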

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment Content-Type Size
lazyvxid-v1.patch application/octet-stream 16.1 KB

From: Greg Stark <stark(at)mit(dot)edu>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-12 21:58:31
Message-ID: BANLkTimM3RA-TOxJ1qroo0jH-DWTP_YtWA@mail.gmail.com
Lists: pgsql-hackers

On Sun, Jun 12, 2011 at 10:39 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I hacked up the system to
> report how often each lwlock spinlock exceeded spins_per_delay.

I don't doubt the rest of your analysis but one thing to note, number
of spins on a spinlock is not the same as the amount of time spent
waiting for it.

When there's contention on a spinlock the actual test-and-set
instruction ends up taking a long time while cache lines are copied
around. In theory you could have processes spending an inordinate
amount of time waiting on a spinlock even though they never actually
hit spins_per_delay or you could have processes that quickly exceed
spins_per_delay.

I think in practice the results are the same because the code the
spinlocks protect is always short so it's hard to get the second case
on a multi-core box without actually having contention anyways.

--
greg


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-13 01:06:02
Message-ID: BANLkTimO602NJbRuawbB6u2tvpuR+w=h4w@mail.gmail.com
Lists: pgsql-hackers

On Sun, Jun 12, 2011 at 5:58 PM, Greg Stark <stark(at)mit(dot)edu> wrote:
> On Sun, Jun 12, 2011 at 10:39 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I hacked up the system to
>> report how often each lwlock spinlock exceeded spins_per_delay.
>
> I don't doubt the rest of your analysis but one thing to note, number
> of spins on a spinlock is not the same as the amount of time spent
> waiting for it.
>
> When there's contention on a spinlock the actual test-and-set
> instruction ends up taking a long time while cache lines are copied
> around. In theory you could have processes spending an inordinate
> amount of time waiting on a spinlock even though they never actually
> hit spins_per_delay or you could have processes that quickly exceed
> spins_per_delay.
>
> I think in practice the results are the same because the code the
> spinlocks protect is always short so it's hard to get the second case
> on a multi-core box without actually having contention anyways.

All good points. I don't immediately have a better way of measuring
what's going on. Maybe dtrace could do it, but I don't really know
how to use it and am not sure it's set up on any of the boxes I have
for testing. Throwing gettimeofday() calls into SpinLockAcquire()
seems likely to change the overall system behavior enough to make the
results utterly meaningless. It wouldn't be really difficult to count
the number of times that we TAS() rather than just counting the number
of times we TAS() more than spins_per_delay, but I'm not sure whether
that would really address your concern. Hopefully, further
experimentation will make things more clear.
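
(For what it's worth, the counting change I have in mind is trivial -
roughly the following, shown on a toy spinlock and with an invented
counter name:)

static unsigned long tas_attempts;              /* hypothetical per-backend counter */

static void
counting_spin_lock(volatile int *lock)
{
    do
        tas_attempts++;                         /* count every attempt, contended or not */
    while (__sync_lock_test_and_set(lock, 1));  /* TAS(); loops while already held */
}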

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-13 11:55:28
Message-ID: 4DF5FAB0.8060605@kaltenbrunner.cc
Lists: pgsql-hackers

On 06/12/2011 11:39 PM, Robert Haas wrote:
> Here is a patch that applies over the "reducing the overhead of
> frequent table locks" (fastlock-v3) patch and allows heavyweight VXID
> locks to spring into existence only when someone wants to wait on
> them. I believe there is a large benefit to be had from this
> optimization, because the combination of these two patches virtually
> eliminates lock manager traffic on "pgbench -S" workloads. However,
> there are several flies in the ointment.
>
> 1. It's a bit of a kludge. I leave it to readers of the patch to
> determine exactly what about this patch they think is kludgey, but
> it's likely not the empty set. I suspect that MyProc->fpLWLock needs
> to be renamed to something a bit more generic if we're going to use it
> like this, but I don't immediately know what to call it. Also, the
> mechanism whereby we take SInvalWriteLock to work out the mapping from
> BackendId to PGPROC * is not exactly awesome. I don't think it
> matters from a performance point of view, because operations that need
> VXID locks are sufficiently rare that the additional lwlock traffic
> won't matter a bit. However, we could avoid this altogether if we
> rejiggered the mechanism for allocating PGPROCs and backend IDs.
> Right now, we allocate PGPROCs off of linked lists, except for
> auxiliary procs which allocate them by scanning a three-element array
> for an empty slot. Then, when the PGPROC subscribes to sinval, the
> sinval mechanism allocates a backend ID by scanning for the lowest
> unused backend ID in the ProcState array. If we changed the logic for
> allocating PGPROCs to mimic what the sinval queue currently does, then
> the backend ID could be defined as the offset into the PGPROC array.
> Translating between a backend ID and a PGPROC * now becomes a matter
> of pointer arithmetic. Not sure if this is worth doing.
>
> 2. Bad thing happen with large numbers of connections. This patch
> increases peak performance, but as you increase the number of
> concurrent connections beyond the number of CPU cores, performance
> drops off faster with the patch than without it. For example, on the
> 32-core loaner from Nate Boley, using 80 pgbench -S clients, unpatched
> HEAD runs at ~36K TPS; with fastlock, it jumps up to about ~99K TPS;
> with this patch also applied, it drops down to about ~64K TPS, despite
> the fact that nearly all the contention on the lock manager locks has
> been eliminated. On Stefan Kaltenbrunner's 40-core box, he was
> actually able to see performance drop down below unpatched HEAD with
> this applied! This is immensely counterintuitive. What is going on?

just to add some actual new numbers to the discussion (pgbench -n -S -T 120
-c X -j X) on that particular 40-core/80-thread box:

unpatched:

c1: tps = 7808.098053 (including connections establishing)
c4: tps = 29941.444359 (including connections establishing)
c8: tps = 58930.293850 (including connections establishing)
c16: tps = 106911.385826 (including connections establishing)
c24: tps = 117401.654430 (including connections establishing)
c32: tps = 110659.627803 (including connections establishing)
c40: tps = 107689.945323 (including connections establishing)
c64: tps = 104835.182183 (including connections establishing)
c80: tps = 101885.549081 (including connections establishing)
c160: tps = 92373.395791 (including connections establishing)
c200: tps = 90614.141246 (including connections establishing)

fast locks:

c1: tps = 7710.824723 (including connections establishing)
c4: tps = 29653.578364 (including connections establishing)
c8: tps = 58827.195578 (including connections establishing)
c16: tps = 112814.382204 (including connections establishing)
c24: tps = 154559.012960 (including connections establishing)
c32: tps = 189281.391250 (including connections establishing)
c40: tps = 215807.263233 (including connections establishing)
c64: tps = 180644.527322 (including connections establishing)
c80: tps = 118266.615543 (including connections establishing)
c160: tps = 68957.999922 (including connections establishing)
c200: tps = 68803.801091 (including connections establishing)

fast locks + lazy vxid:

c1: tps = 7828.644389 (including connections establishing)
c4: tps = 30520.558169 (including connections establishing)
c8: tps = 60207.396385 (including connections establishing)
c16: tps = 117923.775435 (including connections establishing)
c24: tps = 158775.317590 (including connections establishing)
c32: tps = 195768.530589 (including connections establishing)
c40: tps = 223308.779212 (including connections establishing)
c64: tps = 152848.742883 (including connections establishing)
c80: tps = 65738.046558 (including connections establishing)
c160: tps = 57075.304457 (including connections establishing)
c200: tps = 59107.675182 (including connections establishing)

so my reading of that is that we currently "only" scale well to ~12
physical cores. The fast locks patch gets us pretty nicely past that
point, to a total scale of a bit better than 2x, but it degrades fairly
quickly after that point, and at 2x the number of threads in the box we
are only able to get 2/3 of the throughput of unpatched -HEAD(!).

with the lazy vxid patch on top the curve looks even more interesting:
we scale to an even higher peak, but we degrade even worse, and at
c80 (which equals the number of threads in the box) we are already only
able to get the tps that unpatched -HEAD would give at ~10 cores.
Another thing worth noting is that with the patches we have MUCH less
idle time - which is good in the cases where we are getting a benefit
(as in higher throughput) - but the extreme case now is fast locks + lazy
vxid, which manages to get us to less than 8% idle at c160 but only 57000
tps, while unpatched -HEAD is 75% idle and doing 92000 tps. Said otherwise,
we need almost 4x the computing resources to get only 2/3 of the
performance (roughly a 7x difference on a CPU-per-tps scale).

all those tests are done with pgbench running on the same box - which
has a noticeable impact on the results, because pgbench is using ~1 core
of CPU resources per 8 cores of the backend being tested - though I don't
think it causes any changes in the results that would show the performance
behaviour in a different light.

Stefan


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-13 13:51:26
Message-ID: 4DF615DE.3070907@kaltenbrunner.cc
Lists: pgsql-hackers

On 06/12/2011 11:39 PM, Robert Haas wrote:
> Here is a patch that applies over the "reducing the overhead of
> frequent table locks" (fastlock-v3) patch and allows heavyweight VXID
> locks to spring into existence only when someone wants to wait on
> them. I believe there is a large benefit to be had from this
> optimization, because the combination of these two patches virtually
> eliminates lock manager traffic on "pgbench -S" workloads. However,
> there are several flies in the ointment.
>
> 1. It's a bit of a kludge. I leave it to readers of the patch to
> determine exactly what about this patch they think is kludgey, but
> it's likely not the empty set. I suspect that MyProc->fpLWLock needs
> to be renamed to something a bit more generic if we're going to use it
> like this, but I don't immediately know what to call it. Also, the
> mechanism whereby we take SInvalWriteLock to work out the mapping from
> BackendId to PGPROC * is not exactly awesome. I don't think it
> matters from a performance point of view, because operations that need
> VXID locks are sufficiently rare that the additional lwlock traffic
> won't matter a bit. However, we could avoid this altogether if we
> rejiggered the mechanism for allocating PGPROCs and backend IDs.
> Right now, we allocate PGPROCs off of linked lists, except for
> auxiliary procs which allocate them by scanning a three-element array
> for an empty slot. Then, when the PGPROC subscribes to sinval, the
> sinval mechanism allocates a backend ID by scanning for the lowest
> unused backend ID in the ProcState array. If we changed the logic for
> allocating PGPROCs to mimic what the sinval queue currently does, then
> the backend ID could be defined as the offset into the PGPROC array.
> Translating between a backend ID and a PGPROC * now becomes a matter
> of pointer arithmetic. Not sure if this is worth doing.
>
> 2. Bad thing happen with large numbers of connections. This patch
> increases peak performance, but as you increase the number of
> concurrent connections beyond the number of CPU cores, performance
> drops off faster with the patch than without it. For example, on the
> 32-core loaner from Nate Boley, using 80 pgbench -S clients, unpatched
> HEAD runs at ~36K TPS; with fastlock, it jumps up to about ~99K TPS;
> with this patch also applied, it drops down to about ~64K TPS, despite
> the fact that nearly all the contention on the lock manager locks has
> been eliminated. On Stefan Kaltenbrunner's 40-core box, he was
> actually able to see performance drop down below unpatched HEAD with
> this applied! This is immensely counterintuitive. What is going on?
>
> Profiling reveals that the system spends enormous amounts of CPU time
> in s_lock.

just to reiterate that with numbers - at 160 threads with both patches
applied the profile looks like:

samples % image name symbol name
828794 75.8662 postgres s_lock
51672 4.7300 postgres LWLockAcquire
51145 4.6817 postgres LWLockRelease
17636 1.6144 postgres GetSnapshotData
7521 0.6885 postgres hash_search_with_hash_value
6193 0.5669 postgres AllocSetAlloc
4527 0.4144 postgres SearchCatCache
4521 0.4138 postgres PinBuffer
3385 0.3099 postgres SIGetDataEntries
3160 0.2893 postgres PostgresMain
2706 0.2477 postgres _bt_compare
2687 0.2460 postgres fmgr_info_cxt_security
1963 0.1797 postgres UnpinBuffer
1846 0.1690 postgres LockAcquireExtended
1770 0.1620 postgres exec_bind_message
1730 0.1584 postgres hash_any
1644 0.1505 postgres ExecInitExpr

even at the peak performance spot of the combined patch set (-c40) the
contention is noticeable in the profile:

samples % image name symbol name
1497826 22.0231 postgres s_lock
592104 8.7059 postgres LWLockAcquire
512213 7.5313 postgres LWLockRelease
230050 3.3825 postgres GetSnapshotData
176252 2.5915 postgres AllocSetAlloc
155122 2.2808 postgres hash_search_with_hash_value
116235 1.7091 postgres SearchCatCache
110197 1.6203 postgres _bt_compare
94101 1.3836 postgres PinBuffer
80119 1.1780 postgres PostgresMain
65584 0.9643 postgres fmgr_info_cxt_security
55198 0.8116 postgres hash_any
52872 0.7774 postgres exec_bind_message
48438 0.7122 postgres LockReleaseAll
46631 0.6856 postgres MemoryContextAlloc
45909 0.6750 postgres ExecInitExpr
42293 0.6219 postgres AllocSetFree

Stefan


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-13 14:03:58
Message-ID: 4DF618CE.8000300@kaltenbrunner.cc
Lists: pgsql-hackers

On 06/13/2011 01:55 PM, Stefan Kaltenbrunner wrote:

[...]

> all those tests are done with pgbench running on the same box - which
> has a noticable impact on the results because pgbench is using ~1 core
> per 8 cores of the backend tested in cpu resoures - though I don't think
> it causes any changes in the results that would show the performance
> behaviour in a different light.

actual testing against sysbench with the very same workload shows the
following performance behaviour:

with 40 threads(aka the peak performance point):

pgbench: 223308 tps
sysbench: 311584 tps

with 160 threads (backend contention dominated):

pgbench: 57075
sysbench: 43437

so it seems that sysbench actually has significantly less overhead than
pgbench, and the lower throughput at the higher concurrency seems to be
caused by sysbench being able to stress the backend even more than
pgbench can.

for those curious - the profile for pgbench looks like:

samples % symbol name
29378 41.9087 doCustom
17502 24.9672 threadRun
7629 10.8830 pg_strcasecmp
5871 8.3752 compareVariables
2568 3.6633 getVariable
2167 3.0913 putVariable
2065 2.9458 replaceVariable
1971 2.8117 parseVariable
534 0.7618 xstrdup
278 0.3966 xrealloc
137 0.1954 xmalloc

Stefan


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-13 14:29:28
Message-ID: 16711.1307975368@sss.pgh.pa.us
Lists: pgsql-hackers

Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
> On 06/12/2011 11:39 PM, Robert Haas wrote:
>> Profiling reveals that the system spends enormous amounts of CPU time
>> in s_lock.

> just to reiterate that with numbers - at 160 threads with both patches
> applied the profile looks like:

> samples % image name symbol name
> 828794 75.8662 postgres s_lock

Do you know exactly which spinlocks are being contended on here?
The next few entries

> 51672 4.7300 postgres LWLockAcquire
> 51145 4.6817 postgres LWLockRelease
> 17636 1.6144 postgres GetSnapshotData

suggest that it might be the ProcArrayLock as a result of a huge amount
of snapshot-fetching, but this is very weak evidence for that theory.

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-13 15:06:46
Message-ID: BANLkTikQUPYMTvk1CS94kGb6t=PgKZDp2Q@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2011 at 10:29 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
>> On 06/12/2011 11:39 PM, Robert Haas wrote:
>>> Profiling reveals that the system spends enormous amounts of CPU time
>>> in s_lock.
>
>> just to reiterate that with numbers - at 160 threads with both patches
>> applied the profile looks like:
>
>> samples  %        image name               symbol name
>> 828794   75.8662  postgres                 s_lock
>
> Do you know exactly which spinlocks are being contended on here?
> The next few entries
>
>> 51672     4.7300  postgres                 LWLockAcquire
>> 51145     4.6817  postgres                 LWLockRelease
>> 17636     1.6144  postgres                 GetSnapshotData
>
> suggest that it might be the ProcArrayLock as a result of a huge amount
> of snapshot-fetching, but this is very weak evidence for that theory.

I don't know for sure what is happening on Stefan's system, but I did
post the results of some research on this exact topic in my original
post.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-14 00:10:58
Message-ID: BANLkTi=qdxa-Hhrpf8KKCW3YbZPiv2cTKQ@mail.gmail.com
Lists: pgsql-hackers

On Sun, Jun 12, 2011 at 2:39 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
...
>
> Profiling reveals that the system spends enormous amounts of CPU time
> in s_lock.  LWLOCK_STATS reveals that the only lwlock with significant
> amounts of blocking is the BufFreelistLock;

This is curious. Clearly the entire working set fits in RAM, or you
wouldn't be getting numbers like this. But does the entire working set
fit in shared_buffers? If so, you shouldn't see any traffic on
BufFreelistLock once all the data is read in. I've only seen
contention here when all data fits in OS cache memory but not in
shared_buffers.

Cheers,

Jeff


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 00:27:15
Message-ID: BANLkTi=A-TD9qhGjuYDCiav8QfhZwKPsNA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
<stefan(at)kaltenbrunner(dot)cc> wrote:
...
>
>
> so it seems that sysbench is actually significantly less overhead than
> pgbench and the lower throughput at the higher conncurency seems to be
> cause by sysbench being able to stress the backend even more than
> pgbench can.

Hi Stefan,

pgbench sends each query (per connection) and waits for the reply
before sending another.

Do we know whether sysbench does that, or if it just stuffs the
kernel's IPC buffer full of queries without synchronously waiting for
individual replies?

I can't get sysbench to "make" for me, or I'd strace in single client
mode and see what kind of messages are going back and forth.

Cheers,

Jeff


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 01:46:20
Message-ID: BANLkTim5QtvwQ_mPRSUHr07Hn7o2wr03gA@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jun 14, 2011 at 09:27, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> pgbench sends each query (per connection) and waits for the reply
> before sending another.

We can use the -j option to run pgbench with multiple threads to avoid
request starvation. What setting did you use, Stefan?

>> for those curious - the profile for pgbench looks like:
>> samples % symbol name
>> 29378 41.9087 doCustom
>> 17502 24.9672 threadRun
>> 7629 10.8830 pg_strcasecmp

If the benchmark client is the bottleneck, it would be better to reduce
pg_strcasecmp calls by holding meta-command names as integer values of a
sub-META_COMMAND enum instead of doing a string comparison on each loop.
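
Something like this sketch (the enum values and parser here are
illustrative, not actual pgbench code):

#include <strings.h>

/* Resolve the meta-command name once, when the script is parsed, and keep
 * the enum in the parsed command so doCustom can just switch on it. */
typedef enum
{
    META_SET,
    META_SETRANDOM,
    META_SLEEP,
    META_SHELL,
    META_SETSHELL,
    META_UNKNOWN
} MetaCommand;

static MetaCommand
parse_meta_command(const char *name)
{
    if (strcasecmp(name, "set") == 0)       return META_SET;
    if (strcasecmp(name, "setrandom") == 0) return META_SETRANDOM;
    if (strcasecmp(name, "sleep") == 0)     return META_SLEEP;
    if (strcasecmp(name, "shell") == 0)     return META_SHELL;
    if (strcasecmp(name, "setshell") == 0)  return META_SETSHELL;
    return META_UNKNOWN;
}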

--
Itagaki Takahiro


From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 01:56:13
Message-ID: 4DF6BFBD.9040801@2ndQuadrant.com
Lists: pgsql-hackers

On 06/13/2011 08:27 PM, Jeff Janes wrote:
> pgbench sends each query (per connection) and waits for the reply
> before sending another.
>
> Do we know whether sysbench does that, or if it just stuffs the
> kernel's IPC buffer full of queries without synchronously waiting for
> individual replies?
>

sysbench creates a thread for each client and lets them go at things at
whatever speed they can handle. You have to set up pgbench with a worker
per core to get them even close to level footing. And even in that
case, sysbench has a significant advantage, because it's got the
commands it runs more or less hard-coded in the program. pgbench is
constantly parsing things in its internal command language and then
turning them into SQL requests. That's flexible and allows it to be
used for some neat custom things, but it uses a lot more resources to
drive the same number of clients.

> I can't get sysbench to "make" for me, or I'd strace in single client
> mode and see what kind of messages are going back and forth.
>

If you're using a sysbench tarball, no surprise. It doesn't build on
lots of platforms now. If you grab
http://projects.2ndquadrant.it/sites/default/files/bottom-up-benchmarking.pdf
it has my sysbench notes starting on page 34. I had to checkout the
latest version from their development repo to get it to compile on any
recent system. The attached wrapper script may be helpful to you as
well to help get over the learning curve for how to run the program; it
iterates sysbench over a number of database sizes and thread counts
running the complicated-to-set-up OLTP test.

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

Attachment Content-Type Size
oltp-read text/plain 1.1 KB

From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-14 02:09:07
Message-ID: 4DF6C2C3.8080303@2ndQuadrant.com
Lists: pgsql-hackers

On 06/13/2011 07:55 AM, Stefan Kaltenbrunner wrote:
> all those tests are done with pgbench running on the same box - which
> has a noticable impact on the results because pgbench is using ~1 core
> per 8 cores of the backend tested in cpu resoures - though I don't think
> it causes any changes in the results that would show the performance
> behaviour in a different light.
>

Yeah, this used to make a much bigger difference, but nowadays it's not
so important. So long as you have enough cores that you can spare a
chunk of them to drive the test with, and you crank "-j" up to a lot,
there doesn't seem to be much of an advantage to moving the clients to a
remote system now. You end up trading off CPU time for everything going
through the network stack, which adds yet another source of uncertainty to
the whole thing anyway.

I'm glad to see so many people have jumped onto doing these SELECT-only
tests now. The performance farm idea I've been working on runs a test
just like what's proven useful here. I'd suggested that because it's
been really sensitive to changes in locking and buffer management for me.

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 04:09:49
Message-ID: 1308024332-sup-3315@alvh.no-ip.org
Lists: pgsql-hackers

Excerpts from Jeff Janes's message of lun jun 13 20:27:15 -0400 2011:
> On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
> <stefan(at)kaltenbrunner(dot)cc> wrote:
> ...
> >
> >
> > so it seems that sysbench is actually significantly less overhead than
> > pgbench and the lower throughput at the higher conncurency seems to be
> > cause by sysbench being able to stress the backend even more than
> > pgbench can.
>
> Hi Stefan,
>
> pgbench sends each query (per connection) and waits for the reply
> before sending another.

I noticed that pgbench's doCustom (the function highest in the profile
posted) returns doing nothing if the connection is supposed to be
"sleeping"; seems an open door for busy waiting. I didn't check the
rest of the code to see if there's something avoiding that condition. I
also noticed that it seems to be very liberal about calling
INSTR_TIME_SET_CURRENT in the same function, which perhaps could be
optimized by calling it a single time at entry and reusing the value,
but I guess that would show up in the profile as a kernel call, so it's
maybe not a problem.

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 04:21:50
Message-ID: BANLkTikQMQ3Lf0SdmWP=ywKqRi_eszZMAQ@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jun 14, 2011 at 13:09, Alvaro Herrera
<alvherre(at)commandprompt(dot)com> wrote:
> I noticed that pgbench's doCustom (the function highest in the profile
> posted) returns doing nothing if the connection is supposed to be
> "sleeping"; seems an open door for busy waiting.

pgbench uses select() with/without a timeout in those cases, no?

--
Itagaki Takahiro


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-14 12:02:10
Message-ID: BANLkTikYSXoez7b7qC8Bqfpgr0kxJEFs1A@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2011 at 8:10 PM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> On Sun, Jun 12, 2011 at 2:39 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> ...
>>
>> Profiling reveals that the system spends enormous amounts of CPU time
>> in s_lock.  LWLOCK_STATS reveals that the only lwlock with significant
>> amounts of blocking is the BufFreelistLock;
>
> This is curious.  Clearly the entire working set fits in RAM, or you
> wouldn't be getting number like this.  But does the entire working set
> fit in shared_buffers?  If so, you shouldn't see any traffic on
> BufFreelistLock once all the data is read in.  I've only seen
> contention here when all data fits in OS cache memory but not in
> shared_buffers.

Yeah, that does seem odd:

rhaas=# select pg_size_pretty(pg_database_size(current_database()));
pg_size_pretty
----------------
1501 MB
(1 row)

rhaas=# select pg_size_pretty(pg_table_size('pgbench_accounts'));
pg_size_pretty
----------------
1281 MB
(1 row)

rhaas=# select pg_size_pretty(pg_table_size('pgbench_accounts_pkey'));
pg_size_pretty
----------------
214 MB
(1 row)

rhaas=# show shared_buffers;
shared_buffers
----------------
8GB
(1 row)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 19:56:56
Message-ID: BANLkTikCcLyKyMPMhksMJnH=x0VV60hGfw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2011 at 9:09 PM, Alvaro Herrera
<alvherre(at)commandprompt(dot)com> wrote:

> I noticed that pgbench's doCustom (the function highest in the profile
> posted) returns doing nothing if the connection is supposed to be
> "sleeping"; seems an open door for busy waiting.  I didn't check the
> rest of the code to see if there's something avoiding that condition.

Yes, there is a "select" in threadRun that avoids that. Also, I don't
think anyone would put a "sleep" in this particular type of pgbench
run.

> I
> also noticed that it seems to be very liberal about calling
> INSTR_TIME_SET_CURRENT in the same function which perhaps could be
> optimizing by calling it a single time at entry and reusing the value,
> but I guess that would show up in the profile as a kernel call so it's
> maybe not a problem.

I think that only gets called when you specifically asked for
latencies or for logging, or when making a new connection (which should
be rare).

Cheers,

Jeff


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-06-14 20:05:58
Message-ID: 4DF7BF26.8070101@kaltenbrunner.cc
Lists: pgsql-hackers

On 06/14/2011 02:27 AM, Jeff Janes wrote:
> On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
> <stefan(at)kaltenbrunner(dot)cc> wrote:
> ...
>>
>>
>> so it seems that sysbench is actually significantly less overhead than
>> pgbench and the lower throughput at the higher conncurency seems to be
>> cause by sysbench being able to stress the backend even more than
>> pgbench can.
>
> Hi Stefan,
>
> pgbench sends each query (per connection) and waits for the reply
> before sending another.
>
> Do we know whether sysbench does that, or if it just stuffs the
> kernel's IPC buffer full of queries without synchronously waiting for
> individual replies?
>
> I can't get sysbench to "make" for me, or I'd strace in single client
> mode and see what kind of messages are going back and forth.

yeah sysbench compiled from a release tarball needs some
autoconf/makefile hackery to get running on a modern system - but I can
provide you with the data you are interested in if you tell me exactly
what you are looking for...

Stefan


From: Florian Pflug <fgp(at)phlo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: lazy vxid locks, v1
Date: 2011-06-22 21:43:12
Message-ID: 8348A657-D387-4A6D-9511-B13EE3321C0D@phlo.org
Lists: pgsql-hackers

On Jun12, 2011, at 23:39 , Robert Haas wrote:
> So, the majority (60%) of the excess spinning appears to be due to
> SInvalReadLock. A good chunk are due to ProcArrayLock (25%).

Hm, sizeof(LWLock) is 24 on X86-64, making sizeof(LWLockPadded) 32.
However, cache lines are 64 bytes large on recent Intel CPUs AFAIK,
so I guess that two adjacent LWLocks currently share one cache line.

Currently, ProcArrayLock has index 4 while SInvalReadLock has
index 5, so if I'm not mistaken, exactly the two locks on which you saw
the largest contention are on the same cache line...

It might make sense to try and see if these numbers change if you
either make LWLockPadded 64 bytes or arrange the locks differently...
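
I.e. something along these lines (sketch only; 64 bytes is an assumption
about the cache line size, and ToyLWLock is just a stand-in for the real
24-byte struct):

#define CACHE_LINE_SIZE 64              /* assumed; recent Intel CPUs */

typedef struct
{
    char state[24];                     /* stand-in for the 24-byte LWLock */
} ToyLWLock;

/* Pad each lock to a full cache line so that, e.g., locks 4 and 5 no
 * longer share one. */
typedef union
{
    ToyLWLock lock;
    char      pad[CACHE_LINE_SIZE];
} ToyLWLockPadded;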

best regards,
Florian Pflug


From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-07-24 01:50:34
Message-ID: CAMkU=1xn2okGupQF390xT+pCOG5VZaTKOjSd9fzJLjJxNWjYeg@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
<stefan(at)kaltenbrunner(dot)cc> wrote:
> On 06/13/2011 01:55 PM, Stefan Kaltenbrunner wrote:
>
> [...]
>
>> all those tests are done with pgbench running on the same box - which
>> has a noticable impact on the results because pgbench is using ~1 core
>> per 8 cores of the backend tested in cpu resoures - though I don't think
>> it causes any changes in the results that would show the performance
>> behaviour in a different light.
>
> actuall testing against sysbench with the very same workload shows the
> following performance behaviour:
>
> with 40 threads(aka the peak performance point):
>
> pgbench:        223308 tps
> sysbench:       311584 tps
>
> with 160 threads (backend contention dominated):
>
> pgbench:        57075
> sysbench:       43437
>
>
> so it seems that sysbench is actually significantly less overhead than
> pgbench and the lower throughput at the higher conncurency seems to be
> cause by sysbench being able to stress the backend even more than
> pgbench can.
>
>
> for those curious - the profile for pgbench looks like:
>
> samples  %        symbol name
> 29378    41.9087  doCustom
> 17502    24.9672  threadRun
> 7629     10.8830  pg_strcasecmp
> 5871      8.3752  compareVariables
> 2568      3.6633  getVariable
> 2167      3.0913  putVariable
> 2065      2.9458  replaceVariable
> 1971      2.8117  parseVariable
> 534       0.7618  xstrdup
> 278       0.3966  xrealloc
> 137       0.1954  xmalloc

Hi Stefan,

How was this profile generated? I get a similar profile using
--enable-profiling and gprof, but I find it not believable. The
complete absence of any calls to libpq is not credible. I don't know
about your profiler, but with gprof they should be listed in the call
graph even if they take a negligible amount of time. So I think
pgbench is linking to libpq libraries that do not themselves support
profiling (I have no idea how that could happen though). If the call
graphs are not getting recorded correctly, surely the timing can't be
reliable either.

(I also tried profiling pgbench with "perf", but in that case I get
nothing other than kernel and libc calls showing up. I don't know
what that means)

To support this, I've dummied up doCustom so that it does all the work of
deciding what needs to happen, executing the metacommands,
interpolating the variables into the SQL string, but then simply
refrains from calling the PQ functions to send and receive the query
and response. (I had to make a few changes around the select loop in
threadRun to support this).

The result is that the dummy pgbench is charged with only 57% more CPU
time than the stock one, but it gets over 10 times as many
"transactions" done. I think this supports the notion that the CPU
bottleneck is not in pgbench.c, but somewhere in the libpq or the
kernel.

Cheers,

Jeff


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-07-24 10:50:37
Message-ID: 4E2BF8FD.6060505@kaltenbrunner.cc
Lists: pgsql-hackers

On 07/24/2011 03:50 AM, Jeff Janes wrote:
> On Mon, Jun 13, 2011 at 7:03 AM, Stefan Kaltenbrunner
> <stefan(at)kaltenbrunner(dot)cc> wrote:
>> On 06/13/2011 01:55 PM, Stefan Kaltenbrunner wrote:
>>
>> [...]
>>
>>> all those tests are done with pgbench running on the same box - which
>>> has a noticable impact on the results because pgbench is using ~1 core
>>> per 8 cores of the backend tested in cpu resoures - though I don't think
>>> it causes any changes in the results that would show the performance
>>> behaviour in a different light.
>>
>> actuall testing against sysbench with the very same workload shows the
>> following performance behaviour:
>>
>> with 40 threads(aka the peak performance point):
>>
>> pgbench: 223308 tps
>> sysbench: 311584 tps
>>
>> with 160 threads (backend contention dominated):
>>
>> pgbench: 57075
>> sysbench: 43437
>>
>>
>> so it seems that sysbench is actually significantly less overhead than
>> pgbench and the lower throughput at the higher conncurency seems to be
>> cause by sysbench being able to stress the backend even more than
>> pgbench can.
>>
>>
>> for those curious - the profile for pgbench looks like:
>>
>> samples % symbol name
>> 29378 41.9087 doCustom
>> 17502 24.9672 threadRun
>> 7629 10.8830 pg_strcasecmp
>> 5871 8.3752 compareVariables
>> 2568 3.6633 getVariable
>> 2167 3.0913 putVariable
>> 2065 2.9458 replaceVariable
>> 1971 2.8117 parseVariable
>> 534 0.7618 xstrdup
>> 278 0.3966 xrealloc
>> 137 0.1954 xmalloc
>
> Hi Stefan,
>
> How was this profile generated? I get a similar profile using
> --enable-profiling and gprof, but I find it not believable. The
> complete absence of any calls to libpq is not credible. I don't know
> about your profiler, but with gprof they should be listed in the call
> graph even if they take a negligible amount of time. So I think
> pgbench is linking to libpq libraries that do not themselves support
> profiling (I have no idea how that could happen though). If the calls
> graphs are not getting recorded correctly, surely the timing can't be
> reliable either.

hmm - the profile was generated using oprofile, but now that you are
mentioning this aspect, I suppose that this was a profile run without
opcontrol --separate=lib...
I'm not currently in a position to retest that - but maybe you could do
a run?

>
> (I also tried profiling pgbench with "perf", but in that case I get
> nothing other than kernel and libc calls showing up. I don't know
> what that means)
>
> To support this, I've dummied up doCustom so that does all the work of
> deciding what needs to happen, executing the metacommands,
> interpolating the variables into the SQL string, but then simply
> refrains from calling the PQ functions to send and receive the query
> and response. (I had to make a few changes around the select loop in
> threadRun to support this).
>
> The result is that the dummy pgbench is charged with only 57% more CPU
> time than the stock one, but it gets over 10 times as many
> "transactions" done. I think this supports the notion that the CPU
> bottleneck is not in pgbench.c, but somewhere in the libpq or the
> kernel.

interesting - iirc we actually had some reports about current libpq
behaviour causing scaling issues on some OSes - see
http://archives.postgresql.org/pgsql-hackers/2009-06/msg00748.php and
some related threads. Iirc the final patch for that was never applied
and the original author lost interest. I think I was able to measure
some noticeable performance gains back in the day, but I don't think
I still have the numbers anywhere.

Stefan


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-07-24 15:46:49
Message-ID: 12391.1311522409@sss.pgh.pa.us
Lists: pgsql-hackers

Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
> How was this profile generated? I get a similar profile using
> --enable-profiling and gprof, but I find it not believable. The
> complete absence of any calls to libpq is not credible. I don't know
> about your profiler, but with gprof they should be listed in the call
> graph even if they take a negligible amount of time. So I think
> pgbench is linking to libpq libraries that do not themselves support
> profiling (I have no idea how that could happen though). If the calls
> graphs are not getting recorded correctly, surely the timing can't be
> reliable either.

Last I checked, gprof simply does not work for shared libraries on
Linux --- is that what you're testing on? If so, try oprofile or
some other Linux-specific solution.

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-07-24 15:55:04
Message-ID: 12562.1311522904@sss.pgh.pa.us
Lists: pgsql-hackers

Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
> interesting - iirc we actually had some reports about current libpq
> behaviour causing scaling issues on some OSes - see
> http://archives.postgresql.org/pgsql-hackers/2009-06/msg00748.php and
> some related threads. Iirc the final patch for that was never applied
> though and the original author lost interest, I think that I was able to
> measure some noticable performance gains back in the days but I don't
> think I still have the numbers somewhere.

Huh? That patch did get applied in some form or other -- at least,
libpq does contain references to both SO_NOSIGPIPE and MSG_NOSIGNAL
these days.

regards, tom lane


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: pgbench cpu overhead (was Re: lazy vxid locks, v1)
Date: 2011-07-24 17:53:25
Message-ID: 4E2C5C15.2080705@kaltenbrunner.cc
Lists: pgsql-hackers

On 07/24/2011 05:55 PM, Tom Lane wrote:
> Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
>> interesting - iirc we actually had some reports about current libpq
>> behaviour causing scaling issues on some OSes - see
>> http://archives.postgresql.org/pgsql-hackers/2009-06/msg00748.php and
>> some related threads. Iirc the final patch for that was never applied
>> though and the original author lost interest, I think that I was able to
>> measure some noticable performance gains back in the days but I don't
>> think I still have the numbers somewhere.
>
> Huh? That patch did get applied in some form or other -- at least,
> libpq does contain references to both SO_NOSIGPIPE and MSG_NOSIGNAL
> these days.

hmm yeah - you are right; when I looked that up a few hours ago I
failed to find the right commit, but it was indeed committed:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cea80e726edd42a39bb0220290738f7825de8e57

I think I mentally mixed that up with the "compare word-at-a-time in
bcTruelen" patch that was also discussed as affecting query rates for
trivial queries.
I actually wonder if -HEAD would show that issue even more clearly now
that we have parts of Robert's performance work in the tree...

Stefan