slow commits with heavy temp table usage in 8.4.0

From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 17:54:25
Message-ID: 4A79C751.4030704@blackducksoftware.com
Lists: pgsql-hackers

Hi,

I've noticed that on 8.4.0, commits can take a long time when a temp table is repeatedly
filled and truncated within a loop. A very contrived example is

begin;
create or replace function commit_test_with_truncations()
returns void
language 'plpgsql'
as $_func_$
declare
    i integer;
begin
    create temp table t1 (x integer) on commit drop;
    for i in 1 .. 22000 loop
        insert into t1 select s from generate_series(1,1000) s;
        truncate t1;
    end loop;
end;
$_func_$;
select commit_test_with_truncations();
commit;

On my laptop (Core2 Duo with 3.5GB and a disk dedicated to PG), the function call takes
about 124 seconds, and the commit takes about 43 seconds. The function execution generates
a lot of I/O activity, but the commit is entirely CPU bound.

By contrast, the same test on an 8.2.13 system (2 older Xeons and 8GB) had times of 495
and 19 seconds. In this case, both the function execution and the commit were entirely
CPU bound.

The overall process in 8.4 is much faster than 8.2.13, but the commit time is somewhat
surprising to me. Is that to be expected?

8.4 version(): PostgreSQL 8.4.0 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-27), 64-bit
8.2.13 version(): PostgreSQL 8.2.13 on x86_64-suse-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 20050117 (prerelease) (SUSE Linux)

-- todd


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 20:02:17
Message-ID: 28632.1249502537@sss.pgh.pa.us
Lists: pgsql-hackers

"Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
> I've noticed that on 8.4.0, commits can take a long time when a temp table is repeatedly
> filled and truncated within a loop. A very contrived example is

Hmm. I tweaked the function to allow varying the number of truncates:
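
(A sketch of the tweaked function, reconstructed from Todd's original;
the parameter name "n" is an assumption:)

create or replace function commit_test_with_truncations(n integer)
returns void
language 'plpgsql'
as $_func_$
declare
    i integer;
begin
    -- same contrived loop as before, but with a variable truncate count
    create temp table t1 (x integer) on commit drop;
    for i in 1 .. n loop
        insert into t1 select s from generate_series(1,1000) s;
        truncate t1;
    end loop;
end;
$_func_$;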

regression=# begin;
BEGIN
Time: 1.037 ms
regression=# select commit_test_with_truncations(10000) ;
commit_test_with_truncations
------------------------------

(1 row)

Time: 9466.060 ms
regression=# commit;
COMMIT
Time: 1095.946 ms
regression=# begin;
BEGIN
Time: 1.002 ms
regression=# select commit_test_with_truncations(30000) ;
commit_test_with_truncations
------------------------------

(1 row)

Time: 93492.874 ms
regression=# commit;
COMMIT
Time: 3184.248 ms

The commit time doesn't seem tremendously out of line, but it looks
like there's something O(N^2)-ish in the function execution: tripling
the number of truncates cost roughly ten times the runtime. Do you
see a similar pattern? With so many temp files there could well
be some blame on the kernel side. (This is a Fedora 10 box.)

regards, tom lane


From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 20:29:21
Message-ID: 4A79A551020000250002947C@gw.wicourts.gov
Lists: pgsql-hackers

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
>> I've noticed that on 8.4.0, commits can take a long time when a
>> temp table is repeatedly filled and truncated within a loop.

> The commit time doesn't seem tremendously out of line, but it looks
> like there's something O(N^2)-ish in the function execution. Do
> you see a similar pattern? With so many temp files there could well
> be some blame on the kernel side. (This is a Fedora 10 box.)

This sounds very similar to my experience here:

http://archives.postgresql.org/pgsql-hackers/2008-11/msg01472.php

Depending on what sort of RAID controller caching is present, a BBU
cache might be containing the problem up to some threshold. Perhaps
it's not so much O(N^2) as O(N) minus some constant, with a min of zero?

-Kevin


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 20:39:09
Message-ID: 29318.1249504749@sss.pgh.pa.us
Lists: pgsql-hackers

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> "Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
>>> I've noticed that on 8.4.0, commits can take a long time when a
>>> temp table is repeatedly filled and truncated within a loop.

>> The commit time doesn't seem tremendously out of line, but it looks
>> like there's something O(N^2)-ish in the function execution. Do
>> you see a similar pattern? With so many temp files there could well
>> be some blame on the kernel side. (This is a Fedora 10 box.)

> This sounds very similar to my experience here:
> http://archives.postgresql.org/pgsql-hackers/2008-11/msg01472.php

I did some more poking with oprofile, and got this:

samples   %        image name   symbol name
559375    39.9848  postgres     index_getnext
167626    11.9821  postgres     TransactionIdIsCurrentTransactionId
107421     7.6786  postgres     HeapTupleSatisfiesNow
 65689     4.6955  postgres     HeapTupleHeaderGetCmin
 47220     3.3753  postgres     HeapTupleHeaderGetCmax
 46799     3.3452  postgres     hash_search_with_hash_value
 29331     2.0966  postgres     heap_hot_search_buffer
 23737     1.6967  postgres     CatalogCacheFlushRelation
 20562     1.4698  postgres     LWLockAcquire
 19838     1.4180  postgres     heap_page_prune_opt
 19044     1.3613  postgres     _bt_checkkeys
 17400     1.2438  postgres     LWLockRelease
 12993     0.9288  postgres     PinBuffer

So what I'm seeing is entirely explained by the buildup of dead versions
of the temp table's pg_class row --- the index_getnext time is spent
scanning over dead HOT-chain members. It might be possible to avoid
that by special-casing temp tables in TRUNCATE to recycle the existing
file instead of assigning a new one. However, there is no reason to
think that 8.3 would be any better than 8.4 on that score. Also, I'm
not seeing the very long CPU-bound commit phase that Todd is seeing.
So I think there's something happening on his box that's different from
what I'm measuring.
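
(One way to watch the churn that creates those dead row versions --- a
sketch to run in psql; each transaction-safe TRUNCATE gives the table a
new relfilenode, and with it a new version of its pg_class row:)

begin;
create temp table t1 (x integer) on commit drop;
select relfilenode from pg_class where oid = 't1'::regclass;
insert into t1 values (1);
truncate t1;
-- expect a different value here: the previous row version is now dead
select relfilenode from pg_class where oid = 't1'::regclass;
commit;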

I'm actually testing CVS HEAD, not 8.4.0, but I don't recall that we've
done anything in the past month that would be likely to affect this ...

regards, tom lane


From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 21:46:34
Message-ID: 4A79FDBA.5050503@blackducksoftware.com
Lists: pgsql-hackers

Tom Lane wrote:
>
> So what I'm seeing is entirely explained by the buildup of dead versions
> of the temp table's pg_class row --- the index_getnext time is spent
> scanning over dead HOT-chain members. It might be possible to avoid
> that by special-casing temp tables in TRUNCATE to recycle the existing
> file instead of assigning a new one. However, there is no reason to
> think that 8.3 would be any better than 8.4 on that score. Also, I'm
> not seeing the very long CPU-bound commit phase that Todd is seeing.

The commit looks CPU-bound when I let the residual I/O from the function
execution die out before I issue the commit.

>
> I'm actually testing CVS HEAD, not 8.4.0, but I don't recall that we've
> done anything in the past month that would be likely to affect this ...

Tom's theory may explain the different commit results I get when
testing on two different databases:

db          truncations   function (ms)   commit (ms)
----------  -----------   -------------   -----------
test              10000       29603.624      6054.889
test              10000       34740.167     14551.177
test              10000       30608.260     11144.503
test              10000       32239.049      9846.676

test              30000      227115.850     50206.947
test              30000      201859.698     46083.222
test              30000      231926.642     46681.009
test              30000      235665.970     47113.137

production        10000       32982.069     17654.772
production        10000       33297.524     17396.792
production        10000       35503.185     18343.045
production        10000       34251.753     18284.725

production        30000      200899.786     75480.448
production        30000      206793.209     73316.405
production        30000      260491.759     72570.297
production        30000      191363.168     66659.129

The "test" DB is nearly empty with 251 entries in pg_class, whereas
"production" has real data with 9981 entries in pg_class.

-- todd


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-05 23:51:50
Message-ID: 3447.1249516310@sss.pgh.pa.us
Lists: pgsql-hackers

"Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
> Tom Lane wrote:
>> I'm not seeing the very long CPU-bound commit phase that Todd is seeing.

> The commit looks CPU-bound when I let the residual I/O from the function
> execution die out before I issue the commit.

Well, mine is CPU-bound too, it just is much shorter relative to the
function execution time than you're showing. My test showed about
a 9x ratio at 10000 truncates and a 30x ratio at 30000; you've got
numbers in the range of 2x to 4x. So something is behaving
differently between your machines and mine.

I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference not a
PG version difference. In particular I think we are probably looking
at a filesystem issue: how fast can you delete 10000 or 30000 files.
(I'm testing an ext3 filesystem on a plain ol' consumer-grade drive
that is probably lying about write complete, so I'd not be surprised
if deletions go a lot faster than they "ought to" ... except that
the disk issue shouldn't affect things if it's CPU-bound anyway.)

As I said, my inclination for improving this area, if someone wanted
to work on it, would be to find a way to do truncate-in-place on
temp tables. ISTM that in the case you're showing --- truncate that's
not within a subtransaction, on a table that's drop-on-commit anyway
--- we should not need to keep around the pre-truncation data. So we
could just do ftruncate instead of creating a new file, and we'd not
need a new copy of the pg_class row either. So that should make both
the function time and the commit time a lot better. But I'm not sure
if the use-case is popular enough to deserve such a hack.
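
(To make the qualification concrete --- a sketch of one truncate the
optimization could handle in place, and one it couldn't:)

begin;
create temp table t1 (x integer) on commit drop;
insert into t1 values (1);
truncate t1;          -- in-place candidate: no subtransaction, and the
                      -- table dies at commit, so the old contents can
                      -- never be needed again
insert into t1 values (2);
savepoint s1;
truncate t1;          -- not a candidate: rollback to s1 must be able to
                      -- restore the row inserted just above
rollback to savepoint s1;
select x from t1;     -- returns 2
commit;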

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-06 06:24:12
Message-ID: 26227.1249539852@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> As I said, my inclination for improving this area, if someone wanted
> to work on it, would be to find a way to do truncate-in-place on
> temp tables. ISTM that in the case you're showing --- truncate that's
> not within a subtransaction, on a table that's drop-on-commit anyway
> --- we should not need to keep around the pre-truncation data. So we
> could just do ftruncate instead of creating a new file, and we'd not
> need a new copy of the pg_class row either. So that should make both
> the function time and the commit time a lot better. But I'm not sure
> if the use-case is popular enough to deserve such a hack.

Actually, this is easier than I thought, because there is already
bookkeeping being done that (in effect) tracks whether a table has
already been truncated in the current transaction. So we can rely
on that, and with only a very few lines of code added, ensure that
a situation like this does only one full-scale transaction-safe
truncation per transaction. The attached prototype patch does this
and seems to fix the speed problem nicely. It's not tremendously
well tested, but perhaps you'd like to test? Should work in 8.4.
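
(A quick way to check the effect, assuming the patch applies --- with
it, repeated truncates within one transaction should stop assigning
new relfilenodes:)

begin;
create temp table t1 (x integer) on commit drop;
truncate t1;
select relfilenode from pg_class where oid = 't1'::regclass;
truncate t1;
-- with the patch, expect the same value as above
select relfilenode from pg_class where oid = 't1'::regclass;
commit;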

regards, tom lane

Attachment: unknown_filename (text/plain, 6.3 KB)

From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-06 17:32:44
Message-ID: 4A7B13BC.6020606@blackducksoftware.com
Lists: pgsql-hackers

Tom Lane wrote:
>
> I took a look through the CVS history and verified that there were
> no post-8.4 commits that looked like they'd affect performance in
> this area. So I think it's got to be a platform difference not a
> PG version difference. In particular I think we are probably looking
> at a filesystem issue: how fast can you delete [...] 30000 files.

I'm still on Fedora 7, so maybe this will be motivation to upgrade.

FYI, on my 8.2.13 system, the test created 30001 files which were all
deleted during the commit. On my 8.4.0 system, the test created 60001
files, of which 30000 were deleted at commit and 30001 disappeared
later (presumably during a checkpoint?).

> But I'm not sure
> if the use-case is popular enough to deserve such a hack.

FWIW, the full app was looping over a set of datasets. On each iteration,
it computed some intermediate results into a temp table, generated several
reports from those intermediate results, and finally truncated the table
for the next iteration.
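
(Roughly this shape --- table, column, and function names here are
invented for illustration:)

create temp table intermediate (item_id integer, score numeric) on commit drop;

-- for each dataset in the set:
insert into intermediate
    select item_id, score_fn(payload)
    from raw_data
    where dataset_id = 42;       -- the current dataset

select count(*), avg(score) from intermediate;   -- one of several reports

truncate intermediate;           -- reset for the next iteration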

-- todd


From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-06 17:34:04
Message-ID: 4A7B140C.3070101@blackducksoftware.com
Lists: pgsql-hackers

Tom Lane wrote:
>
> The attached prototype patch does this
> and seems to fix the speed problem nicely. It's not tremendously
> well tested, but perhaps you'd like to test? Should work in 8.4.

I'll give it a try and report back (though probably not until tomorrow).

-- todd


From: Alex Hunsaker <badalex(at)gmail(dot)com>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-06 18:13:41
Message-ID: 34d269d40908061113j5c883d91s5633a35e1e38bf87@mail.gmail.com
Lists: pgsql-hackers

On Thu, Aug 6, 2009 at 11:32, Todd A. Cook <tcook(at)blackducksoftware(dot)com> wrote:
> Tom Lane wrote:
>>
>> I took a look through the CVS history and verified that there were
>> no post-8.4 commits that looked like they'd affect performance in
>> this area.  So I think it's got to be a platform difference not a
>> PG version difference.  In particular I think we are probably looking
>> at a filesystem issue: how fast can you delete [...] 30000 files.
>
> I'm still on Fedora 7, so maybe this will be motivation to upgrade.
>
> FYI, on my 8.2.13 system, the test created 30001 files which were all
> deleted during the commit.  On my 8.4.0 system, the test created 60001
> files, of which 30000 were deleted at commit and 30001 disappeared
> later (presumably during a checkpoint?).

Smells like fsm? With double the number of files, maybe something
simple like turning on dir_index (if you are on ext3) will help?


From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-07 16:07:41
Message-ID: 4A7C514D.6080101@blackducksoftware.com
Lists: pgsql-hackers

Tom Lane wrote:
>
> Actually, this is easier than I thought, because there is already
> bookkeeping being done that (in effect) tracks whether a table has
> already been truncated in the current transaction. So we can rely
> on that, and with only a very few lines of code added, ensure that
> a situation like this does only one full-scale transaction-safe
> truncation per transaction. The attached prototype patch does this
> and seems to fix the speed problem nicely. It's not tremendously
> well tested, but perhaps you'd like to test? Should work in 8.4.

I downloaded the 8.4 source, built it unmodified, created a new cluster,
and ran the test in an empty DB there. Function execution took about
230 seconds, and commit took about 6 seconds.

With the patch applied, the test only took 35 seconds, and the commit
was practically instant (30ms). I monitored the database directory,
and the test execution only created 2 files (down from 60000).

Thanks for the patch; it looks great. :)

Is there any chance that it will be backpatched to 8.4?

-- todd


From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Alex Hunsaker <badalex(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-07 16:10:47
Message-ID: 4A7C5207.4030801@blackducksoftware.com
Lists: pgsql-hackers

Alex Hunsaker wrote:
>
>> FYI, on my 8.2.13 system, the test created 30001 files which were all
>> deleted during the commit. On my 8.4.0 system, the test created 60001
>> files, of which 30000 were deleted at commit and 30001 disappeared
>> later (presumably during a checkpoint?).
>
> Smells like fsm?

Yes, that was it. 30000 of the filenames ended with "_fsm".

> With double the number of files maybe something
> simple like turning on dir_index if you are ext3 will help?

Thanks for the tip. Doing "tune2fs -O +dir_index" didn't seem to make
a difference, which is kinda expected for an existing directory. When
I get a chance, I'll try to recreate the filesystem.

-- todd


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-07 16:16:08
Message-ID: 23440.1249661768@sss.pgh.pa.us
Lists: pgsql-hackers

"Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
> Tom Lane wrote:
>> The attached prototype patch does this
>> and seems to fix the speed problem nicely. It's not tremendously
>> well tested, but perhaps you'd like to test? Should work in 8.4.

> With the patch applied, the test only took 35 seconds, and the commit
> was practically instant (30ms).

I know it's faster, what I meant by testing was does it *work*. If you
roll back a truncate, do you get the expected state? How about after a
database crash?

> Is there any chance that it will be backpatched to 8.4?

Not a lot.

regards, tom lane


From: Alex Hunsaker <badalex(at)gmail(dot)com>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-07 16:23:27
Message-ID: 34d269d40908070923w4728a21aybb5b0b34c71dd508@mail.gmail.com
Lists: pgsql-hackers

On Fri, Aug 7, 2009 at 10:10, Todd A. Cook <tcook(at)blackducksoftware(dot)com> wrote:
> Alex Hunsaker wrote:

>> With double the number of files, maybe something
>> simple like turning on dir_index (if you are on ext3) will help?
>
> Thanks for the tip.  Doing "tune2fs -O +dir_index" didn't seem to make
> a difference, which is kinda expected for an existing directory.  When
> I get a chance, I'll try to recreate the filesystem.

e2fsck -D should also do the trick.



From: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-07 17:29:56
Message-ID: 4A7C6494.7050700@blackducksoftware.com
Lists: pgsql-hackers

Tom Lane wrote:
>
> If you roll back a truncate, do you get the expected state?

I did a number of variations on the test below, with and without "on commit drop",
and similar tests where the "create table" is done before the "begin". After the
checkpoint, the number of files in the database directory always returned to the
value before the "begin" (210 in this case). Everything behaved as expected.

test=# begin;
BEGIN
test=# create temp table t1 (x integer) ;
CREATE TABLE
test=# insert into t1 select s from generate_series(1,1000) s ;
INSERT 0 1000
test=# select count(*) from t1 ;
1000
test=# savepoint s1;
SAVEPOINT
test=# truncate t1;
TRUNCATE TABLE
test=# select count(*) from t1 ;
0
test=# insert into t1 select s from generate_series(10000,11000) s ;
INSERT 0 1001
test=# select count(*) from t1 ;
1001
test=# rollback to savepoint s1 ;
ROLLBACK
test=# select count(*) from t1 ;
1000
test=# commit ;
COMMIT
test=# select count(*) from t1 ;
1000
test=# checkpoint;
CHECKPOINT
test=#

> How about after a database crash?

Repeating the same test as above, after the second insert, I did "killall -9 postgres".
Restarting generated the expected messages in the log:

2009-08-07 13:09:56 EDT LOG: database system was interrupted; last known up at 2009-08-07 13:06:01 EDT
2009-08-07 13:09:56 EDT LOG: database system was not properly shut down; automatic recovery in progress
2009-08-07 13:09:56 EDT LOG: redo starts at 0/1F8D6D0
2009-08-07 13:09:56 EDT LOG: invalid magic number 0000 in log file 0, segment 1, offset 16392192
2009-08-07 13:09:56 EDT LOG: redo done at 0/1F9F3B8
2009-08-07 13:09:56 EDT LOG: autovacuum launcher started
2009-08-07 13:09:56 EDT LOG: database system is ready to accept connections

However, the DB directory now has 214 files (up from 210); I have no idea whether this
means anything or not. Repeating the previous tests gives expected results.

-- todd


From: James Mansion <james(at)mansionfamily(dot)plus(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-09 11:09:26
Message-ID: 4A7EAE66.9060601@mansionfamily.plus.com
Lists: pgsql-hackers

Tom Lane wrote:
> the function time and the commit time a lot better. But I'm not sure
> if the use-case is popular enough to deserve such a hack.
>
>
For some OLTP workloads, it makes a lot of sense to spool tuples of
primary key plus new fields into a temp table and then do a single
update or delete operation referencing the temp table. Perhaps not so
much for code designed for Postgres, where there is some extra
flexibility with array params and the like, but for code that targets
other systems as well.

Having temp tables be as fast as possible is quite a big win in this case.
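
(For instance --- a hypothetical sketch, with invented table and column
names:)

-- spool primary keys plus new values into a temp table...
create temp table price_updates (id integer primary key, new_price numeric)
    on commit drop;
insert into price_updates values (1, 9.99), (2, 14.50);   -- ...many more rows

-- ...then apply them all in a single set-based statement
update products p
   set price = u.new_price
  from price_updates u
 where p.id = u.id;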


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: slow commits with heavy temp table usage in 8.4.0
Date: 2009-08-23 19:24:50
Message-ID: 2814.1251055490@sss.pgh.pa.us
Lists: pgsql-hackers

"Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
> Tom Lane wrote:
>>> If you roll back a truncate, do you get the expected state?

> I did a number of variations on the test below, with and without "on commit drop",
> and similar tests where the "create table" is done before the "begin". After the
> checkpoint, the number of files in the database directory always returned to the
> value before the "begin" (210 in this case). Everything behaved as expected.

Thanks for doing the testing. I've applied this patch to CVS HEAD.

regards, tom lane