Memory exhausted (leak?)

From: Aaron Birkland <birkie(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Memory exhausted (leak?)
Date: 2004-09-09 19:32:24
Message-ID: 19ab0ccd040909123277239ef2@mail.gmail.com
Lists: pgsql-general

I have a long but straightforward query (on some very large tables)
that always ends in 'Memory exhausted in AllocSetAlloc(108)'. Even
stranger are some messages that appear in the logfile, such as the
following (edited for length, repetitions, etc):

TopMemoryContext: 32792 total in 4 blocks; 9712 free (2 chunks); 23080 used
TopTransactionContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
TransactionCommandContext: 1040384 total in 7 blocks; 246544 free (11 chunks); 793840 used
HashTableContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
HashBatchContext: 616608 total in 4 blocks; 27784 free (1 chunks); 588824 used
HashTableContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
HashBatchContext: 1951696808 total in 244 blocks; 26536 free (473 chunks); 1951670272 used
HashTableContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
HashBatchContext: 287840 total in 2 blocks; 8176 free (0 chunks); 279664 used
PlanExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
MdSmgr: 24576 total in 2 blocks; 12696 free (0 chunks); 11880 used
DynaHash: 8192 total in 1 blocks; 4376 free (0 chunks); 3816 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 6112 free (0 chunks); 2080 used
DynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: Memory exhausted in AllocSetAlloc(108)
ERROR: Query was cancelled.

When running the query, I observe the following:
- memory usage of postgres stays stable for a few minutes, then reaches a
point where it quickly grows to about 2GB, at which point the error occurs
- no space in pgsql_temp is used at any time, even after decreasing
sort_mem to 4MB
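(For anyone reproducing this, the knob in question can be changed per session; in 7.3 the value is in kilobytes. A minimal sketch:)

```sql
-- sort_mem is measured in kilobytes in 7.3 (renamed work_mem in 8.0)
SHOW sort_mem;
SET sort_mem = 4096;  -- 4 MB, for this session only
```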

I saw a message previously in the lists where a similar problem
occurred; it appeared to stem from a "variable-length datum
whose length word is munged to look like a
ridiculously large value". I don't think that's happening here, given
the steady growth to 2GB of memory usage and the fact
that I don't select or join on any varlena types.

Some facts about the nature of the query:
- All joins are done on Oid types, and only one column (an Oid type) is SELECTed
- the 'r1' table is huge: about 2.5x10^9 rows, and over 500 GB on disk
- I expect it to return no more than a few hundred results
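(The row counts above reflect the planner's view of the tables; a quick way to check what the planner thinks, assuming standard catalogs:)

```sql
-- reltuples and relpages are the statistics the planner uses for r1
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'r1';
```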

So... is there anything inherent in this execution plan that would
require high memory usage outside of sort_mem? The plan is below, and
although it's big, it doesn't look too exotic or brain-damaged to me
(yes, those two index scans on 'r1' in the inner nested loops are
intentional).

I tried executing this query with hash or merge joins disabled, and
without the 'unique' step, and the same error occurred (note: the plan
had a materialize step in that case).
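(For the record, the join methods were disabled per session with the standard planner switches:)

```sql
SET enable_hashjoin TO off;
SET enable_mergejoin TO off;
-- re-run the query, then restore the defaults:
RESET enable_hashjoin;
RESET enable_mergejoin;
```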

The PostgreSQL version is 7.3.6.

-Aaron

EXPLAIN output below
--------------------------
Unique  (cost=278362.15..278362.26 rows=12 width=64)
  ->  Sort  (cost=278362.15..278362.20 rows=117 width=64)
        Sort Key: obj1.docid
        ->  Hash Join  (cost=278039.06..278361.34 rows=117 width=64)
              Hash Cond: ("outer".docid = "inner".refers)
              ->  Seq Scan on obj1  (cost=0.00..299.20 rows=22440 width=4)
              ->  Hash  (cost=278039.00..278039.00 rows=117 width=60)
                    ->  Hash Join  (cost=277702.73..278039.00 rows=117 width=60)
                          Hash Cond: ("outer".referrer = "inner".refers)
                          ->  Seq Scan on r2 r_1_2  (cost=0.00..308.81 rows=27161 width=8)
                          ->  Hash  (cost=277702.53..277702.53 rows=395 width=52)
                                ->  Nested Loop  (cost=343.71..277702.53 rows=395 width=52)
                                      Join Filter: ("outer".refers = "inner".referrer)
                                      ->  Nested Loop  (cost=343.71..258374.97 rows=1 width=44)
                                            ->  Hash Join  (cost=343.71..258371.80 rows=1 width=40)
                                                  Hash Cond: ("outer".refers = "inner".refers)
                                                  ->  Nested Loop  (cost=0.00..258011.22 rows=16859 width=28)
                                                        ->  Nested Loop  (cost=0.00..238683.66 rows=1 width=20)
                                                              ->  Nested Loop  (cost=0.00..219356.09 rows=1 width=12)
                                                                    ->  Index Scan using obj4_word_map_wid_idx on obj4_word_map  (cost=0.00..213099.71 rows=1600 width=4)
                                                                          Index Cond: (wid = 19458::oid)
                                                                          Filter: ("type" = 4)
                                                                    ->  Index Scan using obj4_doc_idx on obj4  (cost=0.00..3.90 rows=1 width=8)
                                                                          Index Cond: ("outer".doc = obj4.descriptor)
                                                                          Filter: (text_field ~* 'ABC'::text)
                                                              ->  Index Scan using r1_referrer_idx on r1  (cost=0.00..19234.83 rows=16860 width=8)
                                                                    Index Cond: (r1.referrer = "outer".objobj3t)
                                                        ->  Index Scan using r1_referrer_idx on r1 r1_2_3  (cost=0.00..19234.83 rows=16860 width=8)
                                                              Index Cond: ("outer".refers = r1_2_3.referrer)
                                                  ->  Hash  (cost=343.70..343.70 rows=23 width=12)
                                                        ->  Nested Loop  (cost=0.00..343.70 rows=23 width=12)
                                                              ->  Index Scan using obj3_field1_idx on obj3  (cost=0.00..3.33 rows=1 width=4)
                                                                    Index Cond: (field1 = 'XYZ'::text)
                                                              ->  Index Scan using r3_referrer_idx on r3 r_2_3  (cost=0.00..339.86 rows=94 width=8)
                                                                    Index Cond: ("outer".docid = r_2_3.referrer)
                                            ->  Index Scan using obj2_docid_idx on obj2  (cost=0.00..3.16 rows=1 width=4)
                                                  Index Cond: (obj2.docid = "outer".referrer)
                                      ->  Index Scan using r1_referrer_idx on r1 r1_1_2  (cost=0.00..19234.83 rows=16860 width=8)
                                            Index Cond: ("outer".referrer = r1_1_2.referrer)
