From: tfinneid(at)student(dot)matnat(dot)uio(dot)no
To: "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>
Cc: tfinneid(at)student(dot)matnat(dot)uio(dot)no, "Gregory Stark" <stark(at)enterprisedb(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: select count() out of memory
Date: 2007-10-25 12:28:04
Message-ID: 42923.134.32.140.234.1193315284.squirrel@webmail.uio.no
Lists: pgsql-general
> tfinneid(at)student(dot)matnat(dot)uio(dot)no wrote:
>
>> > are a dump of Postgres's current memory allocations and could be
>> > useful in showing if there's a memory leak causing this.
>>
>> The file is 20M; these are the last lines (the first line continues
>> until ff_26000):
>>
>> idx_attributes_g1_seq_1_ff_4_value7: 1024 total in 1 blocks; 392 free (0
>> chunks); 632 used
>
> You have 26000 partitions???
At the moment the db has 55000 partitions, and that's only a fifth of the
complete volume the system will have in production. The reason I chose
this solution is that a partition will be loaded with new data every 3-30
seconds, and all of that data will be read by up to 15 readers every time
new data is available. The data will be approximately 2-4 TB in total in
production, so it would be too slow if I put it all in a single table
with permanent indexes.
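To illustrate, each partition is created roughly like this (table and
column names are simplified for the example, based on the index name in
the dump above, not the exact schema):

  CREATE TABLE attributes (
      g1_seq  integer NOT NULL,  -- load/generation number
      ff      integer NOT NULL,  -- second partitioning key
      value7  integer            -- one of the indexed value columns
  );

  -- one child table per load; the CHECK constraint lets the planner
  -- skip partitions when constraint_exclusion is on
  CREATE TABLE attributes_g1_seq_1_ff_4 (
      CHECK (g1_seq = 1 AND ff = 4)
  ) INHERITS (attributes);

  CREATE INDEX idx_attributes_g1_seq_1_ff_4_value7
      ON attributes_g1_seq_1_ff_4 (value7);

  SET constraint_exclusion = on;  -- prune partitions at plan time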
I did a test previously where I created 1 million partitions (without
data) and checked the limits of pg, so I think it should be ok.
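That test just created empty children in a loop, along these lines (the
function name is only for the example):

  CREATE OR REPLACE FUNCTION make_test_partitions(n integer)
  RETURNS void AS $$
  BEGIN
      -- FOR implicitly declares the integer loop variable i
      FOR i IN 1..n LOOP
          EXECUTE 'CREATE TABLE attributes_test_' || i
               || ' () INHERITS (attributes)';
      END LOOP;
  END;
  $$ LANGUAGE plpgsql;

  SELECT make_test_partitions(1000000);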
thomas