From: Frans Hals <fhals7(at)googlemail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Paul Ramsey <pramsey(at)cleverelephant(dot)ca>, pgsql-general(at)postgresql(dot)org
Subject: Re: Large index operation crashes postgres
Date: 2010-03-26 23:43:04
Message-ID: 39af1ed21003261643k676b96c3s11bfeb9ecbc875d4@mail.gmail.com
Lists: pgsql-general
The index mentioned below was created in a few minutes without problems.
I dropped it and created it again. It uses around 36% of memory while
creating; after completion, postmaster stays at 26%.
> I'm not sure what you mean about generating a self-contained
> test that exhibits similar bloat.
> I have started an index creation using my data without calling postgis
> functions. Just to make it busy:
> <CREATE INDEX idx_placex_sector ON placex USING btree
> (substring(geometry,1,100), rank_address, osm_type, osm_id);>
> This is now running against the 50,000,000 rows in placex. I will
> report back on the memory usage it takes.
>
>> Can you generate a self-contained test case that exhibits similar bloat?
>> I would think it's probably not very dependent on the specific data in
>> the column, so a simple script that constructs a lot of random data
>> similar to yours might be enough, if you would rather not show us your
>> real data.
>>
>> regards, tom lane
>>
>
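For reference, a minimal sketch of the kind of self-contained test case
suggested above (the table name, row count, and random-bytea "geometry"
column are assumptions for illustration, not the real placex schema, and
real PostGIS geometries are considerably larger than 32 bytes):

```sql
-- Hypothetical test-data generator (all names and sizes are assumptions):
-- build a table of random bytea values standing in for geometries, then
-- index a prefix of them, mirroring the substring-based index above.
CREATE TABLE placex_test AS
SELECT g                                                   AS osm_id,
       'N'::char                                           AS osm_type,
       (random() * 30)::int                                AS rank_address,
       decode(md5(g::text) || md5((g + 1)::text), 'hex')   AS geometry
FROM generate_series(1, 1000000) AS g;

CREATE INDEX idx_placex_test_sector ON placex_test USING btree
    (substring(geometry, 1, 100), rank_address, osm_type, osm_id);
```

Watching backend memory while that index builds should show whether the
bloat depends on the real data or reproduces with random bytes as well.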