Re: Partitioning into thousands of tables?

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Joshua Tolley <eggyknap(at)gmail(dot)com>
Cc: Data Growth Pty Ltd <datagrowth(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Partitioning into thousands of tables?
Date: 2010-08-18 19:51:06
Message-ID: AANLkTinn-WJgTVHRdqiBG-Y8Q2JM+v2PcnqEpq10L92Y@mail.gmail.com
Lists: pgsql-general

On Fri, Aug 6, 2010 at 8:08 AM, Joshua Tolley <eggyknap(at)gmail(dot)com> wrote:
> On Fri, Aug 06, 2010 at 03:10:30PM +1000, Data Growth Pty Ltd wrote:
>>    Is there any significant performance problem associated with partitioning
>>    a table into 2500 sub-tables?  I realise a table scan would be horrendous,
>>    but what if all accesses specified the partitioning criteria "sid".  Such
>>    a scheme would be the simplest to maintain (I think) with the best
>>    localisation of writes.
>
> I seem to remember some discussion on pgsql-hackers recently about the number
> of partitions and its effect on performance, especially planning time.
> Unfortunately I can't find it right now, but in general the conclusion was
> it's bad to have lots of partitions, where "lots" is probably 100 or more.

When it comes to planning time, it's a trade-off. If you have a
reporting database that routinely runs queries that take 30 seconds to
30 minutes, an extra 10 seconds of planning is no big deal. If you need
your queries to run in sub-second time, then an extra 10 seconds is a
very big deal.
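(For a rough feel for that planning cost on your own setup: a bare
EXPLAIN plans the query without executing it, so timing it in psql
against the partitioned parent gives an approximation. The table and
column names below are just placeholders, not anything from this
thread.)

    -- Rough way to gauge planning overhead on a partitioned table.
    -- "stats" and "sid" are placeholder names for illustration only.
    SET constraint_exclusion = partition;  -- the default since 8.4

    \timing

    -- EXPLAIN without ANALYZE plans the query but does not run it,
    -- so the elapsed time reported by \timing is mostly planning.
    EXPLAIN SELECT count(*) FROM stats WHERE sid = 42;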

We partition our stats data at work by day, and keep it around for
years. So we have 600 to 1000 partitions there. But any query we run
against it takes minutes, so a little extra planning time is no big
deal there.
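(For anyone following along, a daily scheme like that is usually built
with the inheritance-plus-CHECK-constraint pattern; the sketch below
uses made-up table and column names, not our actual schema.)

    -- Inheritance-style daily partitioning, typical for 8.4/9.0.
    -- "stats", "sid", "logged_at" are illustrative names only.
    CREATE TABLE stats (
        sid        integer     NOT NULL,
        logged_at  timestamptz NOT NULL,
        payload    text
    );

    -- One child table per day, with a CHECK constraint so the planner
    -- can exclude it when the WHERE clause falls outside its range.
    CREATE TABLE stats_2010_08_18 (
        CHECK (logged_at >= '2010-08-18' AND logged_at < '2010-08-19')
    ) INHERITS (stats);

    -- Inserts are routed to the right child by a trigger or by the
    -- application; with constraint_exclusion enabled, queries against
    -- "stats" that filter on logged_at only touch matching children.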
