Re: Performance on Bulk Insert to Partitioned Table

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Charles Gomes <charlesrg(at)outlook(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance on Bulk Insert to Partitioned Table
Date: 2012-12-20 17:39:25
Message-ID: CAOR=d=3_R5DFrN1r06M986L7patsGWMJ-u+xwkaTm6bnhH6CoA@mail.gmail.com
Lists: pgsql-performance

On Thu, Dec 20, 2012 at 10:29 AM, Charles Gomes <charlesrg(at)outlook(dot)com> wrote:
> Hello guys
>
> I’m doing 1.2 billion inserts into a table partitioned into 15.
>
> When I target the MASTER table on all the inserts and let the
> trigger decide which partition to insert into, it takes 4 hours.
>
> If I target the partitions directly during the insert, I get 4
> times better performance. It takes 1 hour.
>
> I’m trying to get more performance while still using the
> trigger to choose the table, so partitions can be changed without changing the
> application that inserts the data.
>
> What I noticed is that iostat is not showing an I/O bottleneck.

SNIP

> I also don’t see a CPU bottleneck or a context-switching
> bottleneck.

Are you sure? How are you measuring CPU usage? If you've got more
than one core you might need to look at the individual cores, in which
case you should see a single core maxed out.

Without writing your trigger in C, you're not likely to do much better
than you're doing now.
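
For what it's worth, here is a rough sketch of the kind of C trigger I
mean. Everything concrete in it is an assumption, not your schema: it
supposes the parent's first column is an int4 key that hash-routes into
15 children named master_part_0 .. master_part_14, and that each child
has a fixed two-column layout (int4, text). It pulls the values out of
the new row, does one parameterized SPI insert into the chosen child,
and returns NULL so nothing lands in the parent.

/*
 * Hypothetical partition-routing BEFORE INSERT trigger in C.
 * Assumes children master_part_0 .. master_part_14, each (int4, text).
 */
#include "postgres.h"
#include "fmgr.h"
#include "access/htup.h"          /* heap_getattr (pre-9.3 header) */
#include "catalog/pg_type.h"      /* INT4OID, TEXTOID */
#include "commands/trigger.h"
#include "executor/spi.h"
#include "utils/rel.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(route_insert);

Datum
route_insert(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    HeapTuple    newtup;
    TupleDesc    tupdesc;
    Datum        values[2];
    char         nulls[2];
    Oid          argtypes[2] = { INT4OID, TEXTOID };
    bool         isnull;
    int          part;
    char         sql[128];

    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "route_insert: not called as a trigger");

    newtup  = trigdata->tg_trigtuple;
    tupdesc = trigdata->tg_relation->rd_att;

    /* Pull both columns out of the row being inserted. */
    values[0] = heap_getattr(newtup, 1, tupdesc, &isnull);
    nulls[0]  = isnull ? 'n' : ' ';
    values[1] = heap_getattr(newtup, 2, tupdesc, &isnull);
    nulls[1]  = isnull ? 'n' : ' ';

    if (nulls[0] == 'n')
        elog(ERROR, "partition key must not be null");

    /* Hypothetical routing rule: hash the key into 15 partitions. */
    part = DatumGetInt32(values[0]) % 15;
    if (part < 0)
        part += 15;
    snprintf(sql, sizeof(sql),
             "INSERT INTO master_part_%d VALUES ($1, $2)", part);

    /* One parameterized insert into the chosen child table. */
    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");
    if (SPI_execute_with_args(sql, 2, argtypes, values, nulls,
                              false, 0) != SPI_OK_INSERT)
        elog(ERROR, "insert into child table failed");
    SPI_finish();

    /* Returning NULL suppresses the insert into the parent. */
    return PointerGetDatum(NULL);
}

You'd build that into a shared library, declare it with CREATE FUNCTION
route_insert() RETURNS trigger AS '...' LANGUAGE C, and attach it as a
BEFORE INSERT FOR EACH ROW trigger on the parent, exactly where the
plpgsql trigger sits now. Preparing the insert plan once per partition
and reusing it would cut the overhead further, but even this naive
version avoids running the plpgsql interpreter for every row.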
