From: Joël Winteregg <joel(dot)winteregg(at)gmail(dot)com>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Insert performance
Date: 2007-03-06 07:53:27
Message-ID: 1173167607.4917.8.camel@hatman
Lists: pgsql-performance
Hi Richard,
> >
> > Here is my problem. With some heavy inserts into a simple DB (one
> > table, no indexes) I can't get better performance than 8000 inserts/sec. I'm
> > testing it using a simple C program which uses libpq and which uses:
> > - insert prepared statements (to avoid too much statement parsing on the
> > server)
> > - transactions of 100000 inserts
>
> Are each of the INSERTs in their own transaction?
>
No; as said above, each transaction contains 100000 inserts...
> If so, you'll be limited by the speed of the disk the WAL is running on.
>
> That means you have two main options:
> 1. Have multiple connections inserting simultaneously.
Yes, you're right. That's what I have been testing, and it gives the
best performance! I saw that the postgresql frontend (my client program)
was using a lot of CPU, but only one of the two cores (I'm using a
Pentium D, dual core). By contrast, the postmaster process used few
resources. Using several clients, both CPUs are used and I saw an
increase in performance (about 18000 inserts/sec).
So I think my bottleneck is the CPU speed rather than the disk speed,
what do you think?
I use 2 disks (RAID 0) for the data and a single disk for pg_xlog.
> 2. Batch your inserts together, from 10 to 10,000 per transaction.
>
Yes, that's what I'm doing.
Thanks a lot for the advice!
regards,
Joël
Next Message: Richard Huxton | 2007-03-06 08:08:29 | Re: Insert performance
Previous Message: Richard Huxton | 2007-03-06 07:37:39 | Re: Hibernate left join