Re: Rép. : How to optimize the insertion of

From: "M(dot) Bastin" <marcbastin(at)mindspring(dot)com>
To: pgsql-novice(at)postgresql(dot)org
Subject: Re: Rép. : How to optimize the insertion of
Date: 2003-09-10 17:37:14
Message-ID: a06002009bb8511a78b81@[192.168.0.14]
Lists: pgsql-novice

Hi Juan,

Why don't you do a COPY FROM STDIN? You could import these records
in a few minutes' time over the LAN or even the internet. I do 5
million records in less than 3 minutes this way over localhost on a
PowerBook G4 550 MHz.
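
For example, from a Java client a COPY load could look roughly like the
sketch below, assuming a pgJDBC driver that exposes the CopyManager API
(the table and file names are just placeholders); from psql you can get
the same effect by piping the data file into a COPY ... FROM STDIN.

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoad {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        try {
            // Stream a tab-separated file straight into the table with COPY.
            CopyManager copy = ((PGConnection) conn).getCopyAPI();
            Reader data = new FileReader("records.tsv");
            long rows = copy.copyIn("COPY mytable FROM STDIN", data);
            System.out.println("Loaded " + rows + " rows");
            data.close();
        } finally {
            conn.close();
        }
    }
}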

It's also recommended to drop your indexes while you do such large
inserts/imports. (Create them again afterwards of course.)
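
Something along these lines, with a placeholder table and index name,
wrapped around whatever load method you end up using:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ReindexAroundLoad {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        Statement st = conn.createStatement();
        try {
            // Drop the index so the bulk load doesn't maintain it row by row.
            st.execute("DROP INDEX mytable_col_idx");

            // ... run the COPY or the batched INSERTs here ...

            // Rebuild the index once, after all the data is in.
            st.execute("CREATE INDEX mytable_col_idx ON mytable (col)");
        } finally {
            st.close();
            conn.close();
        }
    }
}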

Marc

> >>> Juan Francisco Diaz <j-diaz(at)publicar(dot)com> 09/09/2003 23:05:54 >>>
>Hi, I have tried by all means to optimize the insertion of data in my
>db but it has been impossible.
>Right now, inserting around 300 thousand records takes something like
>50 to 65 minutes (too long).
>I'm using a Mac PowerPC G4 533 MHz with 256 MB RAM.
>I would really appreciate it if the insertion process took 30 or 35
>minutes TOPS. So far that has been impossible.
>My db right now has no FKs and no indexes, and the insertion is being
>done in batches (19 thousand records each).
>Is it possible to achieve that level of performance with my current
>machine?
>Any help would be greatly appreciated. By the way, the same insertion
>takes 25 minutes in MS SQL Server 2000 on a P3 1.4 GHz with 1 GB RAM.
>Thanks
>
>JuanF
>
>


From: Juan Francisco Diaz <j-diaz(at)publicar(dot)com>
To: "M(dot) Bastin" <marcbastin(at)mindspring(dot)com>, <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Rép. : How to optimize the
Date: 2003-09-10 19:13:44
Message-ID: BB84E218.549%j-diaz@publicar.com
Lists: pgsql-novice

Thanks everyone for your tips; I forgot to mention that the load is being
done via JDBC, so if you have any other tips please tell me!
Thanks a lot again. I managed to reduce the load time to 37 minutes, really
close to the 30 minutes I need.
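
(For context, a rough sketch of this kind of batched JDBC insert, with
placeholder table, columns and batch size, and autocommit turned off so
each batch is committed as one transaction:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    static final int BATCH_SIZE = 19000;

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        conn.setAutoCommit(false);   // commit per batch, not per row
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO mytable (id, payload) VALUES (?, ?)");
        try {
            for (int i = 1; i <= 300000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % BATCH_SIZE == 0) {
                    ps.executeBatch();   // send the accumulated rows
                    conn.commit();
                }
            }
            ps.executeBatch();           // flush any remaining rows
            conn.commit();
        } finally {
            ps.close();
            conn.close();
        }
    }
}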

JuanF

On 9/10/03 12:37 PM, "M. Bastin" <marcbastin(at)mindspring(dot)com> wrote:

> Hi Juan,
>
> Why don't you do a COPY FROM STDIN? You could import these records
> in a few minutes' time over the LAN or even the internet. I do 5
> million records in less than 3 minutes this way over localhost on a
> PowerBook G4 550 MHz.
>
> It's also recommended to drop your indexes while you do such large
> inserts/imports. (Create them again afterwards of course.)
>
> Marc
>
>>>>> Juan Francisco Diaz <j-diaz(at)publicar(dot)com> 09/09/2003 23:05:54 >>>
>> Hi, I have tried by all means to optimize the insertion of data in my
>> db but it has been impossible.
>> Right now, inserting around 300 thousand records takes something like
>> 50 to 65 minutes (too long).
>> I'm using a Mac PowerPC G4 533 MHz with 256 MB RAM.
>> I would really appreciate it if the insertion process took 30 or 35
>> minutes TOPS. So far that has been impossible.
>> My db right now has no FKs and no indexes, and the insertion is being
>> done in batches (19 thousand records each).
>> Is it possible to achieve that level of performance with my current
>> machine?
>> Any help would be greatly appreciated. By the way, the same insertion
>> takes 25 minutes in MS SQL Server 2000 on a P3 1.4 GHz with 1 GB RAM.
>> Thanks
>>
>> JuanF
>>
>>