Re: copy table from file: with row replacement?

From: "Florian G(dot) Pflug" <fgp(at)phlo(dot)org>
To: Michael Enke <michael(dot)enke(at)wincor-nixdorf(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: copy table from file: with row replacement?
Date: 2007-01-13 02:10:24
Message-ID: 45A83F90.4090008@phlo.org
Lists: pgsql-hackers

Michael Enke wrote:
> This works for small amount of data. But for large amount of data
> the join takes a lot of time.

It is certainly faster than any algorithm that checks for duplicates
on each line of copy input could ever be. Especially for joins, doing
them in one large batch allows postgres to use better algorithms than looping
over one table and searching for matching rows in the other - which is
exactly what copy would need to do if it had a "replace on duplicate"
flag.
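
As an illustration of that batch approach, here is a rough sketch, assuming
the incoming file is first COPYed into a staging table (all table, column and
file names below are hypothetical):

  -- load the incoming file into a staging table shaped like the target
  CREATE TEMP TABLE staging (LIKE target);
  COPY staging FROM '/path/to/file';

  BEGIN;
  -- remove the rows that are about to be replaced, in one batch join ...
  DELETE FROM target USING staging WHERE target.id = staging.id;
  -- ... then insert everything from the staging table
  INSERT INTO target SELECT * FROM staging;
  COMMIT;

That way the duplicate handling happens in one set-oriented join instead of
one lookup per input row.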

I think the fastest way to join two large tables would be a merge join.
Try doing an "explain select" (or "explain delete") to see what algorithm
postgres chooses. Check that you actually declared your primary key
in both tables - it might help postgres to know that the column you're joining
on is unique. Also check your work_mem setting - if it is set too low,
it often forces postgres to use inferior plans because it tries to save memory.
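
For example (the setting and names below are just placeholders):

  SET work_mem = '64MB';  -- per-operation memory for sorts and hashes
  EXPLAIN DELETE FROM target USING staging WHERE target.id = staging.id;

You'd want to see a Merge Join or Hash Join in that plan, not a Nested Loop
that rescans one of the big tables for every row.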

greetings, Florian Pflug
