Re: A better way than tweaking NTUP_PER_BUCKET

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date: 2013-06-23 06:52:34
Message-ID: CA+U5nM+iGtM6gQ_mC3nXSGgEnj-X5W2ixPM8F7RTKGcCDbNOdw@mail.gmail.com
Lists: pgsql-hackers

On 23 June 2013 03:16, Stephen Frost <sfrost(at)snowman(dot)net> wrote:

> Still doesn't really address the issue of dups though.

Checking for duplicates in all cases would be wasteful, since often we
are joining to the PK of a smaller table.

If duplicates are possible at all for a join, then it would make sense
to build the hash table more carefully to remove dupes. I think we
should treat that as a separate issue.
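
To make the trade-off concrete, here is a minimal illustrative sketch (plain
C, not PostgreSQL source) of a build step that skips duplicate keys as it
inserts. The equality probe it performs on every insert is exactly the cost
that is wasted when the build side is already unique (e.g. a PK), and the
deduplication itself is only valid when one match per key suffices, such as
a semi-join:

/*
 * Hypothetical sketch: chained hash table whose build step drops keys
 * it has already seen, keeping bucket chains short when the build side
 * contains many duplicates.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define NBUCKETS 1024

typedef struct HashEntry
{
    int               key;
    struct HashEntry *next;    /* bucket chain */
} HashEntry;

static HashEntry *buckets[NBUCKETS];

static unsigned int
hash_key(int key)
{
    return ((unsigned int) key * 2654435761u) % NBUCKETS;
}

/* Insert key unless an equal key is already in its bucket. */
static bool
insert_unique(int key)
{
    unsigned int h = hash_key(key);
    HashEntry   *e;

    for (e = buckets[h]; e != NULL; e = e->next)
        if (e->key == key)
            return false;      /* duplicate: skip it */

    e = malloc(sizeof(HashEntry));
    e->key = key;
    e->next = buckets[h];
    buckets[h] = e;
    return true;
}

int
main(void)
{
    int     build_side[] = {1, 7, 7, 7, 42, 42, 99};
    size_t  nrows = sizeof(build_side) / sizeof(build_side[0]);
    int     kept = 0;
    size_t  i;

    for (i = 0; i < nrows; i++)
        if (insert_unique(build_side[i]))
            kept++;

    printf("kept %d of %zu build rows\n", kept, nrows);
    return 0;
}

If the planner knows the build side is unique (a PK or unique index), the
per-insert duplicate probe buys nothing, which is why doing it
unconditionally would be a loss.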

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
