From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Functional dependency in GROUP BY through JOINs
Date: 2012-12-06 17:57:33
Message-ID: CA+U5nMKrAD0V6+pZNQZmR_O8UgEUF5ZdCssc1ZH2K3v4Df62wg@mail.gmail.com
Lists: pgsql-hackers
On 6 December 2012 17:21, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
>> On 5 December 2012 23:37, David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>>> Though this plan might not be quite as optimal as it could be as it performs
>>> the grouping after the join.
>
>> PostgreSQL always calculates aggregation as the last step.
>
>> It's a well known optimisation to push-down GROUP BY clauses to the
>> lowest level, but we don't do that, yet.
>
>> You're right that it can make a massive difference to many queries.
>
> In the case being presented here, it's not apparent to me that there's
> any advantage to be had at all. You still need to aggregate over the
> rows joining to each uniquely-keyed row. So how exactly are you going
> to "push down the GROUP BY", and where does the savings come from?
David presents SQL that shows how that is possible.
In terms of operators, after push-down we aggregate 1 million rows and
then join 450 rows, which seems cheaper than joining 1 million rows and
then aggregating 1 million. So we pass nearly 1 million fewer rows into
the join.
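
To make the shape of that rewrite concrete, here is a minimal sketch of
the transformation being discussed. The table and column names (fact,
dim, dim_id, val) are hypothetical, not David's actual schema; dim_id is
assumed to be the unique key of dim:

  -- Join first, then aggregate: the join must process all 1 million
  -- fact rows before grouping.
  SELECT d.dim_id, SUM(f.val)
  FROM fact f
  JOIN dim d ON f.dim_id = d.dim_id
  GROUP BY d.dim_id;

  -- GROUP BY pushed below the join: aggregate the large table first,
  -- then join only the (much smaller) grouped result to the
  -- uniquely-keyed dimension table.
  SELECT d.dim_id, f.sum_val
  FROM (SELECT dim_id, SUM(val) AS sum_val
        FROM fact
        GROUP BY dim_id) f
  JOIN dim d ON f.dim_id = d.dim_id;

Both forms give the same result here because each fact row joins to at
most one dim row, but the second joins ~450 grouped rows instead of
1 million.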
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services