From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: possible optimization: push down aggregates
Date: 2014-08-27 21:51:13
Message-ID: CAGTBQpZ0Op_r6e_1Pci9fMf-ksgMAeL6wu_q_ob2vKGNGj7aXg@mail.gmail.com
Lists: pgsql-hackers

On Wed, Aug 27, 2014 at 6:46 PM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>
> Yeah: I was overthinking it. My mind was on parallel processing of
> the aggregate (which is not what Pavel was proposing) because that
> just happens to be what I'm working on currently -- using dblink to
> decompose various aggregates and distribute the calculation across
> servers. "Wouldn't it be nice to have the server do that itself", I
> impulsively thought.
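
The decomposition Merlin describes works because many aggregates split into partial states that can be merged later. A minimal sketch in Python (an illustration only; plain lists stand in for the per-server result sets that dblink would fetch):

```python
# Sketch: decomposing avg() into combinable partial states, the way a
# coordinator would when distributing an aggregate across servers.
# The "shards" below stand in for per-server row sets.

def partial_avg(rows):
    # Each server computes and returns a partial state: (sum, count).
    return (sum(rows), len(rows))

def combine_avg(partials):
    # The coordinator merges the partial states into the final average.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

shards = [[1, 2, 3], [4, 5], [6]]
partials = [partial_avg(s) for s in shards]
result = combine_avg(partials)  # 3.5, same as avg() over all rows
```

The key property is that only the small `(sum, count)` pairs cross the wire, not the underlying rows, which is exactly what makes the distributed form cheaper.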

But you'd have part of it too. Because then you'd have semantically
independent parallel nodes in the plan that do some meaningful data
wrangling and spit out little output, whereas the previous plan did not
do much with the data and spat out loads of rows as a result. This is a
big preliminary step toward parallel execution, really.
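
To make the point concrete, here is a small Python sketch (an illustration, not PostgreSQL planner code) of why a node that aggregates early emits far fewer rows to the nodes above it:

```python
# Sketch: pushing an aggregate below a join.
# Naive plan: join every fact row to the dimension, then GROUP BY.
# Pushed-down plan: aggregate the fact table first, then join the
# (much smaller) aggregated result to the dimension.

fact = [  # (dim_id, amount)
    (1, 10), (1, 20), (2, 5), (2, 5), (2, 15),
]
dim = {1: "alpha", 2: "beta"}  # dim_id -> name

# Aggregate first: one row per dim_id instead of one per fact row.
partial = {}
for dim_id, amount in fact:
    partial[dim_id] = partial.get(dim_id, 0) + amount

# Join the small aggregate to the dimension: 2 rows cross this
# boundary instead of 5.
result = {dim[d]: total for d, total in partial.items()}
# result == {"alpha": 30, "beta": 25}
```

The aggregate node here is the "semantically independent" unit: it consumes many rows, does the meaningful work, and hands only a handful of rows upward, which is what makes it a natural candidate to run in parallel.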
