From: Sokolov Yura <falcon(at)intercable(dot)ru>
To: pgsql-hackers(at)postgresql(dot)org
Subject: PseudoPartitioning and agregates
Date: 2005-04-29 11:54:27
Message-ID: 1611209878.20050429155427@intercable.ru
Lists: pgsql-hackers

Hello, pgsql-hackers.

I have an idea ( :-) ) about
SELECT field1, aggregate(field2) FROM view GROUP BY field1;
(and its variant SELECT aggregate(field2) FROM view)
where view is a SELECT ... UNION ALL ... :

As I understood from the thread
http://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php
the problem is that the Append is always performed before the aggregate.

I think Postgres could perform the aggregate on each table in the union first,
and then merge the results. For this, each aggregate would need to define a
merge function that accepts two values of type STYPE.

For example:

CREATE AGGREGATE max(
    BASETYPE  = float4,
    SFUNC     = float4larger,
    STYPE     = float4,
    MERGEFUNC = float4larger,    -- proposed new parameter
    FINALFUNC = "-"              -- i.e. no final function
);
CREATE AGGREGATE sum(
    BASETYPE  = float4,
    SFUNC     = float4pl,
    STYPE     = float4,
    MERGEFUNC = float4pl,
    FINALFUNC = "-"
);
CREATE AGGREGATE count(
    BASETYPE  = "any",
    SFUNC     = int8inc,
    STYPE     = int8,
    FINALFUNC = "-",
    MERGEFUNC = int8pl,          -- special case: the merge function differs from SFUNC
    INITCOND  = '0'
);
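
The most interesting case is an aggregate like avg, where the merge function cannot
simply be the same as SFUNC. A sketch of what it could look like (float8_accum_merge
does not exist; it would have to add the two transition arrays {count, sum, sumX2}
element by element):

CREATE AGGREGATE avg(
    BASETYPE  = float4,
    SFUNC     = float4_accum,        -- existing transition function
    STYPE     = float8[],            -- state is {count, sum, sum of squares}
    MERGEFUNC = float8_accum_merge,  -- hypothetical merge of two states
    FINALFUNC = float8_avg,
    INITCOND  = '{0,0,0}'
);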

CREATE TABLE t1
(
    id     INT PRIMARY KEY,
    grp    INT,
    amount FLOAT4
);
CREATE INDEX ix_t1_grp ON t1 (grp);

CREATE TABLE t2
(
    id     INT PRIMARY KEY,
    grp    INT,
    amount FLOAT4
);
CREATE INDEX ix_t2_grp ON t2 (grp);

insert into t1 select tt.i,tt.i/100,tt.i*sin(tt.i) from generate_series(1,262144) tt(i);
insert into t2 select tt.i,tt.i/100,tt.i*sin(tt.i) from generate_series(262145,524288) tt(i);

VACUUM FULL ANALYZE t1;
VACUUM FULL ANALYZE t2;

CREATE VIEW union_table AS
SELECT id,grp,amount FROM t1
UNION ALL
SELECT id,grp, amount FROM t2;

So t1 and t2 now each contain 262144 rows (524288 in total), and
max(t1.grp) = min(t2.grp) = 2621.
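
For instance, this can be checked with:

SELECT max(grp) FROM t1;   -- 2621
SELECT min(grp) FROM t2;   -- 2621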

Now, to perform the query

SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM union_table GROUP BY grp;

Postgres selects the rows from t1 and t2, Appends them together, and only then performs the HashAggregate:

 HashAggregate  (cost=23830.52..23832.02 rows=200 width=8) (actual time=22547.272..22586.130 rows=5243 loops=1)
   ->  Subquery Scan t_union  (cost=0.00..18587.64 rows=524288 width=8) (actual time=0.204..17863.444 rows=524288 loops=1)
         ->  Append  (cost=0.00..13344.76 rows=524288 width=12) (actual time=0.193..12990.177 rows=524288 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6684.88 rows=262144 width=12) (actual time=0.186..4488.981 rows=262144 loops=1)
                     ->  Seq Scan on t  (cost=0.00..4063.44 rows=262144 width=12) (actual time=0.163..1915.213 rows=262144 loops=1)
               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6659.88 rows=262144 width=12) (actual time=0.225..4558.788 rows=262144 loops=1)
                     ->  Seq Scan on t1  (cost=0.00..4038.44 rows=262144 width=12) (actual time=0.208..1798.410 rows=262144 loops=1)
 Total runtime: 22634.454 ms
(the actual runtime without EXPLAIN ANALYZE is about 2375 ms; Postgres 8.0.1, Slackware 10.0)

But it would be quicker to aggregate the first table, then the second, and
merge the results. For example, here is a query that does this explicitly:

SELECT COALESCE(t1.grp, t2.grp) AS grp,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                int8pl(t1.c, t2.c)
            WHEN t1.grp IS NOT NULL THEN
                t1.c
            ELSE
                t2.c
       END AS c,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                float4pl(t1.s, t2.s)
            WHEN t1.grp IS NOT NULL THEN
                t1.s
            ELSE
                t2.s
       END AS s,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                float4larger(t1.m, t2.m)
            WHEN t1.grp IS NOT NULL THEN
                t1.m
            ELSE
                t2.m
       END AS m
FROM
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t1 GROUP BY grp) AS t1
    FULL JOIN
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t2 GROUP BY grp) AS t2
    ON t1.grp = t2.grp;

Here is its EXPLAIN ANALYZE:

 Merge Full Join  (cost=13737.48..14535.48 rows=34885 width=40) (actual time=7908.438..7989.105 rows=5243 loops=1)
   Merge Cond: ("outer".grp = "inner".grp)
   ->  Sort  (cost=6854.32..6860.87 rows=2618 width=20) (actual time=4070.833..4083.687 rows=2622 loops=1)
         Sort Key: t2.grp
         ->  Subquery Scan t2  (cost=6659.88..6705.70 rows=2618 width=20) (actual time=4005.290..4057.131 rows=2622 loops=1)
               ->  HashAggregate  (cost=6659.88..6679.52 rows=2618 width=8) (actual time=4005.273..4025.564 rows=2622 loops=1)
                     ->  Seq Scan on t1  (cost=0.00..4038.44 rows=262144 width=8) (actual time=0.094..1712.362 rows=262144 loops=1)
   ->  Sort  (cost=6883.15..6889.82 rows=2665 width=20) (actual time=3837.433..3845.754 rows=2622 loops=1)
         Sort Key: t1.grp
         ->  Subquery Scan t1  (cost=6684.88..6731.52 rows=2665 width=20) (actual time=3771.661..3822.520 rows=2622 loops=1)
               ->  HashAggregate  (cost=6684.88..6704.87 rows=2665 width=8) (actual time=3771.564..3793.023 rows=2622 loops=1)
                     ->  Seq Scan on t  (cost=0.00..4063.44 rows=262144 width=8) (actual time=0.076..1594.755 rows=262144 loops=1)
 Total runtime: 8014.739 ms
(actual runtime without EXPLAIN ANALYZE: 1760 ms on the first run, 1468 ms on the second; Postgres 8.0.1, Slackware 10.0)
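
Incidentally, the merge step above can probably be written more compactly. The following
should be equivalent (not measured; it relies on zero being a safe identity for the count
and sum columns, and on float4larger being strict, i.e. returning NULL when either input
is NULL):

SELECT COALESCE(t1.grp, t2.grp) AS grp,
       COALESCE(t1.c, 0) + COALESCE(t2.c, 0)          AS c,
       COALESCE(t1.s, 0) + COALESCE(t2.s, 0)          AS s,
       COALESCE(float4larger(t1.m, t2.m), t1.m, t2.m) AS m
FROM
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t1 GROUP BY grp) AS t1
    FULL JOIN
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t2 GROUP BY grp) AS t2
    ON t1.grp = t2.grp;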

Also, WHERE conditions can be applied to each branch of the union:

SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM union_table WHERE grp < 2621 GROUP BY grp;

 HashAggregate  (cost=12589.40..12590.90 rows=200 width=8) (actual time=11288.297..11307.966 rows=2621 loops=1)
   ->  Subquery Scan t_union  (cost=0.00..9967.95 rows=262145 width=8) (actual time=0.126..8918.971 rows=262099 loops=1)
         ->  Append  (cost=0.00..7346.50 rows=262145 width=12) (actual time=0.115..6415.779 rows=262099 loops=1)
               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..7339.99 rows=262119 width=12) (actual time=0.108..4494.540 rows=262099 loops=1)
                     ->  Seq Scan on t  (cost=0.00..4718.80 rows=262119 width=12) (actual time=0.092..1978.520 rows=262099 loops=1)
                           Filter: (grp < 2621)
               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6.51 rows=26 width=12) (actual time=0.036..0.036 rows=0 loops=1)
                     ->  Index Scan using ix_t1_grp on t1  (cost=0.00..6.25 rows=26 width=12) (actual time=0.027..0.027 rows=0 loops=1)
                           Index Cond: (grp < 2621)
 Total runtime: 11317.777 ms
(actual runtime without EXPLAIN ANALYZE: ~1300 ms)

And the explicit per-branch rewrite with the same condition:

SELECT COALESCE(t1.grp, t2.grp) AS grp,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                int8pl(t1.c, t2.c)
            WHEN t1.grp IS NOT NULL THEN
                t1.c
            ELSE
                t2.c
       END AS c,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                float4pl(t1.s, t2.s)
            WHEN t1.grp IS NOT NULL THEN
                t1.s
            ELSE
                t2.s
       END AS s,
       CASE WHEN t1.grp IS NOT NULL AND t2.grp IS NOT NULL THEN
                float4larger(t1.m, t2.m)
            WHEN t1.grp IS NOT NULL THEN
                t1.m
            ELSE
                t2.m
       END AS m
FROM
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t1 WHERE grp < 2621 GROUP BY grp) AS t1
    FULL JOIN
    (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t2 WHERE grp < 2621 GROUP BY grp) AS t2
    ON t1.grp = t2.grp;

 Merge Full Join  (cost=7544.80..7578.26 rows=2665 width=40) (actual time=4237.580..4274.845 rows=2621 loops=1)
   Merge Cond: ("outer".grp = "inner".grp)
   ->  Sort  (cost=6.54..6.54 rows=1 width=20) (actual time=0.095..0.095 rows=0 loops=1)
         Sort Key: t2.grp
         ->  Subquery Scan t2  (cost=0.00..6.53 rows=1 width=20) (actual time=0.050..0.050 rows=0 loops=1)
               ->  GroupAggregate  (cost=0.00..6.52 rows=1 width=8) (actual time=0.040..0.040 rows=0 loops=1)
                     ->  Index Scan using ix_t1_grp on t1  (cost=0.00..6.25 rows=26 width=8) (actual time=0.030..0.030 rows=0 loops=1)
                           Index Cond: (grp < 2621)
   ->  Sort  (cost=7538.26..7544.93 rows=2665 width=20) (actual time=4237.451..4246.096 rows=2621 loops=1)
         Sort Key: t1.grp
         ->  Subquery Scan t1  (cost=7339.99..7386.63 rows=2665 width=20) (actual time=4178.483..4223.372 rows=2621 loops=1)
               ->  HashAggregate  (cost=7339.99..7359.98 rows=2665 width=8) (actual time=4178.468..4195.689 rows=2621 loops=1)
                     ->  Seq Scan on t  (cost=0.00..4718.80 rows=262119 width=8) (actual time=0.096..1944.151 rows=262099 loops=1)
                           Filter: (grp < 2621)
 Total runtime: 4286.724 ms
(actual runtime without EXPLAIN ANALYZE: ~812 ms)

So, for a union of two equally sized tables the EXPLAIN ANALYZE runtime is about 66%
shorter (and the real runtime is noticeably shorter as well). What would the gain be
with three, four, ... tables?
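
With more tables the explicit CASE/FULL JOIN rewrite quickly gets unwieldy, since the
merge logic grows with every branch. A less verbose way to express the same manual
"aggregate per table, then merge" plan (just a sketch, not measured) is to re-aggregate
the per-table results, which works here because count, sum and max can be merged with
sum, sum and max respectively:

SELECT grp,
       sum(c) AS c,    -- merge the per-table counts
       sum(s) AS s,    -- merge the per-table sums
       max(m) AS m     -- merge the per-table maxima
FROM (
    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t1 GROUP BY grp
    UNION ALL
    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m FROM t2 GROUP BY grp
    -- UNION ALL ... further tables here
) AS per_table
GROUP BY grp;
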
--
Sokolov Yura mailto:falcon(at)intercable(dot)ru


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: falcon <falcon(at)intercable(dot)ru>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: PseudoPartitioning and agregates
Date: 2005-05-22 22:46:51
Message-ID: 18502.1116802011@sss.pgh.pa.us
Lists: pgsql-hackers

Sokolov Yura <falcon(at)intercable(dot)ru> writes:
> I think Postgres could perform the aggregate on each table in the union first,
> and then merge the results.

I looked into this because it did not make a lot of sense to me. The
aggregates you are testing with (count, sum, max) are all perfectly
linear in the number of input values, so it absolutely should not save
any time to divide up and then recombine the input, especially not if
the recombination takes some actual work (like a join).

What I found was that the reason for the difference was the overhead of
SubqueryScan and Append nodes:

> HashAggregate  (cost=23830.52..23832.02 rows=200 width=8) (actual time=22547.272..22586.130 rows=5243 loops=1)
>   ->  Subquery Scan t_union  (cost=0.00..18587.64 rows=524288 width=8) (actual time=0.204..17863.444 rows=524288 loops=1)
>         ->  Append  (cost=0.00..13344.76 rows=524288 width=12) (actual time=0.193..12990.177 rows=524288 loops=1)
>               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6684.88 rows=262144 width=12) (actual time=0.186..4488.981 rows=262144 loops=1)
>                     ->  Seq Scan on t  (cost=0.00..4063.44 rows=262144 width=12) (actual time=0.163..1915.213 rows=262144 loops=1)
>               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6659.88 rows=262144 width=12) (actual time=0.225..4558.788 rows=262144 loops=1)
>                     ->  Seq Scan on t1  (cost=0.00..4038.44 rows=262144 width=12) (actual time=0.208..1798.410 rows=262144 loops=1)
> Total runtime: 22634.454 ms
> (the actual runtime without EXPLAIN ANALYZE is about 2375 ms; Postgres 8.0.1, Slackware 10.0)

EXPLAIN ANALYZE overstates the penalty because its per-plan-node
instrumentation overhead is pretty high, but nonetheless it's
clear that the actually useful work (the two seqscans and the
HashAggregate) is only accounting for a portion of the runtime.
The reason your join query wins is that only a much smaller number of
tuples have to pass through multiple levels of plan nodes.
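
(The effect is easy to see by comparing the wall-clock time reported by psql's \timing
against the Total runtime that EXPLAIN ANALYZE reports for the same query, for example:

    \timing on
    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
    FROM union_table GROUP BY grp;        -- wall-clock time from \timing
    EXPLAIN ANALYZE
    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
    FROM union_table GROUP BY grp;        -- Total runtime inflated by instrumentation

The difference is largely the per-node instrumentation overhead.)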

In the above example the Subquery Scan nodes aren't really doing
anything useful at all: they have neither any selection (filter
conditions) nor any projection (the output columns are the same
as the input, though this is not shown by EXPLAIN). They are put there
by the planner because there are cases where they *are* needed,
eg to do type conversion when UNION'ing unlike column types.
But we could try harder to optimize them out.
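
(A contrived illustration of that last case:

    CREATE TABLE a (x int4);
    CREATE TABLE b (x float8);

    SELECT x FROM a UNION ALL SELECT x FROM b;

The union's output column resolves to float8, so the int4 branch has to be coerced
above its scan, which is the kind of projection a Subquery Scan node is kept around
to do.)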

I have committed some changes in CVS tip to get rid of useless
Subquery Scan nodes. Your example now looks like

save=# explain analyze SELECT grp,count(*) AS c,sum(amount) AS s,max(amount) AS m FROM union_table GROUP BY grp;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=24311.52..24313.02 rows=200 width=8) (actual time=15201.186..15261.184 rows=5243 loops=1)
   ->  Append  (cost=0.00..13825.76 rows=524288 width=12) (actual time=0.236..7519.033 rows=524288 loops=1)
         ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.205..2071.102 rows=262144 loops=1)
         ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.095..1743.434 rows=262144 loops=1)
 Total runtime: 15292.082 ms
(5 rows)

The Subquery Scans also disappear from your more complex query,
but since they weren't processing nearly as many tuples, there's
not much improvement there:

QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Merge Full Join  (cost=14207.24..14964.22 rows=33081 width=40) (actual time=11339.442..11457.087 rows=5243 loops=1)
   Merge Cond: ("outer".grp = "inner".grp)
   ->  Sort  (cost=7100.32..7106.65 rows=2532 width=20) (actual time=5672.726..5682.937 rows=2622 loops=1)
         Sort Key: t1.grp
         ->  HashAggregate  (cost=6912.88..6931.87 rows=2532 width=8) (actual time=5591.574..5624.523 rows=2622 loops=1)
               ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.212..1707.122 rows=262144 loops=1)
   ->  Sort  (cost=7106.91..7113.45 rows=2613 width=20) (actual time=5666.598..5676.735 rows=2622 loops=1)
         Sort Key: t2.grp
         ->  HashAggregate  (cost=6912.88..6932.48 rows=2613 width=8) (actual time=5584.098..5616.810 rows=2622 loops=1)
               ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.139..1707.351 rows=262144 loops=1)
 Total runtime: 11501.805 ms
(11 rows)

The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
but when comparing actual runtimes for the two queries, they are
now very nearly the same.

regards, tom lane


From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: falcon <falcon(at)intercable(dot)ru>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PseudoPartitioning and agregates
Date: 2005-05-25 06:53:03
Message-ID: 87oeaz94q8.fsf@stark.xeocode.com
Lists: pgsql-hackers


Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
> but when comparing actual runtimes for the two queries, they are
> now very nearly the same.

How hard would it be to have Postgres actually remove the gettimeofday
overhead from the EXPLAIN ANALYZE output?

It seems like it ought to be able to time a couple hundred gettimeofday calls
and get a perfectly usable figure. The actual amount of overhead per call
should be very consistent and it should be easy to keep track of how many
gettimeofday calls were needed.

For queries that don't do much i/o, especially on loaded machines, there could
still be a problem in that the added syscalls would cause most unix schedulers
to behave differently. But at least it would be basically right.

--
greg


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: falcon <falcon(at)intercable(dot)ru>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PseudoPartitioning and agregates
Date: 2005-05-25 13:37:05
Message-ID: 11537.1117028225@sss.pgh.pa.us
Lists: pgsql-hackers

Greg Stark <gsstark(at)mit(dot)edu> writes:
> How hard would it be to have Postgres actually remove the gettimeofday
> overhead from the EXPLAIN ANALYZE output?

Personally, I dislike measurement tools that lie to you under the flag
of producing more-easily-interpreted results.

As an example of why this would be a bad idea, the total time would no
longer be closely related to the actual elapsed time (as measured by
psql's \timing for instance) so you would be entirely unable to tell
whether there was some significant factor not being measured.

regards, tom lane


From: Greg Stark <gsstark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, falcon <falcon(at)intercable(dot)ru>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PseudoPartitioning and agregates
Date: 2005-05-25 15:21:03
Message-ID: 87fywb8h7k.fsf@stark.xeocode.com
Lists: pgsql-hackers

Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:

> Greg Stark <gsstark(at)mit(dot)edu> writes:
>
> > How hard would it be to have Postgres actually remove the gettimeofday
> > overhead from the EXPLAIN ANALYZE output?
>
> Personally, I dislike measurement tools that lie to you under the flag
> of producing more-easily-interpreted results.

This is pretty standard practice for profilers in other contexts.

> As an example of why this would be a bad idea, the total time would no
> longer be closely related to the actual elapsed time (as measured by
> psql's \timing for instance) so you would be entirely unable to tell
> whether there was some significant factor not being measured.

Well, that would be easily remedied by printing, after the plan, the total overhead
that was subtracted from all the nodes.

--
greg