Re: GEQO vs join order restrictions

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Cc: Andres Freund <andres(at)anarazel(dot)de>
Subject: GEQO vs join order restrictions
Date: 2009-07-18 15:48:14
Message-ID: 17807.1247932094@sss.pgh.pa.us
Lists: pgsql-hackers

I spent a bit of time looking into why GEQO behaves so miserably on the
test case Andres Freund presented here:
http://archives.postgresql.org/message-id/200907091700.43411.andres@anarazel.de

The difficulty seems to be that the problem query is full of join order
restrictions; in particular lots of joins inside the right side of a
LEFT JOIN. So those sub-joins have to occur before their relations
can be joined to anything else. When GEQO generates a random join
sequence, it is very likely indeed that such pairs of relations will
not be adjacent in the raw join sequence. There is some code in
gimme_tree() (the "stack" business I added here:
http://archives.postgresql.org/pgsql-committers/2004-01/msg00186.php
) that was meant to try to deal with that, but I now realize that it
entirely fails to do so. Given two relations that have to be joined
to each other first, if they are not already adjacent in the input
then they just create two separate stack entries and the algorithm
never tries to join them to each other. So the failure rate in the
presence of join order restrictions is very high, and it gets rapidly
worse as the join problem size increases. This explains Andres'
observation of random_init_pool() reporting complete failure at high
collapse_limit settings. We can't really expect a random search process
to be efficient at discovering that two out of a hundred items must be
adjacent --- especially not if the problem has multiple restrictions
like that and the only feedback it gets is overall success or failure.

I'm inclined to address this by rewriting gimme_tree so that it *always*
finds a valid join order based on the given tour. This would involve
searching the whole stack for a possible join partner instead of
considering only the topmost stack entry. It sounds inefficient, but
it's not nearly as bad as wasting the entire cycle by failing near
the end, which is what happens now.
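
To make that concrete, here is a toy model of what I have in mind (all the
names and data structures are invented for illustration; this is nothing like
the real geqo code): relid sets are plain bitmasks, the join order
restrictions are boiled down to "the members of each group must be joined to
each other before anything else", and the essential change is that every
entry in the stack is considered as a join partner, with another merge pass
after each successful join:

/*
 * Toy sketch only: scan the whole list of pending clumps for a legal join
 * partner, rather than looking at just the top entry.
 */
#include <stdio.h>

typedef unsigned int RelSet;    /* set of relation numbers, as a bitmask */

#define NGROUPS 2

/* groups whose members must be joined to each other first: {2,3} and {4,5} */
static const RelSet groups[NGROUPS] = {0x0C, 0x30};

/* A join producing "result" is illegal if it covers part of a group while
 * also reaching outside that group. */
static int
join_is_legal(RelSet result)
{
    int     i;

    for (i = 0; i < NGROUPS; i++)
    {
        RelSet  g = groups[i];

        if ((result & g) != 0 && (result & g) != g && (result & ~g) != 0)
            return 0;
    }
    return 1;
}

/* Merge one new clump into the list, scanning *every* existing clump for a
 * legal partner and retrying with the enlarged clump after each merge.
 * Returns the new list length. */
static int
merge_rel(RelSet *clumps, int nclumps, RelSet newclump)
{
    int     i;

    for (i = 0; i < nclumps; i++)
    {
        RelSet  combined = clumps[i] | newclump;

        if (join_is_legal(combined))
        {
            clumps[i] = clumps[--nclumps];      /* drop the partner ... */
            return merge_rel(clumps, nclumps, combined);    /* ... and retry */
        }
    }
    clumps[nclumps++] = newclump;   /* no partner yet; park it for later */
    return nclumps;
}

int
main(void)
{
    /* tour 1,4,2,5,3,6 deliberately scatters the group members */
    static const int tour[] = {1, 4, 2, 5, 3, 6};
    RelSet  clumps[6];
    int     nclumps = 0;
    int     i;

    for (i = 0; i < 6; i++)
        nclumps = merge_rel(clumps, nclumps, (RelSet) 1 << tour[i]);

    printf("clumps remaining: %d (relids 0x%x)\n", nclumps, clumps[0]);
    return 0;
}

Fed a tour that deliberately scatters the group members, this still collapses
everything into one clump, which is exactly the property the current
top-of-stack-only logic lacks.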

A different line of thought is to add some knowledge about join order
restrictions to tour generation, such that the code never generates an
invalid join order to start with. Unfortunately it's not at all clear
how to do that.

Ideas, comments?

regards, tom lane


From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-18 16:19:05
Message-ID: 200907181819.05570.andres@anarazel.de
Lists: pgsql-hackers

On Saturday 18 July 2009 17:48:14 Tom Lane wrote:
> I spent a bit of time looking into why GEQO behaves so miserably on the
> test case Andres Freund presented here:
> http://archives.postgresql.org/message-id/200907091700.43411.andres@anarazel.de
>
> The difficulty seems to be that the problem query is full of join order
> restrictions; in particular lots of joins inside the right side of a
> LEFT JOIN. So those sub-joins have to occur before their relations
> can be joined to anything else. When GEQO generates a random join
> sequence, it is very likely indeed that such pairs of relations will
> not be adjacent in the raw join sequence. There is some code in
> gimme_tree() (the "stack" business I added here:
> http://archives.postgresql.org/pgsql-committers/2004-01/msg00186.php
> ) that was meant to try to deal with that, but I now realize that it
> entirely fails to do so. Given two relations that have to be joined
> to each other first, if they are not already adjacent in the input
> then they just create two separate stack entries and the algorithm
> never tries to join them to each other. So the failure rate in the
> presence of join order restrictions is very high, and it gets rapidly
> worse as the join problem size increases. This explains Andres'
> observation of random_init_pool() reporting complete failure at high
> collapse_limit settings. We can't really expect a random search process
> to be efficient at discovering that two out of a hundred items must be
> adjacent --- especially not if the problem has multiple restrictions
> like that and the only feedback it gets is overall success or failure.
Yes, this sounds sensible and follows the same line of thought I started
down after you suggested the problem is in random_init_pool.

This also explains why I saw nearly no improvement during the genetic search
itself. The paths that made it out of random_init_pool were already heavily
selected, so there were not that many improvements left to find, and a change
was relatively likely to yield an impossible ordering.
> I'm inclined to address this by rewriting gimme_tree so that it *always*
> finds a valid join order based on the given tour. This would involve
> searching the whole stack for a possible join partner instead of
> considering only the topmost stack entry. It sounds inefficient, but
> it's not nearly as bad as wasting the entire cycle by failing near
> the end, which is what happens now.
I guess this should be relatively easy to implement and test?

> A different line of thought is to add some knowledge about join order
> restrictions to tour generation, such that the code never generates an
> invalid join order to start with. Unfortunately it's not at all clear
> how to do that.
I haven't really thought much about that yet, but wouldn't it be possible to
iteratively check the current path during tour generation for every relation
added?
If the next relation yields an impossible ordering, choose another relation
as the next one; if none are left, restart?
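
Something like this toy sketch, perhaps (made-up names, not meant to be
actual geqo code): treat the relations under each restriction as a group,
and whenever a group has been partially emitted only consider its remaining
members as candidates for the next pick; a dead end is where the restart
would come in:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NRELS   8
#define NGROUPS 2

/* restricted groups {2,3} and {4,5}, as bitmasks over relation numbers */
static const unsigned int groups[NGROUPS] = {0x0C, 0x30};

/* A relation is an acceptable next pick iff no partially-emitted group is
 * being abandoned: once a group is "open", finish it first. */
static int
is_legal_next(unsigned int emitted, int rel)
{
    int     i;

    for (i = 0; i < NGROUPS; i++)
    {
        unsigned int g = groups[i];
        unsigned int done = emitted & g;

        if (done != 0 && done != g && ((1u << rel) & g) == 0)
            return 0;
    }
    return 1;
}

int
main(void)
{
    unsigned int emitted = 0;
    int     tour[NRELS];
    int     ntour = 0;
    int     i;

    srand((unsigned int) time(NULL));

    while (ntour < NRELS)
    {
        int     candidates[NRELS];
        int     ncand = 0;
        int     rel;

        for (rel = 1; rel <= NRELS; rel++)
        {
            if (((1u << rel) & emitted) == 0 && is_legal_next(emitted, rel))
                candidates[ncand++] = rel;
        }

        if (ncand == 0)         /* dead end: this is where we'd restart */
            return 1;

        rel = candidates[rand() % ncand];
        tour[ntour++] = rel;
        emitted |= 1u << rel;
    }

    printf("tour:");
    for (i = 0; i < NRELS; i++)
        printf(" %d", tour[i]);
    printf("\n");
    return 0;
}

With only disjoint groups like these a dead end cannot actually occur, but
with more complicated restrictions it presumably could.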

I know even less about how feasible this is, but given that joins on the
right-hand side of a LEFT JOIN are not really useful to consider from the
outside in the general case, would it be possible to "hide" them below the
join during join order planning? If so, that should be applicable to the
regular planner as well.
I realize this is nothing one could cook up quickly and that it would require
changes in lots of places, but as a general idea...

Andres


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-18 16:49:09
Message-ID: 18823.1247935749@sss.pgh.pa.us
Lists: pgsql-hackers

Andres Freund <andres(at)anarazel(dot)de> writes:
> This also explains why I saw nearly no improvement during the genetic search
> itself. The paths that made it out of random_init_pool were already heavily
> selected, so there were not that many improvements left to find, and a change
> was relatively likely to yield an impossible ordering.

Yeah, I suspect most of the variants tried during that phase simply
failed.

> I know even less about how feasible this is, but given that joins on the
> right-hand side of a LEFT JOIN are not really useful to consider from the
> outside in the general case, would it be possible to "hide" them below the
> join during join order planning?

We could refrain from collapsing the sub-problem during joinlist
formation. But the trouble with that is it creates a "hard" join order
restriction. Most of the restrictions are "soft" to some extent, ie,
you can do some rearrangements but not others. It might be worth
looking at though; in the cases where there is no chance of a
rearrangement, it would save cycles for either regular or GEQO planning.

regards, tom lane


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-19 04:28:20
Message-ID: 603c8f070907182128r5e98aca9g44ba02a8103bb9f3@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jul 18, 2009 at 12:49 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> We could refrain from collapsing the sub-problem during joinlist
> formation.  But the trouble with that is it creates a "hard" join order
> restriction.  Most of the restrictions are "soft" to some extent, ie,
> you can do some rearrangements but not others.  It might be worth
> looking at though; in the cases where there is no chance of a
> rearrangement, it would save cycles for either regular or GEQO planning.

This seems very promising, although the proof of the pudding is in the
eating. As a slight refinement of this idea, it's not exactly that
the join order is totally fixed, but rather that in a large join nest
there may be groups of tables that can be planned as independent
subproblems. The obvious example of this is something like:

(...lots of stuff...) FULL JOIN (...lots of things...)

...where there isn't any point in considering any joins between stuff
and things, regardless of whether you are using GEQO or the standard
planner (and maybe we handle this case already?). My brain is a
little foggy at the moment, but I think this would also apply to
Andres' example query, because I believe with anything of the form...

A LEFT JOIN (B1 INNER JOIN B2 INNER JOIN B3 ... INNER JOIN Bn) LEFT JOIN C

...the set of all Bi can be planned as an independent subproblem and
then joined to A either before or after C. But (I think):

A LEFT JOIN (B1 INNER JOIN B2 LEFT JOIN B3) LEFT JOIN C
= A LEFT JOIN (B1 INNER JOIN B2) LEFT JOIN C LEFT JOIN B3

Here {B1, B2} is an independent subproblem, but {B1, B2, B3} is not.
Still, given that the planning time for the standard planner at least
can grow exponentially in the number of relations, even a small
reduction in the problem size is potentially a big win.

...Robert


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-19 17:23:18
Message-ID: 1451.1248024198@sss.pgh.pa.us
Lists: pgsql-hackers

Andres Freund <andres(at)anarazel(dot)de> writes:
> On Saturday 18 July 2009 17:48:14 Tom Lane wrote:
>> I'm inclined to address this by rewriting gimme_tree so that it *always*
>> finds a valid join order based on the given tour. This would involve
>> searching the whole stack for a possible join partner instead of
>> considering only the topmost stack entry. It sounds inefficient, but
>> it's not nearly as bad as wasting the entire cycle by failing near
>> the end, which is what happens now.

> I guess this should be relatively easy to implement and test?

I wrote a prototype patch for this (attached --- it works, but isn't
terribly well commented). It definitely helps, but it's not the whole
answer. I did some testing using your example query. This table shows
PREPARE times in seconds on a 2.8GHz Xeon EM64T for various settings of
join_collapse_limit and from_collapse_limit (I kept them the same for
each test):

                            COLLAPSE_LIMITS
                    8       10      12      14      20      100

GEQO off            16.6    32.1    70.6    (starts to swap)
GEQO, CVS HEAD      2641.6  2849.7  >42000* (failed to make a valid plan)
GEQO, with patch    589.3   596.9   736.1   800.8   1052.4  3735.2

* I left the HEAD/12 case running overnight and it still isn't done...

The regular planner becomes essentially useless at collapse limit 14 or
more; at 14 its memory consumption approaches 2GB which is more than
this box has got, and drives the machine into swap hell. I didn't
try larger settings, but they'd be worse.

The CVS-HEAD version of GEQO dies with "failed to make a valid plan"
at collapse limit 14, because random_init_pool gives up after 10000
failed attempts. It would get worse with higher limits because of
having more join order constraints in the same problem. I think the
reason the runtime goes to the moon at 12 is that a very large
percentage of tours fail, but not quite enough to trigger the 10000-
failures sanity check.

With the patch, GEQO manages to bumble through and produce a plan
even at high collapse limits. It's a whole lot slower than the
regular planner, and I found out that this time consists largely
of desirable_join() checks in gimme_tree(). I said earlier that
the regular planner does that too ... but GEQO is doing it a lot
more, because it repeats the whole planning cycle 500 times.
In previous tests we've seen regular planner runtime cross over
GEQO time around collapse_limit 12. It seems the reason that
this case is so different is that the total problem is so much
larger, and so there is a very large equivalence-class list
(1289 items!) that gets trawled through in each desirable_join check.
That's why have_relevant_eclass_joinclause shows up so high in oprofile.

A very important property not directly exposed in the above table
is that GEQO manages to produce the plan with bounded memory usage.
The process size was consistently 160-200MB for all of the bottom
table row. This is because it throws away the joinrel information
for all the join paths explored and not taken.

My conclusions are:

1. I should clean up and apply the attached patch. Even though it's
not the whole answer, it clearly makes things a good deal better.

2. We need to look into a more efficient representation for making
have_relevant_joinclause and have_join_order_restriction tests.
This is obviously not material for this commitfest, though. An
important point is that we can't just throw memory at the problem,
or we'll be giving up one of GEQO's advantages.

3. I think this puts the final nail in the coffin of the idea that
we can get rid of the collapse_limits altogether. I confess to having
forgotten that one of their major functions was to bound memory usage in
the regular planner, but this set of test runs adequately reminded me.
I still think that we could look at revisiting their default values,
but we'll probably want to fix GEQO per point 2 before we do any more
testing.

Barring objections, I'm going to mark the "remove collapse limits"
patch as rejected.

regards, tom lane

Attachment: unknown_filename (text/plain, 9.7 KB)

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-19 18:35:05
Message-ID: 603c8f070907191135u73b82e5fg33396be95d5c84c5@mail.gmail.com
Lists: pgsql-hackers

On Sun, Jul 19, 2009 at 1:23 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> My conclusions are:
>
> 1.  I should clean up and apply the attached patch.  Even though it's
> not the whole answer, it clearly makes things a good deal better.
>
> 2. We need to look into a more efficient representation for making
> have_relevant_joinclause and have_join_order_restriction tests.
> This is obviously not material for this commitfest, though.  An
> important point is that we can't just throw memory at the problem,
> or we'll be giving up one of GEQO's advantages.
>
> 3. I think this puts the final nail in the coffin of the idea that
> we can get rid of the collapse_limits altogether.  I confess to having
> forgotten that one of their major functions was to bound memory usage in
> the regular planner, but this set of test runs adequately reminded me.
> I still think that we could look at revisiting their default values,
> but we'll probably want to fix GEQO per point 2 before we do any more
> testing.
>
> Barring objections, I'm going to mark the "remove collapse limits"
> patch as rejected.

Yeah, I think that's probably right. I still have some hope that we
will be able to get rid of these limits or dramatically raise them at
some point in the future, but it sounds like we have a good chunk of
work to do first. I really appreciate the testing that went into
finding and starting to fix these problems. Many thanks to both
Andres and Tom.

Tom, do you think the "independent subproblem" stuff from last night
would be worth pursuing? I haven't really looked at the code yet but
I would be willing to work on it when I have time; it would be useful
to have your thoughts on the approach before starting, though.

...Robert


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-19 19:08:23
Message-ID: 4152.1248030503@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Tom, do you think the "independent subproblem" stuff from last night
> would be worth pursuing?

It's worth looking into. I'm not certain if it will end up being a good
idea or not. Right now the joinlist collapse code is pretty stupid
(as you know --- it only thinks about the collapse_limit variables,
plus IIRC it knows about FULL JOIN). Making it smarter might result in
duplication of logic, or require unpleasant refactoring to avoid such
duplication, or even add more cycles than it's likely to save later on.
Another issue is order of operations: I'm not sure if all the
information needed to make such decisions has been computed at that
point. But we won't know unless we try it. It seems at least
potentially useful.

regards, tom lane


From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: GEQO vs join order restrictions
Date: 2009-07-22 20:09:05
Message-ID: 200907222209.05489.andres@anarazel.de
Lists: pgsql-hackers

Hi Tom, Robert, Hi all,
On Sunday 19 July 2009 19:23:18 Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> > On Saturday 18 July 2009 17:48:14 Tom Lane wrote:
> >> I'm inclined to address this by rewriting gimme_tree so that it *always*
> >> finds a valid join order based on the given tour. This would involve
> >> searching the whole stack for a possible join partner instead of
> >> considering only the topmost stack entry. It sounds inefficient, but
> >> it's not nearly as bad as wasting the entire cycle by failing near
> >> the end, which is what happens now.
> >
> > I guess this should be relatively easy to implement and test?
> With the patch, GEQO manages to bumble through and produce a plan
> even at high collapse limits. It's a whole lot slower than the
> regular planner, and I found out that this time consists largely
> of desirable_join() checks in gimme_tree(). I said earlier that
> the regular planner does that too ... but GEQO is doing it a lot
> more, because it repeats the whole planning cycle 500 times.
> In previous tests we've seen regular planner runtime cross over
> GEQO time around collapse_limit 12. It seems the reason that
> this case is so different is that the total problem is so much
> larger, and so there is a very large equivalence-class list
> (1289 items!) that gets trawled through in each desirable_join check.
> That's why have_relevant_eclass_joinclause shows up so high in oprofile.

> My conclusions are:
> 1. I should clean up and apply the attached patch. Even though it's
> not the whole answer, it clearly makes things a good deal better.
I did some testing with the original queries, and the unsurprising result is
that the planning time is *hugely* smaller (multiple orders of magnitude)
while the execution time is somewhat larger (~15%) with the few variations of
settings I tested.

Many formerly unplannable queries are now plannable.

Thanks,

Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: GEQO vs join order restrictions
Date: 2009-08-08 13:39:48
Message-ID: 603c8f070908080639l3b0e15fes6cd8802e85860e@mail.gmail.com
Lists: pgsql-hackers

On Sun, Jul 19, 2009 at 3:08 PM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Tom, do you think the "independent subproblem" stuff from last night
>> would be worth pursuing?
>
> It's worth looking into.  I'm not certain if it will end up being a good
> idea or not.  Right now the joinlist collapse code is pretty stupid
> (as you know --- it only thinks about the collapse_limit variables,
> plus IIRC it knows about FULL JOIN).  Making it smarter might result in
> duplication of logic, or require unpleasant refactoring to avoid such
> duplication, or even add more cycles than it's likely to save later on.
> Another issue is order of operations: I'm not sure if all the
> information needed to make such decisions has been computed at that
> point.  But we won't know unless we try it.  It seems at least
> potentially useful.

I thought about this a little more and I don't think it's going to
work. The problem is that LEFT joins can be reordered VERY freely.
So if you have something like:

A LJ (B IJ C IJ D)

Then, sure, {B C D} is a subproblem. But if you have:

A LJ (B IJ C IJ D) LJ E

...then it's not any more. Suppose E is being joined against B, for
example. You could decide to do this:

A LJ (B LJ E IJ C IJ D)

So I think this idea has crashed and burned, as Tom would say, unless
someone has an idea for resurrecting it.

...Robert