Fix for seg picksplit function

Lists: pgsql-hackers
From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Fix for seg picksplit function
Date: 2010-11-03 21:23:41
Message-ID: AANLkTimeWpY8smv1RnO9ByPjgx-1utq=PWC=-g1e_V28@mail.gmail.com

Hackers,

The seg contrib module contains the same bug in its picksplit function as the
cube contrib module.
Also, Guttman's split algorithm is not needed in the one-dimensional case,
because a sorting-based algorithm works well there. I propose a patch that
replaces the current picksplit implementation with a sorting-based algorithm.
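As a sketch of the idea outside the GiST machinery (SegItem, seg_center_cmp and seg_sort_split are illustrative names, not the patch's actual code): sort the segments by the center of each interval, then send the first half to the left page and the second half to the right.

```c
#include <stdlib.h>

/* Simplified stand-in for a seg value: a one-dimensional interval. */
typedef struct SegItem
{
    double  lower;
    double  upper;
    int     index;          /* original offset of the index tuple */
} SegItem;

/* Order segments by the center of the interval. */
static int
seg_center_cmp(const void *a, const void *b)
{
    double  ca = (((const SegItem *) a)->lower + ((const SegItem *) a)->upper) / 2.0;
    double  cb = (((const SegItem *) b)->lower + ((const SegItem *) b)->upper) / 2.0;

    if (ca < cb)
        return -1;
    if (ca > cb)
        return 1;
    return 0;
}

/*
 * Sort the items and return the split point: entries before it go to the
 * left page, entries from it onward go to the right page.
 */
static int
seg_sort_split(SegItem *items, int nitems)
{
    qsort(items, nitems, sizeof(SegItem), seg_center_cmp);
    return nitems / 2;
}
```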

test=# create table seg_test(a seg);
test=# insert into seg_test (select (a || ' .. ' || a + 0.00005*b)::seg from
(select random() as a, random() as b from generate_series(1,1000000)) x);

Before the patch.

test=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 263981,639 ms

test=# explain (buffers, analyze) select * from seg_test where a @> '0.5 ..
0.5'::seg;
QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=36.28..2508.41 rows=1000 width=12)
(actual time=36.909..36.981 rows=23 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=1341
-> Bitmap Index Scan on seg_test_idx (cost=0.00..36.03 rows=1000
width=0) (actual time=36.889..36.889 rows=23 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=1318
Total runtime: 37.066 ms
(7 rows)

Time: 37,842 ms

After the patch.

test=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 205476,854 ms

test=# explain (buffers, analyze) select * from seg_test where a @> '0.5 ..
0.5'::seg;
QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=28.18..2500.31 rows=1000 width=12)
(actual time=0.283..0.397 rows=23 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=27
-> Bitmap Index Scan on seg_test_idx (cost=0.00..27.93 rows=1000
width=0) (actual time=0.261..0.261 rows=23 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=4
Total runtime: 0.503 ms
(7 rows)

Time: 1,530 ms

The number of pages used by the index scan decreased from 1318 to 4.
I'm going to add this patch to the commitfest.
The pg_temporal project contains the same bug. If this patch is accepted by
the community, I'll prepare a similar patch for pg_temporal.

----
With best regards,
Alexander Korotkov.

Attachment Content-Type Size
seg_picksplit_fix-0.1.patch text/x-patch 6.8 KB

From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-05 15:53:04
Message-ID: 4CD42860.4070706@gmail.com

Hello Alexander,

Here follows a review of your patch.
> Hackers,
>
> Seg contrib module contains the same bug in picksplit function as
> cube contrib module.
Good catch! :-)
> Also, Guttman's split algorithm is not needed in unidimensional case,
> because sorting based algorithm is good in this case.
I had some doubts about whether this is true in the general case rather than
just in the given example. I increased the interval width in your example to
0.25*b instead of 0.00005*b in order to increase the overlap between
intervals. Though the performance gain was smaller, it was still faster than
Guttman's algorithm. To make things worse I also tested with an interval
width of 1*b, resulting in a lot of overlap, and compared several overlap
queries. The sorting algorithm was 25% to 40% faster on searches. Index
creation time with the sorting algorithm is also a fraction of the original
creation time.

Since this testing could be part of a review, I looked at the code as
well and listed myself as reviewer on the commitfest.

Comparing with gbt_num_picksplit reveals some differences in sort array
initialization and size: the former's sort array starts at index 1
(FirstOffsetNumber), while your implementation starts at 0 for sorting, so
the sort array can be one element smaller. I prefer your way of sort array
initialization; gbt_num_picksplit's use of FirstOffsetNumber for the qsort
array seems to mix in a define from the gist/btree namespace for no reason
and might even lead to confusion.

The remaining part of the new picksplit function puts the segs into left or
right. I think the code would be easier to understand with only one for loop
from i = 1 to i < maxoff; with the current code I had to verify that all sort
array entries were really used by the two separate loops that also skipped
the first value. I edited the code a bit, and also used seg_union to
initialize/palloc the datum values. Finally, the waste and firsttime
variables were initialized but no longer used, so I removed them.

Attached is a revised patch.

regards,
Yeb Havinga

PS: when comparing with gbt_num_picksplit, I noticed that that one does
not update v->spl_ldatum and spl_rdatum to the union datums, but
initializes these to 0 at the beginning and never seems to update them.
Not sure if this is a problem since the num_picksplit stuff seems to
work well.

Attachment Content-Type Size
seg_picksplit_fix-0.2.patch text/x-patch 5.8 KB

From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-05 16:15:55
Message-ID: AANLkTinderiVZ11MnSMVDpgr3Em5dJTQfVO0BhO2ULTP@mail.gmail.com

Hello Yeb,

Thank you for the review and the code refactoring.

PS: when comparing with gbt_num_picksplit, I noticed that that one does not
> update v->spl_ldatum and spl_rdatum to the union datums, but initializes
> these to 0 at the beginning and never seems to update them. Not sure if this
> is a problem since the num_picksplit stuff seems to work well.
>
Actually, gbt_num_picksplit updates v->spl_ldatum and spl_rdatum
inside the gbt_num_bin_union function.

----
With best regards,
Alexander Korotkov.


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-06 13:51:27
Message-ID: AANLkTikQ3EDEGfZogeUhc2MzcJ7RUtOHeq1mT4=bo__w@mail.gmail.com

Do you think the patch is now ready for committer, or does it require further
review by you or somebody else?

----
With best regards,
Alexander Korotkov.


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-06 14:53:14
Message-ID: 4CD56BDA.7050000@gmail.com

Alexander Korotkov wrote:
> Do you think now patch is ready for committer or it require further
> review by you or somebody else?
It's probably ready for committer; however, the code doesn't currently
mention any reference or other information indicating that it is faster than
the original. I was wondering how you discovered this, or is there any
reference to e.g. a GiST paper or other work where this is researched? If the
info is available, some comments in the code might help future GiST
developers pick the right algorithm for other datatypes.

I don't think further review is required, but I would very much welcome
further exploration of which picksplit algorithm best matches which datatype
and distribution.

regards,
Yeb Havinga


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 13:27:54
Message-ID: 1289394705-sup-7337@alvh.no-ip.org


Hmm, the second for loop in gseg_picksplit uses "i < maxoff" whereas the
other one uses <=. The first is probably correct; if the second is also
correct it merits a comment on the discrepancy (To be honest, I'd get
rid of the "-1" in computing maxoff and use < in both places, given that
offsets are 1-indexed). Also, the second one is using i++ to increment;
probably should be OffsetNumberNext just to stay consistent with the
rest of the code.

The assignments to *left and *right at the end of the routine seem pretty
useless (not to mention the comment talking about a routine that doesn't
exist anywhere).

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 13:53:10
Message-ID: AANLkTik5KO6L=_FszWLgb=UQNFdcrPABCUJxVQvkHbfD@mail.gmail.com

>
> Hmm, the second for loop in gseg_picksplit uses "i < maxoff" whereas the
> other one uses <=. The first is probably correct; if the second is also
> correct it merits a comment on the discrepancy (To be honest, I'd get
> rid of the "-1" in computing maxoff and use < in both places, given that
> offsets are 1-indexed). Also, the second one is using i++ to increment;
> probably should be OffsetNumberNext just to stay consistent with the
> rest of the code.
>
Actually, I can't understand the purpose of the FirstOffsetNumber
and OffsetNumberNext macros. When I wrote the patch I thought of sortItems as
an array "clean of all these strange things"; that's why I didn't use
OffsetNumberNext there. :)
The only way I see to preserve the logic of these macros is to use an array
starting from the FirstOffsetNumber index, as in gbt_num_picksplit.

The assignment to *left and *right at the end of the routine seem pretty
> useless (not to mention the comment talking about a routine that doesn't
> exist anywhere).
>
I found that gtrgm_picksplit in pg_trgm and gtsvector_picksplit in core
still use this assignment, while gist_box_picksplit and gbt_num_picksplit
do not. If this assignment is entirely useless, then I think we should remove
it from gtrgm_picksplit and gtsvector_picksplit so as not to mislead
developers of GiST implementations.

----
With best regards,
Alexander Korotkov.


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 14:02:15
Message-ID: AANLkTik2Uf_G=stpnuFK4fDjC0G0K_Mf_L0uScRv9ObM@mail.gmail.com

On Wed, Nov 10, 2010 at 4:53 PM, Alexander Korotkov <aekorotkov(at)gmail(dot)com> wrote:

> Hmm, the second for loop in gseg_picksplit uses "i < maxoff" whereas the
>> other one uses <=. The first is probably correct; if the second is also
>> correct it merits a comment on the discrepancy (To be honest, I'd get
>> rid of the "-1" in computing maxoff and use < in both places, given that
>> offsets are 1-indexed). Also, the second one is using i++ to increment;
>> probably should be OffsetNumberNext just to stay consistent with the
>> rest of the code.
>>
> Actually I can't understand the purpose of FirstOffsetNumber
> and OffsetNumberNext macros. When I wrote the patch I though about sortItems
> as about "clean from all these strange things" array, that's why I didn't
> use OffsetNumberNext there. :)
> I see only way to save logic of these macros is to use array starting from
> FirstOffsetNumber index like in gbt_num_picksplit.
>
For example, if we assume that OffsetNumberNext can do something other than
just increment, then we shouldn't use it in the loop over sortItems, but
should instead do the following:
1) Do an additional loop first to calculate the actual item count (because we
don't know how OffsetNumberNext advances the index; it could advance it by
varying amounts).
2) Fill sortItems using a separate index.

----
With best regards,
Alexander Korotkov.


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 14:37:55
Message-ID: 4CDAAE43.2000301@gmail.com

On 2010-11-10 14:27, Alvaro Herrera wrote:
> Hmm, the second for loop in gseg_picksplit uses "i< maxoff" whereas the
> other one uses<=. The first is probably correct; if the second is also
> correct it merits a comment on the discrepancy (To be honest, I'd get
> rid of the "-1" in computing maxoff and use< in both places, given that
> offsets are 1-indexed).
Good point. The second loop walks over the sorted array, which is
0-indexed. Do you favor making the sort array 1-indexed, as done in
e.g. gbt_num_picksplit? The downside is that the sort array is then
allocated with length maxoff + 1: puzzling on its own, and even more so since
maxoff itself is initialized as entryvec->n - 1. Note also that the qsort
call for the 1-indexed variant is more complex, since it must skip the
first element.
> probably should be OffsetNumberNext just to stay consistent with the
> rest of the code.
Yes.
> The assignment to *left and *right at the end of the routine seem pretty
> useless (not to mention the comment talking about a routine that doesn't
> exist anywhere).
They are necessary, and it is code untouched by this patch; the same
line occurs in other picksplit functions as well. The gbt_num_picksplit
function shows that it can be avoided, by rewriting, in the second loop,

*left++ = sortItems[i].index;

into

v->spl_left[v->spl_nleft] = sortItems[i].index;

Even though this is longer code, I prefer this variant over the shorter one.

regards,
Yeb Havinga


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 14:46:25
Message-ID: AANLkTimYb_zRhX3AP2u0+K8wJEf751YP9-iZ6GVN_Jm5@mail.gmail.com

On Wed, Nov 10, 2010 at 5:37 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com> wrote:

> They are necessary and it is code untouched by this patch, and the same
> line occurs in other picksplit functions as well. The gbt_num_picksplit
> function shows that it can be avoided, by rewriting in the second loop
>
> *left++ = sortItems[i].index;
> into
> v->spl_left[v->spl_nleft] = sortItems[i].index
>
> Even though this is longer code, I prefer this variant over the shorter
> one.
>
I can't understand this point. How is the way the spl_left and spl_right
arrays are filled related to the additional FirstOffsetNumber value at the
end of the array, which is added by the "*left = *right = FirstOffsetNumber;"
line?

----
With best regards,
Alexander Korotkov.


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 15:05:13
Message-ID: 4CDAB4A9.6080801@gmail.com

On 2010-11-10 15:46, Alexander Korotkov wrote:
> On Wed, Nov 10, 2010 at 5:37 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com
> <mailto:yebhavinga(at)gmail(dot)com>> wrote:
>
> They are necessary and it is code untouched by this patch, and the
> same line occurs in other picksplit functions as well. The
> gbt_num_picksplit function shows that it can be avoided, by
> rewriting in the second loop
>
> *left++ = sortItems[i].index;
> into
> v->spl_left[v->spl_nleft] = sortItems[i].index
>
> Even though this is longer code, I prefer this variant over the
> shorter one.
>
> I can't understand this point. How the way of spl_left
> and spl_right arrays filling is related with
> additional FirstOffsetNumber value at the end of array, which is
> added by "*left = *right = FirstOffsetNumber;" line?
You're right, they are not related. I'm no longer sure it is necessary,
looking at gistUserPicksplit.

regards,
Yeb Havinga


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Yeb Havinga <yebhavinga(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 15:06:22
Message-ID: AANLkTikjM74_HsTuFSRoW8V5m-v+iFdOA+=66f1Uukep@mail.gmail.com

On Wed, Nov 10, 2010 at 6:05 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com> wrote:

> On 2010-11-10 15:46, Alexander Korotkov wrote:
>
> On Wed, Nov 10, 2010 at 5:37 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com> wrote:
>
>> They are necessary and it is code untouched by this patch, and the same
>> line occurs in other picksplit functions as well. The gbt_num_picksplit
>> function shows that it can be avoided, by rewriting in the second loop
>>
>> *left++ = sortItems[i].index;
>> into
>> v->spl_left[v->spl_nleft] = sortItems[i].index
>>
>> Even though this is longer code, I prefer this variant over the shorter
>> one.
>>
> I can't understand this point. How the way of spl_left and spl_right arrays
> filling is related with additional FirstOffsetNumber value at the end of
> array, which is added by "*left = *right = FirstOffsetNumber;" line?
>
> You're right, they are not related. I'm no longer sure it is necessary,
> looking at gistUserPicksplit.
>

Teodor, Oleg, perhaps you can help us. Is the "*left = *right =
FirstOffsetNumber;" line necessary in a picksplit function, or does it do
anything useful?

----
With best regards,
Alexander Korotkov.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Yeb Havinga <yebhavinga(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 15:07:12
Message-ID: 3633.1289401632@sss.pgh.pa.us

Alexander Korotkov <aekorotkov(at)gmail(dot)com> writes:
> On Wed, Nov 10, 2010 at 4:53 PM, Alexander Korotkov <aekorotkov(at)gmail(dot)com> wrote:
>> Actually I can't understand the purpose of FirstOffsetNumber
>> and OffsetNumberNext macros.

> For example, if we assume, that OffsetNumberNext can do something other that
> just increment, that we shouldn't use it in loop on sortItems,

Right. Good style is to use FirstOffsetNumber/OffsetNumberNext if you
are walking through the items on a page. They should *not* be used when
you are just iterating over a local array. I'd go with "for (i = 0;
i < nitems; i++)" for the latter.
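A minimal illustration of the two idioms (the FirstOffsetNumber and OffsetNumberNext definitions here are simplified stand-ins for the real off.h macros, and the functions are illustrative):

```c
#define FirstOffsetNumber 1
#define OffsetNumberNext(off) ((off) + 1)

/* Walking the items on a page: page offsets are 1-based, use the macros. */
static int
sum_page_items(const int *page_items, int maxoff)
{
    int     sum = 0;
    int     i;

    for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
        sum += page_items[i];   /* slot 0 is unused */
    return sum;
}

/* Iterating over a local 0-based array: a plain counter is fine. */
static int
sum_local_items(const int *items, int nitems)
{
    int     sum = 0;
    int     i;

    for (i = 0; i < nitems; i++)
        sum += items[i];
    return sum;
}
```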

regards, tom lane


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-10 15:13:32
Message-ID: 4CDAB69C.2000103@gmail.com

On 2010-11-10 14:53, Alexander Korotkov wrote:
>
>
> Actually I can't understand the purpose of FirstOffsetNumber
> and OffsetNumberNext macros. When I wrote the patch I though about
> sortItems as about "clean from all these strange things" array, that's
> why I didn't use OffsetNumberNext there. :)
> I see only way to save logic of these macros is to use array starting
> from FirstOffsetNumber index like in gbt_num_picksplit.
Another reason for not using FirstOffsetNumber and its related macros on the
qsort array is that InvalidOffsetNumber (0) is not invalid for the array.
However, all other sorts in picksplit functions already seem to do it
this way. I'm not sure it's wise to introduce a different approach.
>
> The assignment to *left and *right at the end of the routine seem
> pretty
> useless (not to mention the comment talking about a routine that
> doesn't
> exist anywhere).
>
> I found, that gtrgm_picksplit in pg_trgm and gtsvector_picksplit in
> core still use this assignment, while gist_box_picksplit
> and gbt_num_picksplit not. If this assignment is overall useless, than
> I think we should remove it from gtrgm_picksplit
> and gtsvector_picksplit in order to not mislead developers of gist
> implementations.
+1

regards,
Yeb Havinga


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-15 09:06:44
Message-ID: AANLkTikU-sO85aNTcKcjXdB6=WkVR7R5DEDoe+zY+nTp@mail.gmail.com

With Oleg's help I found that the line "*left = *right = FirstOffsetNumber;"
was needed only for 7.X compatibility, and it isn't needed any more.
Also, I've replaced "i - 1" with "i - FirstOffsetNumber" in the array
filling. I believe this is the more correct way, because it will keep working
if FirstOffsetNumber ever changes.
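The change amounts to this small mapping (sketch, with a simplified stand-in for the real define):

```c
#define FirstOffsetNumber 1     /* simplified stand-in for the off.h define */

/* Map a 1-based page offset to a 0-based sort-array index. */
static int
offset_to_sort_index(int offset)
{
    return offset - FirstOffsetNumber;
}
```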

----
With best regards,
Alexander Korotkov.

Attachment Content-Type Size
seg_picksplit_fix-0.3.patch text/x-patch 6.7 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-16 00:07:07
Message-ID: AANLkTinFNNJRmkRZWYeFD=v8-pBHkMcgQYQ3piSXYD5W@mail.gmail.com

On Mon, Nov 15, 2010 at 4:06 AM, Alexander Korotkov
<aekorotkov(at)gmail(dot)com> wrote:
> With help of Oleg I found, that line "*left = *right = FirstOffsetNumber;"
> was needed only for 7.X compatibility, and it isn't needed any more.
> Also, I've replaced "i - 1" by "i - FirstOffsetNumber" in array filling.
> I believe it's more correct way, because it'll work correctly in the case
> when FirstOffsetNumber alters.

The loop that begins here:

for (i = 0; i < maxoff; i++)
{
/* First half of segs goes to the left datum. */
if (i < seed_2)

...looks like it should perhaps be broken into two separate loops.
That might also help tweak the logic in a way that eliminates this:

seg.c: In function ‘gseg_picksplit’:
seg.c:327: warning: ‘datum_r’ may be used uninitialized in this function
seg.c:326: warning: ‘datum_l’ may be used uninitialized in this function

But on a broader note, I'm not very certain the sorting algorithm is
sensible. For example, suppose you have 10 segments that are exactly
'0' and 20 segments that are exactly '1'. Maybe I'm misunderstanding,
but it seems like this will result in a 15/15 split when we almost
certainly want a 10/20 split. I think there will be problems in more
complex cases as well. The documentation says about the less-than and
greater-than operators that "These operators do not make a lot of
sense for any practical purpose but sorting."
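Robert's example can be made concrete with a small model (Seg, seg_lower_cmp and seg_union_range are illustrative, not the patch's code): sorting 10 copies of '0' and 20 copies of '1' by the lower bound and splitting at the midpoint puts five of the '1' segments on the left page, so the left union widens to 0 .. 1 instead of the tight 0 .. 0 that a 10/20 split would give.

```c
#include <stdlib.h>

typedef struct Seg
{
    double  lower;
    double  upper;
} Seg;

/* Comparator that looks only at the lower bound. */
static int
seg_lower_cmp(const void *a, const void *b)
{
    double  la = ((const Seg *) a)->lower;
    double  lb = ((const Seg *) b)->lower;

    return (la > lb) - (la < lb);
}

/* Bounding interval (union) of items[from .. to). */
static Seg
seg_union_range(const Seg *items, int from, int to)
{
    Seg     u = items[from];

    for (int i = from + 1; i < to; i++)
    {
        if (items[i].lower < u.lower)
            u.lower = items[i].lower;
        if (items[i].upper > u.upper)
            u.upper = items[i].upper;
    }
    return u;
}
```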

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-16 08:57:40
Message-ID: AANLkTimL7-iLSfv23osE6WO-s9FQQNBq3vBnCbZ7DYVS@mail.gmail.com

On Tue, Nov 16, 2010 at 3:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> The loop that begins here:
>
> for (i = 0; i < maxoff; i++)
> {
> /* First half of segs goes to the left datum. */
> if (i < seed_2)
>
> ...looks like it should perhaps be broken into two separate loops.
> That might also help tweak the logic in a way that eliminates this:
>
> seg.c: In function ‘gseg_picksplit’:
> seg.c:327: warning: ‘datum_r’ may be used uninitialized in this function
> seg.c:326: warning: ‘datum_l’ may be used uninitialized in this function
>
I restored the original version of that loop.

> But on a broader note, I'm not very certain the sorting algorithm is
> sensible. For example, suppose you have 10 segments that are exactly
> '0' and 20 segments that are exactly '1'. Maybe I'm misunderstanding,
> but it seems like this will result in a 15/15 split when we almost
> certainly want a 10/20 split. I think there will be problems in more
> complex cases as well. The documentation says about the less-than and
> greater-than operators that "These operators do not make a lot of
> sense for any practical purpose but sorting."

I think almost any split algorithm has corner cases in which its results
don't look very good. The way to understand the real-world significance of
these corner cases is to perform sufficient testing on datasets that are
close to real life. I don't feel able to propose enough test datasets and
estimate their significance for real-life cases myself, and I need help in
this area.

----
With best regards,
Alexander Korotkov.

Attachment Content-Type Size
seg_picksplit_fix-0.4.patch text/x-patch 6.8 KB

From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-16 11:07:58
Message-ID: AANLkTikbEpKcury9-YeudKBVs71iQWZsv2-GCfAtC7kx@mail.gmail.com

On Tue, Nov 16, 2010 at 3:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> But on a broader note, I'm not very certain the sorting algorithm is
> sensible. For example, suppose you have 10 segments that are exactly
> '0' and 20 segments that are exactly '1'. Maybe I'm misunderstanding,
> but it seems like this will result in a 15/15 split when we almost
> certainly want a 10/20 split. I think there will be problems in more
> complex cases as well. The documentation says about the less-than and
> greater-than operators that "These operators do not make a lot of
> sense for any practical purpose but sorting."

In order to illustrate a real problem we should think about GiST behavior
with a large enough amount of data. For example, I extrapolated this case to
100000 segs where 40% are (0,1) segs and 60% are (1,2) segs, and this case
doesn't seem to be a problem.
Here are the details.

test=# insert into seg_test (select case when random()<0.4 then '0..1'::seg
else '1..2'::seg end from generate_series(1,100000));
INSERT 0 100000
Time: 7543,941 ms

test=# create index seg_test_idx on seg_test using gist(a);
CREATE INDEX
Time: 17839,450 ms

test=# explain (analyze, buffers) select count(*) from seg_test where a @>
'1.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=344.84..344.85 rows=1 width=0) (actual
time=186.192..186.193 rows=1 loops=1)
Buffers: shared hit=887
-> Bitmap Heap Scan on seg_test (cost=5.05..344.59 rows=100 width=0)
(actual time=16.066..102.586 rows=60206 loops=1)
Recheck Cond: (a @> '1.5'::seg)
Buffers: shared hit=887
-> Bitmap Index Scan on seg_test_idx (cost=0.00..5.03 rows=100
width=0) (actual time=15.948..15.948 rows=60206 loops=1)
Index Cond: (a @> '1.5'::seg)
Buffers: shared hit=396
Total runtime: 186.306 ms
(9 rows)

test=# explain (analyze, buffers) select count(*) from seg_test where a @>
'0.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=344.84..344.85 rows=1 width=0) (actual
time=144.293..144.295 rows=1 loops=1)
Buffers: shared hit=754
-> Bitmap Heap Scan on seg_test (cost=5.05..344.59 rows=100 width=0)
(actual time=28.926..87.625 rows=39794 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=754
-> Bitmap Index Scan on seg_test_idx (cost=0.00..5.03 rows=100
width=0) (actual time=26.969..26.969 rows=39794 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=263
Total runtime: 144.374 ms
(9 rows)

test=# select pg_relpages('seg_test_idx');
pg_relpages
-------------
656
(1 row)

The total number of pages in the index is 656, and the number of pages used
in the scans is 263+396=659. Only 3 pages of overhead.
We can see the distribution of segs in the index using gevel.

test=# select a, count(1) from gist_print('seg_test_idx') as t(level int,
valid bool, a seg) group by a;
a | count
--------+-------
0 .. 1 | 40054
0 .. 2 | 2
1 .. 2 | 60599
(3 rows)

test=# select level, a, count(1) from gist_print('seg_test_idx') as t(level
int, valid bool, a seg) group by level,a order by level, a;
level | a | count
-------+--------+-------
1 | 0 .. 1 | 1
1 | 0 .. 2 | 1
1 | 1 .. 2 | 1
2 | 0 .. 1 | 259
2 | 0 .. 2 | 1
2 | 1 .. 2 | 392
3 | 0 .. 1 | 39794
3 | 1 .. 2 | 60206
(8 rows)

We have only 2 '0..2' segs (one on the first level and one on the second),
and all other segs are '0..1' and '1..2'. I think there will be more of a
problem when we have many distinct values, each with a count of about half an
index page.

----
With best regards,
Alexander Korotkov.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-20 03:46:44
Message-ID: AANLkTimVoZSG8i83xS_pUvYZioOneTeRVeqwJDHhiLfQ@mail.gmail.com

On Tue, Nov 16, 2010 at 6:07 AM, Alexander Korotkov
<aekorotkov(at)gmail(dot)com> wrote:
> On Tue, Nov 16, 2010 at 3:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> But on a broader note, I'm not very certain the sorting algorithm is
>> sensible.  For example, suppose you have 10 segments that are exactly
>> '0' and 20 segments that are exactly '1'.  Maybe I'm misunderstanding,
>> but it seems like this will result in a 15/15 split when we almost
>> certainly want a 10/20 split.  I think there will be problems in more
>> complex cases as well.  The documentation says about the less-than and
>> greater-than operators that "These operators do not make a lot of
>> sense for any practical purpose but sorting."
>
> In order to illustrate a real problem we should think about
> gist behavior with great enough amount of data. For example, I tried to
> extrapolate this case to 100000 of segs where 40% are (0,1) segs and 60% are
> (1,2) segs. And this case doesn't seem a problem for me.

Well, the problem with just comparing on < is that it takes very
little account of the upper bounds. I think the cases where a simple
split would hurt you the most are those where examining the upper
bound is necessary to get a good split.
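Robert's 10/20 example upthread can be made concrete with a toy sketch (Python here purely for illustration; the actual picksplit code is C in contrib/seg). Sorting on the lower bound alone and halving the page stretches one page's bounding interval across both clusters, so a point query for '1' must visit both pages, while the uneven 10/20 split keeps the pages apart:

```python
# Toy illustration (not the contrib/seg C code) of why sorting on the
# lower bound alone can pick a worse split than an uneven one.

def bbox(segs):
    """Bounding interval of a list of (lower, upper) segments."""
    return (min(lo for lo, _ in segs), max(hi for _, hi in segs))

def pages_visited(point, boxes):
    """How many page bounding intervals a point query must descend into."""
    return sum(1 for lo, hi in boxes if lo <= point <= hi)

# 10 segments that are exactly '0' and 20 that are exactly '1'.
segs = [(0.0, 0.0)] * 10 + [(1.0, 1.0)] * 20

# Sort-and-halve: a 15/15 split puts five '1' segments on the left page,
# stretching its bounding interval to (0, 1).
s = sorted(segs)
even_boxes = [bbox(s[:15]), bbox(s[15:])]

# The 10/20 split keeps the two bounding intervals (0, 0) and (1, 1).
uneven_boxes = [bbox(s[:10]), bbox(s[10:])]

# A query for '1' hits both pages under the even split, one page otherwise.
```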

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-20 12:36:20
Message-ID: 4CE7C0C4.6070902@gmail.com

On 2010-11-20 04:46, Robert Haas wrote:
> On Tue, Nov 16, 2010 at 6:07 AM, Alexander Korotkov
> <aekorotkov(at)gmail(dot)com> wrote:
>> On Tue, Nov 16, 2010 at 3:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> But on a broader note, I'm not very certain the sorting algorithm is
>>> sensible. For example, suppose you have 10 segments that are exactly
>>> '0' and 20 segments that are exactly '1'. Maybe I'm misunderstanding,
>>> but it seems like this will result in a 15/15 split when we almost
>>> certainly want a 10/20 split. I think there will be problems in more
>>> complex cases as well. The documentation says about the less-than and
>>> greater-than operators that "These operators do not make a lot of
>>> sense for any practical purpose but sorting."
>> In order to illustrate a real problem we should think about
>> gist behavior with great enough amount of data. For example, I tried to
>> extrapolate this case to 100000 of segs where 40% are (0,1) segs and 60% are
>> (1,2) segs. And this case doesn't seem a problem for me.
> Well, the problem with just comparing on < is that it takes very
> little account of the upper bounds. I think the cases where a simple
> split would hurt you the most are those where examining the upper
> bound is necessary to get a good split.
With the current 8K default blocksize, I put my money on the sorting
algorithm for any seg case. The R-tree algorithm's performance is
probably more influenced by large buckets -> low tree depth -> generic
keys on non-leaf nodes.

regards,
Yeb Havinga


From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Yeb Havinga <yebhavinga(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-20 13:15:10
Message-ID: AANLkTi=cM_RQ47nPyVoj8x0pkX25zWFKzC6DdMPFBc9u@mail.gmail.com

On Sat, Nov 20, 2010 at 6:46 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> Well, the problem with just comparing on < is that it takes very
> little account of the upper bounds. I think the cases where a simple
> split would hurt you the most are those where examining the upper
> bound is necessary to get a good split.

Yes, such an asymmetric solution also doesn't seem elegant enough to me :).
It's easy to sort segs by their centers instead; that way the lower and
upper bounds are used equally. A new patch is attached. I checked it on
various data distributions.
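The center-sorting idea can be sketched like this (Python for brevity; the patch itself is C in seg.c, and this ignores all GiST bookkeeping):

```python
# Sketch of the center-based picksplit idea (the real implementation is
# C in contrib/seg): sort entries by segment midpoint, so both bounds
# influence the order, then split the sorted array in half.

def center_picksplit(segs):
    """segs: list of (lower, upper) pairs; returns (left_page, right_page)."""
    by_center = sorted(segs, key=lambda s: (s[0] + s[1]) / 2.0)
    mid = len(by_center) // 2
    return by_center[:mid], by_center[mid:]

left, right = center_picksplit([(0.0, 1.0), (1.0, 2.0), (0.4, 0.6), (1.4, 1.6)])
# segments centred near 0.5 land on one page, those near 1.5 on the other
```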

1) Uniform distribution
test=# insert into seg_test (select (a || ' .. ' || a + 0.00005*b)::seg from
(select random() as a, random() as b from generate_series(1,1000000)) x);
INSERT 0 1000000
Time: 79121,830 ms
test=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 176409,434 ms
test=# explain (buffers, analyze) select * from seg_test where a @> '0.5 ..
0.5'::seg;
QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=28.19..2500.32 rows=1000 width=12)
(actual time=0.251..0.886 rows=27 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=3 read=27
-> Bitmap Index Scan on seg_test_idx (cost=0.00..27.94 rows=1000
width=0) (actual time=0.193..0.193 rows=27 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=3
Total runtime: 1.091 ms
(7 rows)

Time: 41,884 ms

2) Natural distribution (Box–Muller transform was used for data generation)
test=# insert into seg_test (select ( a - 0.00005*abs(b) || ' .. ' || a +
0.00005*abs(b))::seg from (select
cos(2.0*pi()*random())*sqrt(-2.0*ln(random())) as a,
cos(2.0*pi()*random())*sqrt(-2.0*ln(random())) as b from
generate_series(1,1000000)) x);
INSERT 0 1000000
Time: 98614,305 ms
test=# create index seg_test_idx on seg_test using gist(a);
CREATE INDEX
Time: 212513,540 ms
test=# explain (buffers, analyze) select * from seg_test where a @> '0.3 ..
0.3'::seg;
QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=28.18..2500.31 rows=1000 width=12)
(actual time=0.132..0.428 rows=27 loops=1)
Recheck Cond: (a @> '0.3'::seg)
Buffers: shared hit=3 read=27
-> Bitmap Index Scan on seg_test_idx (cost=0.00..27.93 rows=1000
width=0) (actual time=0.103..0.103 rows=27 loops=1)
Index Cond: (a @> '0.3'::seg)
Buffers: shared hit=3
Total runtime: 0.504 ms
(7 rows)

Time: 0,967 ms
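For reference, the Box-Muller expression used in the SQL above, rewritten as a Python sketch (using `1 - random()` so the log argument stays strictly positive, since `random()` can return 0):

```python
import math
import random

def box_muller():
    """One standard-normal variate from two uniforms, mirroring the
    cos(2*pi*u2) * sqrt(-2*ln(u1)) expression in the SQL above."""
    u1 = 1.0 - random.random()   # avoid log(0); random() is in [0, 1)
    u2 = random.random()
    return math.cos(2.0 * math.pi * u2) * math.sqrt(-2.0 * math.log(u1))

random.seed(42)
sample = [box_muller() for _ in range(100000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
# mean should be close to 0 and variance close to 1
```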

3) Many distinct values
test=# insert into seg_test (select (a||'..'||(a+1))::seg from (select
(random()*13000)::integer as a from generate_series(1,1000000)) x);
INSERT 0 1000000
Time: 90775,952 ms
test=# create index seg_test_idx on seg_test using gist(a);
CREATE INDEX
Time: 200960,758 ms
test=# explain (buffers, analyze) select * from seg_test where a @> '700.0
.. 700.0'::seg;
QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=28.19..2500.33 rows=1000 width=12)
(actual time=0.358..3.531 rows=138 loops=1)
Recheck Cond: (a @> '700.0'::seg)
Buffers: shared hit=3 read=135
-> Bitmap Index Scan on seg_test_idx (cost=0.00..27.94 rows=1000
width=0) (actual time=0.270..0.270 rows=138 loops=1)
Index Cond: (a @> '700.0'::seg)
Buffers: shared hit=3
Total runtime: 3.882 ms
(7 rows)

Time: 5,271 ms

----
With best regards,
Alexander Korotkov.

Attachment Content-Type Size
seg_picksplit_fix-0.5.patch text/x-patch 7.1 KB

From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-20 20:57:25
Message-ID: 4CE83635.2080405@gmail.com

On 2010-11-20 13:36, Yeb Havinga wrote:
> On 2010-11-20 04:46, Robert Haas wrote:
>> Well, the problem with just comparing on < is that it takes very
>> little account of the upper bounds. I think the cases where a simple
>> split would hurt you the most are those where examining the upper
>> bound is necessary to get a good split.
> With the current 8K default blocksize, I put my money on the sorting
> algorithm for any seg case. The r-tree algorithm's performance is
> probably more influenced by large buckets->low tree depth->generic
> keys on non leaf nodes.
To test this conjecture I compared a default 9.0.1 postgres (with
debugging) to exactly the same postgres but with an 1K blocksize, with
the test that Alexander posted upthread.

8K blocksize:
postgres=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 99613.308 ms
SELECT
Total runtime: 81.482 ms

1K blocksize:
CREATE INDEX
Time: 40113.252 ms
SELECT
Total runtime: 3.363 ms

Details of explain analyze are below. The rowcount results are not
exactly the same because I forgot to back up the first test, so I created
new random data.
Though I didn't compare the sorting picksplit this way, I suspect that
that algorithm won't be affected so much by the difference in blocksize.

regards,
Yeb Havinga

************** 8K test ********
postgres=# \timing
Timing is on.
postgres=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 99613.308 ms
postgres=# show block_size ;
block_size
------------
8192
(1 row)

Time: 0.313 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------
-----
Bitmap Heap Scan on seg_test (cost=44.32..2589.66 rows=1000 width=12)
(actual time=91.061..91.304 rows=27 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=581 read=1729 written=298
-> Bitmap Index Scan on seg_test_idx (cost=0.00..44.07 rows=1000
width=0) (actual time=91.029..91.029 rows=27 loop
s=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=581 read=1702 written=297
Total runtime: 91.792 ms
(7 rows)

Time: 309.687 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------
-----
Bitmap Heap Scan on seg_test (cost=44.32..2589.66 rows=1000 width=12)
(actual time=81.357..81.405 rows=27 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=1231 read=1079
-> Bitmap Index Scan on seg_test_idx (cost=0.00..44.07 rows=1000
width=0) (actual time=81.337..81.337 rows=27 loop
s=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=1204 read=1079
Total runtime: 81.482 ms
(7 rows)

Time: 82.291 ms

************** 1K test ********
postgres=# \timing
Timing is on.
postgres=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 40113.252 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------
----
Bitmap Heap Scan on seg_test (cost=278.66..3812.85 rows=1000
width=12) (actual time=4.649..4.839 rows=34 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=221 read=385
-> Bitmap Index Scan on seg_test_idx (cost=0.00..278.41 rows=1000
width=0) (actual time=4.620..4.620 rows=34 loops
=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=221 read=351
Total runtime: 4.979 ms
(7 rows)

Time: 6.217 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------
----
Bitmap Heap Scan on seg_test (cost=278.66..3812.85 rows=1000
width=12) (actual time=3.239..3.310 rows=34 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=606
-> Bitmap Index Scan on seg_test_idx (cost=0.00..278.41 rows=1000
width=0) (actual time=3.219..3.219 rows=34 loops
=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=572
Total runtime: 3.363 ms
(7 rows)

Time: 4.063 ms
postgres=# show block_size;
block_size
------------
1024
(1 row)

Time: 0.300 ms


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-20 21:13:17
Message-ID: 4CE839ED.7000700@gmail.com

On 2010-11-20 21:57, Yeb Havinga wrote:
> 8K blocksize:
> postgres=# create index seg_test_idx on seg_test using gist (a);
> CREATE INDEX
> Time: 99613.308 ms
> SELECT
> Total runtime: 81.482 ms
>
> 1K blocksize:
> CREATE INDEX
> Time: 40113.252 ms
> SELECT
> Total runtime: 3.363 ms
>
> Details of explain analyze are below. The rowcount results are not
> exactly the same because I forgot to backup the first test, so created
> new random data.
> Though I didn't compare the sorting picksplit this way, I suspect that
> that algorithm won't be affected so much by the difference in blocksize.
Here are the results for a 1K blocksize (debug enabled) and Alexander's
latest (0.5) patch.

postgres=# create index seg_test_idx on seg_test using gist (a);
CREATE INDEX
Time: 37373.398 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=209.97..3744.16 rows=1000
width=12) (actual time=0.091..0.283 rows=34 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=6 read=35
-> Bitmap Index Scan on seg_test_idx (cost=0.00..209.72 rows=1000
width=0) (actual time=0.071..0.071 rows=34 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=6 read=1
Total runtime: 0.392 ms
(7 rows)

Time: 1.798 ms
postgres=# explain (buffers, analyze) select * from seg_test where a @>
'0.5 .. 0.5'::seg;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on seg_test (cost=209.97..3744.16 rows=1000
width=12) (actual time=0.087..0.160 rows=34 loops=1)
Recheck Cond: (a @> '0.5'::seg)
Buffers: shared hit=41
-> Bitmap Index Scan on seg_test_idx (cost=0.00..209.72 rows=1000
width=0) (actual time=0.068..0.068 rows=34 loops=1)
Index Cond: (a @> '0.5'::seg)
Buffers: shared hit=7
Total runtime: 0.213 ms
(7 rows)

Time: 0.827 ms


From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-11-30 19:57:39
Message-ID: 4CF55733.7000403@gmail.com

On 2010-11-16 09:57, Alexander Korotkov wrote:
> On Tue, Nov 16, 2010 at 3:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> The loop that begins here:
>
> for (i = 0; i < maxoff; i++)
> {
> /* First half of segs goes to the left datum. */
> if (i < seed_2)
>
> ...looks like it should perhaps be broken into two separate loops.
> That might also help tweak the logic in a way that eliminates this:
>
> seg.c: In function ‘gseg_picksplit’:
> seg.c:327: warning: ‘datum_r’ may be used uninitialized in this
> function
> seg.c:326: warning: ‘datum_l’ may be used uninitialized in this
> function
>
> I restored the original version of that loop.
>
> But on a broader note, I'm not very certain the sorting algorithm is
> sensible. For example, suppose you have 10 segments that are exactly
> '0' and 20 segments that are exactly '1'. Maybe I'm misunderstanding,
> but it seems like this will result in a 15/15 split when we almost
> certainly want a 10/20 split. I think there will be problems in more
> complex cases as well. The documentation says about the less-than and
> greater-than operators that "These operators do not make a lot of
> sense for any practical purpose but sorting."
>
> I think almost any split algorithm has corner cases where its results
> don't look very good. I think the way to understand the significance
> of these corner cases for real life is to perform sufficient testing
> on datasets which are close to real life. I don't feel able to propose
> enough test datasets and estimate their significance for real-life
> cases, and I need help in this field.
I think it is time to mark this patch ready for committer:

The unintuitive result thus far is that sorting outperforms the R-tree
bounding-boxes style index, as Alexander has demonstrated on 20-11 with
several different distributions (uniform, natural (is that a bell
curve?), many distinct values).

My personal opinion is that I like the single loop for walking over the
sort array (aka gbt_num_picksplit) more than the two different ones, but
I'm in the minority here.

Two remarks on this patch also apply to other picksplit functions
currently around:
1) the *first = *last = FirstOffsetNumber assignment, as noted by
Alvaro, is not necessary anymore; Oleg confirmed this has been true
since PostgreSQL > 7.x.
2) loops over something other than the entryvector had better not use
FirstOffsetNumber/OffsetNumberNext, as indicated by Tom.

If this patch is committed, it might be an idea to change the other
occurrences as well.

regards,
Yeb Havinga


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Yeb Havinga <yebhavinga(at)gmail(dot)com>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Fix for seg picksplit function
Date: 2010-12-15 23:14:53
Message-ID: 13372.1292454893@sss.pgh.pa.us

Yeb Havinga <yebhavinga(at)gmail(dot)com> writes:
> I think it is time to mark this patch ready for committer:

> The unintuitive result thus far is that sorting outperforms the R-tree
> bounding boxes style index, as Alexander has demonstrated with several
> different distributions on 20-11 (uniform, natural (is that a bell
> curve?), many distinct values)

I spent some time analyzing this patch today. A few observations:

1. The figures of merit that we're interested in are minimizing the
overlap between the bounding boxes of the split pages, and trying to
divide the items more or less equally between the two pages. The less
overlap, the fewer subsequent searches will have to descend to both
children. If we divide the items unequally we risk ending up with a large
(low fill ratio) index, if we're unlucky enough for the less-populated
side to receive few additional entries. (Which is quite likely, if the
subsequently-added data has distribution like the existing data's.)
Low fill ratio is of course bad for I/O and buffer space consumption.

Alexander's proposed algorithm wins on the equal-division front, since
it always produces an even split, while the existing Guttman algorithm
can produce very uneven splits (more on that below). However, it's less
clear that Alexander's patch beats the existing code in terms of getting
good bounding boxes.
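The two figures of merit can be expressed directly; here is a hypothetical helper (Python sketch, function and names invented for illustration, not anything in the patch):

```python
def split_metrics(left, right):
    """left, right: lists of (lower, upper) segments on the two pages.
    Returns (overlap, balance): overlap is the length of the intersection
    of the two pages' bounding intervals (less means fewer searches must
    descend to both children); balance is the share of items on the
    smaller page (0.5 is a perfectly even split; values near 0 risk a
    low fill ratio)."""
    def bbox(segs):
        return (min(lo for lo, _ in segs), max(hi for _, hi in segs))
    (l_lo, l_hi) = bbox(left)
    (r_lo, r_hi) = bbox(right)
    overlap = max(0.0, min(l_hi, r_hi) - max(l_lo, r_lo))
    balance = min(len(left), len(right)) / (len(left) + len(right))
    return overlap, balance

# e.g. two pages whose bounding intervals share the span (0.9, 1.0):
m = split_metrics([(0.0, 1.0), (0.2, 0.4)], [(0.9, 2.0)])
```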

2. It's not really appropriate to compare the patch directly against git
HEAD, since we know that's broken. I experimented with just fixing the
s/size_alpha/size_beta/ problem, and observed that search efficiency
(measured as the number of buffers touched in a single query) improved
markedly, but index size didn't, and index build time actually got worse.

3. I think that the test cases presented by Alexander aren't really that
useful, because they all consist of large numbers of small segments.
Almost any split algorithm can do reasonably well there, because you can
always find a split that has pretty minimal overlap between the bounding
boxes of the two child pages; and in particular, changing the distribution
of where the segments are doesn't change things much. Yeb pointed this
out upthread but didn't pursue the point. Of course, if there are only
large segments then it's impossible to find minimally-overlapping splits,
so any split algorithm will do poorly. I think the interesting case is
where there are a few large segments among a population of small ones.
Accordingly, I experimented with test data built this way:

create table seg_test as
select (a || ' .. ' || a + 0.25*b)::seg as a from
(select random() as a, random() as b from generate_series(1,10)) x
union all
select (a || ' .. ' || a + 0.00005*b)::seg as a from
(select random() as a, random() as b from generate_series(1,1000000)) x;

(Note: order is important here, and it's also important to set
synchronize_seqscans off, else you won't get repeatable results.
We want to inject the large segments early in the index build process.)

What I got for this was

                        index build time    index pages    search buffers
fixed HEAD run A        114 sec             17777          16
fixed HEAD run B        113 sec             16574          6
Alexander's 0.5 run A   15.5 sec            6259           34
Alexander's 0.5 run B   16 sec              6016           3

("Search buffers" is the number of buffers touched in
select * from seg_test where a @> '0.5 .. 0.5'::seg.
Run A and run B are two different data sets generated as per the
above recipe.)

Unsurprisingly, the patch wins on index size, but its variance in
search efficiency seems a little worse than before. Note however that
the search-buffers number is still a lot better than unpatched HEAD,
where I was seeing values of several hundred.

4. The main speed problem with Guttman's algorithm is that the initial
seed-finding loop is O(N^2) in the number of items on the page to be split
(which is ~260, on my machine anyway). This is why Yeb found
significantly shorter index build time with 1K instead of 8K pages.
However, given the 1-D nature of the data it's not hard to think of
cheaper ways to select the seed items --- basically we want the two
"extremal" items. I experimented with finding the smallest-lower and
largest-upper items, and also the smallest-upper and largest-lower,
which of course can be done in one pass with O(N) time. Leaving the
second pass as-is, I got

                        index build time    index pages    search buffers
smallest_upper run A    34 sec              16049          7
smallest_upper run B    33 sec              15185          5
smallest_lower run A    15 sec              7327           40
smallest_lower run B    14 sec              7234           4

(I also tried smallest and largest center points, but that was worse than
these across the board.) So there's more than one way to skin a cat here.
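The O(N) seed selection described above can be sketched as a single pass over the page (Python sketch; the smallest-upper/largest-lower variant just flips the comparisons):

```python
def extremal_seeds(segs):
    """One O(N) pass over (lower, upper) items: return the index of the
    item with the smallest lower bound and the index of the item with
    the largest upper bound, for use as split seeds.  Guttman's original
    seed search compares all pairs, which is O(N^2) in page size."""
    lo_idx = hi_idx = 0
    for i, (lo, hi) in enumerate(segs):
        if lo < segs[lo_idx][0]:
            lo_idx = i
        if hi > segs[hi_idx][1]:
            hi_idx = i
    return lo_idx, hi_idx

seeds = extremal_seeds([(0.3, 0.5), (0.0, 0.2), (0.6, 1.0), (0.4, 0.7)])
# -> (1, 2): item 1 has the smallest lower bound, item 2 the largest upper
```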

5. The *real* problem with Guttman's algorithm became obvious while I was
doing these experiments: it's unstable as heck. If one of the seed items
is something of an outlier, it's not hard at all for the code to end up
attaching all the other items to the other seed's page, ie you end up with
just one item in one of the child pages. This can happen repeatedly,
leading to severe index bloat. Several times I ended up cancelling an
index build after it had blown past a gigabyte of index with no sign of
stopping. In particular the algorithm is sensitive to data ordering
because of the incremental way it enlarges the two pages: if the input
data is ordered already, that can increase the bias towards the first
seed. I also found out that it seems to be intentional, not a bug, that
the seed-finding loop excludes the last input item (which is generally the
new item that didn't fit on the page): if you don't do that then this
misbehavior becomes extremely likely to happen, anytime the new item isn't
within the range of existing items. This undocumented assumption didn't
make me any more pleasantly disposed towards the existing code.
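The instability is easy to reproduce with a stripped-down toy version of the second pass (Python; it assigns each item to the page whose bounding interval grows least, and deliberately ignores the real algorithm's PickNext ordering and any minimum-fill safeguards):

```python
def grown(box, seg):
    """New bounding interval after adding seg, plus how much it grew."""
    lo = min(box[0], seg[0])
    hi = max(box[1], seg[1])
    return (lo, hi), (hi - lo) - (box[1] - box[0])

def naive_guttman_assign(seed_a, seed_b, items):
    """Toy second pass: each item goes to whichever page's bounding
    interval enlarges less (no balancing, unlike the full algorithm)."""
    page_a, page_b = [seed_a], [seed_b]
    box_a, box_b = seed_a, seed_b
    for seg in items:
        new_a, grow_a = grown(box_a, seg)
        new_b, grow_b = grown(box_b, seg)
        if grow_a <= grow_b:
            page_a.append(seg)
            box_a = new_a
        else:
            page_b.append(seg)
            box_b = new_b
    return page_a, page_b

# 100 small segments clustered near zero, with one seed an outlier at 10:
items = [(i / 100.0, i / 100.0 + 0.01) for i in range(100)]
a, b = naive_guttman_assign((0.0, 0.01), (9.99, 10.0), items)
# every item attaches to the non-outlier seed, leaving page b with 1 entry
```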

Bottom line: although there might be even better ways to do it,
Alexander's patch seems to me to be a significant improvement over what's
there. It definitely wins on index size, and I'm not seeing any major
degradation in search efficiency. There are some stylistic things that
could be improved, but I'm going to clean it up and commit it.

As a future TODO, I think we ought to look at whether we can't get rid of
Guttman's algorithm in the other picksplit routines. I find it hard to
believe that it's any more stable in higher-dimensional cases than it is
here.

regards, tom lane