Re: Parser Cruft in gram.y

From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Dimitri Fontaine" <dimitri(at)2ndQuadrant(dot)fr>
Cc: "Robert Haas" <robertmhaas(at)gmail(dot)com>,pgsql-hackers(at)postgresql(dot)org
Subject: Re: Parser Cruft in gram.y
Date: 2012-12-14 23:41:49
Message-ID: 20121214234149.80060@gmx.com
Lists: pgsql-hackers

Tom Lane wrote:

> the parser tables are basically number-of-tokens wide by
> number-of-states high. (In HEAD there are 433 tokens known to the
> grammar, all but 30 of which are keywords, and 4367 states.)
>
> Splitting the grammar into multiple grammars is unlikely to do
> much to improve this --- in fact, it could easily make matters
> worse due to duplication.

I agree that without knowing what percentage of the current
combined parser each grammar would use after a split, it could go
either way.  Consider a hypothetical situation where one parser
needs 80% and the other 50% of the current combined parser -- a
30% overlap in both tokens and grammar constructs.  (Picking
numbers out of the air, purely for demonstration.)

Combined = 433 * 4,367 = 1,890,911

80% = 346 * 3,493 = 1,208,578
50% = 216 * 2,183 =   471,528

Total for split =   1,680,106

Of course if they were both at 80% it would be a higher total than
combined, but unless you have a handle on the percentages, it
doesn't seem like a foregone conclusion. Do you have any feel for
what the split would be?
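
In case it helps to try other assumptions, here's a rough
throwaway sketch of the same arithmetic; the 80%/50% shares are
just the hypothetical numbers above, not anything measured:

/* Rough estimate of parser table cells (tokens * states) for a
 * hypothetical grammar split.  The 433/4367 figures are from Tom's
 * message; the 0.80/0.50 shares are made up for illustration. */
#include <stdio.h>

#define TOKENS 433
#define STATES 4367

static long
cells(double share)
{
    return (long) (TOKENS * share) * (long) (STATES * share);
}

int
main(void)
{
    long combined = (long) TOKENS * STATES;
    long split = cells(0.80) + cells(0.50);

    printf("combined: %ld\n", combined);    /* 1,890,911 */
    printf("split:    %ld\n", split);       /* 1,680,106 */
    return 0;
}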

-Kevin
