Vitesse DB call for testing

From: CK Tan <cktan(at)vitessedata(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Vitesse DB call for testing
Date: 2014-10-17 12:32:13
Message-ID: CAJNt7=bEXacvfbVu-YKzQiFzxk7E6f9ZqWsbsRAxZpa61N7q2Q@mail.gmail.com
Lists: pgsql-hackers

Hi everyone,

Vitesse DB 9.3.5.S is Postgres 9.3.5 with an LLVM-JIT query executor
designed for compute-intensive OLAP workloads. We have gotten it to a
reasonable state and would like to open it up to the pgsql-hackers
community for testing and suggestions.

Vitesse DB offers
-- JIT Compilation for compute-intensive queries
-- CSV parsing with SSE instructions
-- 100% binary compatibility with PG9.3.5.

Our results show CSV imports run up to 2X faster, and TPCH Q1 runs 8X faster.
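
For a flavor of the SSE angle, this is the textbook technique of scanning
16 input bytes at a time for CSV delimiters -- a generic, self-contained
sketch using SSE2 intrinsics and gcc/clang builtins, not our actual parser:

#include <emmintrin.h>          /* SSE2 */
#include <stdio.h>
#include <string.h>

/* Return the offset of the next comma, quote or newline in buf[0..len),
 * or len if there is none, examining 16 bytes per iteration. */
static size_t
next_special(const char *buf, size_t len)
{
    const __m128i comma   = _mm_set1_epi8(',');
    const __m128i quote   = _mm_set1_epi8('"');
    const __m128i newline = _mm_set1_epi8('\n');
    size_t i = 0;

    for (; i + 16 <= len; i += 16)
    {
        __m128i chunk = _mm_loadu_si128((const __m128i *) (buf + i));
        __m128i hit   = _mm_or_si128(
                            _mm_or_si128(_mm_cmpeq_epi8(chunk, comma),
                                         _mm_cmpeq_epi8(chunk, quote)),
                            _mm_cmpeq_epi8(chunk, newline));
        int mask = _mm_movemask_epi8(hit);

        if (mask != 0)
            return i + __builtin_ctz(mask);
    }

    for (; i < len; i++)        /* scalar tail */
        if (buf[i] == ',' || buf[i] == '"' || buf[i] == '\n')
            return i;
    return len;
}

int
main(void)
{
    const char *line = "42,hello world,3.14\n";
    size_t      pos  = next_special(line, strlen(line));

    printf("first delimiter at offset %zu ('%c')\n", pos, line[pos]);
    return 0;
}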

Our TPCH 1GB benchmark results are also available at
http://vitessedata.com/benchmark/ .

Please direct any questions by email to cktan(at)vitessedata(dot)com .

Thank you for your help.

--
CK Tan
Vitesse Data, Inc.


From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: CK Tan <cktan(at)vitessedata(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 12:37:54
Message-ID: 20141017123754.GC2075@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2014-10-17 05:32:13 -0700, CK Tan wrote:
> Vitesse DB 9.3.5.S is Postgres 9.3.5 with a LLVM-JIT query executor
> designed for compute intensive OLAP workload. We have gotten it to a
> reasonable state and would like to open it up to the pg hackers
> community for testing and suggestions.
>
> Vitesse DB offers
> -- JIT Compilation for compute-intensive queries
> -- CSV parsing with SSE instructions
> -- 100% binary compatibility with PG9.3.5.
>
> Our results show CSV imports run up to 2X faster, and TPCH Q1 runs 8X faster.

How are these modifications licensed?

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: CK Tan <cktan(at)vitessedata(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 13:14:14
Message-ID: CAHyXU0yowsvdek04CdUaeioYDAHWJJs+dgVyjZiaAqf=3fhM7Q@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 17, 2014 at 7:32 AM, CK Tan <cktan(at)vitessedata(dot)com> wrote:
> Hi everyone,
>
> Vitesse DB 9.3.5.S is Postgres 9.3.5 with a LLVM-JIT query executor
> designed for compute intensive OLAP workload. We have gotten it to a
> reasonable state and would like to open it up to the pg hackers
> community for testing and suggestions.
>
> Vitesse DB offers
> -- JIT Compilation for compute-intensive queries
> -- CSV parsing with SSE instructions
> -- 100% binary compatibility with PG9.3.5.
>
> Our results show CSV imports run up to 2X faster, and TPCH Q1 runs 8X faster.
>
> Our TPCH 1GB benchmark results is also available at
> http://vitessedata.com/benchmark/ .
>
> Please direct any questions by email to cktan(at)vitessedata(dot)com .

You offer a binary with 32k block size...what's the rationale for that?

merlin


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: CK Tan <cktan(at)vitessedata(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 13:43:30
Message-ID: CAHyXU0w=b3+t8izRr5pS=uW4GBiX6wn_HJ4-4u30sfjhG526=w@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 17, 2014 at 8:14 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> On Fri, Oct 17, 2014 at 7:32 AM, CK Tan <cktan(at)vitessedata(dot)com> wrote:
>> Hi everyone,
>>
>> Vitesse DB 9.3.5.S is Postgres 9.3.5 with a LLVM-JIT query executor
>> designed for compute intensive OLAP workload. We have gotten it to a
>> reasonable state and would like to open it up to the pg hackers
>> community for testing and suggestions.
>>
>> Vitesse DB offers
>> -- JIT Compilation for compute-intensive queries
>> -- CSV parsing with SSE instructions
>> -- 100% binary compatibility with PG9.3.5.
>>
>> Our results show CSV imports run up to 2X faster, and TPCH Q1 runs 8X faster.
>>
>> Our TPCH 1GB benchmark results is also available at
>> http://vitessedata.com/benchmark/ .
>>
>> Please direct any questions by email to cktan(at)vitessedata(dot)com .
>
> You offer a binary with 32k block size...what's the rationale for that?

(sorry for the double post)

OK, I downloaded the Ubuntu binary and ran your benchmarks (after
making some minor .conf tweaks like disabling SSL). I then ran your
count/sum/avg benchmark (after fixing the typo) -- *and noticed a 95%
reduction in runtime* which is really quite amazing IMNSHO. I also ran
a select-only test on a small-scale-factor pgbench and didn't see any
regression there -- in fact you beat stock by ~3% (although this could
be measurement noise). So now you've got my attention. So, if you
don't mind, quit being coy and explain how the software works and all
the neat things it does and doesn't do.

merlin


From: CK Tan <cktan(at)vitessedata(dot)com>
To: Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 15:47:14
Message-ID: 5CFE0CA1-E5CC-4CD1-9D0B-8D72143D81C2@vitessedata.com
Lists: pgsql-hackers

Merlin, glad you tried it.

We take the query plan exactly as given by the planner and decide whether to JIT or to punt depending on the cost. If we punt, it goes back to the pg executor. If we JIT and cannot proceed (usually because of some operators we haven't implemented yet), we again punt. Once we are able to generate the code, there is no going back; we call into LLVM to obtain the function entry point and run it to completion. The 3% improvement you see in OLTP tests is definitely noise.
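
To make the punt path concrete, here is the general shape expressed as a
stock 9.3 executor hook. This is an illustration only, not our actual
code: the cost threshold and try_jit_run() are made-up names for the
example, and the real decision looks at more than total_cost.

#include "postgres.h"
#include "executor/executor.h"
#include "nodes/plannodes.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

void _PG_init(void);

static ExecutorRun_hook_type prev_ExecutorRun = NULL;

/* made-up knob: only JIT plans expensive enough to amortize compile time */
static double jit_cost_threshold = 100000.0;

/* hypothetical codegen entry point: returns false to punt (e.g. an
 * operator it does not handle), true if it ran the query to completion */
static bool
try_jit_run(QueryDesc *queryDesc, ScanDirection direction, long count)
{
    return false;               /* codegen elided in this sketch */
}

static void
jit_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
{
    Plan *plan = queryDesc->plannedstmt->planTree;

    if (plan->total_cost >= jit_cost_threshold &&
        try_jit_run(queryDesc, direction, count))
        return;                 /* JIT path ran the query */

    /* punt: hand the plan back to the stock executor untouched */
    if (prev_ExecutorRun)
        prev_ExecutorRun(queryDesc, direction, count);
    else
        standard_ExecutorRun(queryDesc, direction, count);
}

void
_PG_init(void)
{
    prev_ExecutorRun = ExecutorRun_hook;
    ExecutorRun_hook = jit_ExecutorRun;
}

A module shaped like this would be loaded via shared_preload_libraries;
everything else in the server stays stock.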

The bigint sum,avg,count case in the example you tried has some optimization: we use int128 to accumulate the bigint instead of numeric as pg does, hence the big speedup. Try the same query on int4 to see the improvement when both pg and Vitesse DB use int4 during execution.
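
To illustrate just the accumulator part, a standalone toy -- not our
generated code, and it assumes a compiler that provides __int128 (gcc or
clang on x86-64):

#include <stdint.h>
#include <stdio.h>

/* Toy transition state for sum/avg over bigint: a 128-bit running sum
 * cannot overflow for any realistic row count, so no numeric is needed. */
typedef struct
{
    __int128 sum;
    int64_t  count;
} Int8AggState;

static void
int8_accum(Int8AggState *state, int64_t value)
{
    state->sum += value;        /* one integer add per row */
    state->count += 1;
}

int
main(void)
{
    Int8AggState st = {0, 0};
    int64_t i;

    for (i = 1; i <= 1000000; i++)
        int8_accum(&st, i);

    /* 500000500000 fits in 64 bits, so the casts below are safe here */
    printf("count=%lld sum=%lld avg=%.6f\n",
           (long long) st.count,
           (long long) st.sum,
           (double) st.sum / (double) st.count);
    return 0;
}

Stock 9.3 accumulates sum/avg over bigint in numeric, which means a
numeric addition (and a palloc) per row; that is where most of the gap on
this particular query comes from.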

The speedup is really noticeable when the data type is non-varlena. In the varlena cases, we still call into pg routines most of the time. Again, try the sum,avg,count query on numeric, and you will see what I mean.

Also, we don't support UDFs at the moment, so all queries involving a UDF get sent to the pg executor.

On your question about the 32k page size, the rationale is that some of our customers could be interested in a data warehouse on pg. A 32k page size is a big win when all you do is seqscan all day long.

We are looking for bug reports at this stage and some stress tests done without our own prejudices. Tests on real data in a non-production setting, on queries that are highly CPU bound, would be ideal.

Thanks,
-cktan

> On Oct 17, 2014, at 6:43 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>
>> On Fri, Oct 17, 2014 at 8:14 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>>> On Fri, Oct 17, 2014 at 7:32 AM, CK Tan <cktan(at)vitessedata(dot)com> wrote:
>>> Hi everyone,
>>>
>>> Vitesse DB 9.3.5.S is Postgres 9.3.5 with a LLVM-JIT query executor
>>> designed for compute intensive OLAP workload. We have gotten it to a
>>> reasonable state and would like to open it up to the pg hackers
>>> community for testing and suggestions.
>>>
>>> Vitesse DB offers
>>> -- JIT Compilation for compute-intensive queries
>>> -- CSV parsing with SSE instructions
>>> -- 100% binary compatibility with PG9.3.5.
>>>
>>> Our results show CSV imports run up to 2X faster, and TPCH Q1 runs 8X faster.
>>>
>>> Our TPCH 1GB benchmark results is also available at
>>> http://vitessedata.com/benchmark/ .
>>>
>>> Please direct any questions by email to cktan(at)vitessedata(dot)com .
>>
>> You offer a binary with 32k block size...what's the rationale for that?
>
> (sorry for the double post)
>
> OK, I downloaded the ubuntu binary and ran your benchmarks (after
> making some minor .conf tweaks like disabling SSL). I then ran your
> benchmark (after fixing the typo) of the count/sum/avg test -- *and
> noticed a 95% reduction in runtime performance* which is really quite
> amazing IMNSHO. I also ran a select only test on small scale factor
> pgbench and didn't see any regression there -- in fact you beat stock
> by ~ 3% (although this could be measurement noise). So now you've
> got my attention. So, if you don't mind, quit being coy and explain
> how the software works and all the neat things it does and doesn't do.
>
> merlin


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: CK Tan <cktan(at)vitessedata(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 16:00:41
Message-ID: CAHyXU0wnMSV=U2BKvTqxuo0G7cuc-W7iCpVrvZAFMsYTNpbo=w@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 17, 2014 at 10:47 AM, CK Tan <cktan(at)vitessedata(dot)com> wrote:
> Merlin, glad you tried it.
>
> We take the query plan exactly as given by the planner, decide whether to JIT or to punt depending on the cost. If we punt, it goes back to pg executor. If we JIT, and if we could not proceed (usually of some operators we haven't implemented yet), we again punt. Once we were able to generate the code, there is no going back; we call into LLVM to obtain the function entry point, and run it to completion. The 3% improvement you see in OLTP tests is definitely noise.
>
> The bigint sum,avg,count case in the example you tried has some optimization. We use int128 to accumulate the bigint instead of numeric in pg. Hence the big speed up. Try the same query on int4 for the improvement where both pg and vitessedb are using int4 in the execution.
>
> The speed up is really noticeable when the data type is nonvarlena. In the varlena cases, we still call into pg routines most of the times. Again, try the sum,avg,count query on numeric, and you will see what I mean.
>
> Also, we don't support UDF at the moment. So all queries involving UDF gets sent to pg executor.
>
> On your question of 32k page size, the rational is that some of our customers could be interested in a data warehouse on pg. 32k page size is a big win when all you do is seqscan all day long.
>
> We are looking for bug reports at these stage and some stress tests done without our own prejudices. Some test on real data in non prod setting on queries that are highly CPU bound would be ideal.

One thing that I noticed is that when slamming your benchmark query
via pgbench, resident memory consumption was really aggressive and
would have taken down the server had I not stopped the test early.
Memory consumption did return to baseline after that, so I figured
some type of LLVM memory management games were going on. This isn't
really a problem for most OLAP workloads, but it's something to be
aware of.

Via 'perf top' on stock postgres, you see the usual suspects: palloc,
hash_search, etc. On your build, though, HeapTupleSatisfiesMVCC zooms
right to the top of the stack, which is pretty interesting...the
executor you've built is very lean and mean for sure. A drop-in
optimization engine with little to no schema/SQL changes is pretty neat
-- your primary competition here is going to be column-organized
table solutions to OLAP-type problems.

merlin


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: CK Tan <cktan(at)vitessedata(dot)com>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 17:12:27
Message-ID: 2905.1413565947@sss.pgh.pa.us
Lists: pgsql-hackers

CK Tan <cktan(at)vitessedata(dot)com> writes:
> The bigint sum,avg,count case in the example you tried has some optimization. We use int128 to accumulate the bigint instead of numeric in pg. Hence the big speed up. Try the same query on int4 for the improvement where both pg and vitessedb are using int4 in the execution.

Well, that's pretty much cheating: it's too hard to disentangle what's
coming from JIT vs what's coming from using a different accumulator
datatype. If we wanted to depend on having int128 available we could
get that speedup with a couple hours' work.

But what exactly are you "compiling" here? I trust not the actual data
accesses; that seems far too complicated to try to inline.

regards, tom lane


From: Feng Tian <ftian(at)vitessedata(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Fwd: Vitesse DB call for testing
Date: 2014-10-17 18:08:48
Message-ID: CAFWGqnuUGrjomoaNHxSzyvgNFxH6dYLzzmrWydkWCp_guuhkcA@mail.gmail.com
Lists: pgsql-hackers

Hi, Tom,

Sorry for the double post to you.

Feng

---------- Forwarded message ----------
From: Feng Tian <ftian(at)vitessedata(dot)com>
Date: Fri, Oct 17, 2014 at 10:29 AM
Subject: Re: [HACKERS] Vitesse DB call for testing
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>

Hi, Tom,

I agree that using int128 in stock postgres will speed things up
too. On the other hand, that is only one part of the equation. For
example, if you look at TPCH Q1, the int128 "cheating" does not kick in at
all, but we are 8x faster.

I am not sure what you mean by "actual data access". Data is still in
stock postgres format on disk. We indeed JIT-ed all data field accesses
(tuple deforming). To put things in perspective, I just timed select
count(*) and select count(l_orderkey) from tpch1.lineitem; our code is
bottlenecked by memory bandwidth and the difference is pretty much invisible.
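
As a rough illustration of what we mean by jit-ing the deform step, here
is a self-contained toy -- fixed-width columns only, no nulls, no
alignment padding, nothing like the real heap format or our generated
code -- contrasting a generic offset-walking loop with the per-schema
version a JIT can emit:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy row: fixed-width columns packed back to back. */
typedef struct { int len; } ColDesc;

/* Generic deformer: walks the column descriptors for every row,
 * recomputing every offset every time. */
static void
deform_generic(const uint8_t *row, const ColDesc *cols, int ncols,
               const uint8_t **values)
{
    int off = 0;
    int i;

    for (i = 0; i < ncols; i++)
    {
        values[i] = row + off;
        off += cols[i].len;
    }
}

/* What per-schema JIT code looks like conceptually: the offsets for one
 * specific layout (int4, int8, int4) are baked in at compile time. */
static void
deform_int4_int8_int4(const uint8_t *row, const uint8_t **values)
{
    values[0] = row + 0;
    values[1] = row + 4;
    values[2] = row + 12;
}

int
main(void)
{
    uint8_t        row[16];
    int32_t        a = 7, c = -1, out_a, out_c;
    int64_t        b = 42, out_b;
    ColDesc        cols[3] = {{4}, {8}, {4}};
    const uint8_t *v[3];

    memcpy(row + 0, &a, 4);
    memcpy(row + 4, &b, 8);
    memcpy(row + 12, &c, 4);

    deform_generic(row, cols, 3, v);
    memcpy(&out_a, v[0], 4);
    memcpy(&out_b, v[1], 8);
    memcpy(&out_c, v[2], 4);
    printf("generic:     %d %lld %d\n", out_a, (long long) out_b, out_c);

    deform_int4_int8_int4(row, v);
    memcpy(&out_a, v[0], 4);
    memcpy(&out_b, v[1], 8);
    memcpy(&out_c, v[2], 4);
    printf("specialized: %d %lld %d\n", out_a, (long long) out_b, out_c);
    return 0;
}

The real deformer also has to branch on the null bitmap, varlena headers
and alignment for every attribute of every row; baking one relation's
layout into straight-line code is what removes that per-row overhead.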

Thanks,
Feng

ftian=# set vdb_jit = 0;
SET
Time: 0.155 ms
ftian=# select count(*) from tpch1.lineitem;
  count
---------
 6001215
(1 row)

Time: 688.658 ms
ftian=# select count(*) from tpch1.lineitem;
  count
---------
 6001215
(1 row)

Time: 690.753 ms
ftian=# select count(l_orderkey) from tpch1.lineitem;
  count
---------
 6001215
(1 row)

Time: 819.452 ms
ftian=# set vdb_jit = 1;
SET
Time: 0.167 ms
ftian=# select count(*) from tpch1.lineitem;
  count
---------
 6001215
(1 row)

Time: 203.543 ms
ftian=# select count(l_orderkey) from tpch1.lineitem;
  count
---------
 6001215
(1 row)

Time: 202.253 ms
ftian=#

On Fri, Oct 17, 2014 at 10:12 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> CK Tan <cktan(at)vitessedata(dot)com> writes:
> > The bigint sum,avg,count case in the example you tried has some
> optimization. We use int128 to accumulate the bigint instead of numeric in
> pg. Hence the big speed up. Try the same query on int4 for the improvement
> where both pg and vitessedb are using int4 in the execution.
>
> Well, that's pretty much cheating: it's too hard to disentangle what's
> coming from JIT vs what's coming from using a different accumulator
> datatype. If we wanted to depend on having int128 available we could
> get that speedup with a couple hours' work.
>
> But what exactly are you "compiling" here? I trust not the actual data
> accesses; that seems far too complicated to try to inline.
>
> regards, tom lane


From: Josh Berkus <josh(at)agliodbs(dot)com>
To: CK Tan <cktan(at)vitessedata(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:13:00
Message-ID: 54415C2C.5020901@agliodbs.com
Lists: pgsql-hackers

CK,

Before we go any further on this, how is Vitesse currently licensed?
Last time we talked it was still proprietary. If it's not being
open-sourced, we likely need to take discussion off this list.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Feng Tian <ftian(at)vitessedata(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:21:55
Message-ID: CAM3SWZQAGF5+Kb9W+u=L4ObLtwWQfDuPJ55X2oP9bORxMAGybQ@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 17, 2014 at 11:08 AM, Feng Tian <ftian(at)vitessedata(dot)com> wrote:
> I agree using that using int128 in stock postgres will speed up things too.
> On the other hand, that is only one part of the equation. For example, if
> you look at TPCH Q1, the int128 "cheating" does not kick in at all, but we
> are 8x faster.

I'm curious about how the numbers look when stock Postgres is built
with the same page size as your fork. You didn't mention whether or
not your Postgres numbers came from a standard build.

--
Peter Geoghegan


From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: CK Tan <cktan(at)vitessedata(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:25:00
Message-ID: 20141017182500.GF2075@alap3.anarazel.de
Lists: pgsql-hackers

On 2014-10-17 13:12:27 -0400, Tom Lane wrote:
> Well, that's pretty much cheating: it's too hard to disentangle what's
> coming from JIT vs what's coming from using a different accumulator
> datatype. If we wanted to depend on having int128 available we could
> get that speedup with a couple hours' work.

I think doing that when configure detects int128 would make a great deal
of sense. It's not like we'd save a great deal of complicated code by
removing the existing accumulator... We'd still have to return a
numeric, but that's likely lost in the noise cost wise.
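
Roughly something like this, completely untested (HAVE_INT128 stands in
for whatever symbol configure would end up defining; the two-word
fallback below exists only to keep the sketch self-contained -- in core
the fallback would simply keep using the existing numeric transition
functions):

#include <stdint.h>
#include <stdio.h>

#ifdef HAVE_INT128
typedef __int128 int8_sum_t;
#else
/* portable stand-in used only when __int128 is unavailable */
typedef struct
{
    int64_t  hi;
    uint64_t lo;
} int8_sum_t;
#endif

static void
int8_sum_add(int8_sum_t *acc, int64_t x)
{
#ifdef HAVE_INT128
    *acc += x;
#else
    uint64_t oldlo = acc->lo;

    acc->lo += (uint64_t) x;
    /* carry out of the low word, plus sign-extension of x into the high word */
    acc->hi += (acc->lo < oldlo ? 1 : 0) + (x < 0 ? -1 : 0);
#endif
}

int
main(void)
{
    int8_sum_t acc;
    int64_t    i;

#ifdef HAVE_INT128
    acc = 0;
#else
    acc.hi = 0;
    acc.lo = 0;
#endif

    for (i = -1000; i <= 1000000; i++)
        int8_sum_add(&acc, i);

#ifdef HAVE_INT128
    printf("sum=%lld\n", (long long) acc);
#else
    printf("sum=%lld\n", (long long) acc.lo);   /* result fits in 64 bits here */
#endif
    return 0;
}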

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Feng Tian <ftian(at)vitessedata(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:29:09
Message-ID: CAHyXU0ywWYAApjEpGiRcf=LwWZO4T9Ps5jF+jaZCOSRCFrGsXA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Oct 17, 2014 at 1:21 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> On Fri, Oct 17, 2014 at 11:08 AM, Feng Tian <ftian(at)vitessedata(dot)com> wrote:
>> I agree using that using int128 in stock postgres will speed up things too.
>> On the other hand, that is only one part of the equation. For example, if
>> you look at TPCH Q1, the int128 "cheating" does not kick in at all, but we
>> are 8x faster.
>
> I'm curious about how the numbers look when stock Postgres is built
> with the same page size as your fork. You didn't mention whether or
> not your Postgres numbers came from a standard build.

I downloaded the 8kb variant.

vitesse (median of 3):
postgres=# select count(*), sum(i*i), avg(i) from t;
  count  │        sum         │         avg
─────────┼────────────────────┼─────────────────────
 1000000 │ 333333833333500000 │ 500000.500000000000
(1 row)

Time: 39.197 ms

stock (median of 3):
postgres=# select count(*), sum(i*i), avg(i) from t;
  count  │        sum         │         avg
─────────┼────────────────────┼─────────────────────
 1000000 │ 333333833333500000 │ 500000.500000000000
(1 row)

Time: 667.362 ms

(stock int4 ops)
postgres=# select sum(1::int4) from t;
   sum
─────────
 1000000
(1 row)

Time: 75.265 ms

What I'm wondering is how complex the hooks are that tie the
technology in. Unless a BSD-licensed patch materializes, the
conversation (beyond the initial wow! factor) should probably focus
on possible integration points and/or implementation of the technology
into core in a general way.

merlin


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: CK Tan <cktan(at)vitessedata(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:35:55
Message-ID: 5197.1413570955@sss.pgh.pa.us
Lists: pgsql-hackers

Andres Freund <andres(at)2ndquadrant(dot)com> writes:
> On 2014-10-17 13:12:27 -0400, Tom Lane wrote:
>> Well, that's pretty much cheating: it's too hard to disentangle what's
>> coming from JIT vs what's coming from using a different accumulator
>> datatype. If we wanted to depend on having int128 available we could
>> get that speedup with a couple hours' work.

> I think doing that when configure detects int128 would make a great deal
> of sense.

Yeah, I was wondering about that myself: use int128 if available,
else fall back on existing code path.

regards, tom lane


From: CK Tan <cktan(at)vitessedata(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Andres Freund <andres(at)2ndquadrant(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-17 18:52:32
Message-ID: CAJNt7=Y_Q9s2KRvRmXcU8JDmPmo6rwP8E13Z7DvFE8K-0NW8rg@mail.gmail.com
Lists: pgsql-hackers

Happy to contribute to that decision :-)

On Fri, Oct 17, 2014 at 11:35 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Andres Freund <andres(at)2ndquadrant(dot)com> writes:
>> On 2014-10-17 13:12:27 -0400, Tom Lane wrote:
>>> Well, that's pretty much cheating: it's too hard to disentangle what's
>>> coming from JIT vs what's coming from using a different accumulator
>>> datatype. If we wanted to depend on having int128 available we could
>>> get that speedup with a couple hours' work.
>
>> I think doing that when configure detects int128 would make a great deal
>> of sense.
>
> Yeah, I was wondering about that myself: use int128 if available,
> else fall back on existing code path.
>
> regards, tom lane


From: David Gould <daveg(at)sonic(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: CK Tan <cktan(at)vitessedata(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-18 03:40:14
Message-ID: 20141017204014.07f3755c@jekyl.lan
Lists: pgsql-hackers

On Fri, 17 Oct 2014 13:12:27 -0400
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> CK Tan <cktan(at)vitessedata(dot)com> writes:
> > The bigint sum,avg,count case in the example you tried has some optimization. We use int128 to accumulate the bigint instead of numeric in pg. Hence the big speed up. Try the same query on int4 for the improvement where both pg and vitessedb are using int4 in the execution.
>
> Well, that's pretty much cheating: it's too hard to disentangle what's
> coming from JIT vs what's coming from using a different accumulator
> datatype. If we wanted to depend on having int128 available we could
> get that speedup with a couple hours' work.
>
> But what exactly are you "compiling" here? I trust not the actual data
> accesses; that seems far too complicated to try to inline.
>
> regards, tom lane
>
>

I don't have any inside knowledge, but from the presentation given at the
recent SFPUG followed by a bit of google-fu I think these papers are
relevant:

http://www.vldb.org/pvldb/vol4/p539-neumann.pdf
http://sites.computer.org/debull/A14mar/p3.pdf

-dg

--
David Gould 510 282 0869 daveg(at)sonic(dot)net
If simplicity worked, the world would be overrun with insects.


From: CK Tan <cktan(at)vitessedata(dot)com>
To: David Gould <daveg(at)sonic(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-18 04:46:10
Message-ID: CAJNt7=aup4cZMAio52gwdCukyAyCqb7nhQuCYVi4r1+stfJcdw@mail.gmail.com
Lists: pgsql-hackers

Indeed! A big part of our implementation is based on the Neumann
paper. There are also a few other papers that influenced our
implementation:

A. Ailamaki, D. DeWitt, M. Hill, D. Wood. DBMSs On A Modern Processor:
Where Does Time Go?

Peter Boncz, Marcin Zukowski, Niels Nes. MonetDB/X100:
Hyper-Pipelining Query Execution

M. Zukowski et al. Super-Scalar RAM-CPU Cache Compression

Of course, we needed to adapt a lot of the design to Postgres to make
something that fits harmoniously into the Postgres system, and to take
care that we can merge easily with future versions of Postgres -- the
implementation needs to be as non-invasive as possible.

Regards,
-cktan

On Fri, Oct 17, 2014 at 8:40 PM, David Gould <daveg(at)sonic(dot)net> wrote:
> On Fri, 17 Oct 2014 13:12:27 -0400
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>> CK Tan <cktan(at)vitessedata(dot)com> writes:
>> > The bigint sum,avg,count case in the example you tried has some optimization. We use int128 to accumulate the bigint instead of numeric in pg. Hence the big speed up. Try the same query on int4 for the improvement where both pg and vitessedb are using int4 in the execution.
>>
>> Well, that's pretty much cheating: it's too hard to disentangle what's
>> coming from JIT vs what's coming from using a different accumulator
>> datatype. If we wanted to depend on having int128 available we could
>> get that speedup with a couple hours' work.
>>
>> But what exactly are you "compiling" here? I trust not the actual data
>> accesses; that seems far too complicated to try to inline.
>>
>> regards, tom lane
>>
>>
>
> I don't have any inside knowledge, but from the presentation given at the
> recent SFPUG followed by a bit of google-fu I think these papers are
> relevant:
>
> http://www.vldb.org/pvldb/vol4/p539-neumann.pdf
> http://sites.computer.org/debull/A14mar/p3.pdf
>
> -dg
>
> --
> David Gould 510 282 0869 daveg(at)sonic(dot)net
> If simplicity worked, the world would be overrun with insects.


From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: Josh Berkus <josh(at)agliodbs(dot)com>, CK Tan <cktan(at)vitessedata(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-18 05:55:06
Message-ID: 544200BA.6090308@catalyst.net.nz
Lists: pgsql-hackers

On 18/10/14 07:13, Josh Berkus wrote:
> CK,
>
> Before we go any further on this, how is Vitesse currently licensed?
> last time we talked it was still proprietary. If it's not being
> open-sourced, we likely need to take discussion off this list.
>

+1

Guys, you need to 'fess up on the licensing!

Regards

Mark


From: CK Tan <cktan(at)vitessedata(dot)com>
To: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vitesse DB call for testing
Date: 2014-10-18 07:11:51
Message-ID: CAJNt7=YT-4OT913=mQ3ms6_sPiew6zYu-BoOwn4OpYx1g14sDg@mail.gmail.com
Lists: pgsql-hackers

Hi Mark,

Vitesse DB won't be open-sourced; otherwise it would have been a contrib
module for postgres. We should take further discussion off this list.
People should contact me directly if they have any questions.

Thanks,
cktan(at)vitessedata(dot)com

On Fri, Oct 17, 2014 at 10:55 PM, Mark Kirkwood
<mark(dot)kirkwood(at)catalyst(dot)net(dot)nz> wrote:
> On 18/10/14 07:13, Josh Berkus wrote:
>>
>> CK,
>>
>> Before we go any further on this, how is Vitesse currently licensed?
>> last time we talked it was still proprietary. If it's not being
>> open-sourced, we likely need to take discussion off this list.
>>
>
> +1
>
> Guys, you need to 'fess up on the licensing!
>
> Regards
>
> Mark