Re: In-Memory Columnar Store

From: knizhnik <knizhnik(at)garret(dot)ru>
To: desmodemone <desmodemone(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>, Oleg Bartunov <obartunov(at)gmail(dot)com>
Subject: Re: In-Memory Columnar Store
Date: 2013-12-12 06:53:00
Message-ID: 52A95D4C.1030204@garret.ru
Lists: pgsql-hackers

Thank you very much for reporting the problem,
and sorry for this bug and the lack of negative tests.

An attempt to access a non-existent value causes autoloading of data from
the table into the columnar store (because the autoload property is enabled
by default), and since the entry is not present in the table, the code falls
into infinite recursion.
A patched version of IMCS is available at
http://www.garret.ru/imcs-1.01.tar.gz

I am going to place IMCS under version control now; I am just looking for
a proper place for the repository...

On 12/12/2013 04:06 AM, desmodemone wrote:
>
>
>
> 2013/12/9 knizhnik <knizhnik(at)garret(dot)ru <mailto:knizhnik(at)garret(dot)ru>>
>
> Hello!
>
> I want to announce my implementation of an In-Memory Columnar Store
> extension for PostgreSQL:
>
> Documentation: http://www.garret.ru/imcs/user_guide.html
> Sources: http://www.garret.ru/imcs-1.01.tar.gz
>
> Any feedback, bug reports, and suggestions are welcome.
>
> The vertical (columnar) representation of the data is stored in
> PostgreSQL shared memory, which is why it is important to be able to
> utilize all available physical memory.
> Servers with a terabyte or more of RAM are no longer exotic,
> especially in the financial world.
> But with standard 4kb pages, Linux limits the maximal size of a
> mapped memory segment to 256Gb.
> It is possible to overcome this limitation either by creating
> multiple segments, which would require too many changes in the
> PostgreSQL memory manager, or by simply setting the MAP_HUGETLB flag
> (assuming that huge pages have been allocated in the system).
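>
> As a minimal illustration (not the IMCS source; the 1Gb size and the
> fallback path are just assumptions for the example), mapping an
> anonymous shared segment with huge pages on Linux looks roughly like
> this, provided huge pages were reserved beforehand, e.g. via
> /proc/sys/vm/nr_hugepages:
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <sys/mman.h>
>
> int main(void)
> {
>     size_t size = (size_t)1 << 30;   /* 1Gb; must be a multiple of the huge page size */
>
>     /* Try huge pages first; if none were reserved, mmap() fails with ENOMEM. */
>     void *seg = mmap(NULL, size, PROT_READ | PROT_WRITE,
>                      MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
>     if (seg == MAP_FAILED)
>     {
>         perror("mmap(MAP_HUGETLB)");
>         /* Fall back to standard 4kb pages. */
>         seg = mmap(NULL, size, PROT_READ | PROT_WRITE,
>                    MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>         if (seg == MAP_FAILED)
>         {
>             perror("mmap");
>             return 1;
>         }
>     }
>     printf("mapped %zu bytes at %p\n", size, seg);
>     munmap(seg, size);
>     return 0;
> }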
>
> I found several messages related to the MAP_HUGETLB flag; the most
> recent one was from November 21:
> http://www.postgresql.org/message-id/20131125032920.GA23793@toroid.org
>
> What is the current status of this patch?
>
> Hello,
> excellent work! I have started testing and it is very fast.
> By the way, I found a strange case of an "endless" query with the CPU
> at 100% when the value used as a filter does not exist:
>
> I am testing with PostgreSQL 9.3.1 on Debian, and I used the default
> values for the extension except for memory (512MB).
>
> How to recreate the test case:
>
> ## create a table:
>
> create table endless ( col1 int , col2 char(30) , col3 int ) ;
>
> ## insert some values:
>
> insert into endless values ( 1, 'ahahahaha', 3);
>
> insert into endless values ( 2, 'ghghghghg', 4);
>
> ## create the column store objects:
>
> select cs_create('endless','col1','col2');
> cs_create
> -----------
>
> (1 row)
>
> ## try and test the column store:
>
> select cs_avg(col3) from endless_get('ahahahaha');
> cs_avg
> --------
> 3
> (1 row)
>
> select cs_avg(col3) from endless_get('ghghghghg');
> cs_avg
> --------
> 4
> (1 row)
>
> ## now select with a value that does not exist:
>
> select cs_avg(col3) from endless_get('testing');
>
> ## and now it starts to loop on the CPU and seems to never end; I had
> to terminate the backend
>
> Bye
>
> Mat
