I: arrays and block size

Lists: pgsql-general
From: Scara Maccai <m_lists(at)yahoo(dot)it>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: I: arrays and block size
Date: 2009-02-02 17:48:37
Message-ID: 890362.3397.qm@web24605.mail.ird.yahoo.com
Lists: pgsql-general

Anyone?

----- Forwarded message -----
> From: Scara Maccai <m_lists(at)yahoo(dot)it>
> To: pgsql-general <pgsql-general(at)postgresql(dot)org>
> Sent: Friday, 30 January 2009, 13:59:09
> Subject: [GENERAL] arrays and block size
>
> Hi,
>
> I need to store a lot of int8 columns (2000-2500) in a table.
>
> I was thinking about using int8[], and I would like to know:
>
> 1) is there a maximum size for arrays? I guess I could have 1 GB "worth" of
> values, but I would like confirmation
> 2) there won't be any updates, only inserts and selects; since I will be using
> ZFS, would there be any benefit in changing the block size (for example to 32K)
> in PostgreSQL? Would it be better to use ZFS compression instead of PostgreSQL's
> (TOAST)? That is: should I use EXTERNAL for that array column and set
> compression=on for ZFS? I tried ZFS compression=on with a 200-column table and
> it worked faster than with compression=off....


From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: I: arrays and block size
Date: 2009-02-02 20:41:18
Message-ID: 20090202204118.GJ3008@frubble.xen.chris-lamb.co.uk
Lists: pgsql-general

On Mon, Feb 02, 2009 at 09:48:37AM -0800, Scara Maccai wrote:
> > I need to store a lot of int8 columns (2000-2500) in a table.
> > I was thinking about using int8[]

An array of ints sounds like the way to go here, as you wouldn't be
able to have that many columns (PostgreSQL allows at most 1600 columns
per table, and often fewer in practice). TOAST is one non-obvious
implementation artifact that may cause extra work to be done and may
cause confusion: a heap tuple must fit in an 8KB page, and 2000 INT8
values alone take 16000 bytes, so arrays of this size will always be
compressed and/or stored out of line. TOAST docs are available here:

http://www.postgresql.org/docs/current/static/storage-toast.html
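
For scale (a sketch, not from the original mail): 2500 int8 values are
8 bytes each, i.e. 20000 bytes of raw data, already well over the 8KB
page size, so such an array will always go through the TOAST machinery.
The raw datum size can be checked with pg_column_size, which reports
the size of the value it is handed, header included:

SELECT pg_column_size(ARRAY(SELECT generate_series(1, 2500))::int8[]);
-- roughly 2500 * 8 bytes of data plus a small array header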

> , and I would like to know:
> > 1) is there a max size for arrays? I guess I could have 1 GB "worth"
> > of values,

Nothing array-specific that I'm aware of, beyond the general 1 GB limit
on any single field value; as a quick test:

CREATE TABLE arrtest ( v INT8[] );
INSERT INTO arrtest SELECT ARRAY(SELECT generate_series(1,10000000));

worked (very slowly) for me, but 100000000 failed with an out-of-memory
error (32-bit build).
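
As a follow-up to that quick test (this query is an illustration, not
part of the original mail), the size actually stored for the column can
be inspected the same way; pg_column_size on a table column reflects
the stored, possibly compressed, representation:

SELECT array_length(v, 1), pg_column_size(v) FROM arrtest;
-- for highly compressible generate_series data the stored size should
-- be far smaller than 10000000 * 8 bytes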

> > but I would like a confirmation
> > 2) there won't be any updates, only inserts and selects; since I
> > will be using ZFS would there be any benefit in changing the block
> > size (for example to 32K) in postgresql?

No idea, maybe someone else?

> > Would it be better to use
> > ZFS compression instead of PostgreSQL's (TOAST)? That is: should I
> > use EXTERNAL for that array column and set compression=on for ZFS? I
> > tried ZFS compression=on with a 200-column table and it worked faster
> > than with compression=off....

Oh, I guess you know about TOAST. Otherwise I can't help much, sorry!
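
For the EXTERNAL part of the question, the per-column storage strategy
can be changed with standard PostgreSQL syntax (shown here against the
arrtest table from the quick test above, as an illustration); EXTERNAL
stores the value out of line but uncompressed, which is what you would
want if ZFS is doing the compression:

ALTER TABLE arrtest ALTER COLUMN v SET STORAGE EXTERNAL;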

--
Sam http://samason.me.uk/