Re: pg_largeobject

From: Sridhar N Bamandlapally <sridhar(dot)bn1(at)gmail(dot)com>
To: Daniel Verite <daniel(at)manitou-mail(dot)org>
Cc: John R Pierce <pierce(at)hogranch(dot)com>, PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_largeobject
Date: 2016-03-30 10:32:27
Message-ID: CAGuFTBW4O0Xb8qqT5tzwxp+XrVoMohPXzTXVHcmC0iacZwiRtg@mail.gmail.com
Lists: pgsql-general pgsql-hackers

Is there any way we can change the segment file size?

I am trying to look into how the segment file size relates to the bytea size
limitation.

PostgreSQL installation:

step 1: ./configure --enable-largefile --with-segsize (this throws the error
"configure: error: Large file support is not enabled. Segment size cannot
be larger than 1GB")
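
If I read the docs correctly, --with-segsize expects a value in gigabytes
(e.g. --with-segsize=4), and going above 1GB needs OS large-file support,
which is what the error above is complaining about. For reference, this is
how I am checking the values the current binaries were built with, from a
psql session (segment_size and block_size are read-only preset parameters):

SHOW block_size;    -- compiled-in block size, 8192 bytes by default
SHOW segment_size;  -- compiled-in relation segment size, 1GB by default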

Thanks
Sridhar

On Tue, Mar 29, 2016 at 9:01 PM, Daniel Verite <daniel(at)manitou-mail(dot)org>
wrote:

> Sridhar N Bamandlapally wrote:
>
> > due to size limitation BYTEA was not considered
>
> You could adopt for a custom table the same kind of structure that
> pg_largeobject has, that is an ordered series of BYTEA chunks.
>
> # \d pg_largeobject
>   Table "pg_catalog.pg_largeobject"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  loid   | oid     | not null
>  pageno | integer | not null
>  data   | bytea   |
>
> Say you create a table looking like this:
> (
>   object_id int,
>   pageno integer,
>   data bytea
> )
> with a unique index on (object_id,pageno),
> and octet_length(data) never exceeding a reasonable max size,
> such as 1MB, so if you want a 5GB object, that's just 5*1024 rows in
> that table, one row per pageno.
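>
> For example (just a sketch; blob_chunk is only a placeholder name, and the
> unique constraint provides the index mentioned above):
>
>   CREATE TABLE blob_chunk (
>     object_id int     NOT NULL,
>     pageno    integer NOT NULL,
>     data      bytea,
>     UNIQUE (object_id, pageno)
>   );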
>
> It's really a good plan if your client code cooperates by streaming
> contents ordered by pageno instead of handling the blob as
> a monolith.
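>
> On the reading side, that boils down to something like this (a sketch,
> using the placeholder table above):
>
>   SELECT data FROM blob_chunk WHERE object_id = $1 ORDER BY pageno;
>
> with the client processing the chunks one row at a time as they arrive.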
>
> About the chunk size, by comparison, the large object facility limits
> pg_largeobject.data to a quarter of a page, or 2048 bytes per row
> (=LOBLKSIZE, see comments around
> http://doxygen.postgresql.org/large__object_8h_source.html#l00072 )
>
> Having your own table has several advantages:
> - it contains many fewer rows for the same contents, if the chosen chunk
>   size is much larger than 2048 bytes.
> - TOAST storage is enabled so that the main relation is way smaller.
> - it can be partitioned.
> - it can have triggers (good for custom replication)
>
> The drawback is that your application has to provide the equivalent of
> the lo_* client-side and server-side functions that it needs.
> But that's relatively easy work for a programmer, especially if the blobs
> happen to be immutable, as is often the case.
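>
> For instance, writing a blob is just one INSERT per chunk, with the client
> slicing the contents into pages beforehand (again a sketch against the
> placeholder table above):
>
>   INSERT INTO blob_chunk (object_id, pageno, data)
>   VALUES ($1, $2, $3);
>
> plus an UPDATE ... WHERE object_id = $1 AND pageno = $2 if existing chunks
> ever need to be overwritten.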
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
