From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_basebackup failed to back up large file
Date: 2014-06-03 15:42:49
Message-ID: 14325.1401810169@sss.pgh.pa.us
Lists: pgsql-hackers
Andres Freund <andres(at)2ndquadrant(dot)com> writes:
> On 2014-06-03 11:04:58 -0400, Tom Lane wrote:
>> My point is that having backups crash on an overflow doesn't really seem
>> acceptable. IMO we need to reconsider the basebackup protocol and make
>> sure we don't *need* to put values over 4GB into this field. Where's the
>> requirement coming from anyway --- surely all files in PGDATA ought to be
>> 1GB max?
> Fujii's example was logfiles in pg_log. But we allow to change the
> segment size via a configure flag, so we should support that or remove
> the ability to change the segment size...
What we had better do, IMO, is fix things so that we don't have a filesize
limit in the basebackup format. After a bit of googling, I found out that
recent POSIX specs for tar format include "extended headers" that among
other things support member files of unlimited size [1]. Rather than
fooling with partial fixes, we should make the basebackup logic use an
extended header when the file size is over INT_MAX.
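To illustrate the mechanism (this is not PostgreSQL code): a pax extended header is an extra archive member, with typeflag 'x', placed immediately before the file it describes; its data consists of "length key=value\n" records, and a "size" record overrides the fixed-width octal size field of the classic ustar header. A minimal Python sketch of the record encoding, where the length prefix counts its own digits:

```python
def pax_record(key: str, value: str) -> bytes:
    """Encode one POSIX pax extended-header record: "<len> <key>=<value>\\n",
    where <len> is the decimal byte length of the entire record, including
    the digits of <len> itself and the trailing newline."""
    body = f" {key}={value}\n"
    # The prefix's own digit count contributes to the total, so iterate
    # until the length value is self-consistent.
    length = len(body)
    while len(str(length)) + len(body) != length:
        length = len(str(length)) + len(body)
    return (str(length) + body).encode("ascii")

# A 6 GiB member file overflows a signed 32-bit size, but is trivially
# expressed as a pax "size" record:
print(pax_record("size", str(6 * 1024**3)))  # b'19 size=6442450944\n'
```

In practice a tar writer would emit this record as the data of a typeflag-'x' member (itself well under any size limit) and then write the oversized file's ustar header with a placeholder size, which pax-aware readers ignore in favor of the record.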
regards, tom lane
[1] http://pubs.opengroup.org/onlinepubs/9699919799/
    (see "pax" under Shells & Utilities)