Re: WIP patch for parallel pg_dump

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "Andrew Dunstan" <andrew(at)dunslane(dot)net>, "Greg Smith" <greg(at)2ndquadrant(dot)com>, "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, "Joachim Wieland" <joe(at)mcknight(dot)de>, "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP patch for parallel pg_dump
Date: 2010-12-06 18:24:27
Message-ID: 29177.1291659867@sss.pgh.pa.us
Lists: pgsql-hackers

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> I'm still not convinced that using shared memory is a bad way to
>>> pass these around. Surely we're not talking about large numbers
>>> of them. What am I missing here?
>>
>> They're not of a very predictable size.

> Surely you can predict that any snapshot is no larger than a fairly
> small fixed portion plus sizeof(TransactionId) * MaxBackends?

No. See subtransactions.
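
To make that concrete, here is a minimal sketch of why the estimate fails
(field names modeled loosely on the backend's SnapshotData, simplified and
not verbatim; snapshot_worst_case_bytes is purely illustrative): each
backend caches up to PGPROC_MAX_CACHED_SUBXIDS (64) subtransaction XIDs in
its PGPROC entry, so a snapshot's subxip array can be up to 64 times larger
than the xip array that a sizeof(TransactionId) * MaxBackends bound
accounts for, and even the subxid cache can overflow.

    #include <stddef.h>

    typedef unsigned int TransactionId;

    #define PGPROC_MAX_CACHED_SUBXIDS 64    /* per-backend subxid cache */

    typedef struct SnapshotSketch
    {
        TransactionId xmin;
        TransactionId xmax;
        unsigned int  xcnt;         /* # of entries in xip[] */
        TransactionId *xip;         /* top-level XIDs: at most MaxBackends */
        int           subxcnt;      /* # of entries in subxip[] */
        TransactionId *subxip;      /* subtransaction XIDs: up to
                                     * MaxBackends * PGPROC_MAX_CACHED_SUBXIDS */
        int           suboverflowed;    /* set when even subxip[] could not
                                         * hold them all */
    } SnapshotSketch;

    /* Worst-case variable-length payload, ignoring the fixed header. */
    static size_t
    snapshot_worst_case_bytes(int max_backends)
    {
        size_t      xip_bytes = max_backends * sizeof(TransactionId);
        size_t      subxip_bytes = (size_t) max_backends *
            PGPROC_MAX_CACHED_SUBXIDS * sizeof(TransactionId);

        /* 65x the xip-only estimate, before even considering overflow */
        return xip_bytes + subxip_bytes;
    }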

regards, tom lane
