From: Shigeru Hanada <shigeru(dot)hanada(at)gmail(dot)com>
To: Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>
Cc: Etsuro Fujita <fujita(dot)etsuro(at)lab(dot)ntt(dot)co(dot)jp>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: FDW for PostgreSQL
Date: 2012-11-22 05:40:48
Message-ID: CAEZqfEcvQQjot68R5BEjUzKZnzNAC6jTDGF6E0EwxfWus9Wdog@mail.gmail.com
Lists: pgsql-hackers
On Wed, Nov 21, 2012 at 7:31 PM, Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp> wrote:
> At execute_query(), it stores all the retrieved rows in the tuplestore
> festate->tuples at once. Doesn't that cause problems when the remote
> table has a very large number of rows?
>
No. postgres_fdw uses libpq's single-row mode when retrieving query
results in execute_query(), so memory usage stays bounded at a certain
level regardless of the result set size.
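
The pattern described above can be sketched as a small standalone libpq
program. This is a minimal illustration of single-row mode, not code from
the patch; the connection string and table name are placeholders:

```c
/* Sketch of libpq single-row mode (available since PostgreSQL 9.2).
 * The connection string and query below are illustrative placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query without waiting for the whole result set. */
    if (!PQsendQuery(conn, "SELECT * FROM big_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Must be called right after PQsendQuery, before any PQgetResult. */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    /* Each PQgetResult now returns at most one row (PGRES_SINGLE_TUPLE);
     * the final result carries PGRES_TUPLES_OK with zero rows.  Only one
     * row's worth of data is held in local memory at a time. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 0));
        PQclear(res);   /* safe to clear each row's result immediately */
    }

    PQfinish(conn);
    return 0;
}
```

Note that the whole query still runs as one statement on the server; only
the client-side buffering changes, which is why memory stays flat.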
> IIRC, the previous code used the cursor feature to fetch a set of rows
> at a time, to avoid over-consumption of local memory. Is there any
> restriction if we fetch a certain number of rows with FETCH?
> It seems to me we could fetch 1000 rows, for example, and tentatively
> store them in the tuplestore within one PG_TRY() block (so there is no
> need to worry about PQclear() timing), then fetch more remote rows
> when IterateForeignScan reaches the end of the tuplestore.
>
As you say, postgres_fdw used to use a cursor to avoid possible memory
exhaustion on large result sets. I switched to single-row mode (which
could be described as a "protocol-level cursor"), added in 9.2, because
it accomplishes the same task with fewer SQL calls than a cursor.
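
For comparison, the cursor-based approach needs a DECLARE, a FETCH per
batch, a CLOSE, and an explicit transaction block, each a separate round
trip; that is the extra SQL-call overhead mentioned above. A rough sketch
(the cursor and table names are placeholders, not taken from the previous
postgres_fdw code):

```c
/* Hedged sketch of a cursor-based fetch loop, batching 1000 rows per
 * FETCH.  Error handling is omitted for brevity. */
#include <libpq-fe.h>

static void fetch_with_cursor(PGconn *conn)
{
    PGresult *res;

    /* Cursors require an explicit transaction block. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn,
                   "DECLARE fdw_cur CURSOR FOR SELECT * FROM big_table"));

    for (;;)
    {
        /* One server round trip per batch of up to 1000 rows. */
        res = PQexec(conn, "FETCH 1000 FROM fdw_cur");
        int nrows = PQntuples(res);
        /* ... store the nrows tuples into a tuplestore here ... */
        PQclear(res);
        if (nrows < 1000)
            break;          /* short batch means the cursor is exhausted */
    }

    PQclear(PQexec(conn, "CLOSE fdw_cur"));
    PQclear(PQexec(conn, "COMMIT"));
}
```

With single-row mode the same scan needs only one PQsendQuery, which is
the "fewer SQL calls" advantage referred to above.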
Regards,
--
Shigeru HANADA