Re: Documentation improvement for PQgetResult

From: Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com>
To: pgsql-docs(at)postgresql(dot)org
Cc: Marko Kreen <markokr(at)gmail(dot)com>
Subject: Documentation improvement for PQgetResult
Date: 2010-08-06 13:53:38
Message-ID: AANLkTikP5a0yHUNYJC+p+KadnyNH+Wmbt=QkZ=rZYE9O@mail.gmail.com
Lists: pgsql-docs

Hello,

the documentation for the PQgetResult function suggests to "[call it]
repeatedly until it returns a null pointer, indicating that the
command is done". This has led psycopg developers to write code with
the pattern:

while (NULL != (res = PQgetResult(conn))) {
    /* do something with res */
    PQclear(res);
}

Marko Kreen has pointed out
(http://lists.initd.org/pipermail/psycopg/2010-July/007149.html) that
this can lead to an infinite loop if the connection goes bad at
exactly the wrong time, and has advised checking the connection
status inside the loop. The libpq code does indeed seem to return
PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR) when something
unexpected happens.

If this is the case (I've not managed to reproduce it, but the libpq
code seems clear), shouldn't the function documentation warn about
this risk and advise checking the connection status inside the loop,
bailing out if it is bad? Would it be better to check the connection
or the result?
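
For concreteness, this is roughly the defensive loop I have in mind
(only a sketch; drain_results is a name I made up, and it assumes the
query has already been sent, e.g. with PQsendQuery):

#include <stdio.h>
#include <libpq-fe.h>

/* Drain all results of a query already sent on conn, bailing out if
 * the connection goes bad, so that a stream of PGRES_FATAL_ERROR
 * results cannot keep the loop spinning forever. */
static void
drain_results(PGconn *conn)
{
    PGresult *res;

    while ((res = PQgetResult(conn)) != NULL)
    {
        /* do something with res; here just report fatal errors */
        if (PQresultStatus(res) == PGRES_FATAL_ERROR)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));

        PQclear(res);

        /* the extra guard: stop if the connection itself is gone */
        if (PQstatus(conn) == CONNECTION_BAD)
            break;
    }
}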

Thank you, cheers.

-- Daniele


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com>
Cc: pgsql-docs(at)postgresql(dot)org, Marko Kreen <markokr(at)gmail(dot)com>
Subject: Re: Documentation improvement for PQgetResult
Date: 2010-08-06 14:20:18
Message-ID: 18521.1281104418@sss.pgh.pa.us
Lists: pgsql-docs

Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com> writes:
> the documentation for the PQgetResult function suggests to "[call it]
> repeatedly until it returns a null pointer, indicating that the
> command is done". This has led psycopg developers to write code with
> the pattern:

> while (NULL != (res = PQgetResult(conn))) {
>     /* do something with res */
>     PQclear(res);
> }

> Marko Kreen has pointed out
> (http://lists.initd.org/pipermail/psycopg/2010-July/007149.html) that
> this can lead to an infinite loop if the connection goes bad at
> exactly the wrong time, and has advised checking the connection
> status inside the loop. The libpq code does indeed seem to return
> PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR) when something
> unexpected happens.

> If this is the case (I've not managed to reproduce it, but the libpq
> code seems clear), shouldn't the function documentation warn about
> this risk and advise checking the connection status inside the loop,
> bailing out if it is bad?

No.

I'm not convinced we need to do anything about that, since AFAIR no one
has ever reported such a failure from the field (which might well
indicate that Marko's analysis is wrong, anyway). But if we were to do
something about it, changing the documentation would be the wrong
response, because that coding pattern is all over the place. It'd be
better to kluge PQgetResult so it only returns a bad-connection error
object once per query, or something like that.
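
In caller-visible terms the idea would be behaviour roughly like the
wrapper below (an illustrative application-side sketch only, with a
made-up name; the real fix would live inside libpq):

#include <libpq-fe.h>

/* Sketch: once the connection has gone bad, hand back at most one
 * result and then NULL, so existing "loop until NULL" callers
 * terminate instead of spinning on error results. */
static PGresult *
get_result_once_per_failure(PGconn *conn, int *failure_reported)
{
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        if (*failure_reported)
            return NULL;        /* already reported: end the loop */
        *failure_reported = 1;
    }
    return PQgetResult(conn);
}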

regards, tom lane


From: Marko Kreen <markokr(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com>, pgsql-docs(at)postgresql(dot)org
Subject: Re: Documentation improvement for PQgetResult
Date: 2010-08-09 17:21:22
Message-ID: AANLkTin_VJa6pPu0_AU97ks0RN24jkiF7yPjLym2_Ndn@mail.gmail.com
Lists: pgsql-docs

On 8/6/10, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Daniele Varrazzo <daniele(dot)varrazzo(at)gmail(dot)com> writes:
> > the documentation for the PQgetResult function suggests to "[call it]
> > repeatedly until it returns a null pointer, indicating that the
> > command is done". This has led psycopg developers to write code with
> > the pattern:
>
> > while (NULL != (res = PQgetResult(conn))) {
> >     /* do something with res */
> >     PQclear(res);
> > }
>
> > Marko Kreen has pointed out
> > (http://lists.initd.org/pipermail/psycopg/2010-July/007149.html) that
> > this can lead to an infinite loop if the connection goes bad at
> > exactly the wrong time, and has advised checking the connection
> > status inside the loop. The libpq code does indeed seem to return
> > PQmakeEmptyPGresult(conn, PGRES_FATAL_ERROR) when something
> > unexpected happens.
>
> > If this is the case (I've not managed to reproduce it, but the libpq
> > code seems clear), shouldn't the function documentation warn about
> > this risk and advise checking the connection status inside the loop,
> > bailing out if it is bad?
>
>
> No.
>
> I'm not convinced we need to do anything about that, since AFAIR no one
> has ever reported such a failure from the field (which might well
> indicate that Marko's analysis is wrong, anyway). But if we were to do
> something about it, changing the documentation would be the wrong
> response, because that coding pattern is all over the place. It'd be
> better to kluge PQgetResult so it only returns a bad-connection error
> object once per query, or something like that.

So PQgetResult currently does not guarantee that, yes?
That's what I noticed...

A few facts that I have:
- psycopg2 was stuck in such a loop.
- libpq [8.4.2] issued close(-1) on each PQgetResult call.
  By accident the build was without debug symbols, so I could not
  analyze it more deeply.
- There were 2 processes in that state.
- A NIC problem corrupted the stream after multi-GB data transfers,
  but not always. In this case a large COPY was going on.
- There were 5 databases on the machine that were copying data. Some
  of them failed with data-format errors, some perhaps succeeded.
- The NIC has since been replaced, so it's probably not easily
  reproducible.

--
marko