Re: BUG #1756: PQexec eats huge amounts of memory

Lists: pgsql-bugs
From: "Denis Vlasenko" <vda(at)ilport(dot)com(dot)ua>
To: pgsql-bugs(at)postgresql(dot)org
Subject: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-06 11:35:28
Message-ID: 20050706113528.000E3F0B01@svr2.postgresql.org


The following bug has been logged online:

Bug reference: 1756
Logged by: Denis Vlasenko
Email address: vda(at)ilport(dot)com(dot)ua
PostgreSQL version: 8.0.1
Operating system: Linux
Description: PQexec eats huge amounts of memory
Details:

Verbatim from http://bugs.php.net/bug.php?id=33587:

Description:
------------
Seen on php-4.3.4RC2. Since I was just testing how well
PG fares compared to Oracle, and I am not feeling any
real pain from this (IOW: not my itch to scratch),
I did not research this in depth, apart from submitting
this bug report. Sorry.

Symptom: even the simplest query
$result = pg_query($db, "SELECT * FROM big_table");
eats enormous amounts of memory on server
(proportional to table size).

I think this is a problem with PostgreSQL client libs.
php's source is included for easy reference.

PHP_FUNCTION(pg_query)
{
    ...
    pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));
    if ((PGG(auto_reset_persistent) & 2) && PQstatus(pgsql) != CONNECTION_OK) {
        PQclear(pgsql_result);
        PQreset(pgsql);
        pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));
    }

    if (pgsql_result) {
        status = PQresultStatus(pgsql_result);
    } else {
        status = (ExecStatusType) PQstatus(pgsql);
    }

    switch (status) {
        case PGRES_EMPTY_QUERY:
        case PGRES_BAD_RESPONSE:
        case PGRES_NONFATAL_ERROR:
        case PGRES_FATAL_ERROR:
            php_error_docref(NULL TSRMLS_CC, E_WARNING,
                "Query failed: %s.", PQerrorMessage(pgsql));
            PQclear(pgsql_result);
            RETURN_FALSE;
            break;
        case PGRES_COMMAND_OK: /* successful command that did not return rows */
        default:
            if (pgsql_result) {
                pg_result = (pgsql_result_handle *)
                    emalloc(sizeof(pgsql_result_handle));
                pg_result->conn = pgsql;
                pg_result->result = pgsql_result;
                pg_result->row = 0;
                ZEND_REGISTER_RESOURCE(return_value, pg_result, le_result);
            } else {
                PQclear(pgsql_result);
                RETURN_FALSE;
            }
            break;
    }
}


From: Harald Armin Massa <haraldarminmassa(at)gmail(dot)com>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-06 13:52:30
Message-ID: 7be3f35d05070606522a3ec2f9@mail.gmail.com
Lists: pgsql-bugs

Denis,

> $result = pg_query($db, "SELECT * FROM big_table");

you are reading a big result (as I suspect from the name big_table) into
memory. It is perfectly normal that this uses a large amount of memory.

[it would be rather suspicious if loading a big file / big resultset would
not use big amounts of memory]

Harald

--
GHUM Harald Massa
persuasion python postgresql
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607


From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Harald Armin Massa <haraldarminmassa(at)gmail(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 05:21:38
Message-ID: 200507070821.38121.vda@ilport.com.ua
Lists: pgsql-bugs

On Wednesday 06 July 2005 16:52, Harald Armin Massa wrote:
> Denis,
>
> $result = pg_query($db, "SELECT * FROM big_table");
>
> you are reading a big result (as I suspect from big_table) into memory. It
> is perfectly normal that this uses large amounts of memory.

No, I am not reading it into memory. I am executing the query _on the server_,
fetching the result row-by-row and discarding rows as they are processed
(i.e. without accumulating all rows in _client's memory_) in the part
of the php script which you snipped off.

A similar construct with Oracle, on a 10x larger table,
does not use Apache (php) memory significantly.

php's pg_query() calls PQexec(), a PostgreSQL client library function,
which is likely implemented so that it fetches all rows and stores them
in the client's RAM before completion.

Oracle OCI8 does not work this way; it keeps the result set
on the db server (in the form of a cursor or something like that).

> [it would be rather suspicious if loading a big file / big resultset would
> not use big amounts of memory]
--
vda


From: Neil Conway <neilc(at)samurai(dot)com>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 05:54:29
Message-ID: 42CCC395.7080204@samurai.com
Lists: pgsql-bugs

Denis Vlasenko wrote:
> Symptom: even the simplest query
> $result = pg_query($db, "SELECT * FROM big_table");
> eats enormous amounts of memory on server
> (proportional to table size).

Right, which is exactly what you would expect. The entire result set is
sent to the client and stored in local memory; if you only want to
process part of the result set at a time, use a cursor.

(And I'm a little suspicious that the performance of "SELECT * FROM
big_table" will contribute to a meaningful comparison between database
systems.)

-Neil


From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 06:51:54
Message-ID: 200507070951.54660.vda@ilport.com.ua
Lists: pgsql-bugs

On Thursday 07 July 2005 08:54, Neil Conway wrote:
> Denis Vlasenko wrote:
> > Symptom: even the simplest query
> > $result = pg_query($db, "SELECT * FROM big_table");
> > eats enormous amounts of memory on server
> > (proportional to table size).
>
> Right, which is exactly what you would expect. The entire result set is
> sent to the client and stored in local memory; if you only want to
> process part of the result set at a time, use a cursor.

The same php script but done against Oracle does not have this
behaviour.

> (And I'm a little suspicious that the performance of "SELECT * FROM
> big_table" will contribute to a meaningful comparison between database
> systems.)

I wanted to show colleagues who are Oracle admins that the peak
data fetch rate of PostgreSQL is way better than Oracle's.

While it turned out to be true (Oracle+WinNT = 2kb TCP output buffer,
~1Mb/s over 100Mbit; PostgreSQL+Linux = 8kb buffer, ~2.6Mb/s),
I was ridiculed instead when my php script failed miserably,
crashing Apache with an OOM condition, while the analogous script
for Oracle ran to completion just fine.
--
vda


From: Neil Conway <neilc(at)samurai(dot)com>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 07:31:09
Message-ID: 42CCDA3D.5060103@samurai.com
Lists: pgsql-bugs

Denis Vlasenko wrote:
> The same php script but done against Oracle does not have this
> behaviour.

Perhaps; presumably Oracle is essentially creating a cursor for you
behind the scenes. libpq does not attempt to do this automatically; if
you need a cursor, you can create one by hand.

-Neil


From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 15:09:25
Message-ID: 20050707150925.GD7157@alvh.no-ip.org
Lists: pgsql-bugs

On Thu, Jul 07, 2005 at 09:51:54AM +0300, Denis Vlasenko wrote:

> I wanted to show colleagues who are Oracle admins that the peak
> data fetch rate of PostgreSQL is way better than Oracle's.
>
> While it turned out to be true (Oracle+WinNT = 2kb TCP output buffer,
> ~1Mb/s over 100Mbit; PostgreSQL+Linux = 8kb buffer, ~2.6Mb/s),
> I was ridiculed instead when my php script failed miserably,
> crashing Apache with an OOM condition, while the analogous script
> for Oracle ran to completion just fine.

You should have tested the script before showing off :-) You may want
to convert it to manually use a cursor, at least the Postgres version.
That would alleviate the memory problem.

--
Alvaro Herrera (<alvherre[a]alvh.no-ip.org>)
"Some men are heterosexual, and some are bisexual, and some
men don't think about sex at all... they become lawyers" (Woody Allen)


From: John R Pierce <pierce(at)hogranch(dot)com>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 15:17:23
Message-ID: 42CD4783.80303@hogranch.com
Lists: pgsql-bugs

Neil Conway wrote:
> Denis Vlasenko wrote:
>
>> The same php script but done against Oracle does not have this
>> behaviour.
>
>
> Perhaps; presumably Oracle is essentially creating a cursor for you
> behind the scenes. libpq does not attempt to do this automatically; if
> you need a cursor, you can create one by hand.

I do not understand how a cursor could be autocreated by a query like

$result = pg_query($db, "SELECT * FROM big_table");

php will expect $result to contain the entire table (yuck!).


From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: Neil Conway <neilc(at)samurai(dot)com>, Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-07 17:43:57
Message-ID: 20050707174357.GD10035@alvh.no-ip.org
Lists: pgsql-bugs

On Thu, Jul 07, 2005 at 08:17:23AM -0700, John R Pierce wrote:
> Neil Conway wrote:
> >Denis Vlasenko wrote:
> >
> >>The same php script but done against Oracle does not have this
> >>behaviour.
> >
> >
> >Perhaps; presumably Oracle is essentially creating a cursor for you
> >behind the scenes. libpq does not attempt to do this automatically; if
> >you need a cursor, you can create one by hand.
>
> I do not understand how a cursor could be autocreated by a query like
>
> $result = pg_query($db, "SELECT * FROM big_table");
>
> php will expect $result to contain the entire table (yuck!).

Really? I thought what really happened is you had to get the results
one at a time using the pg_fetch family of functions. If that is true,
then it's possible to make the driver fake having the whole table by
using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
it behind the scenes.)

--
Alvaro Herrera (<alvherre[a]alvh.no-ip.org>)
Officer Krupke, what are we to do?
Gee, officer Krupke, Krup you! (West Side Story, "Gee, Officer Krupke")


From: Volkan YAZICI <volkan(dot)yazici(at)gmail(dot)com>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-08 06:35:30
Message-ID: 7104a73705070723357a457b6@mail.gmail.com
Lists: pgsql-bugs

Hi,

A similar topic has been discussed before on pgsql-sql mailing list:
Subject: SELECT very slow (Thomas Kellerer)
URL: http://archives.postgresql.org/pgsql-sql/2005-06/msg00118.php

Regards.

On 7/6/05, Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> wrote:
> Bug reference: 1756
> Logged by: Denis Vlasenko
> Email address: vda(at)ilport(dot)com(dot)ua
> PostgreSQL version: 8.0.1
> Operating system: Linux
> Description: PQexec eats huge amounts of memory


From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, John R Pierce <pierce(at)hogranch(dot)com>
Cc: Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-10 10:05:10
Message-ID: 200507101305.10878.vda@ilport.com.ua
Lists: pgsql-bugs

On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
> On Thu, Jul 07, 2005 at 08:17:23AM -0700, John R Pierce wrote:
> > Neil Conway wrote:
> > >Denis Vlasenko wrote:
> > >
> > >>The same php script but done against Oracle does not have this
> > >>behaviour.
> > >
> > >
> > >Perhaps; presumably Oracle is essentially creating a cursor for you
> > >behind the scenes. libpq does not attempt to do this automatically; if
> > >you need a cursor, you can create one by hand.
> >
> > I do not understand how a cursor could be autocreated by a query like
> >
> > $result = pg_query($db, "SELECT * FROM big_table");
> >
> > php will expect $result to contain the entire table (yuck!).
>
> Really? I thought what really happened is you had to get the results
> one at a time using the pg_fetch family of functions. If that is true,
> then it's possible to make the driver fake having the whole table by
> using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> it behind the scenes.)

Even without a cursor, the result can be read incrementally.

I mean, the query result is transferred over the network, right?
We can just stop read()'ing before we reach the end of the result set,
and continue at pg_fetch as needed.

This way the server does not need to do any of the cursor
creation/destruction work. Not a big win, but combined with
reduced memory usage on the client side, it is a win-win situation.
--
vda


From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-11 00:38:36
Message-ID: 20050711003836.GA31881@alvh.no-ip.org
Lists: pgsql-bugs

On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
> On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:

> > Really? I thought what really happened is you had to get the results
> > one at a time using the pg_fetch family of functions. If that is true,
> > then it's possible to make the driver fake having the whole table by
> > using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> > it behind the scenes.)
>
> Even without a cursor, the result can be read incrementally.
>
> I mean, the query result is transferred over the network, right?
> We can just stop read()'ing before we reach the end of the result set,
> and continue at pg_fetch as needed.

It's not that simple. libpq is designed to read whole result sets at a
time; there's no support for reading incrementally from the server.
Another problem is that neither libpq nor the server knows how many tuples
the query will return until the whole query is executed. Thus,
pg_numrows (for example) wouldn't work at all, which is a showstopper
for many PHP scripts.

In short, it can be made to work, but it's not as simple as you put it.

--
Alvaro Herrera (<alvherre[a]alvh.no-ip.org>)
"Industry suffers from the managerial dogma that for the sake of stability
and continuity, the company should be independent of the competence of
individual employees." (E. Dijkstra)


From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>, John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-11 02:29:37
Message-ID: 42D1D991.4060703@opencloud.com
Lists: pgsql-bugs

Alvaro Herrera wrote:
> On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
>
>>On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
>
>
>>>Really? I thought what really happened is you had to get the results
>>>one at a time using the pg_fetch family of functions. If that is true,
>>>then it's possible to make the driver fake having the whole table by
>>>using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
>>>it behind the scenes.)
>>
>>Even without a cursor, the result can be read incrementally.
>>
>>I mean, the query result is transferred over the network, right?
>>We can just stop read()'ing before we reach the end of the result set,
>>and continue at pg_fetch as needed.
>
>
> It's not that simple. [...]

It also requires that you assume there is only one set of query results
outstanding at a time. I know that you can't assume that in JDBC, and by
the sounds of it PHP's interface is similar in that you can have
multiple query result objects active at the same time.

-O


From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-12 10:33:46
Message-ID: 200507121333.46473.vda@ilport.com.ua
Lists: pgsql-bugs

On Monday 11 July 2005 03:38, Alvaro Herrera wrote:
> On Sun, Jul 10, 2005 at 01:05:10PM +0300, Denis Vlasenko wrote:
> > On Thursday 07 July 2005 20:43, Alvaro Herrera wrote:
>
> > > Really? I thought what really happened is you had to get the results
> > > one at a time using the pg_fetch family of functions. If that is true,
> > > then it's possible to make the driver fake having the whole table by
> > > using a cursor. (Even if PHP doesn't do it, it's possible for OCI to do
> > > it behind the scenes.)
> >
> > Even without a cursor, the result can be read incrementally.
> >
> > I mean, the query result is transferred over the network, right?
> > We can just stop read()'ing before we reach the end of the result set,
> > and continue at pg_fetch as needed.
>
> It's not that simple. libpq is designed to read whole result sets at a
> time; there's no support for reading incrementally from the server.
> Another problem is that neither libpq nor the server knows how many tuples
> the query will return until the whole query is executed. Thus,
> pg_numrows (for example) wouldn't work at all, which is a showstopper
> for many PHP scripts.
>
> In short, it can be made to work, but it's not as simple as you put it.

This sounds reasonable.

Consider my posts in this thread as a user's wish for:

* libpq and the network protocol to be changed to allow incremental reads
of executed queries and multiple outstanding result sets,

or, if the above looks insurmountable at the moment,

* a libpq-only change to allow incremental reads of a single outstanding
result set. An attempt to use pg_numrows, etc., or to execute
another query would force libpq to read and store all remaining rows
in the client's memory (i.e. the current behaviour).
--
vda


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-13 14:43:41
Message-ID: 1809.1121265821@sss.pgh.pa.us
Lists: pgsql-bugs

Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> writes:
> Consider my posts in this thread as a user's wish for:
> * libpq and the network protocol to be changed to allow incremental reads
> of executed queries and multiple outstanding result sets,
> or, if the above looks insurmountable at the moment,
> * a libpq-only change to allow incremental reads of a single outstanding
> result set. An attempt to use pg_numrows, etc., or to execute
> another query would force libpq to read and store all remaining rows
> in the client's memory (i.e. the current behaviour).

This isn't going to happen because it would be a fundamental change in
libpq's behavior and would undoubtedly break a lot of applications.
The reason it cannot be done transparently is that you would lose the
guarantee that a query either succeeds or fails: it would be entirely
possible to return some rows to the application and only later get a
failure.

You can have this behavior today, though, as long as you are willing to
work a little harder at it --- just declare some cursors and then FETCH
in convenient chunks from the cursors.

regards, tom lane
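
The cursor approach Tom describes might be sketched in php along these lines
(a hypothetical example, not from the thread: the connection string, cursor
name, and chunk size are invented; only the DECLARE/FETCH/CLOSE pattern is
the point):

<?php
// Sketch only: fetch big_table in 1000-row chunks via a cursor, so
// at most one chunk is held in the client's memory at a time.
$db = pg_connect("dbname=test");  // hypothetical connection string

pg_query($db, "BEGIN");  // cursors live inside a transaction
pg_query($db, "DECLARE big_cur CURSOR FOR SELECT * FROM big_table");

do {
    $result = pg_query($db, "FETCH 1000 FROM big_cur");
    $fetched = pg_num_rows($result);
    for ($i = 0; $i < $fetched; $i++) {
        $row = pg_fetch_assoc($result, $i);
        // ... process one row, then let it be discarded ...
    }
    pg_free_result($result);        // release this chunk before the next
} while ($fetched == 1000);         // a short chunk means end of data

pg_query($db, "CLOSE big_cur");
pg_query($db, "COMMIT");
?>

Memory use is then bounded by the chunk size rather than the table size,
at the cost of one extra round trip per chunk.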


From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-13 14:56:47
Message-ID: 200507131756.47065.vda@ilport.com.ua
Lists: pgsql-bugs

On Wednesday 13 July 2005 17:43, Tom Lane wrote:
> Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> writes:
> > Consider my posts in this thread as a user's wish for:
> > * libpq and the network protocol to be changed to allow incremental reads
> > of executed queries and multiple outstanding result sets,
> > or, if the above looks insurmountable at the moment,
> > * a libpq-only change to allow incremental reads of a single outstanding
> > result set. An attempt to use pg_numrows, etc., or to execute
> > another query would force libpq to read and store all remaining rows
> > in the client's memory (i.e. the current behaviour).
>
> This isn't going to happen because it would be a fundamental change in
> libpq's behavior and would undoubtedly break a lot of applications.
> The reason it cannot be done transparently is that you would lose the
> guarantee that a query either succeeds or fails: it would be entirely
> possible to return some rows to the application and only later get a
> failure.
>
> You can have this behavior today, though, as long as you are willing to
> work a little harder at it --- just declare some cursors and then FETCH
> in convenient chunks from the cursors.

Thanks, I already tried that. It works.
--
vda


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-14 12:38:27
Message-ID: 13620.1121344707@sss.pgh.pa.us
Lists: pgsql-bugs

Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> writes:
> On Wednesday 13 July 2005 17:43, Tom Lane wrote:
>> The reason it cannot be done transparently is that you would lose the
>> guarantee that a query either succeeds or fails: it would be entirely
>> possible to return some rows to the application and only later get a
>> failure.

> What failures are likely?

Consider
select x, 1/x from foo;

where x is zero in the 10,000th row ...

regards, tom lane
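
Tom's example can be made concrete with an invented table (the contents
below are hypothetical, chosen so that only the last row fails):

-- x runs from -9999 up to 0, so only the final row divides by zero
CREATE TEMP TABLE foo AS
    SELECT generate_series(1, 10000) - 10000 AS x;

SELECT x, 1/x FROM foo;
-- ERROR:  division by zero

With PQexec the application sees only the error and no rows at all; with
incremental row delivery it would already have consumed 9,999 rows before
learning the query failed, losing the all-or-nothing guarantee.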