From: Bernd Helmle <mailings(at)oopsware(dot)de>
To: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
Cc: amul sul <sul_amul(at)yahoo(dot)co(dot)in>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Selecting large tables gets killed
Date: 2014-02-20 09:56:47
Message-ID: F3269A63EC7B12A7788707C0@apophis.credativ.lan
Lists: pgsql-hackers

--On 20 February 2014 14:49:28 +0530 Ashutosh Bapat
<ashutosh(dot)bapat(at)enterprisedb(dot)com> wrote:
> If I set some positive value for this variable, psql runs smoothly with
> any size of data. But unset that variable, and it gets killed. But it's
> nowhere written explicitly that psql can run out of memory while
> collecting the result set. Either the documentation or the behaviour
> should be modified.
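
(The variable is not named above; from the description -- a psql variable that, when set to a positive value, keeps psql from accumulating the whole result set in memory -- it is presumably psql's FETCH_COUNT, which makes psql fetch rows in batches via a cursor. A hedged sketch; database and table names are placeholders:)

```shell
# Assumption: the variable in question is psql's FETCH_COUNT.
# With it set, psql retrieves the result in batches of that many rows
# instead of buffering the entire result set client-side.
psql -v FETCH_COUNT=1000 -d mydb -c 'SELECT * FROM huge_table;'

# Or interactively, inside a psql session:
#   \set FETCH_COUNT 1000
#   SELECT * FROM huge_table;
```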
Maybe at some point in the future we should consider single-row mode for
psql, see
<http://www.postgresql.org/docs/9.3/static/libpq-single-row-mode.html>
However, I think nobody has tackled this yet, AFAIR.
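
For reference, the linked single-row mode works roughly as sketched below: send the query asynchronously, switch the connection into single-row mode before the first PQgetResult(), then consume one PGRES_SINGLE_TUPLE result per row. Connection string and query are placeholders; compile against libpq with -lpq.

```c
/* Sketch of libpq single-row mode, per the documentation linked above.
 * Connection string and query are placeholders; requires a running server. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query without waiting for the complete result set... */
    if (!PQsendQuery(conn, "SELECT generate_series(1, 1000000)"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* ...and switch to single-row mode before the first PQgetResult(). */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    PGresult *res;
    long rows = 0;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
        {
            /* One row at a time: process it, then free it immediately,
             * so client memory use stays flat regardless of result size. */
            rows++;
        }
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    printf("processed %ld rows\n", rows);
    PQfinish(conn);
    return 0;
}
```

This is essentially what psql would have to do internally to adopt single-row mode: its current synchronous PQexec() call buffers the whole result before any row is printed.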
--
Thanks
Bernd