From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PQgetssl() and alternative SSL implementations
Date: 2014-08-19 18:40:15
Message-ID: 20140819184015.GJ16422@tamriel.snowman.net
Lists: pgsql-hackers
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> Stephen Frost <sfrost(at)snowman(dot)net> writes:
> > * Alvaro Herrera (alvherre(at)2ndquadrant(dot)com) wrote:
> >> Um, libpq has recently gained the ability to return result fragments,
> >> right? Those didn't exist when libpq-ification of odbc was attempted,
> >> as I recall -- perhaps it's possible now.
>
> > I was trying to remember off-hand if we still had that or not.. I
> > thought there was discussion about removing it, actually, but perhaps
> > that was something else.
>
> Sure,
> http://www.postgresql.org/docs/devel/static/libpq-single-row-mode.html
> That's a done deal, it won't be going away.
Ugh. Yes, there's single-row mode, but I had been thinking there was a
'batch' mode available, a la what OCI8 had, where you'd allocate a chunk
of memory and have it filled directly by the library as rows came
back until it was full (there was a similar 'bulk send' operation, as
I recall). Perhaps it was the 'pipelining' thread I was thinking
of. Not really relevant, in any case.
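For reference, the single-row mode Tom links to works roughly like this: dispatch the query asynchronously, switch the connection to single-row mode before collecting results, then consume one PGRES_SINGLE_TUPLE result per row. A minimal sketch against libpq; the connection string and query here are placeholders, and error handling is abbreviated:

```c
/* Sketch of libpq single-row mode; assumes a reachable server. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");  /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Send the query asynchronously, then enable single-row mode
     * before any results have been collected. */
    if (!PQsendQuery(conn, "SELECT generate_series(1, 5)") ||
        !PQsetSingleRowMode(conn))
    {
        fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Each PGRES_SINGLE_TUPLE result carries exactly one row; a final
     * PGRES_TUPLES_OK result (with zero rows) ends the stream. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("row: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

The memory profile differs from the hypothetical 'batch' mode above: libpq still allocates a PGresult per row, rather than filling a caller-supplied buffer.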
> Whether it would solve ODBC's problem I don't know (and I'm not
> volunteering to do the work ;-))
It could work... though it's certainly been a while since I looked at
the ODBC internals.
Thanks,
Stephen