Re: [BUGS] BUG #1347: Bulk Import stops after a while (

From: "PostgreSQL Bugs List" <pgsql-bugs(at)postgresql(dot)org>
To: pgsql-bugs(at)postgresql(dot)org
Subject: BUG #1347: Bulk Import stops after a while (8.0.0 RC1)
Date: 2004-12-13 07:20:18
Message-ID: 20041213072018.2B6C2738805@www.postgresql.com
Lists: pgsql-bugs pgsql-jdbc


The following bug has been logged online:

Bug reference: 1347
Logged by: Bahadur Singh
Email address: bahadursingh(at)yahoo(dot)com
PostgreSQL version: 8.0 Beta
Operating system: Windows XP
Description: Bulk Import stops after a while (8.0.0 RC1)

Details:

Hello,

I have run into a problem during bulk import of data. My situation is the
following:

- I have two big tables on SQL Server 2000 with about 163,000 rows each.

- I have written a Java program to transfer data from SQL Server 2000 to
PostgreSQL 8.0.0 RC1. I insert the data in batches. With a batch size of
1000 or 2000 records at a time, this works fine. If I change the batch size
to, say, 20,000, it finishes only one loop and then stays idle: CPU usage
drops to about 10%, where it had been at 100% while the first batch of
20,000 records was being applied.

The program halts at:

int n [] = stmt.executeBatch();

I waited up to an hour for the instruction above to finish and then stopped
the program manually. I am not sure whether this is a JDBC error or a
PostgreSQL error. I am using the JDBC driver from the PostgreSQL 8.0.0 RC1
release.

Thanks
Bahadur

Included below:
Log 1: when it does not work
Log 2: when it works

------------------ Log 1 ----------------

WHEN BATCH SIZE = 20000
-----------------------

getMsSqlConnection() :: status = ACTIVE
getPgSqlConnection() :: status = ACTIVE
Fri Dec 10 16:47:23 CET 2004 Migration: CREATE TABLE EodIDC_S (
"transstep" int,
"transactionid" int,
"departmentnumber" char (4),
"plunumber" char (16),
"identnumber" char (16),
"quantity" int,
"quantitydecimals" int,
"packingunit" int,
"mmcode" int,
"amountsign" char (1),
"amountabsolute" int,
"code1" char (1),
"code2" char (1),
"code3" char (1),
"idcdate" char (14),
"originalitemprice" int,
PRIMARY KEY ( TransactionId,TransStep ) )
Fri Dec 10 16:47:23 CET 2004 Migration: CREATE TABLE EodIDC_S FINISHED
Fri Dec 10 16:47:23 CET 2004 Migration: CREATE TABLE EodIDC_W (
"transactionid" int,
"transstep" int,
"iteminfo" char (4),
"itemnumber" char (16),
"flag1" char (2),
"flag2" char (2),
"amount" int,
"flag3" char (2),
"code1" char (1),
"flag4" char (2),
"code2" char (1),
"code3" char (1),
"idcdate" char (14),
"plunumber" char (16),
"quantity" int,
PRIMARY KEY ( TransactionId,TransStep ) )
Fri Dec 10 16:47:23 CET 2004 Migration: CREATE TABLE EodIDC_W FINISHED
Fri Dec 10 16:47:23 CET 2004 Migration: DDL finished in 109 ms
Fri Dec 10 16:47:23 CET 2004 Migration: Exporting Table to PostgreSQL =
EodIDC_S
Fri Dec 10 16:47:23 CET 2004 Migration: Reading data from MS SQL Server
table ... EodIDC_S
Fri Dec 10 16:47:24 CET 2004 Migration: Batch sent upto = 20000
Fri Dec 10 16:47:24 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=20000
Fri Dec 10 16:47:31 CET 2004 Migration: EodIDC_S Records updated in DB=20000
Fri Dec 10 16:47:32 CET 2004 Migration: Batch sent upto = 40000
Fri Dec 10 16:47:32 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=20000

****** NO RESPONSE AFTER THIS POINT ******
Process stopped manually and restarted with a batch size of 2000 rows

------------------ Log 2 ----------------
WHEN BATCH SIZE = 2000
----------------------

Fri Dec 10 16:51:02 CET 2004 Migration: CREATE TABLE EodIDC_S (
"transstep" int,
"transactionid" int,
"departmentnumber" char (4),
"plunumber" char (16),
"identnumber" char (16),
"quantity" int,
"quantitydecimals" int,
"packingunit" int,
"mmcode" int,
"amountsign" char (1),
"amountabsolute" int,
"code1" char (1),
"code2" char (1),
"code3" char (1),
"idcdate" char (14),
"originalitemprice" int,
PRIMARY KEY ( TransactionId,TransStep ) )
Fri Dec 10 16:51:02 CET 2004 Migration: CREATE TABLE EodIDC_S FINISHED
Fri Dec 10 16:51:02 CET 2004 Migration: CREATE TABLE EodIDC_W (
"transactionid" int,
"transstep" int,
"iteminfo" char (4),
"itemnumber" char (16),
"flag1" char (2),
"flag2" char (2),
"amount" int,
"flag3" char (2),
"code1" char (1),
"flag4" char (2),
"code2" char (1),
"code3" char (1),
"idcdate" char (14),
"plunumber" char (16),
"quantity" int,
PRIMARY KEY ( TransactionId,TransStep ) )
Fri Dec 10 16:51:02 CET 2004 Migration: CREATE TABLE EodIDC_W FINISHED
Fri Dec 10 16:51:02 CET 2004 Migration: DDL finished in 94 ms
Fri Dec 10 16:51:02 CET 2004 Migration: Exporting Table to PostgreSQL =
EodIDC_S
Fri Dec 10 16:51:02 CET 2004 Migration: Reading data from MS SQL Server
table ... EodIDC_S
Fri Dec 10 16:51:03 CET 2004 Migration: Batch sent upto = 2000
Fri Dec 10 16:51:03 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=2000
Fri Dec 10 16:51:03 CET 2004 Migration: EodIDC_S Records updated in DB=2000
Fri Dec 10 16:51:03 CET 2004 Migration: Batch sent upto = 4000
Fri Dec 10 16:51:03 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=2000
Fri Dec 10 16:51:04 CET 2004 Migration: EodIDC_S Records updated in DB=2000
Fri Dec 10 16:51:04 CET 2004 Migration: Batch sent upto = 6000
Fri Dec 10 16:51:04 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=2000
Fri Dec 10 16:51:05 CET 2004 Migration: EodIDC_S Records updated in DB=2000
Fri Dec 10 16:51:05 CET 2004 Migration: Batch sent upto = 8000

..
..

..

..

Fri Dec 10 16:52:07 CET 2004 Migration: EodIDC_S Records updated in DB=2000
Fri Dec 10 16:52:07 CET 2004 Migration: Batch sent upto = 162000
Fri Dec 10 16:52:07 CET 2004 Migration: EodIDC_S Total SQLs sent to DB=2000
Fri Dec 10 16:52:08 CET 2004 Migration: EodIDC_S Records updated in DB=2000
Fri Dec 10 16:52:08 CET 2004 Migration: Complete Batch sent = 163492
Fri Dec 10 16:52:08 CET 2004 Migration: Total SQL=1492
Fri Dec 10 16:52:08 CET 2004 Migration: Records updated 1492
Fri Dec 10 16:52:08 CET 2004 Migration: Exporting Table to PostgreSQL =
EodIDC_S DONE

**** THE DATA WAS IMPORTED SUCCESSFULLY *****

************ JAVA METHOD ***************

// Requires java.sql.* and java.util.* imports; getPgSqlConnection() and
// log() are defined elsewhere in the program.
private void insertBulkData(ArrayList insertSqlList, String tableName,
                            boolean save) throws Exception {
    try {
        Connection con = getPgSqlConnection();
        con.setAutoCommit(false);
        Statement stmt = con.createStatement();

        // Queue every INSERT statement of this chunk into one batch.
        for (Iterator sqls = insertSqlList.iterator(); sqls.hasNext(); ) {
            String sql = (String) sqls.next();
            stmt.addBatch(sql);
        }
        log(tableName + " Total SQLs sent to DB=" + insertSqlList.size());

        // Execution hangs here when the batch is very large (e.g. 20,000).
        int n[] = stmt.executeBatch();
        stmt.close();
        log(tableName + " Records updated in DB=" + n.length);

        if (save) {
            con.commit();
        }
    } catch (SQLException e) {
        e.printStackTrace();
        throw e;
    } catch (Throwable te) {
        te.printStackTrace();
    }
}
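
For reference, a minimal sketch of the workaround suggested by the logs:
send the statements in chunks of 2000 and execute each chunk as its own
batch. The getPgSqlConnection() and log() helpers are the ones used above;
everything else is illustrative only.

// Sketch only: split the statement list into chunks of 2000 and execute
// each chunk as its own batch, mirroring the run that succeeded in Log 2.
private void insertBulkDataChunked(ArrayList insertSqlList, String tableName,
                                   boolean save) throws Exception {
    final int CHUNK_SIZE = 2000;
    Connection con = getPgSqlConnection();
    con.setAutoCommit(false);
    Statement stmt = con.createStatement();
    try {
        int inBatch = 0;
        int updated = 0;
        for (Iterator sqls = insertSqlList.iterator(); sqls.hasNext(); ) {
            stmt.addBatch((String) sqls.next());
            inBatch++;
            if (inBatch == CHUNK_SIZE || !sqls.hasNext()) {
                updated += stmt.executeBatch().length;  // flush this chunk
                inBatch = 0;
            }
        }
        log(tableName + " Records updated in DB=" + updated);
        if (save) {
            con.commit();
        }
    } finally {
        stmt.close();
    }
}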


From: Kris Jurka <books(at)ejurka(dot)com>
To: Bahadur Singh <bahadursingh(at)yahoo(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: BUG #1347: Bulk Import stops after a while ( 8.0.0.
Date: 2004-12-13 07:46:35
Message-ID: Pine.BSO.4.56.0412130234430.26362@leary.csoft.net
Lists: pgsql-bugs pgsql-jdbc

On Mon, 13 Dec 2004, PostgreSQL Bugs List wrote:

>
> The following bug has been logged online:
>
> Bug reference: 1347
> PostgreSQL version: 8.0 Beta
> Operating system: Windows XP
> Description: Bulk Import stopps after a while ( 8.0.0. RC1)
>
> - I have written a java program to transfer data from SQL Server 2000 to
> PosgresSQL 8.0.0 RC1 release. I am updating the data in batches.
> If my batch size is 1000/2000 records at a time.. This works fine.. And if I
> change this size to say 20,000, it does only finishes one loop.. and then
> stays idle. The CPU usage down to 10 % which was before 100 % while applying
> the first batch of 20, 000 records.
>
>
> The execution of program is halting just at
> int n [] = stmt.batchUpdate();
>

This may be a problem with the JDBC driver deadlocking, as described in the
code comment below. When this was originally written, I asked Oliver about
the estimate used for MAX_BUFFERED_QUERIES and he felt confident in that
number. It would be good to know whether lowering this number fixes your
problem.

Kris Jurka

// Deadlock avoidance:
//
// It's possible for the send and receive streams to get
// "deadlocked" against each other since we do not have a separate
// thread. The scenario is this: we have two streams:
//
// driver -> TCP buffering -> server
// server -> TCP buffering -> driver
//
// The server behaviour is roughly:
// while true:
// read message
// execute message
// write results
//
// If the server -> driver stream has a full buffer, the write will
// block. If the driver is still writing when this happens, and the
// driver -> server stream also fills up, we deadlock: the driver is
// blocked on write() waiting for the server to read some more data,
// and the server is blocked on write() waiting for the driver to read
// some more data.
//
// To avoid this, we guess at how many queries we can send before the
// server -> driver stream's buffer is full (MAX_BUFFERED_QUERIES).
// This is the point where the server blocks on write and stops
// reading data. If we reach this point, we force a Sync message and
// read pending data from the server until ReadyForQuery,
// then go back to writing more queries unless we saw an error.
//
// This is not 100% reliable -- it's only done in the batch-query case
// and only at a reasonably high level (per query, not per message),
// and it's only an estimate -- so it might break. To do it correctly
// in all cases would seem to require a separate send or receive
// thread as we can only do the Sync-and-read-results operation at
// particular points, and also as we don't really know how much data
// the server is sending.

// Assume 64k server->client buffering and 250 bytes response per
// query (conservative).
private static final int MAX_BUFFERED_QUERIES = (64000 / 250);
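
To make the buffering scenario in that comment concrete, here is a small,
self-contained illustration (an editorial sketch, not driver code): a
single-threaded "client" that keeps writing without ever reading, talking
over a loopback socket to a "server" that answers every byte it reads with
a fixed-size reply, will eventually block on write() on both sides once the
two TCP buffers fill up.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class StreamDeadlockDemo {
    public static void main(String[] args) throws Exception {
        final ServerSocket listener = new ServerSocket(0);

        // "Server": read a byte, write a ~250 byte reply -- roughly the
        // read message / execute / write results loop from the comment.
        new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = listener.accept();
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    byte[] reply = new byte[250];
                    while (in.read() != -1) {
                        out.write(reply);  // blocks once the client's buffer is full
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // "Driver": keep writing queries, never read any results.
        Socket s = new Socket("127.0.0.1", listener.getLocalPort());
        OutputStream out = s.getOutputStream();
        byte[] query = new byte[100];
        for (int i = 1; ; i++) {
            out.write(query);              // eventually blocks too: deadlock
            out.flush();
            System.out.println("sent query " + i);
        }
    }
}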


From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Postgres JDBC <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: [BUGS] BUG #1347: Bulk Import stops after a while (
Date: 2004-12-13 09:57:14
Message-ID: 1102931834.3498.23.camel@localhost.localdomain
Lists: pgsql-bugs pgsql-jdbc

Hi all,

Just a shot in the dark, but wouldn't using the non-blocking I/O facilities
of JDK 1.4+ solve this problem? Of course, that would no longer be
compatible with older JDKs... and not that I would have time to
contribute :-(

Cheers,
Csaba.

On Mon, 2004-12-13 at 08:46, Kris Jurka wrote:
> On Mon, 13 Dec 2004, PostgreSQL Bugs List wrote:
>
> >
> > The following bug has been logged online:
> >
> > Bug reference: 1347
> > PostgreSQL version: 8.0 Beta
> > Operating system: Windows XP
> > Description: Bulk Import stopps after a while ( 8.0.0. RC1)
> >
> > - I have written a java program to transfer data from SQL Server 2000 to
> > PosgresSQL 8.0.0 RC1 release. I am updating the data in batches.
> > If my batch size is 1000/2000 records at a time.. This works fine.. And if I
> > change this size to say 20,000, it does only finishes one loop.. and then
> > stays idle. The CPU usage down to 10 % which was before 100 % while applying
> > the first batch of 20, 000 records.
> >
> >
> > The execution of program is halting just at
> > int n [] = stmt.batchUpdate();
> >
>
> This may be a problem with the JDBC driver deadlocking as described in the
> below code comment. When originally written I asked Oliver about the
> estimate of MAX_BUFFERED_QUERIES and he felt confident in that number. It
> would be good to know if lowering this number fixes your problem.
>
> Kris Jurka
>
> // Deadlock avoidance:
> //
> // It's possible for the send and receive streams to get
> // "deadlocked" against each other since we do not have a separate
> // thread. The scenario is this: we have two streams:
> //
> // driver -> TCP buffering -> server
> // server -> TCP buffering -> driver
> //
> // The server behaviour is roughly:
> // while true:
> // read message
> // execute message
> // write results
> //
> // If the server -> driver stream has a full buffer, the write will
> // block. If the driver is still writing when this happens, and the
> // driver -> server stream also fills up, we deadlock: the driver is
> // blocked on write() waiting for the server to read some more data,
> // and the server is blocked on write() waiting for the driver to read
> // some more data.
> //
> // To avoid this, we guess at how many queries we can send before the
> // server -> driver stream's buffer is full (MAX_BUFFERED_QUERIES).
> // This is the point where the server blocks on write and stops
> // reading data. If we reach this point, we force a Sync message and
> // read pending data from the server until ReadyForQuery,
> // then go back to writing more queries unless we saw an error.
> //
> // This is not 100% reliable -- it's only done in the batch-query case
> // and only at a reasonably high level (per query, not per message),
> // and it's only an estimate -- so it might break. To do it correctly
> // in all cases would seem to require a separate send or receive
> // thread as we can only do the Sync-and-read-results operation at
> // particular points, and also as we don't really know how much data
> // the server is sending.
>
> // Assume 64k server->client buffering and 250 bytes response per
> // query (conservative).
> private static final int MAX_BUFFERED_QUERIES = (64000 / 250);
>


From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
Cc: Postgres JDBC <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: [BUGS] BUG #1347: Bulk Import stops after a while (
Date: 2004-12-13 13:26:52
Message-ID: 41BD989C.5030803@opencloud.com
Lists: pgsql-bugs pgsql-jdbc

Csaba Nagy wrote:

> I just shoot in the dark, but using the non blocking IO facilities of
> jdk1.4+ wouldn't solve this problem ? Of course this would be no more
> compatible with older jdks...

Well, that's a big problem right there..

-O


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Kris Jurka <books(at)ejurka(dot)com>
Cc: Bahadur Singh <bahadursingh(at)yahoo(dot)com>, pgsql-bugs(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: BUG #1347: Bulk Import stops after a while ( 8.0.0.
Date: 2004-12-13 17:21:03
Message-ID: 25966.1102958463@sss.pgh.pa.us
Lists: pgsql-bugs pgsql-jdbc

Kris Jurka <books(at)ejurka(dot)com> writes:
> // To avoid this, we guess at how many queries we can send before the
> // server -> driver stream's buffer is full (MAX_BUFFERED_QUERIES).

It seems essentially impossible for the driver to do this reliably,
since it has no clue how much data any one query will return.

How about instead thinking in terms of not filling the driver->server
channel? You have a good handle on how much data you have sent (or at
least you could keep track of that), and if you bound it to 64K or so
then you should be safe. Perhaps the limit ought to be easily
configurable just in case, but at least you'd be measuring something
measurable.

regards, tom lane
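
As a rough, editorial sketch of the byte-counting approach described above
(the class and method names below are made up, and the send/sync stubs
merely stand in for the driver's protocol layer):

import java.util.Iterator;
import java.util.List;

// Sketch: track how much un-synced data has been written to the
// driver->server channel and force a sync before exceeding a fixed limit,
// rather than guessing at the size of the server's replies.
class ByteCountingBatchSketch {
    private static final int MAX_UNSYNCED_BYTES = 64 * 1024;  // could be made configurable

    void executeBatch(List queries) {
        int unsyncedBytes = 0;
        for (Iterator it = queries.iterator(); it.hasNext(); ) {
            String sql = (String) it.next();
            int size = sql.length() + 64;      // crude wire-size estimate: text plus framing
            if (unsyncedBytes + size > MAX_UNSYNCED_BYTES) {
                syncAndReadResults();          // drain the server's replies before continuing
                unsyncedBytes = 0;
            }
            sendQuery(sql);
            unsyncedBytes += size;
        }
        syncAndReadResults();                  // final sync for the tail of the batch
    }

    // Stubs: in the real driver these would write protocol messages and read replies.
    void sendQuery(String sql) { }
    void syncAndReadResults() { }
}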


From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kris Jurka <books(at)ejurka(dot)com>, Bahadur Singh <bahadursingh(at)yahoo(dot)com>, pgsql-bugs(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [JDBC] BUG #1347: Bulk Import stops after a while ( 8.0.0.
Date: 2004-12-13 20:58:16
Message-ID: 41BE0268.1060602@opencloud.com
Lists: pgsql-bugs pgsql-jdbc

Tom Lane wrote:
> Kris Jurka <books(at)ejurka(dot)com> writes:
>
>> // To avoid this, we guess at how many queries we can send before the
>> // server -> driver stream's buffer is full (MAX_BUFFERED_QUERIES).
>
>
> It seems essentially impossible for the driver to do this reliably,
> since it has no clue how much data any one query will return.

Right, but I'm not convinced that this is the problem here as batch
execution in JDBC is only allowed to do non-result-set-returning
queries. The only case I can think of where this would break is if
something is causing lots of logging output to the client (triggers etc.).

> How about instead thinking in terms of not filling the driver->server
> channel? You have a good handle on how much data you have sent (or at
> least you could keep track of that), and if you bound it to 64K or so
> then you should be safe. Perhaps the limit ought to be easily
> configurable just in case, but at least you'd be measuring something
> measurable.

That's possibly a better idea, but it does mean that we wouldn't be able
to batch inserts that contain lots of data. That's the use case I needed
to support when I wrote this in the first place.

Also, it's never going to be 100% without a separate thread, as the
server can spontaneously generate output (e.g. because of NOTIFY)
regardless of how careful we are with our queries.

There's actually another problem with this code: the subdivision into
smaller batches is not transparent if autocommit is on. We send a Sync
at the end of the batch which will cause an implicit commit. We should
be sending a Flush, but it's harder for the driver to handle this as a
Flush does not provoke a response message from the server, so we would
have to track the protocol state more closely. Given that JDBC is silent
about the autocommit semantics of batch execution anyway, I'm not too
worried about fixing this urgently.

I'd like to see that this is really the problem before tweaking this
code. Given that the OP said that batch sizes of 1000-2000 worked OK,
I'm not sure that this code is the problem since the maximum number of
queries we'll send per batch is around 250 by default.

-O


From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Bahadur Singh <bahadursingh(at)yahoo(dot)com>
Cc: Kris Jurka <books(at)ejurka(dot)com>, pgsql-bugs(at)postgresql(dot)org, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [JDBC] BUG #1347: Bulk Import stops after a while (
Date: 2004-12-13 21:13:57
Message-ID: 41BE0615.2000203@opencloud.com
Lists: pgsql-bugs pgsql-jdbc

Kris Jurka wrote:
>
> On Mon, 13 Dec 2004, PostgreSQL Bugs List wrote:
>
>
>>The following bug has been logged online:
>>
>>Bug reference: 1347
>>PostgreSQL version: 8.0 Beta
>>Operating system: Windows XP
>>Description: Bulk Import stopps after a while ( 8.0.0. RC1)
>>
>> - I have written a java program to transfer data from SQL Server 2000 to
>>PosgresSQL 8.0.0 RC1 release. I am updating the data in batches.
>>If my batch size is 1000/2000 records at a time.. This works fine.. And if I
>>change this size to say 20,000, it does only finishes one loop.. and then
>>stays idle. The CPU usage down to 10 % which was before 100 % while applying
>>the first batch of 20, 000 records.
>>
>>
>>The execution of program is halting just at
>>int n [] = stmt.batchUpdate();
>>
>
>
> This may be a problem with the JDBC driver deadlocking as described in the
> below code comment. When originally written I asked Oliver about the
> estimate of MAX_BUFFERED_QUERIES and he felt confident in that number. It
> would be good to know if lowering this number fixes your problem. [...]

I'm not convinced that this is the problem as MAX_BUFFERED_QUERIES is
around 250 by default, and yet batches of 1000/2000 execute OK.

Note that I haven't seen much benefit to batch sizes above perhaps 50 on a
LAN anyway. Batch sizes of 20,000 are almost certainly excessive (but the
driver shouldn't fail!).

Are you executing the batch with autocommit on or off?

Anyway, there are two bits of debug output that would be useful to see:

1. A thread dump of the importing JVM when it gets stuck. On unix-like
systems you'd do this by sending the JVM a SIGQUIT. I don't know how
you'd do the same under Windows.

2. Driver debug output:

a. add "?logLevel=2" to the driver URL
b. set a log writer on DriverManager at the start of the run:

// Point this somewhere else, e.g. to a file,
// if it's more convenient.
java.sql.DriverManager.setLogWriter(
    new java.io.PrintWriter(System.err));

This should trace all the protocol messages being sent or received to
System.err. If the driver is wedging at the protocol level, it should
show up in that output. Send me this output off-list and compressed --
it'll be pretty big for 20,000+ queries!
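
For example, a minimal sketch that combines both steps, assuming a local
database; the host, database name and credentials below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class DebugConnectExample {
    public static void main(String[] args) throws Exception {
        // Step b: trace driver protocol messages to System.err.
        DriverManager.setLogWriter(new java.io.PrintWriter(System.err));

        // Step a: enable driver debug output via the URL (placeholder host/db/user).
        Class.forName("org.postgresql.Driver");
        Connection con = DriverManager.getConnection(
            "jdbc:postgresql://localhost:5432/testdb?logLevel=2",
            "testuser", "testpassword");

        // ... run the import here ...
        con.close();
    }
}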

-O


From: Markus Schaber <schabios(at)logi-track(dot)com>
To: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [BUGS] BUG #1347: Bulk Import stops after a while (
Date: 2004-12-14 14:06:58
Message-ID: 20041214150658.0598b277@kingfisher.intern.logi-track.com
Lists: pgsql-bugs pgsql-jdbc

Hi, Oliver,

On Tue, 14 Dec 2004 09:58:16 +1300
Oliver Jowett <oliver(at)opencloud(dot)com> wrote:

> Also, it's never going to be 100% without a separate thread, as the
> server can spontaneously generate output (e.g. because of NOTIFY)
> regardless of how careful we are with our queries.

Sadly, we cannot abandon compatibility with older JDK releases, which is
what would allow us to use the new asynchronous I/O interfaces.

Async I/O is a great tool in the right hands; you might google for
Felix von Leitner and his "fnord" and "gatling" servers, which even beat
the kernel-based khttpd.

Thanks,
Markus

--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios(at)logi-track(dot)com | www.logi-track.com


From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Markus Schaber <schabios(at)logi-track(dot)com>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [BUGS] BUG #1347: Bulk Import stops after a while (
Date: 2004-12-14 21:04:20
Message-ID: 41BF5554.4040309@opencloud.com
Lists: pgsql-bugs pgsql-jdbc

Markus Schaber wrote:
> Hi, Oliver,
>
> On Tue, 14 Dec 2004 09:58:16 +1300
> Oliver Jowett <oliver(at)opencloud(dot)com> wrote:
>
>
>>Also, it's never going to be 100% without a separate thread, as the
>>server can spontaneously generate output (e.g. because of NOTIFY)
>>regardless of how careful we are with our queries.

I think my analysis here was wrong. It doesn't matter if the server
unexpectedly fills up its output pipe so long as the driver doesn't do
the same.

> Sadly, we cannot abandon compatibility with older JDK releases, this
> would allow us to use the new asynchroneous I/O interfaces.

Hm, is async I/O standardized yet? I thought JSR203 ("more NIO") covered
this, but it's still in the early stages according to jcp.org.

> Async I/O is a great tool when in the right hands, you might google for
> Felix von Leitner and his "fnord" and "gatling" servers that even beat
> the kernel based khttpd.

Current NIO isn't async I/O, but it's still fairly good (use at least
1.4.2_06, though, or you can have GC problems when collecting direct
buffers). We have written several resource adaptors that use it and they
perform well. The main win is that you don't need N threads for N
clients. That can't really be applied to the JDBC driver easily as the
API means that the client controls most of the threading, but the
ability to get at non-blocking primitives would still be useful.

I wonder if we can have some threads-or-NIO abstraction. The main thing we
need is the ability to call out to some other code to do reads either when
we discover our writes will block (NIO) or when we have data ready to read
(threads). We then read and process messages until we would block (NIO) or
we *do* block (threads).

I doubt I'll have time to look at this in the near future though.
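
Purely as an illustration of that idea, an abstraction along these lines
might look roughly like the sketch below; none of these interface names
exist in the driver:

// Sketch only: a blocking-I/O implementation would call
// processPendingResults() from a reader thread whenever data arrives; an
// NIO implementation would call it when a write is about to block on the
// selector. Names are hypothetical.
interface ResultConsumer {
    // Read and handle protocol messages until no more can be consumed
    // without blocking (NIO) or until the stream blocks (threads).
    void processPendingResults();
}

interface ProtocolTransport {
    // Write query bytes; the transport decides when it must hand control
    // to the consumer to drain the server's replies and avoid deadlock.
    void send(byte[] message, ResultConsumer consumer) throws java.io.IOException;
}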

-O