MySQL million tables

From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: PostgreSQL Advocacy <pgsql-advocacy(at)postgresql(dot)org>
Subject: MySQL million tables
Date: 2006-03-09 05:22:58
Message-ID: 440FBBB2.1010202@familyhealth.com.au
Lists: pgsql-advocacy

I see in this post on planetmysql.org that this guy got MySQL to die
after creating 250k tables. Anyone want to see how far PostgreSQL can
go? :)

http://bobfield.blogspot.com/2006/03/million-tables.html

Also, can we beat his estimate of 27 hours for creation?

Chris


From: "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com>
To: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>
Cc: "PostgreSQL Advocacy" <pgsql-advocacy(at)postgresql(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-09 07:01:04
Message-ID: 36e682920603082301ia2b3bceq8517c584a47dd61b@mail.gmail.com
Lists: pgsql-advocacy

On 3/9/06, Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au> wrote:
>
> Also, can we beat his estimate of 27 hours for creation?

I just tried it on a Thinkpad (Pentium M @ 1.73GHz) running SuSE 10 and an
untuned PostgreSQL 8.1.3, using a shell script and psql. I was able to
create 274,000 tables in exactly 798 seconds.

And it's still chugging away :)

Chris
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faq
>

--
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
732.331.1324


From: "Stefan 'Kaishakunin' Schumacher" <stefan(at)net-tex(dot)de>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: PostgreSQL Advocacy <pgsql-advocacy(at)postgresql(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-09 07:03:04
Message-ID: 20060309070304.GA1495@balmung.net-tex.de
Lists: pgsql-advocacy

Thus spoke Christopher Kings-Lynne (chriskl(at)familyhealth(dot)com(dot)au):
> I see in this post on planetmysql.org that this guy got MySQL to die
> after creating 250k tables. Anyone want to see how far PostgreSQL can
> go? :)
>
> http://bobfield.blogspot.com/2006/03/million-tables.html
>
> Also, can we beat his estimate of 27 hours for creation?

for i in `seq 1 1000000`;
do
echo "create table test$i (ts timestamp);"|psql tabletorture;
done

This is on a Pentium 3 with 256MB RAM; I'll see how far it gets and how
long it takes.

So far I have ~2,600 tables after 4 minutes, so it might take about a day
to reach the million. However, the MySQL guy gives no details about his
test machine, so the results aren't comparable.

--
PGP FPR: CF74 D5F2 4871 3E5C FFFE 0130 11F4 C41E B3FB AE33
--
In the sound of the Gion Shoja bells echoes the impermanence of all things;
the color of the sala blossoms reveals that the successful must fall.
The proud do not last forever; they pass like a dream on a spring night.
The mighty fall in the end; they are as dust before the wind. Heike Monogatari


From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com>
Cc: PostgreSQL Advocacy <pgsql-advocacy(at)postgresql(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-09 07:21:42
Message-ID: 440FD786.3010908@familyhealth.com.au
Lists: pgsql-advocacy

Another mysql blogger has chimed in now:

http://arjen-lentz.livejournal.com/66547.html

He did it with MyISAM tables though.

Chris



From: "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com>
To: "Stefan 'Kaishakunin' Schumacher" <stefan(at)net-tex(dot)de>
Cc: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>, "PostgreSQL Advocacy" <pgsql-advocacy(at)postgresql(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-09 07:48:43
Message-ID: 36e682920603082348v6f8e255bx6f8010e0833eb67a@mail.gmail.com
Lists: pgsql-advocacy

On 3/9/06, Stefan 'Kaishakunin' Schumacher <stefan(at)net-tex(dot)de> wrote:
>
> Also sprach Christopher Kings-Lynne (chriskl(at)familyhealth(dot)com(dot)au)
> > I see in this post on planetmysql.org that this guy got MySQL to die
> > after creating 250k tables. Anyone want to see how far PostgreSQL can
> > go? :)
> >
> > http://bobfield.blogspot.com/2006/03/million-tables.html
> >
> > Also, can we beat his estimate of 27 hours for creation?
>
> for i in `seq 1 1000000`;
> do
> echo "create table test$i (ts timestamp);"|psql tabletorture;
> done

My results were based on 500 CREATE TABLEs per transaction, so they
aren't comparable either.
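For anyone who wants to reproduce that, a minimal sketch of the batching (the 500-per-transaction figure is from above; the generator function name and table names are just for illustration, and the real run would pipe the output into psql, e.g. `gen_batched_sql 1000000 500 | psql tabletorture`):

```shell
# Emit CREATE TABLE statements wrapped in transactions of $2 tables each.
gen_batched_sql() {
    total=$1
    batch=$2
    echo "BEGIN;"
    i=1
    while [ "$i" -le "$total" ]; do
        echo "CREATE TABLE test$i (ts timestamp);"
        # close the transaction every $batch tables and open a new one
        if [ $((i % batch)) -eq 0 ]; then
            echo "COMMIT;"
            echo "BEGIN;"
        fi
        i=$((i + 1))
    done
    echo "COMMIT;"
}

# demo: SQL for 1,000 tables, 500 per transaction
gen_batched_sql 1000 500
```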

--
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
732.331.1324


From: Jean-Paul Argudo <jean-paul(at)argudo(dot)org>
To: Stefan 'Kaishakunin' Schumacher <stefan(at)net-tex(dot)de>
Cc: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, PostgreSQL Advocacy <pgsql-advocacy(at)postgresql(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-09 09:03:22
Message-ID: 440FEF5A.6080304@argudo.org
Lists: pgsql-advocacy


Hi all,

I don't think this kind of test has much meaning, given that table
creation time doesn't matter in any way.

... but I was curious, so I ran the test :)

>>Also, can we beat his estimate of 27 hours for creation?

We would need his complete machine specs to be sure we can beat him.

> for i in `seq 1 1000000`;
> do
> echo "create table test$i (ts timestamp);"|psql tabletorture;
> done

Just tried that script (still running) on a Dell PowerEdge 2800 (not my
preferred machine, but the only one I could use for this test...)

2 physical CPUs with hyperthreading: Intel(R) Xeon(TM) 3.00GHz with 2MB cache
2GB RAM

Built-in PERC4, with RAID 5 on 5+1 10Krpm disks

Linux ***** 2.4.27-2-686-smp #1 SMP Wed Aug 17 10:05:21 UTC 2005 i686
GNU/Linux

I can send more details if you want (postgresql.conf..)

> Now I got ~2600 tables in 4 minutes, so it might take a day to get the
> million. However, the MySQL-guy has no statements about his
> test-machine, so it's not comparable.

I'm seeing a rate of about 12 tables created per second (fluctuating
between 10 and 13, so this is an approximation; I'll have exact figures
at the end).

So at that rate I'll have 1M tables in about 23 hours on that server.

I'll send complete results when it finishes.

--
Jean-Paul Argudo
www.PostgreSQLFr.org
www.dalibo.com


From: "Greg Sabino Mullane" <greg(at)turnstep(dot)com>
To: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-09 12:27:17
Message-ID: da365fd5acda3d5fb7deabcb0a81172d@biglumber.com
Lists: pgsql-advocacy



I kicked this off last night before bed. It ran much quicker than
I expected, given that 27-hour estimate.

Total time: 23 minutes 29 seconds :)

I committed every 2,000 tables. It could probably be made to go slightly
faster, as this was an out-of-the-box Postgres database with no
postgresql.conf tweaking. I simply piped a text file into psql for the
testing, from the output of a quick Perl script:

my $max = 1_000_000;
my $check = 2_000;
print "BEGIN;\n";
for my $num (1..$max) {
    print "CREATE TABLE foo$num (a smallint);\n";
    $num % $check or print "COMMIT;\nBEGIN;\n";
}
print "COMMIT;\n";

And the proof:

greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
 count
---------
 1000000

Maybe I'll see just how far PG *can* go next. Time to make a PlanetPG post,
at any rate.

--
Greg Sabino Mullane greg(at)turnstep(dot)com
PGP Key: 0x14964AC8 200603090720
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8


From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-09 14:04:48
Message-ID: 87veunlnnj.fsf@wolfe.cbbrowne.com
Lists: pgsql-advocacy

A long time ago, in a galaxy far, far away, greg(at)turnstep(dot)com ("Greg Sabino Mullane") wrote:
> I kicked this off last night before bed. It ran much quicker than
> I thought, due to that 27 hour estimate.
>
> Total time: 23 minutes 29 seconds :)

I'm jealous. I've got the very same thing running on some Supposedly
Pretty Fast Hardware, and it's cruising towards 31 minutes plus a few
seconds.

While it's running, the time estimate is...

select (now() - '2006-03-09 13:47:49') * 1000000 / (select count(*)
from pg_class where relkind='r' and relname ~ 'foo');

That pretty quickly converged to 31:0?...

> Maybe I'll see just how far PG *can* go next. Time to make a
> PlanetPG post, at any rate.

Another interesting approach to it would be to break this into several
streams.

There ought to be some parallelism to be gained, on systems with
multiple disks and CPUs, by having 1..100000 go in parallel to 100001
to 200000, and so forth, for (oh, say) 10 streams. Perhaps it's
irrelevant parallelism; knowing that it helps/hurts would be nice...
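A rough sketch of what those streams might look like (hypothetical; RUNNER would be "psql tabletorture" in a real run, parameterized here so the script can be dry-run without a server, and the demo defaults are small -- the real run would use STREAMS=10 PER=100000):

```shell
RUNNER=${RUNNER:-"cat > /dev/null"}   # stand-in for: psql tabletorture
STREAMS=${STREAMS:-2}                 # 10 for the real run
PER=${PER:-1000}                      # 100000 for the real run (10 x 100000 = 1M)

# make_stream <first> <last>: emit one stream's CREATE TABLE batch
make_stream() {
    i=$1
    echo "BEGIN;"
    while [ "$i" -le "$2" ]; do
        echo "CREATE TABLE foo$i (a smallint);"
        i=$((i + 1))
    done
    echo "COMMIT;"
}

s=0
while [ "$s" -lt "$STREAMS" ]; do
    first=$((s * PER + 1))
    last=$(((s + 1) * PER))
    make_stream "$first" "$last" | eval "$RUNNER" &   # one backend per stream
    s=$((s + 1))
done
wait   # block until every stream has finished
```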
--
(format nil "~S(at)~S" "cbbrowne" "cbbrowne.com")
http://linuxfinances.info/info/rdbms.html
Where do you want to Tell Microsoft To Go Today?


From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: Greg Sabino Mullane <greg(at)turnstep(dot)com>
Cc: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-10 01:16:20
Message-ID: 4410D364.10006@familyhealth.com.au
Lists: pgsql-advocacy

> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>  count
> ---------
>  1000000
>
> Maybe I'll see just how far PG *can* go next. Time to make a PlanetPG post,
> at any rate.

Try \dt :D

Chris


From: "Greg Sabino Mullane" <greg(at)turnstep(dot)com>
To: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-10 14:23:07
Message-ID: b53499524964992236c129105745ce6d@biglumber.com
Lists: pgsql-advocacy



>> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>>  count
>> ---------
>>  1000000

> Try \dt :D

Sure. Took 42 seconds, but it showed up just fine. :)

     List of relations
 Schema |   Name    | Type  | Owner
--------+-----------+-------+-------
 public | foo1      | table | greg
 public | foo10     | table | greg
 public | foo100    | table | greg
 public | foo1000   | table | greg
 public | foo10000  | table | greg
 public | foo100000 | table | greg
 public | foo100001 | table | greg
 public | foo100002 | table | greg
 public | foo100003 | table | greg

etc...

--
Greg Sabino Mullane greg(at)turnstep(dot)com
PGP Key: 0x14964AC8 200603100913
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



From: Richard Huxton <dev(at)archonet(dot)com>
To: Greg Sabino Mullane <greg(at)turnstep(dot)com>
Cc: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-10 17:46:00
Message-ID: 4411BB58.8010406@archonet.com
Lists: pgsql-advocacy

Greg Sabino Mullane wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
>
>>> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>>>  count
>>> ---------
>>>  1000000
>
>> Try \dt :D
>
> Sure. Took 42 seconds, but it showed up just fine. :)

The real test is to put a few rows into each, join the lot and see how
long it takes geqo to plan it.

Actually, if that works the real test is to find a use for this :-)
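Generating that torture test is easy enough; a hypothetical sketch (the foo tables and their single smallint column are from the earlier script, N and the join shape are mine; geqo takes over when the number of FROM items passes geqo_threshold, 12 by default, and the output would be piped to psql):

```shell
N=${N:-13}   # one more than the default geqo_threshold of 12

# one row in each of the first N foo tables
i=1
while [ "$i" -le "$N" ]; do
    echo "INSERT INTO foo$i VALUES (1);"
    i=$((i + 1))
done

# build an N-way join on the tables' single column "a"
query="SELECT count(*) FROM foo1"
i=2
while [ "$i" -le "$N" ]; do
    query="$query JOIN foo$i ON foo$((i - 1)).a = foo$i.a"
    i=$((i + 1))
done
echo "EXPLAIN $query;"
```

Timing the EXPLAIN alone would isolate planning cost from execution.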

--
Richard Huxton
Archonet Ltd


From: Jim Nasby <jim(at)nasby(dot)net>
To: Christopher Browne <cbbrowne(at)acm(dot)org>
Cc: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-11 00:17:11
Message-ID: 5C6BC264-916E-4A08-8AC9-6ECCDAE3C6F4@nasby.net
Lists: pgsql-advocacy

I can't believe y'all are burning cycles on this. :P


--
Jim C. Nasby, Database Architect decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"


From: "Guido Barosio" <gbarosio(at)gmail(dot)com>
To: "Jim Nasby" <jim(at)nasby(dot)net>
Cc: "Christopher Browne" <cbbrowne(at)acm(dot)org>, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-11 00:40:58
Message-ID: f7f6b4c70603101640j51b29cb3uda58aaa1556f2b51@mail.gmail.com
Lists: pgsql-advocacy

Well,

This is a WTF case, but a year ago a request arrived from the
Develociraptors to the DBA team.

Their need was a 2-terabyte DB [with a particular need, continue]. They
benchmarked both MySQL and PostgreSQL, and believe me, it was funny,
because the DBA team refused to support the idea and left the funny and
wild developmentiraptors on their own.

The result? A script creating more or less 40,000 tables (oh yeah, like
the foo$i ones) on a MySQL DB, making it almost impossible to browse, but
live; it's currently in a beta stage, but frozen for lack of support.
(Without the DBAs' support, again.)

Lovely! But you never know with these things, you neeever know.

Note: I've created 250k tables in 63 minutes using the Perl script from a
previous post, on my own workstation. (RH3, short on RAM, average CPU,
and a crappy drive bought used on eBay and shipped from north to south.)

g.-


--
/"\ ASCII Ribbon Campaign .
\ / - NO HTML/RTF in e-mail .
X - NO Word docs in e-mail .
/ \ -----------------------------------------------------------------


From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Christopher Browne <cbbrowne(at)acm(dot)org>, pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-11 01:10:11
Message-ID: 44122373.3040501@commandprompt.com
Lists: pgsql-advocacy

Jim Nasby wrote:
> I can't believe y'all are burning cycles on this. :P
You're kidding, right? Have you seen the discussions that happen on this list? ;)

Joshua D. Drake


From: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
To: pgsql-advocacy(at)postgresql(dot)org
Cc: Jim Nasby <jim(at)nasby(dot)net>, Christopher Browne <cbbrowne(at)acm(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-11 01:34:45
Message-ID: 200603102034.45797.xzilla@users.sourceforge.net
Lists: pgsql-advocacy

I can't believe the mysql guys found this to be non-trivial.

http://bitbybit.dk/carsten/blog/?p=83
http://www.flamingspork.com/blog/2006/03/09/a-million-tables/


--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL


From: ellis(at)spinics(dot)net (Rick Ellis)
To: pgsql-advocacy(at)postgresql(dot)org
Subject: Re: MySQL million tables
Date: 2006-03-11 23:59:23
Message-ID: 1142121562.703973@localhost.localdomain
Lists: pgsql-advocacy

In article <200603102034(dot)45797(dot)xzilla(at)users(dot)sourceforge(dot)net>,
Robert Treat <xzilla(at)users(dot)sourceforge(dot)net> wrote:

>I can't believe the mysql guys found this to be non-trivial.

I can ;)

--
http://yosemitenews.info/


From: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
To: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
Cc: pgsql-advocacy(at)postgresql(dot)org, Jim Nasby <jim(at)nasby(dot)net>, Christopher Browne <cbbrowne(at)acm(dot)org>
Subject: Re: MySQL million tables
Date: 2006-03-15 19:54:01
Message-ID: 20060315195400.GE15742@pervasive.com
Lists: pgsql-advocacy

On Fri, Mar 10, 2006 at 08:34:45PM -0500, Robert Treat wrote:
> I can't believe the mysql guys found this to be non-trivial.
>
> http://bitbybit.dk/carsten/blog/?p=83
> http://www.flamingspork.com/blog/2006/03/09/a-million-tables/

So did anyone complete the million-table test? Do we have anything to post?
--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461


From: Chris <dmagick(at)gmail(dot)com>
To: pgsql-advocacy(at)postgresql(dot)org
Cc: jnasby(at)pervasive(dot)com
Subject: Re: MySQL million tables
Date: 2006-03-15 22:39:17
Message-ID: 44189795.80908@gmail.com
Lists: pgsql-advocacy

Jim C. Nasby wrote:
> On Fri, Mar 10, 2006 at 08:34:45PM -0500, Robert Treat wrote:
>
>>I can't believe the mysql guys found this to be non-trivial.
>>
>>http://bitbybit.dk/carsten/blog/?p=83
>>http://www.flamingspork.com/blog/2006/03/09/a-million-tables/
>
>
> So did anyone complete the million table test? We have anything to post?

Already done:

http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html

--
Postgresql & php tutorials
http://www.designmagick.com/