Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

From: 李立新 <lilixin(at)cqu(dot)edu(dot)cn>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "pgsql-patches(at)postgresql(dot)org" <pgsql-patches(at)postgresql(dot)org>, "David D(dot) Kilzer" <ddkilzer(at)theracingworld(dot)com>, Hitesh Patel <hitesh(at)presys(dot)com>, Jose Soares <jose(at)sferacarta(dot)com>
Subject: Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]
Date: 2001-06-20 01:20:36
Message-ID: 0GF7001JAFR6RG@mail.cqu.edu.cn
Lists: pgsql-general pgsql-hackers pgsql-jdbc pgsql-patches

Bruce Momjian:
I am a beginner. My question is: does PostgreSQL support full entity integrity
and referential integrity? For example, does it support restricted delete, nullifies-delete, default-delete, and so on? I read your book but could not find the details. Where can I find them?

>---------------------------(end of broadcast)---------------------------
>TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to majordomo(at)postgresql(dot)org)

lilixin(at)cqu(dot)edu(dot)cn


Regards,
李立新 lilixin(at)cqu(dot)edu(dot)cn


From: "Thalis A(dot) Kalfigopoulos" <thalis(at)cs(dot)pitt(dot)edu>
To: 李立新 <lilixin(at)cqu(dot)edu(dot)cn>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "pgsql-patches(at)postgresql(dot)org" <pgsql-patches(at)postgresql(dot)org>, "David D(dot) Kilzer" <ddkilzer(at)theracingworld(dot)com>, Hitesh Patel <hitesh(at)presys(dot)com>, Jose Soares <jose(at)sferacarta(dot)com>
Subject: Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]
Date: 2001-06-20 15:10:21
Message-ID: Pine.LNX.4.21.0106201105441.24987-100000@aluminum.cs.pitt.edu

I'm guessing you are asking about support for referential integrity constraints. It is covered in Bruce's book at http://www.ca.postgresql.org/docs/aw_pgsql_book/node131.html (ON DELETE NO ACTION/SET NULL/SET DEFAULT).
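For example (table and column names here are hypothetical, just to illustrate the chapter's point), the behaviors map onto the ON DELETE clause of a foreign key:

```sql
-- Hypothetical tables; restricted delete / nullifies-delete /
-- default-delete correspond to the three ON DELETE actions below.
CREATE TABLE departments (id INTEGER PRIMARY KEY);
CREATE TABLE employees (
    name    TEXT,
    dept_id INTEGER REFERENCES departments (id)
                    ON DELETE NO ACTION    -- or SET NULL, or SET DEFAULT
);
```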

cheers,
thalis

On Wed, 20 Jun 2001, 李立新 wrote:

> Bruce Momjian:
> I am a begineer,The question is PgSQL support the full entrity integrity
> and refernece integerity.For example.does it support "Restricted DeleteNULLIFIES-delete,default-delete....",I read your book,But can not find detail.Where to find?
>
> lilixin(at)cqu(dot)edu(dot)cn
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo(at)postgresql(dot)org
>


From: Naomi Walker <nwalker(at)eldocomp(dot)com>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "pgsql-patches(at)postgresql(dot)org" <pgsql-patches(at)postgresql(dot)org>
Subject: 2 gig file size limit
Date: 2001-07-06 22:51:44
Message-ID: 4.2.2.20010706154929.00aabe00@logic1design.com

If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100 ext 242


From: Larry Rosenman <ler(at)lerctr(dot)org>
To: Naomi Walker <nwalker(at)eldocomp(dot)com>
Cc: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "pgsql-patches(at)postgresql(dot)org" <pgsql-patches(at)postgresql(dot)org>
Subject: Re: [HACKERS] 2 gig file size limit
Date: 2001-07-07 00:12:05
Message-ID: 20010706191205.A12351@lerami.lerctr.org

* Naomi Walker <nwalker(at)eldocomp(dot)com> [010706 17:57]:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> might cause us to hit the limit?
PostgreSQL is smart, and breaks the table files up at ~1GB each,
so the limit is transparent to you.

You shouldn't have to worry about it.
LER

> --
> Naomi Walker
> Chief Information Officer
> Eldorado Computing, Inc.
> 602-604-3100 ext 242
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: ler(at)lerctr(dot)org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749


From: Lamar Owen <lamar(dot)owen(at)wgcr(dot)org>
To: Naomi Walker <nwalker(at)eldocomp(dot)com>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>, "pgsql-patches(at)postgresql(dot)org" <pgsql-patches(at)postgresql(dot)org>
Subject: Re: [HACKERS] 2 gig file size limit
Date: 2001-07-07 14:33:40
Message-ID: 01070710334002.07080@lowen.wgcr.org

On Friday 06 July 2001 18:51, Naomi Walker wrote:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> might cause us to hit the limit?

Since PostgreSQL automatically segments its internal data files to get around
such limits, the only place you will hit this limit will be when making
backups using pg_dump or pg_dumpall. You may need to pipe the output of
those commands into a file splitting utility, and then you'll have to pipe
through a reassembly utility to restore.
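The split-and-reassemble round trip Lamar describes can be sketched as follows. The database name "mydb" is hypothetical, and a generated file stands in for the pg_dump stream so the round trip can be verified end to end; the 40 kB chunk size is only for the demo (something like 1000m would suit a real 2 GB limit):

```shell
# Stand-in for `pg_dump mydb` output (mydb is hypothetical):
head -c 100000 /dev/zero | tr '\0' 'x' > dump.sql
# Split the stream into pieces below the file size limit:
split -b 40000 dump.sql dump.part.
# At restore time, reassemble the pieces in order and feed them to psql:
cat dump.part.* > dump.rejoined.sql   # in real use: cat dump.part.* | psql mydb
cmp -s dump.sql dump.rejoined.sql && echo "round trip OK"
```

In real use the first line would be `pg_dump mydb | split -b 1000m - dump.part.`, with no intermediate file at all.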
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-09 23:48:41
Message-ID: 3B4A42D9.C448D456@selectacast.net

Lamar Owen wrote:
>
> On Friday 06 July 2001 18:51, Naomi Walker wrote:
> > If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> > might cause us to hit the limit?
>
> Since PostgreSQL automatically segments its internal data files to get around
> such limits, the only place you will hit this limit will be when making
> backups using pg_dump or pg_dumpall. You may need to pipe the output of

Speaking of which.

Doing a dumpall for a backup is taking a long time, and a restore from
the dump files doesn't leave the database in its original state. Could
a command be added that locks all the files, quickly tars them up, then
releases the lock?

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com


From: Doug McNaught <doug(at)wireboard(dot)com>
To: Joseph Shraibman <jks(at)selectacast(dot)net>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 00:51:47
Message-ID: m3ith14ndo.fsf@belphigor.mcnaught.org

[HACKERS removed from CC: list]

Joseph Shraibman <jks(at)selectacast(dot)net> writes:

> Doing a dumpall for a backup is taking a long time, the a restore from
> the dump files doesn't leave the database in its original state. Could
> a command be added that locks all the files, quickly tars them up, then
> releases the lock?

As I understand it, pg_dump runs inside a transaction, so the output
reflects a consistent snapshot of the database as of the time the dump
starts (thanks to MVCC); restoring will put the database back to where
it was at the start of the dump.

Have you observed otherwise?

-Doug
--
The rain man gave me two cures; he said jump right in,
The first was Texas medicine--the second was just railroad gin,
And like a fool I mixed them, and it strangled up my mind,
Now people just get uglier, and I got no sense of time... --Dylan


From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: Doug McNaught <doug(at)wireboard(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 00:59:59
Message-ID: 3B4A538F.8C901C72@selectacast.net

Doug McNaught wrote:
>
> [HACKERS removed from CC: list]
>
> Joseph Shraibman <jks(at)selectacast(dot)net> writes:
>
> > Doing a dumpall for a backup is taking a long time, the a restore from
> > the dump files doesn't leave the database in its original state. Could
> > a command be added that locks all the files, quickly tars them up, then
> > releases the lock?
>
> As I understand it, pg_dump runs inside a transaction, so the output
> reflects a consistent snapshot of the database as of the time the dump
> starts (thanks to MVCC); restoring will put the database back to where
> it was at the start of the dump.
>
In theory.

> Have you observed otherwise?

Yes. Specifically, timestamps are dumped in a way that (1) they lose
precision and (2) they sometimes have 60 in the seconds field, which
prevents the dump from being restored.

And I suspect any statistics generated by VACUUM ANALYZE are lost.

If a database got corrupted somehow, then in order to restore from the
dump you would have to delete the original database and then restore
from the dump. Untarring would be much easier (especially as the
database grows). Obviously this won't replace dumps, but for quick
backups it would be great.

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com


From: Mike Castle <dalgoda(at)ix(dot)netcom(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 01:46:10
Message-ID: 20010709184610.B5912@thune.mrc-home.com

On Mon, Jul 09, 2001 at 08:59:59PM -0400, Joseph Shraibman wrote:
> If a database got corrupted somehow in order to restore from the dump
> the database would have to delete the original database then restore
> from the dump. Untarring would be much easier (especially as the

You could always shut the system down and tar on your own.

Of course, tarring up several gigabytes is going to take a while.

Better to fix the dump/restore process than to hack in a workaround that
has very limited benefit.

mrc
--
Mike Castle dalgoda(at)ix(dot)netcom(dot)com www.netcom.com/~dalgoda/
We are all of us living in the shadow of Manhattan. -- Watchmen
fatal ("You are in a maze of twisty compiler features, all different"); -- gcc


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Joseph Shraibman <jks(at)selectacast(dot)net>
Cc: Doug McNaught <doug(at)wireboard(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 02:07:41
Message-ID: 27750.994730861@sss.pgh.pa.us

Joseph Shraibman <jks(at)selectacast(dot)net> writes:
> Could a command be added that locks all the files, quickly tars them
> up, then releases the lock?

pg_ctl stop
tar cfz - $PGDATA >someplace
pg_ctl start

There is no possibility of anything less drastic, if you want to ensure
that the database files are consistent and not changing. Don't even
think about doing a partial dump of the $PGDATA tree, either. If you
don't have a pg_log that matches your data files, you've got nothing.

regards, tom lane


From: Thomas Lockhart <lockhart(at)fourpalms(dot)org>
To: Joseph Shraibman <jks(at)selectacast(dot)net>
Cc: Doug McNaught <doug(at)wireboard(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 02:27:06
Message-ID: 3B4A67FA.3392E7DB@fourpalms.org

> > Have you observed otherwise?
> Yes. Specifically timestamps are dumped in a way that (1) they lose
> percision (2) sometimes have 60 in the seconds field which prevents the
> dump from being restored.

The loss of precision for timestamp data stems from conservative
attempts to get consistent behavior from the data type. It is certainly
not entirely successful, but changes would have to solve some of these
problems without introducing more.

I've only seen the "60 seconds problem" with earlier Mandrake distros
which combined normal compiler optimizations with a "fast math"
optimization, against the apparent advice of the gcc developers. What
kind of system are you on, and how did you build PostgreSQL?

Regards.

- Thomas


From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Doug McNaught <doug(at)wireboard(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 19:33:29
Message-ID: 3B4B5889.52D4220B@selectacast.net

Tom Lane wrote:
>
> Joseph Shraibman <jks(at)selectacast(dot)net> writes:
> > Could a command be added that locks all the files, quickly tars them
> > up, then releases the lock?
>
> pg_ctl stop
> tar cfz - $PGDATA >someplace
> pg_ctl start
>
But that would mean I would have to have all my programs detect that the
database went down and make new connections. I would rather that
postgres just lock all the files and do the tar.

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com


From: Joseph Shraibman <jks(at)selectacast(dot)net>
To: lockhart(at)fourpalms(dot)org
Cc: Doug McNaught <doug(at)wireboard(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-10 19:40:12
Message-ID: 3B4B5A1C.FA6F8418@selectacast.net

I mentioned this on general a while ago.

I had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to
modify the dump because there were some 60-second values in there. It was
obvious from the code in backend/utils/adt/datetime that it was using
sprintf to do the formatting, and sprintf was taking the float that
represented the seconds and rounding it.

select '2001-07-10 15:39:59.999'::timestamp;
?column?
---------------------------
2001-07-10 15:39:60.00-04
(1 row)
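The rounding is easy to reproduce outside the backend; any printf-style formatter rounds 59.999 up when limited to two decimal places (shown here with the shell's printf, which behaves like C's sprintf for this case):

```shell
# 59.999 seconds formatted with two decimal places rounds up,
# producing the invalid :60 seconds field seen in the dumps.
printf '%05.2f\n' 59.999    # prints 60.00
```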

Thomas Lockhart wrote:
>
> > > Have you observed otherwise?
> > Yes. Specifically timestamps are dumped in a way that (1) they lose
> > percision (2) sometimes have 60 in the seconds field which prevents the
> > dump from being restored.
>
> The loss of precision for timestamp data stems from conservative
> attempts to get consistant behavior from the data type. It is certainly
> not entirely successful, but changes would have to solve some of these
> problems without introducing more.
>
> I've only seen the "60 seconds problem" with earlier Mandrake distros
> which combined normal compiler optimizations with a "fast math"
> optimization, against the apparent advice of the gcc developers. What
> kind of system are you on, and how did you build PostgreSQL?
>
> Regards.
>
> - Thomas

--
Joseph Shraibman
jks(at)selectacast(dot)net
Increase signal to noise ratio. http://www.targabot.com


From: "Neil Conway" <nconway(at)klamath(dot)dyndns(dot)org>
To: nwalker(at)eldocomp(dot)com
Cc: pgman(at)candle(dot)pha(dot)pa(dot)us, pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, pgsql-patches(at)postgresql(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-10 23:17:05
Message-ID: 2585.192.168.40.6.994807025.squirrel@klamath.dyndns.org

(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), where might cause us to hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc) larger than 1GB -- at that point, the file is split.

However, you might run into problems when you export the data from Pg
to another source, such as if you pg_dump the contents of a database >
2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or use 'split' to divide the dump into several files.
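Neil's compression filter looks like this in practice. The database name is a placeholder, and a small generated file stands in for the dump so the pipeline can be verified without a server:

```shell
# Stand-in for `pg_dump mydb` output (mydb is hypothetical):
printf 'CREATE TABLE t (i int);\n%.0s' $(seq 1 1000) > mydump.sql
# Compress the dump; in real use: pg_dump mydb | gzip > mydump.sql.gz
gzip -c mydump.sql > mydump.sql.gz
# Restore by decompressing; in real use: gunzip -c mydump.sql.gz | psql mydb
gunzip -c mydump.sql.gz > mydump.out.sql
cmp -s mydump.sql mydump.out.sql && echo "compressed round trip OK"
```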

Cheers,

Neil


From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: neilconway(at)home(dot)com
Cc: nwalker(at)eldocomp(dot)com, pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, pgsql-patches(at)postgresql(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-11 01:01:48
Message-ID: 200107110101.f6B11nC23950@candle.pha.pa.us

> (This question was answered several days ago on this list; please check
> the list archives before posting. I believe it's also in the FAQ.)
>
> > If PostgreSQL is run on a system that has a file size limit (2
> > gig?), where might cause us to hit the limit?
>
> Postgres will never internally use files (e.g. for tables, indexes,
> etc) larger than 1GB -- at that point, the file is split.
>
> However, you might run into problems when you export the data from Pg
> to another source, such as if you pg_dump the contents of a database >
> 2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
> size of the dump. If that's still not enough, you can dump individual
> tables (with -t) or use 'split' to divide the dump into several files.

I just added the second part of this sentence to the FAQ to try to make
it more visible:

The maximum table size of 16TB does not require large file
support from the operating system. Large tables are stored as
multiple 1GB files so file system size limits are not important.

--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026


From: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
To: Joseph Shraibman <jks(at)selectacast(dot)net>
Cc: lockhart(at)fourpalms(dot)org, Doug McNaught <doug(at)wireboard(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: Backups WAS: 2 gig file size limit
Date: 2001-07-11 01:41:26
Message-ID: 3B4BAEC6.CCFEEE37@alumni.caltech.edu

> I mentioned this on general a while ago.

I'm not usually there/here, but subscribed recently to avoid annoying
bounce messages from replies to messages cross posted to -hackers. I may
not stay long, since the volume is hard to keep up with.

> I had the problem when I dumped my 7.0.3 db to upgrade to 7.1. I had to
> modify the dump because there were some 60 seconds in there. It was
> obvious in the code in backend/utils/adt/datetime that it was using
> sprintf to do the formatting, and sprintf was taking the the float the
> represented the seconds and rounding it.
>
> select '2001-07-10 15:39:59.999'::timestamp;
> ?column?
> ---------------------------
> 2001-07-10 15:39:60.00-04
> (1 row)

Ah, right. I remember that now. Will continue to look at it...

- Thomas


From: markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk
To: pgsql-general(at)PostgreSQL(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-11 10:00:10
Message-ID: 3B4C23AA.2530B2F1@telemetry.co.uk

Can a single database be split over multiple filesystems, or does the
filesystem size under e.g. Linux (whatever it is these days) constrain
the database size?

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]


From: markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk
To: pgsql-general(at)PostgreSQL(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-11 12:06:05
Message-ID: 3B4C412D.7C78B564@telemetry.co.uk

Ian Willis wrote:
>
> Postgresql transparently breaks the db into 1G chunks.

Yes, but presumably these are still in the directory tree that was
created by initdb, i.e. normally on a single filesystem.

> The main concern is during dumps. A 10G db can't be dumped if the
> filesystem has a 2G limit.

Which is why somebody suggested piping into tar or whatever.

> Linux no longer has a filesystem file size limit (or at least not one
> that you'll hit easily)

I'm not concerned with "easily". Telling one of our customers that we
chose a particular server because they won't easily hit limits is a
non-starter.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]


From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk
Cc: pgsql-general(at)PostgreSQL(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-11 12:57:39
Message-ID: 20010711225739.B2996@svana.org

On Wed, Jul 11, 2001 at 12:06:05PM +0000, markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk wrote:
> > Linus no longer has a filesystem file size limit ( or at least on
> > that you'll hit easily)
>
> I'm not concerned with "easily". Telling one of our customers that we
> chose a particular server becuase they won't easily hit limits is a
> non-starter.

Many people would have great difficulty hitting 4 terabytes.

What's the limit on NT?
--
Martijn van Oosterhout <kleptog(at)svana(dot)org>
http://svana.org/kleptog/
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.


From: Tony Grant <tony(at)animaproductions(dot)com>
To: pgsql-jdbc(at)PostgreSQL(dot)org
Cc: pgsql-general(at)PostgreSQL(dot)org
Subject: JDBC and stored procedures
Date: 2001-07-11 13:06:57
Message-ID: 994856817.15478.1.camel@tonux

Hello,

I am trying to use a stored procedure via JDBC. The objective is to be
able to get data from more than one table. My procedure is a simple "get
country name from table countries where country code = $1", copied from
Bruce's book.

Ultradev is giving me "Error calling GetProcedures: An unidentified
error has occured"

Just thought I would ask here first if I am up against a brick wall?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html


From: Dave Cramer <Dave(at)micro-automation(dot)net>
To: Tony Grant <tony(at)animaproductions(dot)com>, pgsql-jdbc(at)PostgreSQL(dot)org
Cc: pgsql-general(at)PostgreSQL(dot)org
Subject: Re: [JDBC] JDBC and stored procedures
Date: 2001-07-11 14:20:29
Message-ID: 01071110202900.01127@inspiron

Tony,

The GetProcedures function in the driver does not work.
You should be able to do a simple select of the stored proc, however.

Dave

On July 11, 2001 09:06 am, Tony Grant wrote:
> Hello,
>
> I am trying to use a stored procedure via JDBC. The objective is to be
> able to get data from more than one table. My procedure is a simple get
> country name from table countries where contry code = $1 copied from
> Bruces book.
>
> Ultradev is giving me "Error calling GetProcedures: An unidentified
> error has occured"
>
> Just thought I would ask here first if I am up against a brick wall?
>
> Cheers
>
> Tony Grant
>
> --
> RedHat Linux on Sony Vaio C1XD/S
> http://www.animaproductions.com/linux2.html
> Macromedia UltraDev with PostgreSQL
> http://www.animaproductions.com/ultra.html


From: Tony Grant <tony(at)animaproductions(dot)com>
To: Dave(at)micro-automation(dot)net
Cc: pgsql-jdbc(at)PostgreSQL(dot)org, pgsql-general(at)PostgreSQL(dot)org
Subject: Re: [JDBC] JDBC and stored procedures
Date: 2001-07-11 15:15:31
Message-ID: 994864531.15495.3.camel@tonux

On 11 Jul 2001 10:20:29 -0400, Dave Cramer wrote:

> The GetProcedures function in the driver does not work.

OK. I bet it is on the todo list =:-D

> You should be able to a simple select of the stored proc however

Yes! thank you very much!!!

SELECT getcountryname(director.country)

did the trick where getcountryname is the function (or stored procedure)
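For the archives, a function along the lines Tony describes could be created like this; the table and column names are guesses from his description, and the quoted-body syntax matches 7.x-era PostgreSQL:

```sql
-- Hypothetical definition; adjust names to the real schema.
CREATE FUNCTION getcountryname(text) RETURNS text AS '
    SELECT name FROM countries WHERE code = $1;
' LANGUAGE 'sql';
```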

Cheers

Tony

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html


From: "Dave Cramer" <Dave(at)micro-automation(dot)net>
To: "'Tony Grant'" <tony(at)animaproductions(dot)com>, <Dave(at)micro-automation(dot)net>
Cc: <pgsql-jdbc(at)PostgreSQL(dot)org>
Subject: RE: JDBC and stored procedures
Date: 2001-07-11 16:14:58
Message-ID: 003401c10a24$a19d5e30$0201a8c0@inspiron

The getProcedures api is on the todo list, but I don't think it returns
stored procs.

Dave

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


From: markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk
To: pgsql-general(at)PostgreSQL(dot)org
Subject: Re: 2 gig file size limit
Date: 2001-07-12 08:02:03
Message-ID: 3B4D597B.A4C884A0@telemetry.co.uk

Martijn van Oosterhout wrote:

> What the limit on NT?

I'm told 2^64 bytes. Frankly, I'd be surprised if MS has tested it :-)

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]