FIFO Queue Problems

From: pilsl(at)goldfisch(dot)at
To: pgsql-general(at)postgresql(dot)org
Subject: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-01 13:33:48
Message-ID: 20021101143348.A7426@goldfisch.at
Lists: pgsql-general pgsql-sql


I got this error on older versions and was told to
upgrade. Unfortunately I run into the very same error with 7.2.3.

I have a table with 1600 cols (for testing) and insert one row into
it which results in this error:

$ psql -U peter -f /tmp/t
Password:
psql:/tmp/t:1: ERROR: Tuple is too big: size 19232, max size 8136

where /tmp/t is like:

insert into bigtable values ('testtest','testtest','testtest','testtest','testtest','testtest','testtest','testtest','testtest','testtest','testtest' .... and so on

I tried the same with a Perl program but, as expected, it's exactly the same error.

I searched my whole system for old libraries from a previous version,
but I'm quite sure my upgrade was a clean and successful
one...

thnx,
peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
pilsl(at)goldfisch(dot)at


From: Doug McNaught <doug(at)mcnaught(dot)org>
To: pilsl(at)goldfisch(dot)at
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-01 13:50:54
Message-ID: m37kfx9z5t.fsf@varsoon.wireboard.com
Lists: pgsql-general pgsql-sql

pilsl(at)goldfisch(dot)at writes:

> I got this error on older versions and was told to
> upgrade. Unfortunately I run into the very same error with 7.2.3.
>
> I have a table with 1600 cols (for testing) and insert one row into
> it which results in this error:

It's possible you are hitting the maximum column limit rather than the
tuple size limit. The FAQ lists the max columns in a table as:

"250-1600 depending on column types"

So try a tuple of the same size but fewer columns and see if that's
the issue.

> I searched my whole system for old libraries from a previous version,
> but I'm quite sure my upgrade was a clean and successful
> one...

I'm sure you're right but it's always good to double-check. ;)

-Doug


From: Chris Gamache <cgg007(at)yahoo(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: FIFO Queue Problems
Date: 2002-11-01 14:56:30
Message-ID: 20021101145630.50886.qmail@web13801.mail.yahoo.com
Lists: pgsql-general pgsql-sql

I'm having a race condition with a FIFO queue program that I've created...

CREATE TABLE fifo (
id serial,
data varchar(100),
status int4 DEFAULT 0
);

I have a program that claims a row for itself

my $processid = $$;
my $sql_update = <<EOS;
UPDATE fifo
set status=$processid
WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
EOS

I can then

my $sql_get = <<EOS;
SELECT data FROM fifo WHERE status=$processid
EOS

The problem occurs when two of the processes grab the exact same row at the
exact same instant. It happens roughly 1 out of 1000 times. I'm not sure if
setting the transactions to serializable would fix the problem, since it
occurs between different postmasters.

My experience with Views and Triggers is minimal. Would they even help? Could
someone suggest a better FIFO queue method?



From: Richard Huxton <dev(at)archonet(dot)com>
To: cgg007(at)yahoo(dot)com, pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 16:20:59
Message-ID: 200211011620.59439.dev@archonet.com
Lists: pgsql-general pgsql-sql

On Friday 01 Nov 2002 2:56 pm, Chris Gamache wrote:
> I'm having a race condition with a FIFO queue program that I've created...

> I have a program that claims a row for itself

[code sample]

> The problem occurs when two of the processes grab the exact same row at
> the exact same instant. It happens roughly 1 out of 1000 times. I'm not
> sure if setting the transactions to serializable would fix the problem,
> since it occurs between different postmasters.
>
> My experience with Views and Triggers is minimal. Would they even help?
> Could someone suggest a better FIFO queue method?

Check out SELECT FOR UPDATE - this locks the row and should do what you want.
This means that 1 time in 1000 your query will fail and need to be retried.
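A minimal sketch of that approach, assuming the `fifo` table from the original post (note that FOR UPDATE cannot be attached to an aggregate like min(), so the oldest row is picked with ORDER BY/LIMIT; the pid value 12345 is a placeholder for the worker's process id):

```sql
BEGIN;
-- Lock the oldest unclaimed row. ORDER BY id LIMIT 1 stands in for
-- min(id), since FOR UPDATE can't be combined with an aggregate.
SELECT id, data FROM fifo
 WHERE status = 0
 ORDER BY id
 LIMIT 1
 FOR UPDATE;
-- Then claim it using the id returned above:
UPDATE fifo SET status = 12345 WHERE id = <claimed id>;
COMMIT;
```

If another process claimed the row first, the SELECT can come back with zero rows after the lock is granted, so the application still has to retry on an empty result.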

HTH
--
Richard Huxton


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: cgg007(at)yahoo(dot)com
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 16:51:01
Message-ID: 26900.1036169461@sss.pgh.pa.us
Lists: pgsql-general pgsql-sql

Chris Gamache <cgg007(at)yahoo(dot)com> writes:
> I have a program that claims a row for itself

> my $processid = $$;
> my $sql_update = <<EOS;
> UPDATE fifo
> set status=$processid
> WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
> EOS

> The problem occurs when two of the processes grab the exact same row at the
> exact same instant.

Probably the best fix is to do it this way:

BEGIN;
LOCK TABLE fifo IN EXCLUSIVE MODE;
UPDATE ... as above ...
COMMIT;

The exclusive lock will ensure that only one process claims a row
at a time (while not preventing concurrent SELECTs from the table).
This way you don't need to worry about retrying.
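Spelled out with the UPDATE from the original post, the claiming transaction would look something like this (a sketch; 12345 stands in for the worker's process id, as $processid did in the Perl code):

```sql
BEGIN;
-- EXCLUSIVE mode conflicts with itself, so only one claimer runs at a
-- time, while plain SELECTs (ACCESS SHARE) still proceed.
LOCK TABLE fifo IN EXCLUSIVE MODE;
UPDATE fifo
   SET status = 12345
 WHERE id = (SELECT min(id) FROM fifo WHERE status = 0);
COMMIT;
```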

regards, tom lane


From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Chris Gamache <cgg007(at)yahoo(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 16:54:07
Message-ID: 20021101165407.GA32463@wolff.to
Lists: pgsql-general pgsql-sql

On Fri, Nov 01, 2002 at 06:56:30 -0800,
Chris Gamache <cgg007(at)yahoo(dot)com> wrote:
>
> The problem occurs when two of the processes grab the exact same row at the
> exact same instant. It happens roughly 1 out of 1000 times. I'm not sure if
> setting the transactions to serializable would fix the problem, since it
> occurs between different postmasters.

I played with this a bit and it looks like you want transaction isolation
level serializable and have the application retry when necessary.

Doing a select for update won't work cleanly. What will happen is that
you will get 0 rows returned when there is a conflict, and you will need
to detect that just as you would by changing the isolation level.

In the select for update case, you first have to rewrite the select
to use ORDER BY and LIMIT to pick the minimum row, since you can't use
FOR UPDATE when returning an aggregate. Then, when the matching row is
found, there is possible lock contention with another update. If that
happens, then when the other update commits the locked row is rechecked:
it will no longer match and gets removed from the list of rows returned,
leaving no rows.
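The serializable approach suggested above might look roughly like this (a sketch; 12345 is a placeholder for the worker's process id):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE fifo
   SET status = 12345
 WHERE id = (SELECT min(id) FROM fifo WHERE status = 0);
COMMIT;
-- If a concurrent claimer updated the same row first, the COMMIT path
-- raises a serialization failure; the application rolls back and retries.
```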


From: Wei Weng <wweng(at)kencast(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 19:18:37
Message-ID: 1036178317.23296.2.camel@Monet
Lists: pgsql-general pgsql-sql

Do you need to unlock the table fifo when you are done?

On Fri, 2002-11-01 at 11:51, Tom Lane wrote:
> Chris Gamache <cgg007(at)yahoo(dot)com> writes:
> > I have a program that claims a row for itself
>
> > my $processid = $$;
> > my $sql_update = <<EOS;
> > UPDATE fifo
> > set status=$processid
> > WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
> > EOS
>
> > The problem occurs when two of the processes grab the exact same row at the
> > exact same instant.
>
> Probably the best fix is to do it this way:
>
> BEGIN;
> LOCK TABLE fifo IN EXCLUSIVE MODE;
> UPDATE ... as above ...
> COMMIT;
>
> The exclusive lock will ensure that only one process claims a row
> at a time (while not preventing concurrent SELECTs from the table).
> This way you don't need to worry about retrying.
>
> regards, tom lane
>
--
Wei Weng <wweng(at)kencast(dot)com>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Wei Weng <wweng(at)kencast(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 19:47:44
Message-ID: 1008.1036180064@sss.pgh.pa.us
Lists: pgsql-general pgsql-sql

Wei Weng <wweng(at)kencast(dot)com> writes:
> Do you need to unlock the table fifo when you are done?

That happens at COMMIT.

regards, tom lane


From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Wei Weng <wweng(at)kencast(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: FIFO Queue Problems
Date: 2002-11-01 19:56:32
Message-ID: 20021101195632.GA1266@wolff.to
Lists: pgsql-general pgsql-sql

On Fri, Nov 01, 2002 at 14:18:37 -0500,
Wei Weng <wweng(at)kencast(dot)com> wrote:
> Do you need to unlock the table fifo when you are done?

Locks only apply for the duration of a transaction. When you commit or
rollback, the lock will be released.


From: pilsl(at)goldfisch(dot)at
To: Doug McNaught <doug(at)mcnaught(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-04 00:11:06
Message-ID: 20021104011105.A11131@goldfisch.at
Lists: pgsql-general pgsql-sql

On Fri, Nov 01, 2002 at 08:50:54AM -0500, Doug McNaught wrote:
>
> It's possible you are hitting the maximum column limit rather than the
> tuple size limit. The FAQ lists the max columns in a table as:
>
> "250-1600 depending on column types"
>

I already created the table with 1600 cols of varchar(30), so this
should not be the limit. And the error message is clearly talking about
the tuple size (which should not have such a limit in 7.2.3).

> So try a tuple of the same size but fewer columns and see if that's
> the issue.
>

I created smaller tables (200-300 cols of varchar(1000)) and inserted
rows with long text, and the error didn't occur. I can't tell if the
size of those tuples is below the 8k limit, because I don't know how to
calculate the size of a tuple from a given insert command.

(My only indication of the tuple size is the value printed when I get
the error. There seems to be no relation between the length of the
supplied command and the tuple size reported in the error.)

To check the problem it would be very helpful to me to learn more
about tuple size. It seems like postgres uses compression, because a
supplied insert command of length 620000 led to the same tuple size of
20200 as an insert command of length 20000.

>
> I'm sure you're right but it's always good to double-check. ;)
>

I did double and triple.

thnx,
peter

--
mag. peter pilsl
IT-Consulting
tel: +43-699-1-3574035
fax: +43-699-4-3574035
pilsl(at)goldfisch(dot)at


From: Doug McNaught <doug(at)mcnaught(dot)org>
To: pilsl(at)goldfisch(dot)at
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-04 00:19:56
Message-ID: m3adkq424z.fsf@varsoon.wireboard.com
Lists: pgsql-general pgsql-sql

pilsl(at)goldfisch(dot)at writes:

> On Fri, Nov 01, 2002 at 08:50:54AM -0500, Doug McNaught wrote:
> >
> > It's possible you are hitting the maximum column limit rather than the
> > tuple size limit. The FAQ lists the max columns in a table as:
> >
> > "250-1600 depending on column types"
> >
>
> I already created the table with 1600 cols of varchar(30), so this
> should not be the limit. And the error message is clearly talking about
> the tuple size (which should not have such a limit in 7.2.3).

Well, it does say "depending on column type". Perhaps it is giving a
valid error but the wrong message. The 8k tuple-size limit is
definitely gone in modern versions of PG (since 7.1 I believe) so I
don't see why that error message should appear.

-Doug


From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: <pilsl(at)goldfisch(dot)at>
Cc: Doug McNaught <doug(at)mcnaught(dot)org>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-04 00:38:45
Message-ID: 20021103162643.V91168-100000@megazone23.bigpanda.com
Lists: pgsql-general pgsql-sql

On Mon, 4 Nov 2002 pilsl(at)goldfisch(dot)at wrote:

> On Fri, Nov 01, 2002 at 08:50:54AM -0500, Doug McNaught wrote:
> >
> > It's possible you are hitting the maximum column limit rather than the
> > tuple size limit. The FAQ lists the max columns in a table as:
> >
> > "250-1600 depending on column types"
> >
>
> I already created the table with 1600 cols of varchar(30), so this
> should not be the limit. And the error message is clearly talking about
> the tuple size (which should not have such a limit in 7.2.3).

Technically there is still a tuple size limit. It's just that values of
many types can be pushed out of the table's main storage, so the main
tuple holds a reference to the value rather than the value itself. That's
how the 8k limit was circumvented. The tuple size limit is part of why
there's a maximum column count; AFAIK the main tuple still needs to fit in
one page. The CREATE TABLE may succeed even if it's not possible to
actually insert non-null values into every column. For example, make a
table with 1400 int8 columns: if you try to insert values into each
column, it will fail, but if you insert only a few values and a lot of
NULLs, it works.


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
Cc: pilsl(at)goldfisch(dot)at, Doug McNaught <doug(at)mcnaught(dot)org>, pgsql-general(at)postgresql(dot)org
Subject: Re: 7.2.3: tuple is too big (max 8136)
Date: 2002-11-04 02:19:54
Message-ID: 25508.1036376394@sss.pgh.pa.us
Lists: pgsql-general pgsql-sql

Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com> writes:
> Technically there is still a tuple size limit. It's just that many types
> can be pulled out of the main storage of the table and you get a reference
> to its value in the main storage rather than the value itself. That's how
> the 8k limit was circumvented. The tuple size limit is part of why
> there's a maximum column count, AFAIK the main tuple still needs to fit in
> one page.

Right. You could fit 1600 int4 columns (6400 bytes total), but not 1600
int8 columns. TOAST pointer entries are 20 bytes each, so if all your
columns are toastable types (like varchar) then the upper limit would be
about 400 columns.
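This arithmetic can be checked directly against the 8136-byte figure from the error message (a back-of-the-envelope check that ignores per-tuple header overhead):

```python
# Usable space per heap page, from the error message "max size 8136".
PAGE_LIMIT = 8136

# 1600 int4 columns at 4 bytes each fit...
assert 1600 * 4 <= PAGE_LIMIT   # 6400 bytes

# ...but 1600 int8 columns at 8 bytes each do not.
assert 1600 * 8 > PAGE_LIMIT    # 12800 bytes

# Each toastable column still costs a 20-byte TOAST pointer in the
# main tuple, capping the column count at roughly:
print(PAGE_LIMIT // 20)         # -> 406, i.e. "about 400 columns"
```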

In practice, performance will probably suck with more than a couple
hundred columns anyway. You'd be better off rethinking your database
design.

regards, tom lane