Re: One source of constant annoyance identified

From: "Markus Wollny" <Markus(dot)Wollny(at)computec(dot)de>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: One source of constant annoyance identified
Date: 2002-07-02 12:54:40
Message-ID: 2266D0630E43BB4290742247C8910575014CE2C2@dozer.computec.de
Lists: pgsql-general

Hello!

Sorry it took me so long - I have attached the schema as requested.

It actually seems to happen quite often that certain operations eat up
more than 25% of the available memory and processing capacity.

I managed to customize and install the Full Text Index option (fti) from
the contrib directory (substrings are at least 3 characters long for us,
and I defined a list of stop words to exclude). Right now I have started
filling the fti table with the substrings. I tried using the supplied
Perl script, but the results are quite unsatisfactory because it neither
excludes any stop words nor limits itself to alphanumeric characters -
you may be able to imagine what happens when there are loads of kiddies
posting "!!!!!!!!!!!!!!!!!!" (ad nauseam) and suchlike in quite a lot of
postings. So I used the already implemented trigger to execute the fti
function instead:

update ct_com_board_message
set state_id=0
where state_id=0
and to_char(last_reply, 'yyyymmdd') between '20020301' and '20020830';
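
For reference, the trigger is set up roughly along these lines (just a
sketch - the column I index is called "body" here only for illustration;
the actual definition is in the attached schema dump):

create trigger ct_com_board_fti_trigger
after insert or update or delete on ct_com_board_message
for each row execute procedure fti(ct_com_board_fti, body);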

I took a quick look at top: even this humble query causes memory and
processor load like a giant: 266 MB RAM, 38.3% processor time, 26.4%
memory usage. Okay, it calls the trigger for each row, which in turn
inserts some new tuples into ct_com_board_fti, but is it expected to
cause this much load?

Regards,

Markus

> -----Original Message-----
> From: Tom Lane [mailto:tgl(at)sss(dot)pgh(dot)pa(dot)us]
> Sent: Sunday, June 30, 2002 21:34
> To: Markus Wollny
> Cc: pgsql-general(at)postgresql(dot)org
> Subject: Re: [GENERAL] One source of constant annoyance identified

> That's what I thought too, but I've tried and failed to reproduce any
> memory leak with lower/LIKE and the same configuration options that you
> used. It might be that some other part of the query is the problem, or
> maybe I'm not duplicating the setup correctly. Could I trouble you for
> the exact schemas of the tables used by the problem query? (The output
> of pg_dump -s would be the best thing to send.)
>
> regards, tom lane
>

Attachment Content-Type Size
community_schema_dump.sql application/octet-stream 36.5 KB
