Re: postgre performance question

From: "Andrew Bartley" <abartley(at)evolvosystems(dot)com>
To: "Ioannis Kappas" <Ioannis(dot)Kappas(at)dante(dot)org(dot)uk>, "Doug McNaught" <doug(at)wireboard(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: postgre performance question
Date: 2002-03-04 22:19:32
Message-ID: 001e01c1c3ca$a8cea690$3200a8c0@abartleypc
Lists: pgsql-general

> > Again at midnight, all the entries from the table are removed and the
> > table is vacuumed (I want to make this clear).

If you are removing all of the entries from the table and then
vacuuming/analysing, the statistics will be updated to describe an object
with no rows in it. Query plans for any select from that point on will be
forced to do a table scan.
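
A rough sketch of the idea (the table and column names below are made up,
not taken from your schema):

    -- Re-analyse after the daily load finishes, not only after the midnight
    -- purge, so the planner sees realistic row counts instead of zero rows.
    ANALYZE daily_log;

    -- With stale zero-row statistics the planner tends to pick a sequential
    -- scan even when a usable index exists; compare the plans before/after.
    EXPLAIN SELECT * FROM daily_log WHERE logged_at > now() - interval '1 hour';

Re-running ANALYZE once the table is back to its daytime size should get the
planner using your indexes again.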

----- Original Message -----
From: "Doug McNaught" <doug(at)wireboard(dot)com>
To: "Ioannis Kappas" <Ioannis(dot)Kappas(at)dante(dot)org(dot)uk>
Cc: <pgsql-general(at)postgresql(dot)org>
Sent: Tuesday, March 05, 2002 2:34 AM
Subject: Re: [GENERAL] postgre performance question

> Ioannis Kappas <Ioannis(dot)Kappas(at)dante(dot)org(dot)uk> writes:
>
> > ... it really does clean the table at midnight and then immediately
> > vacuums the table after it.
> > What it really does is to populate the table with two hundred thousand
> > entries each day, and
> > later on the table will be populated with millions of entries each day.
> > Again at midnight, all the entries from the table are removed and the
> > table is vacuumed (I want to make this clear).
>
> Thanks for the clarification. Are you doing a lot of updates during
> the day, or just inserts?
>
> > Do you think this is the expected behaviour I am getting? Can I do
> > something to improve the
> > performance? Should I try to find another database that can handle
> > such a `big?' number of entries?
> > Can I change something on the configuration of the database that will
> > speed up the queries?
>
> Well, if you're selecting every record from a table with millions of
> records, any database is going to be slow. There, the bottleneck is
> disk i/o and how fast the server can send data to the client.
>
> For more selective queries, make sure you:
>
> 1) VACUUM ANALYZE (or just ANALYZE in 7.2) after the table is populated.
> 2) Put indexes on the appropriate columns (depends on what queries you
> make).
>
> Without seeing your schema and the queries you're running, it's hard
> to give you any more advice.
>
> -Doug
> --
> Let us cross over the river, and rest under the shade of the trees.
> --T. J. Jackson, 1863
>
> ---------------------------(end of broadcast)---------------------------
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to majordomo(at)postgresql(dot)org so that your
> message can get through to the mailing list cleanly
>
>
