Re: vacuum, performance, and MVCC

From: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>
To: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>, "Hannu Krosing" <hannu(at)skype(dot)net>, "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com>, "Christopher Browne" <cbbrowne(at)acm(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: vacuum, performance, and MVCC
Date: 2006-06-23 14:50:05
Message-ID: 18346.24.91.171.78.1151074205.squirrel@mail.mohawksoft.com
Lists: pgsql-hackers

> Mark Woodward wrote:
>
>> > In case of the number of actively modified rows being only in the tens
>> > or low hundreds of thousands of rows (i.e. the modified set fits in
>> > memory), the continuous vacuum process shows up as just another
>> > backend, not really taking an order of magnitude more resources. It
>> > mainly generates WAL traffic, as modified pages are already in
>> > memory/cache and are mostly synced by the background writer and/or
>> > checkpoint.
>> >
>> > Of course you have to adjust vacuum_cost_* variables so as to not
>> > saturate IO.
>>
>> These sorts of solutions, IMHO, don't show how good PostgreSQL is, but
>> show where it is very lacking.
>
> We all know Postgres is lacking; some of us try to improve it (some with
> more success than others). People who know the current limitations but
> like the capabilities, try to find workarounds to the problems. What
> surprises me is that, if you have such a low opinion of Postgres, you
> still use it.

Actually, I love PostgreSQL; I've been using it for about 10 years on a lot
of projects. There are some serious issues with it, however, and it is
important to expose them, discuss them, and resolve them. Workarounds are
great, but in the end, they are still workarounds.
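
As an aside, the vacuum_cost_* variables Hannu mentions are the cost-based
vacuum delay settings in postgresql.conf. A rough sketch of that kind of
tuning is below; the values shown are only illustrative assumptions, not
recommendations from this thread:

    # Cost-based vacuum delay: vacuum sleeps for vacuum_cost_delay ms each
    # time it has accumulated vacuum_cost_limit "cost points", which keeps
    # a continuously running vacuum from saturating I/O.
    vacuum_cost_delay = 10        # sleep in ms (default 0 = no throttling)
    vacuum_cost_limit = 200       # accumulated cost that triggers a sleep
    vacuum_cost_page_hit = 1      # cost for a page found in shared buffers
    vacuum_cost_page_miss = 10    # cost for a page read from disk
    vacuum_cost_page_dirty = 20   # cost for dirtying a previously clean page

These can also be set per session (e.g. SET vacuum_cost_delay = 10;) in
whatever backend runs the continuous VACUUM.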
