Re: importance of bgwriter_percent

From: "vinita bansal" <sagivini(at)hotmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: importance of bgwriter_percent
Date: 2005-04-01 06:36:09
Message-ID: BAY20-F33C3B0ECC94FD26C32F50BCB380@phx.gbl
Lists: pgsql-general

Hi,

I have a 64 bit Linux box with 64GB RAM and 450GB HDD. I am running a
benchmark on database of size 40GB using the following settings:
- data=writeback
- Moved wal logs to separate partition
- settings in postgresql.conf:
shared_buffers = 100000
work_mem = 100000
maintenance_work_mem = 100000
max_fsm_pages = 200000
bgwriter_percent = 2
bgwriter_maxpages = 100
fsync = false
wal_buffers = 64
checkpoint_segments = 2048
checkpoint_timeout = 3600
effective_cache_size = 1840000
random_page_cost = 2
geqo_threshold = 25
geqo_effort = 1
stats_start_collector = false
stats_command_string = false
stats_row_level = false
add_missing_from = false
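
For context on the sizes above, shared_buffers and effective_cache_size count 8 kB pages, so it can help to convert them to bytes (a quick sketch, assuming the default 8 kB block size):

```python
# Convert page-count settings to bytes (assumes the default 8 kB
# block size; a custom build may use a different BLCKSZ).
BLOCK = 8192  # bytes per page
for name, pages in [("shared_buffers", 100000),
                    ("effective_cache_size", 1840000)]:
    print(f"{name}: {pages * BLOCK / 2**30:.1f} GiB")
```

On the 64 GB box this works out to roughly 0.8 GiB of shared buffers and an effective_cache_size hint of about 14 GiB.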

I am not getting performance as good as what I get when working on a small
database of size 1GB with the following settings:
shared_buffers = 3000
checkpoint_segments = 256
checkpoint_timeout= 1800
effective_cache_size= 250000
All other settings are the same as above.

Here I have reduced the values of some of the parameters, since this database
is very small and holds hardly any background data, while the big database
(size 40GB) holds a lot of it. I get a 4x performance improvement on the small
database just by setting bgwriter_percent = 2, but the same setting gives
hardly any improvement on the big database.

Do I need to increase the value of bgwriter_percent and/or bgwriter_maxpages,
or is there a problem with one of the other settings that I need to change?
What would be a good value of bgwriter_percent for such a big database? (I am
running 4 processes in parallel here, all writing to the database at once, and
that writing is the major bottleneck in my case.)
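
To make the interaction between these two settings concrete: in 8.0 the background writer scans up to bgwriter_percent of the dirty buffers each round, but never writes more than bgwriter_maxpages. A rough sketch of that arithmetic (not the actual server code; the half-dirty figures are just illustrative assumptions):

```python
# Rough arithmetic for the 8.0 bgwriter's per-round write limit:
# bgwriter_percent of the dirty buffers, capped at bgwriter_maxpages.
def pages_per_round(dirty_buffers, percent, maxpages):
    return min(dirty_buffers * percent // 100, maxpages)

# Big setup: shared_buffers = 100000; suppose half are dirty.
print(pages_per_round(50000, 2, 100))  # 100 -> the maxpages cap binds
# Small setup: shared_buffers = 3000; suppose half are dirty.
print(pages_per_round(1500, 2, 100))   # 30 -> the percent limit binds
```

If the cap of 100 pages per round is the binding limit on the big setup, that could be one reason raising bgwriter_percent alone shows little effect there.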

Regards,
Vinita Bansal



From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: vinita bansal <sagivini(at)hotmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: importance of bgwriter_percent
Date: 2005-04-01 15:43:38
Message-ID: 1112370218.13798.22.camel@state.g2switchworks.com
Lists: pgsql-general

On Fri, 2005-04-01 at 00:36, vinita bansal wrote:
> Hi,
>
> I have a 64 bit Linux box with 64GB RAM and 450GB HDD. I am running a
> benchmark on database of size 40GB using the following settings:
> - data=writeback

You might want to read this post about ext3 with writeback:

http://archives.postgresql.org/pgsql-general/2005-01/msg00830.php

> - Moved wal logs to separate partition
> - settings in postgresql.conf:
> shared_buffers = 100000
> work_mem = 100000
> maintenance_work_mem = 100000
> max_fsm_pages = 200000
> bgwriter_percent = 2
> bgwriter_maxpages = 100
> fsync = false
> wal_buffers = 64
> checkpoint_segments = 2048
> checkpoint_timeout = 3600
> effective_cache_size = 1840000
> random_page_cost = 2
> geqo_threshold = 25
> geqo_effort = 1
> stats_start_collector = false
> stats_command_string = false
> stats_row_level = false
> add_missing_from = false
>
> I am not getting performance as good as what I get when working on a small
> database of size 1GB with the following settings:
> shared_buffers = 3000
> checkpoint_segments = 256
> checkpoint_timeout= 1800
> effective_cache_size= 250000
> All other settings are the same as above.

What is the difference in performance on the big database if you use the
settings from the small setup instead of the ones you're using now?
Have you tried starting there and increasing each setting some
incremental amount to gauge the increase in performance you get from the
changes? Sometimes certain settings that you think will speed up the
database will actually slow it down, and without some kind of empirical
testing, you really don't know if the new setting is really "better" or
not.

I'm not familiar enough with the new bgwriter stuff yet to offer any
real advice on tuning its parameters.
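
The one-setting-at-a-time approach suggested above could be sketched like this, as a dry run that only rewrites a config string and prints the plan (the real file path, the pg_ctl restart, and the pgbench invocation are assumptions to adapt to your installation):

```python
import re

def set_option(conf_text, name, value):
    """Rewrite a single postgresql.conf setting, leaving other lines alone."""
    pattern = rf"(?m)^#?\s*{re.escape(name)}\s*=.*$"
    return re.sub(pattern, f"{name} = {value}", conf_text)

# A miniature stand-in for postgresql.conf; edit the real file in practice.
conf = "shared_buffers = 100000\ncheckpoint_segments = 2048\n"
for segs in (256, 512, 1024, 2048):
    conf = set_option(conf, "checkpoint_segments", segs)
    # After editing the real file, restart and benchmark, e.g.:
    #   pg_ctl -D $PGDATA restart -w
    #   pgbench -c 4 -t 1000 benchdb   # 4 clients, matching the 4 writers
    print(conf.splitlines()[1])
```

Recording the pgbench transactions-per-second figure for each value shows which change actually helps, rather than guessing.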


From: "vinita bansal" <sagivini(at)hotmail(dot)com>
To: smarlowe(at)g2switchworks(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: importance of bgwriter_percent
Date: 2005-04-01 18:52:12
Message-ID: BAY20-F23CACDCEB22C93B17DD38ECB380@phx.gbl
Lists: pgsql-general

Hi,

I have not tried all the settings on the big database, but I have tried most
of them and I am not seeing any performance improvement. In fact, there was a
performance degradation in my case when I used some of the settings from the
small database (such as decreasing shared_buffers, checkpoint_segments, etc.).
I am now changing these parameters one by one to see how each change affects
things.

Regards,
Vinita Bansal

>From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
>To: vinita bansal <sagivini(at)hotmail(dot)com>
>CC: pgsql-general(at)postgresql(dot)org
>Subject: Re: [GENERAL] importance of bgwriter_percent
>Date: Fri, 01 Apr 2005 09:43:38 -0600
>
>On Fri, 2005-04-01 at 00:36, vinita bansal wrote:
> > Hi,
> >
> > I have a 64 bit Linux box with 64GB RAM and 450GB HDD. I am running a
> > benchmark on database of size 40GB using the following settings:
> > - data=writeback
>
>You might want to read this post about ext3 with writeback:
>
>http://archives.postgresql.org/pgsql-general/2005-01/msg00830.php
>
>
> > - Moved wal logs to separate partition
> > - settings in postgresql.conf:
> > shared_buffers = 100000
> > work_mem = 100000
> > maintenance_work_mem = 100000
> > max_fsm_pages = 200000
> > bgwriter_percent = 2
> > bgwriter_maxpages = 100
> > fsync = false
> > wal_buffers = 64
> > checkpoint_segments = 2048
> > checkpoint_timeout = 3600
> > effective_cache_size = 1840000
> > random_page_cost = 2
> > geqo_threshold = 25
> > geqo_effort = 1
> > stats_start_collector = false
> > stats_command_string = false
> > stats_row_level = false
> > add_missing_from = false
> >
> > I am not getting performance as good as what I get when working on a small
> > database of size 1GB with the following settings:
> > shared_buffers = 3000
> > checkpoint_segments = 256
> > checkpoint_timeout= 1800
> > effective_cache_size= 250000
> > All other settings are the same as above.
>
>What is the difference in performance on the big database if you use the
>settings from the small setup instead of the ones you're using now?
>Have you tried starting there and increasing each setting some
>incremental amount to gauge the increase in performance you get from the
>changes? Sometimes certain settings that you think will speed up the
>database will actually slow it down, and without some kind of empirical
>testing, you really don't know if the new setting is really "better" or
>not.
>
>I'm not familiar enough with the new bgwriter stuff yet to offer any
>real advice on tuning its parameters.
>
