From: Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Getting time-dependent load statistics
Date: 2009-02-20 16:11:37
Message-ID: 87zlghf5qe.fsf@physik.rwth-aachen.de
Lists: pgsql-general
Hallöchen!
Yesterday I ported a web app to PG. Every 10 minutes, a cron job
scanned the log files of MySQL and generated a plot showing the
queries/sec for the last 24h. (Admittedly queries/sec is not the
holy grail of DB statistics.)
But I would still like to have something like this. At the moment I
just do the same with PG's log file, with

log_statement_stats = on

But generating these plots is costly (e.g. I don't need all the lines
starting with "!"), and interpreting them is equally costly. Do you
have a suggestion for a better approach?
Tschö,
Torsten.
--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten(dot)bronger(at)jabber(dot)rwth-aachen(dot)de
From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting time-dependent load statistics
Date: 2009-02-20 16:34:02
Message-ID: 20090220113402.4f1c7ace.wmoran@potentialtech.com
Lists: pgsql-general
In response to Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>:
> Hallöchen!
>
> Yesterday I ported a web app to PG. Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h. (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I still like to have something like this. At the moment I just
> do the same with PG's log file, with
>
> log_statement_stats = on
>
> But to generate these plots is costly (e.g. I don't need all the
> lines starting with !), and to interpret them is equally costly. Do
> you have a suggestion for a better approach?
Turn on stats collection and have a look at the various pg_stat* tables.
They'll have stats that you can quickly access with considerably lower
overhead.
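For example, a minimal way to sample those views (a sketch; the
parameter and column names assume PostgreSQL 8.3 or later, where the
row-level counters are controlled by track_counts):

-- postgresql.conf: make sure the collector gathers row-level counters
-- track_counts = on        (the default)

-- then sample the per-database counters, e.g. from cron:
SELECT datname, xact_commit, xact_rollback,
       tup_returned, tup_fetched, tup_inserted, tup_updated, tup_deleted
FROM pg_stat_database
WHERE datname = current_database();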
Doing it the way you're doing it now is like driving from Pittsburgh to
Maine to get to Ohio.
--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting time-dependent load statistics
Date: 2009-02-20 18:26:15
Message-ID: 1235154375.31546.113.camel@jd-laptop.pragmaticzealot.org
Lists: pgsql-general
On Fri, 2009-02-20 at 17:11 +0100, Torsten Bronger wrote:
> Hallöchen!
>
> Yesterday I ported a web app to PG. Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h. (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I still like to have something like this. At the moment I just
> do the same with PG's log file, with
>
> log_statement_stats = on
>
> But to generate these plots is costly (e.g. I don't need all the
> lines starting with !), and to interpret them is equally costly. Do
> you have a suggestion for a better approach?
>
Do you want queries, or transactions? If you want transactions, you
already have that in pg_stat_database. Just do this every 10 minutes:

psql -U <user> -d <database> -c "select now() as time, sum(xact_commit)
as transactions from pg_stat_database"
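To turn that cumulative counter into a rate for plotting, one option is
to store each sample and diff consecutive rows (a sketch; the
txn_samples table is made up for this example, and lag() needs
PostgreSQL 8.4 or later; on older versions the delta can just as well
be computed in the plotting script):

-- hypothetical table holding one row per 10-minute sample
CREATE TABLE txn_samples (
    sampled_at timestamptz NOT NULL DEFAULT now(),
    commits    bigint      NOT NULL
);

-- run from cron every 10 minutes
INSERT INTO txn_samples (commits)
    SELECT sum(xact_commit) FROM pg_stat_database;

-- transactions per second in each interval
SELECT sampled_at,
       (commits - lag(commits) OVER w)
         / EXTRACT(EPOCH FROM sampled_at - lag(sampled_at) OVER w) AS tps
FROM txn_samples
WINDOW w AS (ORDER BY sampled_at);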
Joshua D. Drake
> Tschö,
> Torsten.
>
> --
> Torsten Bronger, aquisgrana, europa vetus
> Jabber ID: torsten(dot)bronger(at)jabber(dot)rwth-aachen(dot)de
>
>
--
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
From: Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting time-dependent load statistics
Date: 2009-02-20 18:37:27
Message-ID: 87d4dd0xaw.fsf@physik.rwth-aachen.de
Lists: pgsql-general
Hallöchen!
Joshua D. Drake writes:
> On Fri, 2009-02-20 at 17:11 +0100, Torsten Bronger wrote:
>
>> Yesterday I ported a web app to PG. Every 10 minutes, a cron job
>> scanned the log files of MySQL and generated a plot showing the
>> queries/sec for the last 24h. (Admittedly queries/sec is not the
>> holy grail of DB statistics.)
>>
>> But I still like to have something like this. [...]
>>
>
> Do you want queries, or transactions? If you want transactions you
> already have that in pg_stat_database. Just do this every 10
> minutes:
>
> psql -U <user> -d <database> -c "select now() as time,sum(xact_commit)
> as transactions from pg_stat_Database"
Well, I'm afraid that transactions are too different from each other.
Currently, I am experimenting with

SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
       tup_deleted FROM pg_stat_database WHERE datname = 'chantal';

though I am not sure whether this makes sense at all. ;-) For example,
does "tup_fetched" imply "tup_returned"?
Tschö,
Torsten.
--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten(dot)bronger(at)jabber(dot)rwth-aachen(dot)de
From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting time-dependent load statistics
Date: 2009-02-20 18:39:54
Message-ID: dcc563d10902201039h538c15a0ub8819d7305d757c@mail.gmail.com
Lists: pgsql-general
On Fri, Feb 20, 2009 at 9:11 AM, Torsten Bronger
<bronger(at)physik(dot)rwth-aachen(dot)de> wrote:
> Hallöchen!
>
> Yesterday I ported a web app to PG. Every 10 minutes, a cron job
> scanned the log files of MySQL and generated a plot showing the
> queries/sec for the last 24h. (Admittedly queries/sec is not the
> holy grail of DB statistics.)
>
> But I still like to have something like this. At the moment I just
> do the same with PG's log file, with
>
> log_statement_stats = on
>
> But to generate these plots is costly (e.g. I don't need all the
> lines starting with !), and to interpret them is equally costly. Do
> you have a suggestion for a better approach?
You can turn on log_duration, which will just log the duration of each
query. That's a handy little metric to have, and every so often I turn
it on and chart average query run times etc. along with the actual
queries. I also turn on logging of long-running queries that take,
say, 5 or 10 seconds or more.
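For reference, the corresponding postgresql.conf settings (the
threshold value below is only an illustration):

log_duration = on                    # log the duration of every completed statement
log_min_duration_statement = 5000    # also log the text of statements taking 5000 ms or more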
From: Torsten Bronger <bronger(at)physik(dot)rwth-aachen(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Getting time-dependent load statistics
Date: 2009-02-20 21:17:35
Message-ID: 87r61siz9s.fsf@physik.rwth-aachen.de
Lists: pgsql-general
Hallöchen!
Torsten Bronger writes:
> [...] Currently, I experiment with
>
> SELECT tup_returned + tup_fetched + tup_inserted + tup_updated +
> tup_deleted FROM pg_stat_database WHERE datname='chantal';
Strangely, the statistics coming out of it are extremely high. I just
dumped my database with the built-in tool of my web framework and got
approximately 50 times as many row accesses from the command above as
I have objects in my database. The dump routine of my web framework
may do redundant things, but not to this extent ...
Tschö,
Torsten.
--
Torsten Bronger, aquisgrana, europa vetus
Jabber ID: torsten(dot)bronger(at)jabber(dot)rwth-aachen(dot)de