Re: Slow first query despite LIMIT and OFFSET clause

From: Phoenix Kiula <phoenix(dot)kiula(at)gmail(dot)com>
To: Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Slow first query despite LIMIT and OFFSET clause
Date: 2009-01-28 12:38:37
Message-ID: e373d31e0901280438o850f5f0ka60bd335b45c2875@mail.gmail.com
Lists: pgsql-general

On Wed, Jan 28, 2009 at 2:37 AM, Alban Hertroys
<dalroi(at)solfertje(dot)student(dot)utwente(dot)nl> wrote:
>
> Does that query plan look any better without the select count(id) from
> testimonials?
>
> If so you may be better off keeping track of those counts in a separate
> table updated by triggers on the testimonials table. Whether that really
> helps depends on how variable your selectors are to determine those counts.
> If those counts are generally very low the benefit will probably be minimal.
>

Thanks Alban. We have now set up all the triggers and such, and that part
is working. Dropping the count(id) seems to save only a few seconds; the
query still takes about 15 seconds in some cases.
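For reference, a trigger-maintained count table along the lines Alban suggested could be sketched as below. The table, function, and trigger names are illustrative only, not from our actual schema, and the UPDATE case (a row changing status) is omitted for brevity:

```sql
-- Hypothetical per-user counts table for the selectors used in the query.
CREATE TABLE testimonial_counts (
    user_id  varchar PRIMARY KEY,
    total    integer NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION maintain_testimonial_counts() RETURNS trigger AS $$
BEGIN
    -- Only rows matching the query's selectors contribute to the count.
    IF TG_OP = 'INSERT' AND NEW.user_known = 1 AND NEW.status = 'Y' THEN
        UPDATE testimonial_counts SET total = total + 1
         WHERE user_id = NEW.user_id;
    ELSIF TG_OP = 'DELETE' AND OLD.user_known = 1 AND OLD.status = 'Y' THEN
        UPDATE testimonial_counts SET total = total - 1
         WHERE user_id = OLD.user_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testimonials_count_trg
    AFTER INSERT OR DELETE ON testimonials
    FOR EACH ROW EXECUTE PROCEDURE maintain_testimonial_counts();
```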

Here are the query and its exec plan again fyi. Any other ideas for tweaking?

explain analyze SELECT
testimonials.url
,testimonials.alias
,testimonials.aliasEntered
,testimonials.title
,testimonials.modify_date
,testimonials.id
,visitcount.visit_count
,visitcount.unique_count
,visitcount.modify_date
,coalesce( extract(epoch from now()) - extract(epoch
from visitcount.modify_date), 0)
,(select count(id) from testimonials WHERE
testimonials.user_id = 'superman' and testimonials.user_known = 1 and
testimonials.status = 'Y' ) AS total
FROM testimonials
LEFT JOIN visitcount ON testimonials.id = visitcount.id
WHERE
testimonials.user_id = 'superman'
and testimonials.user_known = 1
and testimonials.status = 'Y'
ORDER BY testimonials.modify_date desc
OFFSET 0 LIMIT 10
;
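Since the counts are now trigger-maintained, the inline count(id) subquery could presumably be replaced with a single-row lookup against that table. A sketch, assuming a counts table named testimonial_counts (a hypothetical name) keyed by user_id:

```sql
SELECT t.url, t.alias, t.aliasEntered, t.title, t.modify_date, t.id,
       v.visit_count, v.unique_count, v.modify_date,
       coalesce(extract(epoch from now())
              - extract(epoch from v.modify_date), 0),
       -- One indexed lookup instead of re-counting ~10k rows per query.
       (SELECT total FROM testimonial_counts
         WHERE user_id = 'superman') AS total
FROM testimonials t
LEFT JOIN visitcount v ON t.id = v.id
WHERE t.user_id = 'superman'
  AND t.user_known = 1
  AND t.status = 'Y'
ORDER BY t.modify_date DESC
OFFSET 0 LIMIT 10;
```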

QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=224.68..224.71 rows=10 width=187) (actual time=453.429..453.539 rows=10 loops=1)
   InitPlan
     ->  Aggregate  (cost=63.52..63.53 rows=1 width=8) (actual time=89.268..89.271 rows=1 loops=1)
           ->  Index Scan using new_idx_userknown on testimonials  (cost=0.00..63.41 rows=42 width=8) (actual time=0.039..49.968 rows=10149 loops=1)
                 Index Cond: ((user_id)::text = 'superman'::text)
                 Filter: (status = 'Y'::bpchar)
   ->  Sort  (cost=161.16..161.26 rows=42 width=187) (actual time=453.420..453.464 rows=10 loops=1)
         Sort Key: testimonials.modify_date
         ->  Nested Loop Left Join  (cost=0.00..160.02 rows=42 width=187) (actual time=89.384..395.008 rows=10149 loops=1)
               ->  Index Scan using new_idx_userknown on testimonials  (cost=0.00..63.41 rows=42 width=171) (actual time=0.061..50.990 rows=10149 loops=1)
                     Index Cond: ((user_id)::text = 'superman'::text)
                     Filter: (status = 'Y'::bpchar)
               ->  Index Scan using visitcount_pkey1 on visitcount  (cost=0.00..2.28 rows=1 width=24) (actual time=0.007..0.010 rows=1 loops=10149)
                     Index Cond: (testimonials.id = visitcount.id)
 Total runtime: 461.682 ms
(15 rows)
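Two things stand out in this plan. First, the planner estimates rows=42 where the scan actually returns 10149 rows, which suggests stale statistics. Second, the Sort node orders all ~10k matching rows just to return the newest 10; an index whose leading columns match the filter and whose trailing column is modify_date would let the planner walk the index from the newest end and stop after the LIMIT. A sketch (index name is illustrative):

```sql
-- Partial index matching the query's filter, ordered so a backward
-- scan yields rows in modify_date DESC order without an explicit Sort.
CREATE INDEX idx_testimonials_user_mod
    ON testimonials (user_id, modify_date)
    WHERE user_known = 1 AND status = 'Y';

-- Refresh statistics; the rows=42 vs. actual 10149 estimate looks stale.
ANALYZE testimonials;
```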
