Re: how could select id=xx so slow?

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Yan Chunlu <springrider(at)gmail(dot)com>
Cc: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: how could select id=xx so slow?
Date: 2012-07-12 14:39:28
Message-ID: 4FFEE1A0.7060500@ringerc.id.au
Lists: pgsql-performance

On 07/12/2012 08:48 PM, Yan Chunlu wrote:
>
>
> explain analyze INSERT INTO vote_content ( thing1_id, thing2_id, name,
> date) VALUES (1,1, E'1', '2012-07-12T12:34:29.926863+00:00'::timestamptz)
>
>                                      QUERY PLAN
> ------------------------------------------------------------------------------------------
>  Insert  (cost=0.00..0.01 rows=1 width=0) (actual time=79.610..79.610 rows=0 loops=1)
>    ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.058..0.060 rows=1 loops=1)
>  Total runtime: 79.656 ms
>
> it is a table with *50 million* rows, so not sure if it is too
> large... I have attached the schema below:

You have eight indexes on that table according to the schema you showed.
Three of them cover three columns apiece. Those indexes are going to be
expensive to update; frankly I'm amazed it's that FAST to update them
when they're that big.
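
If you want to double-check exactly which indexes every insert has to
maintain, something like this should list them (just a sketch; it assumes
vote_content is visible on your search_path):

  SELECT indexname, indexdef
    FROM pg_indexes
   WHERE tablename = 'vote_content';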

Use pg_size_pretty(pg_relation_size('index_name')) to get the index
sizes and compare them with pg_relation_size() of the table itself; it
might be informative.
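
Something along these lines should do it (a quick sketch using the
vote_content table from your schema; adjust names as needed):

  SELECT indexrelname,
         pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
   WHERE relname = 'vote_content';

  SELECT pg_size_pretty(pg_relation_size('vote_content')) AS table_size;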

You may see some insert performance benefit from a non-100% fillfactor
on the indexes, though possibly at some cost to index scan performance.
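
If you want to experiment with that, it'd be something like the following
for each index (the index name is just a placeholder and 70 is only an
example value; a changed fillfactor only affects pages written afterwards,
so a REINDEX is needed to repack the existing index):

  -- placeholder index name: substitute each of your real index names
  ALTER INDEX some_vote_content_index SET (fillfactor = 70);
  REINDEX INDEX some_vote_content_index;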

--
Craig Ringer
