From: Jan Kesten <jan(dot)kesten(at)web(dot)de>
To: pgsql-performance(at)postgresql(dot)org
Subject: Improving performance on multicolumn query
Date: 2005-11-09 12:08:07
Message-ID: 4371E6A7.50808@web.de
Lists: pgsql-performance
Hi, all!
I've been using PostgreSQL for a long time now, but today I ran into a
problem I couldn't solve properly - I hope some more experienced users
here have some hints for me.
First, I'm using PostgreSQL 7.4.7 on a 2GHz machine with 1.5GB of RAM,
and I have a table with about 220 columns and 20000 rows - the first
five columns form the primary key (and a unique index).
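For reference, a minimal sketch of such a table - the column types are an assumption, since the post does not give them (the real table has about 220 columns):

```sql
-- Hypothetical sketch of the table described above; actual column
-- types and names beyond the key are not given in the post.
CREATE TABLE test (
    test_a bigint,
    test_b smallint,
    test_c smallint,
    test_d smallint,
    test_e smallint,
    -- ... roughly 215 further columns ...
    PRIMARY KEY (test_a, test_b, test_c, test_d, test_e)
);
```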
Now my problem: I need to run a great many queries that fetch rows by
their primary key, reading about five different columns each, but these
are quite slow - about 10 queries per second, while some other databases
I run manage about 300 queries per second, so I consider this slow:
transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001
AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;
Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)
(actual time=0.161..0.167 rows=1 loops=1)
Index Cond: (test_a = 9091150001::bigint)
Filter: ((test_b = 1) AND (test_c = 2) AND (test_d = 0) AND (test_e = 0))
So, what can I do to speed things up? If I understand this output
correctly, the planner uses my index (test_idx is the same as test_pkey,
created along with the table), but only for the first column.
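A likely cause on 7.4: before PostgreSQL 8.0, the planner could not use an index for cross-data-type comparisons. The plan shows test_a matched in the Index Cond (the literal 9091150001 exceeds the int4 range, so it is parsed as bigint and matches the column type), while the other four conditions fell into the Filter - consistent with those columns being a narrower type such as smallint, compared against untyped int4 literals. A sketch of the usual workarounds, assuming test_b..test_e are smallint:

```sql
-- Sketch, assuming test_b..test_e are smallint: cast the literals so
-- each comparison is same-type and all five columns can use the index.
SELECT * FROM test
 WHERE test_a = 9091150001
   AND test_b = 1::smallint
   AND test_c = 2::smallint
   AND test_d = 0::smallint
   AND test_e = 0::smallint;

-- Equivalent workaround: quote the values, so the parser coerces the
-- unknown-type literals to the column types.
SELECT * FROM test
 WHERE test_a = 9091150001
   AND test_b = '1' AND test_c = '2' AND test_d = '0' AND test_e = '0';
```

If the casts move all five conditions into the Index Cond line of EXPLAIN ANALYZE, that confirms the cross-type issue; upgrading to 8.0 or later removes the need for the workaround entirely.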
Unfortunately I can't refactor these tables, as they were given to me.
Thanks for any hint!
Jan