Re: Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor Server--Need help!!!!!!!

From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, php-db(at)lists(dot)php(dot)net
Cc: nmdhashim <nmdhashim(at)gmail(dot)com>, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-28 07:02:03
Message-ID: CACBfhZPO1GJv4nVunPXGA2EZq923On-CBGtxyhtQYAaSMCuBEQ@mail.gmail.com
Lists: pgsql-general pgsql-performance

I have a quad-core server with 8GB RAM.

vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5607 @ 2.27GHz
stepping : 2
cpu MHz : 1197.000
cache size : 8192 KB

MemTotal: 8148636 kB
MemFree: 4989116 kB
Buffers: 8464 kB
Cached: 2565456 kB
SwapCached: 81196 kB
Active: 2003796 kB
Inactive: 843896 kB
Active(anon): 1826176 kB
Inactive(anon): 405964 kB
Active(file): 177620 kB
Inactive(file): 437932 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 16779260 kB
SwapFree: 16303356 kB
Dirty: 1400 kB
Writeback: 0 kB
AnonPages: 208260 kB
Mapped: 1092008 kB
Shmem: 1958368 kB
Slab: 224964 kB
SReclaimable: 60136 kB
SUnreclaim: 164828 kB
KernelStack: 2864 kB
PageTables: 35684 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 20853576 kB
Committed_AS: 3672176 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 303292 kB
VmallocChunk: 34359429308 kB
HardwareCorrupted: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6144 kB
DirectMap2M: 2082816 kB
DirectMap1G: 6291456 kB

My database size is

pg_size_pretty
----------------
21 GB

I have one table which has more than 160,500,460 rows, and I have partitioned it year-wise into different schemas.

stk_source
                               Table "_100410.stk_source"
        Column         |   Type    |                      Modifiers                       | Storage  | Description
-----------------------+-----------+------------------------------------------------------+----------+-------------
 source_id             | integer   | not null default nextval('source_id_seq'::regclass)  | plain    |
 stock_id              | integer   |                                                      | plain    |
 source_detail         | integer[] |                                                      | extended |
 transaction_reference | integer   |                                                      | plain    |
 is_user_set           | boolean   | default false                                        | plain    |
Triggers:
    insert_stk_source_trigger BEFORE INSERT ON stk_source FOR EACH ROW EXECUTE PROCEDURE stk_source_insert_trigger()
Child tables: _100410_200809.stk_source,
              _100410_200910.stk_source,
              _100410_201011.stk_source,
              _100410_201112.stk_source
Has OIDs: yes

It also has the following indexes:

    "ss_source_id_pk" PRIMARY KEY, btree (source_id)
    "stk_source_stock_id_idx" btree (stock_id)

The first two years hold very little data, so they are not an issue; the next two years' tables are about 2 GB and 10 GB respectively.
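
For reference, the sizes quoted here can be checked with the built-in size functions; a minimal sketch (the statements are illustrative and not from the original post; the child table name is taken from the listing above):

    -- whole database, as shown by pg_size_pretty above
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- one yearly partition, including its indexes
    SELECT pg_size_pretty(pg_total_relation_size('_100410_201112.stk_source'));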

EXPLAIN select * from stk_source ;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Result  (cost=0.00..6575755.39 rows=163132513 width=42)
   ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
         ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794 width=42)
         ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460 width=42)

Because of this table, my overall database performance is affected. I wanted to optimize the settings, and after reading the post below I changed some configuration values, but it was no use; the system is still slow.
http://comments.gmane.org/gmane.comp.db.postgresql.performance/29561

We are using a PHP application with a PostgreSQL 9.0.3 database. The server is accessed by 40-50 users daily, so we want better performance; my config details are below.

Could anyone help with how to tune the settings for better performance?

Thanks in advance.

# - Memory -

shared_buffers = 2GB                    # min 128kB
                                        # (change requires restart)
#temp_buffers = 8MB                     # min 800kB
max_prepared_transactions = 0           # zero disables the feature
                                        # (change requires restart)

# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.

work_mem = 48MB                         # min 64kB
maintenance_work_mem = 256MB            # min 1MB
max_stack_depth = 6MB                   # min 100kB

# - Planner Cost Constants -

seq_page_cost = 1.0                     # measured on an arbitrary scale
random_page_cost = 3.0                  # same scale as above
cpu_tuple_cost = 0.03                   # same scale as above
#cpu_index_tuple_cost = 0.005           # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
effective_cache_size = 4GB
------------------------------------------------------------------------
free -t -m
             total       used       free     shared    buffers     cached
Mem:          7957       3111       4845          0         10       2670
-/+ buffers/cache:        430       7527
Swap:        16385        458      15927
Total:       24343       3570      20773

ipcs -l

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509481983
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767

------ Messages Limits --------
max queues system wide = 3977
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Gregg Jaskiewicz <gryzman(at)gmail(dot)com>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, php-db(at)lists(dot)php(dot)net, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: [PERFORM] Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-28 07:58:26
Message-ID: CAJY59_h8Y+Bu_EJwYeagh41BzEU3Zt=ovGCxv-nmMCiMXL4LAA@mail.gmail.com
Lists: pgsql-general pgsql-performance

What sort of queries are you running against it? The select * from ...
is not really (hopefully) a query you are running from your PHP app.


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Gregg Jaskiewicz <gryzman(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-28 08:21:22
Message-ID: CACBfhZOty8=C=vjoq1Pxib_KMgPeFhEJfq6=YXVin3i14-LFnw@mail.gmail.com
Lists: pgsql-general pgsql-performance

Actually we are using various views and functions to get the info for reporting purposes, and in those views and functions we have used or joined the table mentioned above.

I thought I would get a reply from someone on one of the lists, which is why I posted to all of them; anyway, I will continue with only the pgsql-performance mailing list.

Regards
Hashim

On Fri, Oct 28, 2011 at 1:28 PM, Gregg Jaskiewicz <gryzman(at)gmail(dot)com> wrote:

> what sort of queries you are running against it ? the select * from..
> is not really (hopefully) a query you are running from your php app.
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Alban Hertroys <haramrae(at)gmail(dot)com>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, php-db(at)lists(dot)php(dot)net, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-28 11:37:09
Message-ID: CAF-3MvOjY17pVZXSWgYDB5dMy-B07KiVYexR4RRutxZyXeKjbA@mail.gmail.com
Lists: pgsql-general pgsql-performance

On 28 October 2011 09:02, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:
> EXPLAIN select * from stk_source ;
>                                      QUERY
> PLAN
> -------------------------------------------------------------------------------------
>  Result  (cost=0.00..6575755.39 rows=163132513 width=42)
>    ->  Append  (cost=0.00..6575755.39 rows=163132513 width=42)
>          ->  Seq Scan on stk_source  (cost=0.00..42.40 rows=1080 width=45)
>          ->  Seq Scan on stk_source  (cost=0.00..20928.37 rows=519179
> width=42)
>          ->  Seq Scan on stk_source  (cost=0.00..85125.82 rows=2111794
> width=42)
>          ->  Seq Scan on stk_source  (cost=0.00..6469658.80 rows=160500460
> width=42)

That plan gives you the best possible performance given your query.
Your example probably doesn't fit the problem you're investigating.

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


From: Alban Hertroys <haramrae(at)gmail(dot)com>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-28 12:08:10
Message-ID: CAF-3MvPB5t-FXSE8POnxXqMWXXY5cxyCsSGQKXkeww4wZJpQ6g@mail.gmail.com
Lists: pgsql-general pgsql-performance

On 28 October 2011 13:37, Alban Hertroys <haramrae(at)gmail(dot)com> wrote:
> On 28 October 2011 09:02, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:

Please don't cross-post to mailing lists for multiple projects.

--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Alban Hertroys <haramrae(at)gmail(dot)com>, Gregg Jaskiewicz <gryzman(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-10-29 04:10:12
Message-ID: CACBfhZNBwjB=3xQrvqy6ManJUaRyKEOoV8dMv7jJYPtXn9XFcQ@mail.gmail.com
Lists: pgsql-general pgsql-performance

Thanks Alban & Gregg.

I will describe that table a little more:

- We are using a PHP application with Apache and PostgreSQL 9.0.3 on a dedicated server.
- The stk_source table is mainly used to track transactions from parent to child.

Table "_100410.stk_source"
Column | Type |
Modifiers
-----------------------+-----------+-----------------------------------------------------
source_id | integer | not null default
nextval('source_id_seq'::regclass)
stock_id | integer |
source_detail | integer[] |
transaction_reference | integer |
is_user_set | boolean | default false

We store transaction_type and transaction_id in the source_detail column, which is an integer array, for each transaction.

We use various functions to get the info based on transaction type

For example:

In the function to get the batch details, we have used:

FOR batch_id_rec IN
    SELECT DISTINCT(batch_id)
      FROM order_status_batches osb
      JOIN batch_status_stock bss ON osb.status_id = bss.batch_status_id
     WHERE stock_id IN (
            SELECT source_detail[2]
              FROM stk_source
             WHERE stock_id IN (
                    SELECT std_i.stock_id
                      FROM order_details_shipments ods
                      JOIN shipment_pack_stock sps ON sps.pack_id = ods.pack_id
                           AND ods.order_id = sps.order_id AND ods.item_id = sps.item_id
                      JOIN stock_transaction_detail_106 std ON std.transaction_id = sps.transaction_id
                      JOIN stock_transaction_detail_106 std_i ON std.stock_id = std_i.stock_id
                           AND std_i.transaction_type = 'i'::bpchar
                     WHERE shipment_item_id = $1)
               AND source_detail[1] = 3)
LOOP

...............................

................................

......................................

Similarly, we have used queries like the following in PHP pages and views:

SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/yyyy') AS date,
       mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
  FROM acc_bill_items_106 abi
  JOIN acc_bill_details_106_table abd ON abd.bill_id = abi.bill_id AND abd.bill_status = 'act'
  JOIN stk_source_table ss ON ss.source_detail[2] = abi.item_id AND ss.source_detail[1] = 1
  JOIN stock_transaction_detail_106_table std ON std.stock_id = ss.stock_id
  JOIN stock_details_106_table sd106 ON sd106.stock_id = std.stock_id
  JOIN master_product_106_table mp ON mp.product_id = sd106.product_id
  JOIN receipt_item_price_106_table rip ON rip.receipt_item_id = abi.item_id
 WHERE abi.bill_id = $bill_id AND std.transaction_type = 'o';

So wherever we have such JOINs, or use them inside functions, the performance is very low; sometimes a query takes more than 45 minutes to return results.

Normally if we run SELECT * FROM some_table it returns very fast, because those tables have fewer records.

But when I run SELECT * FROM stk_source, or try to find the actual cost with

EXPLAIN ANALYZE SELECT * FROM stk_source;

I cannot retrieve the planner details even after waiting for more than 50 to 60 minutes.

So the question is: in spite of having a good server with a high configuration, and having changed the PostgreSQL configuration settings, why is the system crawling?

What other parameters do I have to look at, or what other config settings should be changed, to get the best performance?

Kindly help to sort out this problem.

Thanks in advance!

Regards
Hashim

On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys <haramrae(at)gmail(dot)com> wrote:

> On 28 October 2011 09:02, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:
> > EXPLAIN select * from stk_source ;
> > QUERY
> > PLAN
> >
> -------------------------------------------------------------------------------------
> > Result (cost=0.00..6575755.39 rows=163132513 width=42)
> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)
> > -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080
> width=45)
> > -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179
> > width=42)
> > -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794
> > width=42)
> > -> Seq Scan on stk_source (cost=0.00..6469658.80
> rows=160500460
> > width=42)
>
> That plan gives you the best possible performance given your query.
> Your example probably doesn't fit the problem you're investigating.
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Alban Hertroys <haramrae(at)gmail(dot)com>, Gregg Jaskiewicz <gryzman(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-01 03:03:51
Message-ID: CACBfhZNr5qnMN0=8hp1Fb3hA8jwoEXnJAahua1JT5f5gXT9tuA@mail.gmail.com
Lists: pgsql-general pgsql-performance

Any ideas or suggestions on how to improve my database performance?

Regards
Hashim

On Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:

> Thanks Alban & Gregg.
>
>
> i will describe little more about that table
>
>
> - We are using PHP application with Apache server & Postgresql 9.0.3
> in a dedicated server.
> - stk_source table is mainly used to track the transactions from
> parent to child
>
> Table "_100410.stk_source"
> Column | Type |
> Modifiers
> -----------------------+-----------+-----------------------------------------------------
>
> source_id | integer | not null default
> nextval('source_id_seq'::regclass)
> stock_id | integer |
> source_detail | integer[] |
> transaction_reference | integer |
> is_user_set | boolean | default false
>
>
> We store transaction_type and transaction_id in source_detail column which
> is an interger array for each transactions
>
> We use various functions to get the info based on transaction type
>
> For eg:
>
> In function to get the batch details we have used as
>
> FOR batch_id_rec in select distinct(batch_id) from order_status_batches
> osb join batch_status_stock bss on osb.status_id=bss.batch_status_id where
> stock_id in (select source_detail[2] from stk_source where stock_id IN
> (SELECT
> std_i.stock_id
>
> FROM order_details_shipments
> ods
>
> JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id AND
> ods.order_id=sps.order_id AND ods.item_id=sps.item_id
> JOIN stock_transaction_detail_106 std ON
> std.transaction_id=sps.transaction_id
> JOIN stock_transaction_detail_106 std_i ON std.stock_id =
> std_i.stock_id AND std_i.transaction_type = 'i'::bpchar
> WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP
>
> ...............................
>
> ................................
>
> ......................................
>
> Similarly we have used in php pages and views
>
> SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS
> date,mp.product_desc as product_desc,std.quantity,std.area,rip.price AS
> rate,
> FROM acc_bill_items_106 abi
> JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id AND
> abd.bill_status='act'
> JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id and
> ss.source_detail[1]=1
> JOIN stock_transaction_detail_106_table std ON std.stock_id=ss.stock_id
> JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id
> JOIN master_product_106_table mp ON mp.product_id= sd106.product_id
> JOIN receipt_item_price_106_table rip ON
> rip.receipt_item_id=abi.item_id
> WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;
>
> So where ever we have JOIN or used in functions the performance is very
> low some times query returns results takes more than 45 mints.
>
> Normally if we fetch Select * from some_table..........it returns very
> fast because it has less records.
>
> But when i put Select * from stk_source or to find the actual_cost
>
> EXPLAIN ANALYZE SELECT * FROM stk_source;
>
> i couln't able to retrieve the planner details waited for more than 50 to
> 60 mints
>
> so question is in spite of having good server with high configuration and
> also changed the postgresql configuration settings then why the system is
> crawling?
>
>
> *What are the other parameters have to look out or what are the other
> config settings to be change to have the best performance??*
>
> Kindly help to sort out this problem......
>
>
> Thanks in advance..................!!!!!!
>
> Regards
> Hashim
>
>
>
>
>
>
>
>
> On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys <haramrae(at)gmail(dot)com>wrote:
>
>> On 28 October 2011 09:02, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:
>> > EXPLAIN select * from stk_source ;
>> > QUERY
>> > PLAN
>> >
>> -------------------------------------------------------------------------------------
>> > Result (cost=0.00..6575755.39 rows=163132513 width=42)
>> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)
>> > -> Seq Scan on stk_source (cost=0.00..42.40 rows=1080
>> width=45)
>> > -> Seq Scan on stk_source (cost=0.00..20928.37 rows=519179
>> > width=42)
>> > -> Seq Scan on stk_source (cost=0.00..85125.82 rows=2111794
>> > width=42)
>> > -> Seq Scan on stk_source (cost=0.00..6469658.80
>> rows=160500460
>> > width=42)
>>
>> That plan gives you the best possible performance given your query.
>> Your example probably doesn't fit the problem you're investigating.
>>
>> --
>> If you can't see the forest for the trees,
>> Cut the trees and you'll see there is no forest.
>>
>
>
>
> --
> Regards
> Mohamed Hashim.N
> Mobile:09894587678
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Marcus Engene <mengpg2(at)engene(dot)se>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-01 09:57:46
Message-ID: 4EAFC29A.40509@engene.se
Lists: pgsql-general pgsql-performance

Hi Hashim,

After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of
problems with queries with many joins. Queries that used to take 1ms
suddenly take half a minute for no apparent reason.

I have 72GB of RAM, which I think makes the planner go bonkers and too
eager to do a seq scan. I tried to compensate with a ridiculously low
cpu_index_tuple_cost, but it had little effect.

If I were you, I would try to remove some of the joined tables and see
what happens. When does it start to run very slowly? How does the plan
look right before it's super slow?

One workaround I've done is if something looking like this....

select
    ...
from
    table_linking_massive_table tlmt
    ,massive_table mt
    ,some_table1 st1
    ,some_table2 st2
    ,some_table3 st3
    ,some_table4 st4
where
    tlmt.group_id = 123223 AND
    mt.id = tlmt.massive_table AND
    st1.massive_table = mt.id AND
    st2.massive_table = mt.id AND
    st3.massive_table = mt.id AND
    st4.massive_table = mt.id

...suddenly gets slow, it has helped to rewrite it as....

select
    ...
from
    (
        select
            ...
        from
            table_linking_massive_table tlmt
            ,massive_table mt
        where
            tlmt.group_id = 123223 AND
            mt.id = tlmt.massive_table
    ) as mt
    ,some_table1 st1
    ,some_table2 st2
    ,some_table3 st3
    ,some_table4 st4
where
    st1.massive_table = mt.id AND
    st2.massive_table = mt.id AND
    st3.massive_table = mt.id AND
    st4.massive_table = mt.id

This seems to force Postgres to evaluate the mt subselect first and not
get ideas about how to join. It has been a few years since I used Oracle,
but if I remember correctly Oracle looked at the order of the items in
the where section. In this example Oracle would be encouraged to use tlmt
as the base table and take it from there. It doesn't seem to me that
Postgres cares about this order. Not caring would possibly be more
forgiving with automatically generated SQL, but it also implies the
planner always makes the best decisions, which it obviously does not. I
might be talking rubbish here; these are my empirical observations.

I'm sure you'll get better answers, but this is what I've done.

I assume you have done your analyze & indexing correctly etc.

Best regards,
Marcus

On 11/1/11 4:03 , Mohamed Hashim wrote:
> Any idea or suggestions how to improve my database best
> performance.................???
>
> Regards
> Hashim
>
> On Sat, Oct 29, 2011 at 9:40 AM, Mohamed Hashim <nmdhashim(at)gmail(dot)com
> <mailto:nmdhashim(at)gmail(dot)com>> wrote:
>
> Thanks Alban & Gregg.
>
>
> i will describe little more about that table
>
> * We are using PHP application with Apache server & Postgresql
> 9.0.3 in a dedicated server.
> * stk_source table is mainly used to track the transactions
> from parent to child
>
> Table "_100410.stk_source"
> Column | Type | Modifiers
> -----------------------+-----------+-----------------------------------------------------
>
>
> source_id | integer | not null default
> nextval('source_id_seq'::regclass)
> stock_id | integer |
> source_detail | integer[] |
> transaction_reference | integer |
> is_user_set | boolean | default false
>
>
> We store transaction_type and transaction_id in source_detail
> column which is an interger array for each transactions
>
> We use various functions to get the info based on transaction type
>
> For eg:
>
> In function to get the batch details we have used as
>
> FOR batch_id_rec in select distinct(batch_id) from
> order_status_batches osb join batch_status_stock bss on
> osb.status_id=bss.batch_status_id where stock_id in (select
> source_detail[2] from stk_source where stock_id IN (SELECT
> std_i.stock_id
> FROM order_details_shipments ods
> JOIN shipment_pack_stock sps ON sps.pack_id=ods.pack_id
> AND ods.order_id=sps.order_id AND ods.item_id=sps.item_id
> JOIN stock_transaction_detail_106 std ON
> std.transaction_id=sps.transaction_id
> JOIN stock_transaction_detail_106 std_i ON std.stock_id =
> std_i.stock_id AND std_i.transaction_type = 'i'::bpchar
> WHERE shipment_item_id=$1 ) and source_detail[1]=3) LOOP
>
> ...............................
>
> ................................
>
> ......................................
>
> Similarly we have used in php pages and views
>
> SELECT abd.bill_no as bill_no,to_char(abd.bill_date,'dd/mm/yyyy')
> AS date,mp.product_desc as
> product_desc,std.quantity,std.area,rip.price AS rate,
> FROM acc_bill_items_106 abi
> JOIN acc_bill_details_106_table abd ON abd.bill_id=abi.bill_id
> AND abd.bill_status='act'
> JOIN stk_source_table ss ON ss.source_detail[2]=abi.item_id
> and ss.source_detail[1]=1
> JOIN stock_transaction_detail_106_table std ON
> std.stock_id=ss.stock_id
> JOIN stock_details_106_table sd106 ON sd106.stock_id=std.stock_id
> JOIN master_product_106_table mp ON mp.product_id=
> sd106.product_id
> JOIN receipt_item_price_106_table rip ON
> rip.receipt_item_id=abi.item_id
> WHERE abi.bill_id=$bill_id AND std.transaction_type='o' ;
>
> So where ever we have JOIN or used in functions the performance is
> very low some times query returns results takes more than 45 mints.
>
> Normally if we fetch Select * from some_table..........it returns
> very fast because it has less records.
>
> But when i put Select * from stk_source or to find the actual_cost
>
> EXPLAIN ANALYZE SELECT * FROM stk_source;
>
> i couln't able to retrieve the planner details waited for more
> than 50 to 60 mints
>
> so question is in spite of having good server with high
> configuration and also changed the postgresql configuration
> settings then why the system is crawling?
>
>
> *What are the other parameters have to look out or what are the
> other config settings to be change to have the best performance??*
>
> Kindly help to sort out this problem......
>
>
> Thanks in advance..................!!!!!!
>
> Regards
> Hashim
>
>
>
>
>
>
>
>
> On Fri, Oct 28, 2011 at 5:07 PM, Alban Hertroys
> <haramrae(at)gmail(dot)com <mailto:haramrae(at)gmail(dot)com>> wrote:
>
> On 28 October 2011 09:02, Mohamed Hashim <nmdhashim(at)gmail(dot)com
> <mailto:nmdhashim(at)gmail(dot)com>> wrote:
> > EXPLAIN select * from stk_source ;
> > QUERY
> > PLAN
> >
> -------------------------------------------------------------------------------------
> > Result (cost=0.00..6575755.39 rows=163132513 width=42)
> > -> Append (cost=0.00..6575755.39 rows=163132513 width=42)
> > -> Seq Scan on stk_source (cost=0.00..42.40
> rows=1080 width=45)
> > -> Seq Scan on stk_source (cost=0.00..20928.37
> rows=519179
> > width=42)
> > -> Seq Scan on stk_source (cost=0.00..85125.82
> rows=2111794
> > width=42)
> > -> Seq Scan on stk_source (cost=0.00..6469658.80
> rows=160500460
> > width=42)
>
> That plan gives you the best possible performance given your
> query.
> Your example probably doesn't fit the problem you're
> investigating.
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.
>
>
>
>
> --
> Regards
> Mohamed Hashim.N
> Mobile:09894587678
>
>
>
>
> --
> Regards
> Mohamed Hashim.N
> Mobile:09894587678


From: "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: Alban Hertroys <haramrae(at)gmail(dot)com>, Gregg Jaskiewicz <gryzman(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org, "Karthi(dot)(dot)" <karthiisforu(at)gmail(dot)com>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-01 12:53:42
Message-ID: 20111101125342.GT10975@staff-mud-56-27.rice.edu
Lists: pgsql-general pgsql-performance

On Tue, Nov 01, 2011 at 08:33:51AM +0530, Mohamed Hashim wrote:
> Any idea or suggestions how to improve my database best
> performance.................???
>
> Regards
> Hashim
>
Hi Hashim,

Ignoring the description of your tables, you should probably try
updating to the latest release 9.0.5. You are two point releases
back and they really, really, really fix bugs in each release or
they do not bother releasing.

Regards,
Ken


From: "Tomas Vondra" <tv(at)fuzzy(dot)cz>
To: "Marcus Engene" <mengpg2(at)engene(dot)se>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-01 13:13:34
Message-ID: c759e7996874c981952264043aa8a5ee.squirrel@sq.gransy.com
Lists: pgsql-general pgsql-performance

On 1 Listopad 2011, 10:57, Marcus Engene wrote:
> Hi Hashim,
>
> One workaround I've done is if something looking like this....
>
> select
> ...
> from
> table_linking_massive_table tlmt
> ,massive_table mt
> ,some_table1 st1
> ,some_table2 st2
> ,some_table3 st3
> ,some_table4 st4
> where
> tlmt.group_id = 123223 AND
> mt.id = tmlt.massive_table AND
> st1.massive_table = mt.id AND
> st2.massive_table = mt.id AND
> st3.massive_table = mt.id AND
> st4.massive_table = mt.id
>
> ...suddenly gets slow, it has helped to rewrite it as....
>
> select
> ...
> from
> (
> select
> ...
> from
> table_linking_massive_table tlmt
> ,massive_table mt
> where
> tlmt.group_id = 123223 AND
> mt.id = tmlt.massive_table AND
> ) as mt
> ,some_table1 st1
> ,some_table2 st2
> ,some_table3 st3
> ,some_table4 st4
> where
> tlmt.group_id = 123223 AND
> mt.id = tmlt.massive_table AND
> st1.massive_table = mt.id AND
> st2.massive_table = mt.id AND
> st3.massive_table = mt.id AND
> st4.massive_table = mt.id
>

Can you please post EXPLAIN ANALYZE of those queries? It's difficult to
see what's wrong when we don't know the plan (and the actual stats
gathered during execution). Use explain.depesz.com to post the output.

Tomas


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marcus Engene <mengpg2(at)engene(dot)se>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-01 13:43:00
Message-ID: 24830.1320154980@sss.pgh.pa.us
Lists: pgsql-general pgsql-performance

Marcus Engene <mengpg2(at)engene(dot)se> writes:
> After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of
> problems with queries with many joins. Queries that used to take 1ms
> suddenly take half a minute for no apparent reason.

Could we see a concrete test case, rather than hand waving? If there's
really a problem in 9.0, it's impossible to fix it on so little detail.

> One workaround I've done is if something looking like this....

The only way that should make a difference is if the total number
of tables in the query exceeds from_collapse_limit (or maybe
join_collapse_limit, depending on exactly how you wrote the query).
Perhaps you'd been running with nonstandard values of those settings
in 8.x, and forgot to transfer them into the new DB?

regards, tom lane
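
The settings Tom refers to can be checked and experimented with per session; a minimal sketch (plain psql statements, not from the thread; 8 is the documented default for both parameters):

    SHOW from_collapse_limit;       -- default 8
    SHOW join_collapse_limit;       -- default 8
    SET join_collapse_limit = 12;   -- session-level experiment: lets the planner consider larger join lists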


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, tv(at)fuzzy(dot)cz, ktm(at)rice(dot)edu
Cc: Marcus Engene <mengpg2(at)engene(dot)se>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-02 07:12:20
Message-ID: CACBfhZNoq_hjZyMnBTnrxR8rif31QvJ5dtdgQMdPwO0+x2AKpQ@mail.gmail.com
Lists: pgsql-general pgsql-performance

Dear All

Thanks for your suggestions & replies.

Below is a sample query which I ran for one particular bill_id:

EXPLAIN ANALYZE SELECT abd.bill_no AS bill_no, to_char(abd.bill_date,'dd/mm/yyyy') AS date,
       mp.product_desc AS product_desc, std.quantity, std.area, rip.price AS rate
  FROM acc_bill_items_106 abi
  JOIN acc_bill_details_106 abd ON abd.bill_id = abi.bill_id
  JOIN stk_source ss ON ss.source_detail[1] = 1 AND ss.source_detail[2] = abi.item_id
  JOIN stock_transaction_detail_106 std ON std.stock_id = ss.stock_id
  JOIN stock_details_106 sd106 ON sd106.stock_id = std.stock_id
  JOIN master_product_106 mp ON mp.product_id = sd106.product_id
  JOIN receipt_item_price_106 rip ON rip.receipt_item_id = abi.item_id
 WHERE abi.bill_id = 12680;

                                                                QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..7230339.59 rows=54369 width=39) (actual time=158156.895..158157.206 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..7149579.10 rows=8192 width=32) (actual time=158156.863..158157.172 rows=1 loops=1)
         ->  Nested Loop  (cost=0.00..7119922.60 rows=8192 width=27) (actual time=158156.855..158157.164 rows=1 loops=1)
               ->  Nested Loop  (cost=0.00..7086865.70 rows=8192 width=19) (actual time=158156.835..158157.143 rows=1 loops=1)
                     Join Filter: (abi.item_id = ss.source_detail[2])
                     ->  Nested Loop  (cost=0.00..604.54 rows=2 width=23) (actual time=2.782..2.786 rows=1 loops=1)
                           ->  Index Scan using acc_bill_details_106_pkey on acc_bill_details_106 abd  (cost=0.00..6.29 rows=1 width=12) (actual time=0.010..0.012 rows=1 loops=1)
                                 Index Cond: (bill_id = 12680)
                           ->  Nested Loop  (cost=0.00..598.19 rows=2 width=19) (actual time=2.770..2.772 rows=1 loops=1)
                                 Join Filter: (abi.item_id = rip.receipt_item_id)
                                 ->  Seq Scan on receipt_item_price_106 rip  (cost=0.00..162.48 rows=4216 width=11) (actual time=0.005..0.562 rows=4218 loops=1)
                                 ->  Materialize  (cost=0.00..140.59 rows=2 width=8) (actual time=0.000..0.000 rows=1 loops=4218)
                                       ->  Seq Scan on acc_bill_items_106 abi  (cost=0.00..140.58 rows=2 width=8) (actual time=0.412..0.412 rows=1 loops=1)
                                             Filter: (bill_id = 12680)
                     ->  Materialize  (cost=0.00..7024562.68 rows=819222 width=33) (actual time=0.035..153869.575 rows=19010943 loops=1)
                           ->  Append  (cost=0.00..7014065.57 rows=819222 width=33) (actual time=0.034..145403.828 rows=19010943 loops=1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..45.10 rows=5 width=36) (actual time=0.001..0.001 rows=0 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..22226.32 rows=2596 width=33) (actual time=0.033..118.019 rows=66356 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..90405.31 rows=10559 width=33) (actual time=0.010..490.712 rows=288779 loops=1)
                                       Filter: (source_detail[1] = 1)
                                 ->  Seq Scan on stk_source ss  (cost=0.00..6901388.84 rows=806062 width=33) (actual time=13.382..142493.302 rows=18655808 loops=1)
                                       Filter: (source_detail[1] = 1)
               ->  Index Scan using sd106_stock_id_idx on stock_details_106 sd106  (cost=0.00..4.00 rows=1 width=8) (actual time=0.014..0.014 rows=1 loops=1)
                     Index Cond: (sd106.stock_id = ss.stock_id)
         ->  Index Scan using master_product_pkey on master_product_106 mp  (cost=0.00..3.59 rows=1 width=13) (actual time=0.006..0.006 rows=1 loops=1)
               Index Cond: (mp.product_id = sd106.product_id)
   ->  Index Scan using std106_stock_id_idx on stock_transaction_detail_106 std  (cost=0.00..9.70 rows=4 width=19) (actual time=0.007..0.009 rows=1 loops=1)
         Index Cond: (std.stock_id = ss.stock_id)
 Total runtime: 158240.795 ms

http://explain.depesz.com/s/Tyc

Similarly, I have used such queries on various detail pages and views; if I query even one month of transactions, it also takes a very long time.

I will try to upgrade to the latest version and will keep trying to tune my queries; so changing the conf settings wouldn't help improve the performance any further?

Thanks & Regards
Hashim

On Tue, Nov 1, 2011 at 7:13 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Marcus Engene <mengpg2(at)engene(dot)se> writes:
> > After I upgraded from Postgres 8.3/8.4 to 9.0 I had all sorts of
> > problems with queries with many joins. Queries that used to take 1ms
> > suddenly take half a minute for no apparent reason.
>
> Could we see a concrete test case, rather than hand waving? If there's
> really a problem in 9.0, it's impossible to fix it on so little detail.
>
> > One workaround I've done is if something looking like this....
>
> The only way that should make a difference is if the total number
> of tables in the query exceeds from_collapse_limit (or maybe
> join_collapse_limit, depending on exactly how you wrote the query).
> Perhaps you'd been running with nonstandard values of those settings
> in 8.x, and forgot to transfer them into the new DB?
>
> regards, tom lane
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: tv(at)fuzzy(dot)cz, ktm(at)rice(dot)edu, Marcus Engene <mengpg2(at)engene(dot)se>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-02 14:26:50
Message-ID: 29599.1320244010@sss.pgh.pa.us
Lists: pgsql-general pgsql-performance

Mohamed Hashim <nmdhashim(at)gmail(dot)com> writes:
> The below are the sample query which i put for particular one bill_id

> EXPLAIN ANALYZE SELECT abd.bill_no as
> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as
> product_desc,std.quantity,std.area,rip.price AS rate
> FROM acc_bill_items_106 abi
> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
> JOIN stk_source ss ON ss.source_detail[1]=1 and
> ss.source_detail[2]=abi.item_id
> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
> JOIN master_product_106 mp ON mp.product_id= sd106.product_id
> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
> WHERE abi.bill_id=12680;

All the time seems to be going into the seqscan on stk_source and its
child tables. It looks like it would help if "ss.source_detail[1]=1 and
ss.source_detail[2]=abi.item_id" were indexable (particularly the
latter). Which probably means you need to rethink your data
representation. Putting things that you need to index on into an array
is not a very good design. I suppose you can do it if you're absolutely
set on it (functional indexes on (source_detail[1]) and (source_detail[2]))
but it appears to suck from a notational point of view too. Six months
from now, when you look at this code, are you going to remember what's
the difference between source_detail[1] and source_detail[2]? Not
without consulting your notes, I bet.

regards, tom lane
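
A minimal sketch of the expression ("functional") indexes Tom mentions, for illustration; the index names are invented here, and since stk_source is partitioned by inheritance each yearly child table needs its own copy:

    CREATE INDEX stk_source_detail1_idx ON stk_source ((source_detail[1]));
    CREATE INDEX stk_source_detail2_idx ON stk_source ((source_detail[2]));
    -- repeated on each child, e.g.:
    CREATE INDEX stk_source_201112_detail2_idx
        ON _100410_201112.stk_source ((source_detail[2]));
    ANALYZE stk_source;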


From: Mario Weilguni <roadrunner6(at)gmx(dot)at>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-03 15:02:07
Message-ID: 4EB2ACEF.8090305@gmx.at
Lists: pgsql-general pgsql-performance

Am 02.11.2011 08:12, schrieb Mohamed Hashim:
> Dear All
>
> Thanks for your suggestions & replies.
>
> The below are the sample query which i put for particular one bill_id
>
> EXPLAIN ANALYZE SELECT abd.bill_no as
> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as
> product_desc,std.quantity,std.area,rip.price AS rate
> FROM acc_bill_items_106 abi
> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
> JOIN stk_source ss ON ss.source_detail[1]=1 and
> ss.source_detail[2]=abi.item_id
> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
> JOIN master_product_106 mp ON mp.product_id= sd106.product_id
> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
> WHERE abi.bill_id=12680;

First I would try this:
explain analyze select * from stk_source where source_detail[1] = 1;
explain analyze select * from stk_source where source_detail[2] = 12356;

Both times you'll get sequential scans, and that's the root of the
problem. Oh, you mentioned that you use partitioning, but there seems to
be no condition for that.

You should really rethink your database schema: at least try to pull all
the indexable fields out of that int[] into real columns, and put indexes
on those fields.

Regards
Mario
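
A hypothetical sketch of the restructuring Mario suggests, for illustration only (the new column names are invented, not part of the original schema; the indexes would also have to be created on each yearly child table):

    ALTER TABLE stk_source
        ADD COLUMN sd_transaction_type integer,
        ADD COLUMN sd_transaction_id   integer;

    UPDATE stk_source
       SET sd_transaction_type = source_detail[1],
           sd_transaction_id   = source_detail[2];

    CREATE INDEX stk_source_sd_txn_type_idx ON stk_source (sd_transaction_type);
    CREATE INDEX stk_source_sd_txn_id_idx   ON stk_source (sd_transaction_id);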


From: "Tomas Vondra" <tv(at)fuzzy(dot)cz>
To: "Mario Weilguni" <roadrunner6(at)gmx(dot)at>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-03 16:08:11
Message-ID: 65a8332e0128eda27f0345da80946dc6.squirrel@sq.gransy.com
Lists: pgsql-general pgsql-performance

On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
> Am 02.11.2011 08:12, schrieb Mohamed Hashim:
>> Dear All
>>
>> Thanks for your suggestions & replies.
>>
>> The below are the sample query which i put for particular one bill_id
>>
>> EXPLAIN ANALYZE SELECT abd.bill_no as
>> bill_no,to_char(abd.bill_date,'dd/mm/yyyy') AS date,mp.product_desc as
>> product_desc,std.quantity,std.area,rip.price AS rate
>> FROM acc_bill_items_106 abi
>> JOIN acc_bill_details_106 abd ON abd.bill_id=abi.bill_id
>> JOIN stk_source ss ON ss.source_detail[1]=1 and
>> ss.source_detail[2]=abi.item_id
>> JOIN stock_transaction_detail_106 std ON std.stock_id=ss.stock_id
>> JOIN stock_details_106 sd106 ON sd106.stock_id=std.stock_id
>> JOIN master_product_106 mp ON mp.product_id= sd106.product_id
>> JOIN receipt_item_price_106 rip ON rip.receipt_item_id=abi.item_id
>> WHERE abi.bill_id=12680;
>
> First I would try this:
> explain analyze select * from stk_source where source_detail[1] = 1;
> explain analyze select * from stk_source where source_detail[2] = 12356;
>
> Both times you'll get sequential scans, and that's the root of the
> problem. Oh, you mentioned that you use partitioning, but there seems to
> be no condition for that.
>
> You should really rethink your database schema, at least try to pull out
> all indexable fields out of that int[] into columns, and use indices on
> those fields.

No doubt about that, querying tables using conditions on array columns is
not the best direction in most cases, especially when those tables are
huge.

Still, the interesting part here is that the OP claims this worked just
fine in the older version and after an upgrade the performance suddenly
dropped. This could be caused by many things, and we're just guessing
because we don't have any plans from the old version.

Tomas


From: Mario Weilguni <roadrunner6(at)gmx(dot)at>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-03 20:07:59
Message-ID: 4EB2F49F.3080903@gmx.at
Lists: pgsql-general pgsql-performance

Am 03.11.2011 17:08, schrieb Tomas Vondra:
> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
> <snip>
> No doubt about that, querying tables using conditions on array columns is
> not the best direction in most cases, especially when those tables are
> huge.
>
> Still, the interesting part here is that the OP claims this worked just
> fine in the older version and after an upgrade the performance suddenly
> dropped. This could be caused by many things, and we're just guessing
> because we don't have any plans from the old version.
>
> Tomas
>
>

Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about
problems after the migration from 8.x to 9.x. Or did I miss something here?

Regards,
Mario


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Mario Weilguni <roadrunner6(at)gmx(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 03:21:35
Message-ID: CACBfhZP7OUL+UpauYYSuNbOG-tGbB+TwKzhY7FkwrdvxDaM8Jg@mail.gmail.com
Lists: pgsql-general pgsql-performance

Hi all,

Thanks for all your responses.

Sorry for late response

Earlier we used Postgres 8.3.10 on a desktop computer (as the server) with a lower configuration (I2 core with 4GB RAM), and the application was slow then too; I didn't change any postgres config settings.

Thinking the application was slow perhaps because of that low configuration, we opted to go for the higher-configuration server (with RAID 1) which I mentioned earlier.

I thought the application would get faster, but unfortunately there is no improvement, so I tried changing the postgres config settings and tuning my queries wherever possible, but I still was not able to improve the performance.

So will it help if we try a GiST or GIN index on the integer array[] column (source_detail), together with enable_seqscan=off and default_statistics_target=1000?
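
For reference, a minimal sketch of the GIN option being asked about (the index name is invented; PostgreSQL 9.0 supports GIN on integer[] out of the box):

    CREATE INDEX stk_source_detail_gin_idx ON stk_source USING gin (source_detail);

    -- a GIN index can serve containment predicates such as
    --   WHERE source_detail @> ARRAY[1, abi.item_id]
    -- but not the positional tests source_detail[1] = 1 AND source_detail[2] = abi.item_id
    -- on their own, so the queries would need a containment clause added as a pre-filter.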

Regards
Hashim

On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <roadrunner6(at)gmx(dot)at> wrote:

> Am 03.11.2011 17:08, schrieb Tomas Vondra:
>
>> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
>> <snip>
>>
>> No doubt about that, querying tables using conditions on array columns is
>> not the best direction in most cases, especially when those tables are
>> huge.
>>
>> Still, the interesting part here is that the OP claims this worked just
>> fine in the older version and after an upgrade the performance suddenly
>> dropped. This could be caused by many things, and we're just guessing
>> because we don't have any plans from the old version.
>>
>> Tomas
>>
>>
>>
> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about
> problems after the migration from 8.x to 9.x. Or did I miss something here?
>
> Regards,
> Mario
>
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)**
> org <pgsql-performance(at)postgresql(dot)org>)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: Sam Gendler <sgendler(at)ideasculptor(dot)com>
To: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
Cc: Mario Weilguni <roadrunner6(at)gmx(dot)at>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM, Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 07:19:08
Message-ID: 30423C8E-DB87-4E07-B053-0AD141F1D8B9@ideasculptor.com
Lists: pgsql-general pgsql-performance

Sent from my iPhone

On Nov 7, 2011, at 7:21 PM, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:

> Hi all,
>
> Thanks for all your responses.
>
> Sorry for late response
>
> Earlier we used Postgres8.3.10 with Desktop computer (as server) and configuration of the system (I2 core with 4GB RAM) and also the application was slow i dint change any postgres config settings.
>
> May be because of low config We thought the aplication is slow so we opted to go for higher configuration server(with RAID 1) which i mentioned earlier.
>
> I thought the application will go fast but unfortunately there is no improvement so i tried to change the postgres config settings and trying to tune my queries wherever possible but still i was not able to..........improve the performance..
>
>
> So will it helpful if we try GIST or GIN for integer array[] colum (source_detail) with enable_seqscan=off and default_statistics_target=1000?

Oh dear! Where to even begin? There is no way to suggest possible solutions without knowing a lot more about how things are currently configured and what, exactly, about your application is slow. Just to address your particular suggestions, increasing the default statistics target would only help if an explain analyze for a slow query indicates that the query planner is using inaccurate row count estimates for one or more steps in a query plan. Depending upon the frequency of this problem it may be better to increase statistics target just for individual columns rather than across the entire db cluster. Setting enable_seqscan to off is almost never a good solution to a problem, especially db-wide. If the planner is selecting a sequential scan when an alternative strategy would perform much better, then it is doing so because your configuration is not telling the query planner accurate values for the cost of sequential access vs random access - or else the statistics are inaccurate causing it to select a seq scan because it thinks it will traverse more rows than it actually will.

In short, you need to read a lot more about performance tuning Postgres rather than taking stab-in-the-dark guesses for solutions. I believe it was pointed out that at least one query that is problematic for you is filtering based on the value of individual indexes of an array column - which means you actually need to break those values into separate columns with indexes on them or create an index on column[x] so that the planner can use that. But if the problem is general slowness across your whole app, it is possible that the way your app uses the db access API is inefficient or you may have a misconfiguration that causes all db access to be slow. Depending on your hardware and platform, using the default configuration will result in db performance that is far from optimal. The default config is pretty much a minimal config.

I'd suggest you spend at least a day or two reading up on Postgres performance tuning and investigating your particular problems. You may make quite a bit of improvement without our help and you'll be much more knowledgeable about your db installation when you are done. At the very least, please look at the mailing list page on the Postgres website and read the links about how to ask performance questions so that you at least provide the list with enough information about your problems that others can offer useful feedback. I'd provide a link, but I'm on a phone.

--sam
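
A sketch of the per-column alternative Sam describes (source_detail is simply the column discussed in this thread, and 1000 is the value Hashim proposed; whether it helps depends on the actual plans):

    ALTER TABLE stk_source ALTER COLUMN source_detail SET STATISTICS 1000;
    ANALYZE stk_source;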

>
> Regards
> Hashim
>
>
>
> On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <roadrunner6(at)gmx(dot)at> wrote:
> Am 03.11.2011 17:08, schrieb Tomas Vondra:
> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
> <snip>
>
> No doubt about that, querying tables using conditions on array columns is
> not the best direction in most cases, especially when those tables are
> huge.
>
> Still, the interesting part here is that the OP claims this worked just
> fine in the older version and after an upgrade the performance suddenly
> dropped. This could be caused by many things, and we're just guessing
> because we don't have any plans from the old version.
>
> Tomas
>
>
>
> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?
>
> Regards,
> Mario
>
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
>
>
> --
> Regards
> Mohamed Hashim.N
> Mobile:09894587678


From: Lucas Mocellin <lucasmocellin(at)kktua(dot)org>
To: Sam Gendler <sgendler(at)ideasculptor(dot)com>
Cc: Mohamed Hashim <nmdhashim(at)gmail(dot)com>, Mario Weilguni <roadrunner6(at)gmx(dot)at>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 10:34:10
Message-ID: CAFsUHabK2zMgLtR2kgM2e2yw-6STGfFjH2o7UPrPgM8yJ8-8CQ@mail.gmail.com
Lists: pgsql-general pgsql-performance

How about your hard disks?

You could get a little help from a RAID10 of 15k SAS disks. If you don't even
have RAID, it would help a lot!

Lucas.

2011/11/8 Sam Gendler <sgendler(at)ideasculptor(dot)com>

>
>
> Sent from my iPhone
>
> On Nov 7, 2011, at 7:21 PM, Mohamed Hashim <nmdhashim(at)gmail(dot)com> wrote:
>
> Hi all,
>
> Thanks for all your responses.
>
> Sorry for late response
>
> Earlier we used Postgres8.3.10 with Desktop computer (as server) and
> configuration of the system (I2 core with 4GB RAM) and also the application
> was slow i dint change any postgres config settings.
>
> May be because of low config We thought the aplication is slow so we opted
> to go for higher configuration server(with RAID 1) which i mentioned
> earlier.
>
> I thought the application will go fast but unfortunately there is no
> improvement so i tried to change the postgres config settings and trying to
> tune my queries wherever possible but still i was not able
> to..........improve the performance..
>
>
> So will it helpful if we try GIST or GIN for integer array[] colum
> (source_detail) with enable_seqscan=off and default_statistics_target=1000?
>
>
> Oh dear! Where to even begin? There is no way to suggest possible
> solutions without knowing a lot more about how things are currently
> configured and what, exactly, about your application is slow. Just to
> address your particular suggestions, increasing the default statistics
> target would only help if an explain analyze for a slow query indicates
> that the query planner is using inaccurate row count estimates for one or
> more steps in a query plan. Depending upon the frequency of this problem it
> may be better to increase statistics target just for individual columns
> rather than across the entire db cluster. Setting enable_seqscan to off is
> almost never a good solution to a problem, especially db-wide. If the
> planner is selecting a sequential scan when an alternative strategy would
> perform much better, then it is doing so because your configuration is
> not telling the query planner accurate values for the cost of sequential
> access vs random access - or else the statistics are inaccurate causing it
> to select a seq scan because it thinks it will traverse more rows than it
> actually will.
>
> In short, you need to read a lot more about performance tuning Postgres
> rather than taking stab-in-the-dark guesses for solutions. I believe it was
> pointed out that at least one query that is problematic for you is
> filtering based on the value of individual indexes of an array column -
> which means you actually need break those values into separate columns
> with indexes on them or create an index on column[x] so that the planner
> can use that. But if the problem is general slowness across your whole app,
> it is possible that the way your app uses the db access API is inefficient
> or you may have a misconfiguration that causes all db access to be slow.
> Depending on your hardware and platform, using the default configuration
> will result in db performance that is far from optimal. The default config
> is pretty much a minimal config.
>
> I'd suggest you spend at least a day or two reading up on Postgres
> performance tuning and investigating your particular problems. You may make
> quite a bit of improvement without our help and you'll be much more knowledgable
> about your db installation when you are done. At the very least, please
> look at the mailing list page on the Postgres website and read the links
> about how to ask performance questions so that you at least provide the
> list with enough information about your problems that others can offer
> useful feedback. I'd provide a link, but I'm on a phone.
>
> --sam
>
>
>
> Regards
> Hashim
>
>
>
> On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <roadrunner6(at)gmx(dot)at> wrote:
>
>> Am 03.11.2011 17:08, schrieb Tomas Vondra:
>>
>>> On 3 Listopad 2011, 16:02, Mario Weilguni wrote:
>>> <snip>
>>>
>>> No doubt about that, querying tables using conditions on array columns is
>>> not the best direction in most cases, especially when those tables are
>>> huge.
>>>
>>> Still, the interesting part here is that the OP claims this worked just
>>> fine in the older version and after an upgrade the performance suddenly
>>> dropped. This could be caused by many things, and we're just guessing
>>> because we don't have any plans from the old version.
>>>
>>> Tomas
>>>
>>>
>>>
>> Not really, Mohamed always said he has 9.0.3, Marcus Engene wrote about
>> problems after the migration from 8.x to 9.x. Or did I miss something here?
>>
>> Regards,
>> Mario
>>
>>
>>
>> --
>> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)**
>> org <pgsql-performance(at)postgresql(dot)org>)
>> To make changes to your subscription:
>> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>
>>
>
>
>
> --
> Regards
> Mohamed Hashim.N
> Mobile:09894587678
>
>


From: "Tomas Vondra" <tv(at)fuzzy(dot)cz>
To: "Mohamed Hashim" <nmdhashim(at)gmail(dot)com>
Cc: "Mario Weilguni" <roadrunner6(at)gmx(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 11:50:08
Message-ID: c5df85c09ae21f15f73e510f51b3c128.squirrel@sq.gransy.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-general pgsql-performance

On 8 November 2011, 4:21, Mohamed Hashim wrote:
> Hi all,
>
> Thanks for all your responses.
>
> Sorry for the late response
>
> Earlier we used Postgres 8.3.10 on a desktop computer as the server (system
> configuration: I2 core with 4GB RAM), and the application was slow there as
> well; I didn't change any postgres config settings.
>
> Maybe because of the low config we thought the application was slow, so we
> opted to go for the higher-configuration server (with RAID 1) which I
> mentioned earlier.
>
> I thought the application would run faster, but unfortunately there was no
> improvement, so I tried to change the postgres config settings and to tune
> my queries wherever possible, but I was still not able to improve the
> performance.

As Sam Gendler already wrote, we really can't help you until you post all
the relevant info. So far we've seen a single EXPLAIN ANALYZE output and a
very vague description of the hardware.

We need to know more about the hardware and the basic config options
(shared buffers, effective cache size, work mem, etc.). We need to know
how much memory is actually available to PostgreSQL and page cache (how
much is consumed by the application - as I understand it, it runs on the
same machine). We need to know what OS it's running on, and we need to see
iostat/vmstat output collected when the app is slow.

Please read this and perform the basic tuning (and let us know what values
you've used):

http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
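For an 8GB machine, the usual starting points from that page look roughly like
the values below; they are illustrative only, and still need to be reduced if
the application running on the same box takes a large share of the memory:

# postgresql.conf - example starting values for a mostly-dedicated 8GB server
shared_buffers = 2GB              # around 25% of RAM
effective_cache_size = 5GB        # what the OS page cache can realistically hold
work_mem = 16MB                   # per sort/hash and per backend, so keep it modest
maintenance_work_mem = 256MB
checkpoint_segments = 16          # 9.0 still uses checkpoint_segments
checkpoint_completion_target = 0.8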

Also post

> So will it be helpful if we try GiST or GIN for the integer array[] column
> (source_detail) with enable_seqscan=off and
> default_statistics_target=1000?

This is severely premature - it might help, but you should do the basic
tuning first. It might actually cause you more trouble. You've already
made this mistake once - fixing something without verifying it's actually a
problem - by requesting a RAID1 config. Don't make that mistake again.

Tomas


From: Mohamed Hashim <nmdhashim(at)gmail(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: Mario Weilguni <roadrunner6(at)gmx(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 12:15:50
Message-ID: CACBfhZMajgneA6x4r63bNayYtsfpmYV8UUOhBRcr5h-emmgxKw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-general pgsql-performance

Hi Sam, Tomas

In my first post I mentioned how much I allocated for shared buffers,
effective cache size, work mem, etc., along with my OS and hardware
information and the basic settings I have changed.

And regarding EXPLAIN ANALYZE, I gave one sample query because I thought that
if I tuned that particular table, which has almost 16 crore (160 million)
records, my problem would be solved...

Regards
Hashim

On Tue, Nov 8, 2011 at 5:20 PM, Tomas Vondra <tv(at)fuzzy(dot)cz> wrote:

> On 8 November 2011, 4:21, Mohamed Hashim wrote:
> > Hi all,
> >
> > Thanks for all your responses.
> >
> > Sorry for the late response
> >
> > Earlier we used Postgres 8.3.10 on a desktop computer as the server (system
> > configuration: I2 core with 4GB RAM), and the application was slow there as
> > well; I didn't change any postgres config settings.
> >
> > Maybe because of the low config we thought the application was slow, so we
> > opted to go for the higher-configuration server (with RAID 1) which I
> > mentioned earlier.
> >
> > I thought the application would run faster, but unfortunately there was no
> > improvement, so I tried to change the postgres config settings and to tune
> > my queries wherever possible, but I was still not able to improve the
> > performance.
>
> As Sam Gendler already wrote, we really can't help you until you post all
> the relevant info. So far we've seen a single EXPLAIN ANALYZE output and a
> very vague description of the hardware.
>
> We need to know more about the hardware and the basic config options
> (shared buffers, effective cache size, work mem, etc.). We need to know
> how much memory is actually available to PostgreSQL and page cache (how
> much is consumed by the application - as I understand it, it runs on the
> same machine). We need to know what OS it's running on, and we need to see
> iostat/vmstat output collected when the app is slow.
>
> Please read this and perform the basic tuning (and let us know what values
> you've used):
>
> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
>
> Also post
>
> > So will it be helpful if we try GiST or GIN for the integer array[] column
> > (source_detail) with enable_seqscan=off and
> > default_statistics_target=1000?
>
> This is severely premature - it might help, but you should do the basic
> tuning first. It might actually cause you more trouble. You've already
> made this mistake once - fixing something without verifying it's actually a
> problem - by requesting a RAID1 config. Don't make that mistake again.
>
> Tomas
>
>

--
Regards
Mohamed Hashim.N
Mobile:09894587678


From: "Tomas Vondra" <tv(at)fuzzy(dot)cz>
To: "Mohamed Hashim" <nmdhashim(at)gmail(dot)com>
Cc: "Tomas Vondra" <tv(at)fuzzy(dot)cz>, "Mario Weilguni" <roadrunner6(at)gmx(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 12:40:25
Message-ID: 1f1660cbba9479276aa885f027cbbf0d.squirrel@sq.gransy.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-general pgsql-performance

On 8 November 2011, 13:15, Mohamed Hashim wrote:
> Hi Sam, Tomas
>
> In my first post I mentioned how much I allocated for shared buffers,
> effective cache size, work mem, etc., along with my OS and hardware
> information and the basic settings I have changed.

Sorry, I've missed that first message - the archive did not list it for
some reason.

> And regarding EXPLAIN ANALYZE, I gave one sample query because I thought
> that if I tuned that particular table, which has almost 16 crore (160
> million) records, my problem would be solved...

OK, so the problem is either the WHERE conditions referencing the array
column, or a bug in 9.0.x.

Can't help you with the upgrade issue, but using an array like this is a
bad design and will cause you all sorts of problems - why are you not
using regular columns, anyway?

I see you usually reference source_detail[1] - what is the expected
selectivity of this condition? What portion of the table matches it? How
many possible values are there?
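A quick, ad-hoc way to answer those questions (purely illustrative; run it
against a single child partition first if scanning the whole table is too slow):

SELECT source_detail[1] AS first_elem, count(*) AS cnt
FROM stk_source
GROUP BY source_detail[1]
ORDER BY cnt DESC
LIMIT 20;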

And it does not make sense to me to partition by date when you're not
querying the data by date.

Try this (one by one; a rough SQL sketch of option 2 follows the list):

1) CREATE INDEX src_idx ON stk_source ((source_detail[1])) for each partition
2) add a regular column source_detail_val with the value of
source_detail[1] and create an index on it
3) repartition the table by source_detail[1] instead of date
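A rough sketch of option 2, using the suggested column name; the per-partition
notes and the trigger remark are assumptions, not something verified in this
thread:

-- mirror the first array element into a plain column and index it
ALTER TABLE stk_source ADD COLUMN source_detail_val integer;  -- recurses to the children
UPDATE stk_source SET source_detail_val = source_detail[1];   -- heavy on ~160M rows, consider doing it per partition
CREATE INDEX stk_source_sdval_idx ON stk_source (source_detail_val);  -- repeat on each child; indexes are not inherited
ANALYZE stk_source;
-- new rows must fill source_detail_val as well, e.g. from the application
-- or from the existing insert trigger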

Tomas


From: Mario Weilguni <roadrunner6(at)gmx(dot)at>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Problem with postgresql 9.03, 8GB RAM,Quadcore Processor Server--Need help!!!!!!!
Date: 2011-11-08 12:49:06
Message-ID: 4EB92542.2030502@gmx.at
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-general pgsql-performance

Am 08.11.2011 13:15, schrieb Mohamed Hashim:
> Hi Sam,Tomas
>
> In my first post i have mentioned all how much shared (shared buffers,
> effective cache size, work mem, etc.) and my OS and hardware
> information and what are the basic settings i have changed
>
> and regarding Explain analyze i gave one sample query because if i
> tune that particular table which has records almost 16crore i thought
> my problem will solve...
>
Just curious, are those array items [1] and [2] just samples, or do you
actually use more of them in performance-relevant conditions? If just those
two are relevant, I would change the schema to use real columns instead. And
you seem to use partitioning, but you have no partition condition?
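For context on that last question: with inheritance-based partitioning, a
partition condition is simply a CHECK constraint on each child that
constraint_exclusion (set to 'partition' by default in 9.0) can use to skip
irrelevant children. A purely hypothetical sketch, since the thread never
shows what the yearly children are actually split on (the value 42 is
invented):

-- hypothetical: if the children were split by source_detail[1], a CHECK
-- constraint like this on each child would let the planner prune the rest
ALTER TABLE _100410_201112.stk_source
  ADD CONSTRAINT stk_source_201112_chk CHECK (source_detail[1] = 42);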