Very large record sizes and resource usage

From: jtkells(at)verizon(dot)net
To: pgsql-performance(at)postgresql(dot)org
Subject: Very large record sizes and resource usage
Date: 2011-07-08 00:02:26
Message-ID: g6ic17pnltbr4nd7gerhp2m574j1q8cfl4@4ax.com
Lists: pgsql-performance

Are there any guidelines for sizing work_mem, shared_buffers, and other
configuration parameters with regard to very large records? I have a
table with a bytea column, and I am told that some of these values
exceed 400MB. On several servers I am having trouble reading, and more
specifically dumping, these records (the table) with pg_dump.
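
For context, the dump is being done roughly as follows (the database
and table names below are placeholders, not the actual ones):

    # custom-format dump of just the table holding the large bytea values
    pg_dump -Fc -t big_bytea_table -f big_bytea_table.dump mydb

The configuration parameters mentioned above are still at or near their
defaults on the affected servers.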

Thanks
