Re: Loading the entire DB into RAM

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Matt Davies | Postgresql List <matt-postgresql(at)mattdavies(dot)net>
Cc: "Charles A(dot) Landemaine" <landemaine(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Loading the entire DB into RAM
Date: 2006-04-07 15:49:14
Message-ID: 1144424954.32266.129.camel@state.g2switchworks.com
Lists: pgsql-performance

On Fri, 2006-04-07 at 09:54, Matt Davies | Postgresql List wrote:
> If memory serves me correctly I have seen several posts about this in
> the past.
>
> I'll try to recall highlights.
>
> 1. Create an md device in Linux large enough to hold the data set
> you want to store.
> 2. Create an HD-based copy somewhere as your permanent storage mechanism.
> 3. Start up your PostgreSQL instance with the md device as the data store.
> 4. Load your data into the md instance.
> 5. Figure out how you will change indexes _and_ ensure that your disk
> storage is consistent with your md instance.

SNIP

> Either way you do it, I can't think of an out-of-the-box method for
> doing it. Somehow one has to transfer data from permanent storage to
> the md instance and, likewise, back to permanent storage.

dd could do that. Just have a third partition that holds the drive
image. Start up the mirror set, dd the file system into place on the md
device. When you're ready to shut the machine down or back it up, shut
down the postmaster, sync the md drive, dd the filesystem back off to
the image backup drive.
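
Roughly something like this (device names and mount points are just
examples, and the md setup itself is left out):

    # bring the saved image up on the md device and start postgres
    dd if=/dev/sda3 of=/dev/md0 bs=1M
    mount /dev/md0 /var/lib/pgsql/data
    pg_ctl -D /var/lib/pgsql/data start

    # before shutdown or backup: stop postgres, flush, copy back to disk
    pg_ctl -D /var/lib/pgsql/data stop
    sync
    umount /var/lib/pgsql/data
    dd if=/dev/md0 of=/dev/sda3 bs=1M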

But I'd really just recommend getting a LOT of RAM and letting the
kernel do all the caching. If you've got a 2 gig database and 4 gigs of
RAM, you should be gold.
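
If you go that route, about the only thing worth setting on the
postgres side is telling the planner how much cache it can count on.
Something like this in postgresql.conf (numbers are just an example for
a 4 gig box, and older releases want these in 8 kB pages instead of
MB/GB units):

    shared_buffers = 256MB        # keep modest; the kernel cache does the heavy lifting
    effective_cache_size = 3GB    # roughly the RAM left over for the OS page cache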
