Re: tracking commit timestamps

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Petr Jelinek <petr(at)2ndquadrant(dot)com>
Cc: Steve Singer <steve(at)ssinger(dot)info>, Andres Freund <andres(at)2ndquadrant(dot)com>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Anssi Kääriäinen <anssi(dot)kaariainen(at)thl(dot)fi>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Jaime Casanova <jaime(at)2ndquadrant(dot)com>
Subject: Re: tracking commit timestamps
Date: 2014-11-08 14:42:40
Message-ID: CA+TgmoZ_trmwOrs19CTByxVLtaJ5gstyfP708h=QTb3GCWwUxg@mail.gmail.com
Lists: pgsql-hackers pgsql-www

On Sat, Nov 8, 2014 at 5:35 AM, Petr Jelinek <petr(at)2ndquadrant(dot)com> wrote:
> That's not what I said. I am actually OK with adding the LSN if people see
> it useful.
> I was just wondering if we can make the record smaller somehow. 24 bytes per
> txid is around 96 GB of data for the whole txid range, and it won't work with
> pages smaller than ~4 kB unless we add 6-char support to SLRU (which is not
> hard, and we could also just not allow track_commit_timestamps to be turned
> on with a smaller page size...).
>
> I remember somebody was worried about this already during the original patch
> submission and it can't be completely ignored in the discussion about adding
> more stuff into the record.

Fair point. Sorry I misunderstood.

I think the key question here is the time for which the data needs to
be retained. 2^32 of anything is a lot, but why keep around that
number of records rather than more (after all, we have epochs to
distinguish one use of a given txid from another) or fewer? Obvious
alternatives include:

- Keep the data for some period of time; discard the data when it's
older than some threshold.
- Keep a certain amount of total data; every time we create a new
file, discard the oldest one.
- Let consumers of the data say how much they need, and throw away
data when it's no longer needed by the oldest consumer.
- Some combination of the above.
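The combination case could look roughly like the sketch below. Everything here is hypothetical (the function name, the file-tuple layout, and the consumer-cutoff list are invented for illustration; this is not PostgreSQL code): apply the age threshold, then the total-size cap, but never discard a file still needed by the oldest registered consumer.

```python
# Illustrative sketch of combining the retention rules above.
# All names here are hypothetical, not an actual PostgreSQL API.
import time

def oldest_file_to_keep(files, max_age_secs, max_total_bytes,
                        consumer_cutoffs, now=None):
    """files: list of (start_txid, mtime, size) tuples, oldest first.
    Returns the index of the first file that must be retained."""
    now = time.time() if now is None else now
    keep_from = 0

    # Rule 1: age threshold - candidate-drop files older than max_age_secs.
    while keep_from < len(files) - 1 and now - files[keep_from][1] > max_age_secs:
        keep_from += 1

    # Rule 2: total-size cap - drop oldest files while over budget.
    total = sum(size for _, _, size in files[keep_from:])
    while keep_from < len(files) - 1 and total > max_total_bytes:
        total -= files[keep_from][2]
        keep_from += 1

    # Rule 3: consumers win - walk back so the file containing the
    # oldest still-needed txid is never discarded.
    if consumer_cutoffs:
        oldest_needed = min(consumer_cutoffs)
        while keep_from > 0 and files[keep_from][0] > oldest_needed:
            keep_from -= 1
    return keep_from
```

The design choice embodied here is that consumer requirements override the time- and size-based limits, matching the third bullet: data is only truly discardable once no consumer needs it.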

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
