POC: Sharing record typmods between backends

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: POC: Sharing record typmods between backends
Date: 2017-04-07 05:21:35
Message-ID: CAEepm=0ZtQ-SpsgCyzzYpsXS6e=kZWqk3g5Ygn3MDV7A8dabUA@mail.gmail.com

Hi hackers,

Tuples can have type RECORDOID and a typmod number that identifies a
"blessed" TupleDesc in a backend-private cache. To support the
sharing of such tuples through shared memory and temporary files, I
think we need a typmod registry in shared memory. Here's a
proof-of-concept patch for discussion. I'd be grateful for any
feedback and/or flames.
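
For readers who haven't hit this before, here's a minimal sketch of the
existing backend-local mechanism, using the standard tupdesc/typcache APIs
(headers access/tupdesc.h, funcapi.h, utils/typcache.h); this is
illustrative, not code from the patch:

TupleDesc tupdesc;
TupleDesc cached;

/* Build a transient record type. */
tupdesc = CreateTemplateTupleDesc(2, false);
TupleDescInitEntry(tupdesc, (AttrNumber) 1, "a", INT4OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber) 2, "b", TEXTOID, -1, 0);

/* "Bless" it: assigns a typmod known only to this backend's cache. */
tupdesc = BlessTupleDesc(tupdesc);
Assert(tupdesc->tdtypeid == RECORDOID && tupdesc->tdtypmod >= 0);

/* Backend-local code can decode tuples of this shape via the typmod... */
cached = lookup_rowtype_tupdesc(RECORDOID, tupdesc->tdtypmod);
ReleaseTupleDesc(cached);
/* ...but the typmod means nothing in any other backend. */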

This is a problem I ran into in my parallel hash join project. Robert
pointed it out to me and told me to go read tqueue.c for details, and
my first reaction was: I'll code around this by teaching the planner
to avoid sharing tuples from paths that produce transient record types
based on tlist analysis[1]. Aside from being a cop-out, that approach
doesn't work because the planner doesn't actually know what types the
executor might come up with since some amount of substitution for
structurally-similar records seems to be allowed[2] (though I'm not
sure I can explain that). So... we're gonna need a bigger boat.

The patch still uses typcache.c's backend-private cache, but if the
backend is currently "attached" to a shared registry then that cache
functions as a write-through cache. There is no cache-invalidation
problem because registered typmods are never unregistered. parallel.c
exports the leader's existing record typmods into a shared registry,
and attaches to it in workers. A DSM detach hook returns backends to
private cache mode when parallelism ends.
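
For illustration, such a detach hook can be as simple as this sketch;
on_dsm_detach() is the existing callback API, while the
CurrentSharedRecordTypmodRegistry global is an assumption (a name of that
shape appears in later versions of the patch):

/* Sketch only: revert typcache.c to private cache mode on detach. */
static void
shared_record_typmod_registry_detach(dsm_segment *segment, Datum datum)
{
    CurrentSharedRecordTypmodRegistry.shared = NULL;    /* assumed global */
}

/* Registered once at attach time: */
on_dsm_detach(segment, shared_record_typmod_registry_detach, (Datum) 0);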

Some thoughts:

* Maybe it would be better to have just one DSA area, rather than the
one controlled by execParallel.c (for executor nodes to use) and this
new one controlled by parallel.c (for the ParallelContext). Those
scopes are approximately the same at least in the parallel query case,
but...

* It would be nice for the SharedRecordTypmodRegistry to be able to
survive longer than a single parallel query, perhaps in a per-session
DSM segment. Perhaps eventually we will want to consider a
query-scoped area, a transaction-scoped area and a session-scoped
area? I didn't investigate that for this POC.

* It seemed to be a reasonable goal to avoid allocating an extra DSM
segment for every parallel query, so the new DSA area is created
in-place. 192KB turns out to be enough to hold an empty
SharedRecordTypmodRegistry due to dsa.c's superblock allocation scheme
(that's two 64KB size class superblocks + some DSA control
information). It'll create a new DSM segment as soon as you start
using blessed records, and will do so for every parallel query you
start from then on with the same backend. Erm, maybe adding 192KB to
every parallel query DSM segment won't be popular... (A sketch of the
in-place creation follows this list.)

* Perhaps simplehash + an LWLock would be better than dht, but I
haven't looked into that. Can it be convinced to work in DSA memory
and to grow on demand?
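
To illustrate the in-place creation mentioned in the third point above,
here's a sketch using the existing dsa_create_in_place() and
shm_toc_allocate() APIs; pcxt is the ParallelContext, and the size
constant and tranche choice are illustrative only:

/* Sketch: carve the registry's DSA area out of the per-query DSM segment. */
#define REGISTRY_DSA_SIZE (192 * 1024)  /* enough for an empty registry */

char *place;
dsa_area *area;

place = shm_toc_allocate(pcxt->toc, REGISTRY_DSA_SIZE);
area = dsa_create_in_place(place, REGISTRY_DSA_SIZE,
                           LWTRANCHE_PARALLEL_QUERY_DSA, pcxt->seg);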

Here's one way to hit the new code path, so that record types blessed
in a worker are accessed from the leader:

CREATE TABLE foo AS SELECT generate_series(1, 10) AS x;
CREATE OR REPLACE FUNCTION make_record(n int)
  RETURNS RECORD LANGUAGE plpgsql PARALLEL SAFE AS
$$
BEGIN
  RETURN CASE n
           WHEN 1 THEN ROW(1)
           WHEN 2 THEN ROW(1, 2)
           WHEN 3 THEN ROW(1, 2, 3)
           WHEN 4 THEN ROW(1, 2, 3, 4)
           ELSE ROW(1, 2, 3, 4, 5)
         END;
END;
$$;
SET force_parallel_mode = 1;
SELECT make_record(x) FROM foo;

PATCH

1. Apply dht-v3.patch[3].
2. Apply shared-record-typmod-registry-v1.patch.
3. Apply rip-out-tqueue-remapping-v1.patch.

[1] https://www.postgresql.org/message-id/CAEepm%3D2%2Bzf7L_-eZ5hPW5%3DUS%2Butdo%3D9tMVD4wt7ZSM-uOoSxWg%40mail.gmail.com
[2] https://www.postgresql.org/message-id/CA+TgmoZMH6mJyXX=YLSOvJ8jULFqGgXWZCr_rbkc1nJ+177VSQ@mail.gmail.com
[3] https://www.postgresql.org/message-id/flat/CAEepm%3D3d8o8XdVwYT6O%3DbHKsKAM2pu2D6sV1S_%3D4d%2BjStVCE7w%40mail.gmail.com

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
rip-out-tqueue-remapping-v1.patch application/octet-stream 38.3 KB
shared-record-typmod-registry-v1.patch application/octet-stream 35.9 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-30 05:09:22
Message-ID: CAEepm=03JUH2xFtMsuqfp-83iJTgXpGHzwb18--5XHJTVBexdg@mail.gmail.com

On Fri, Apr 7, 2017 at 5:21 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> * It would be nice for the SharedRecordTypmodRegistry to be able to
> survive longer than a single parallel query, perhaps in a per-session
> DSM segment. Perhaps eventually we will want to consider a
> query-scoped area, a transaction-scoped area and a session-scoped
> area? I didn't investigate that for this POC.

This seems like the right way to go. I think there should be one
extra patch in this patch stack, to create a per-session DSA area (and
perhaps a "SharedSessionState" struct?) that worker backends can
attach to. It could be created when you first run a parallel query,
and then reused for all parallel queries for the rest of your session.
So, after you've run one parallel query, all future record typmod
registrations would get pushed (write-through style) into shmem, for
use by other backends that you might start in future parallel queries.
That will avoid having to copy the leader's registered record typmods
into shmem for every query going forward (the behaviour of the current
POC patch).

> * Perhaps simplehash + an LWLock would be better than dht, but I
> haven't looked into that. Can it be convinced to work in DSA memory
> and to grow on demand?

Any views on this?

> 1. Apply dht-v3.patch[3].
> 2. Apply shared-record-typmod-registry-v1.patch.
> 3. Apply rip-out-tqueue-remapping-v1.patch.

Here's a rebased version of the second patch (the other two still
apply). It's still POC code only and still uses a
per-parallel-context DSA area for space, not the per-session one I am
now proposing we develop, if people are in favour of the approach.

In case it wasn't clear from my earlier description, a nice side
effect of using a shared typmod registry is that you can delete 85% of
tqueue.c (see patch #3), so if you don't count the hash table
implementation we come out about even in terms of lines of code.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmod-registry-v2.patch application/octet-stream 35.9 KB

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-30 06:45:31
Message-ID: CAFiTN-u4uX2imtMVn_Q=7-TibM95gZkrs1Ep_Hnug=aO+r-YLA@mail.gmail.com

On Tue, May 30, 2017 at 1:09 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> * Perhaps simplehash + an LWLock would be better than dht, but I
>> haven't looked into that. Can it be convinced to work in DSA memory
>> and to grow on demand?

Simplehash provides an option to supply your own allocator function.
In the allocator function you can allocate memory from DSA. After the
table reaches some threshold it doubles in size, and it will call the
allocator function again to allocate the bigger memory. You can refer
to pagetable_allocate in tidbitmap.c.
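
To illustrate, a sketch of such an allocator, loosely modeled on
pagetable_allocate() in tidbitmap.c; the mytable_* names and the state
struct are hypothetical:

/* Hypothetical per-table state carried in simplehash's private_data. */
typedef struct MyTableState
{
    dsa_area   *dsa;        /* area the entry array lives in */
    dsa_pointer entries_dp; /* current allocation, kept for freeing */
} MyTableState;

/* Matches the SH_ALLOCATE signature expected by simplehash.h. */
static inline void *
mytable_allocate(mytable_hash *tb, Size size)
{
    MyTableState *state = (MyTableState *) tb->private_data;

    /* Allocate the (zeroed) entry array from DSA instead of local memory. */
    state->entries_dp = dsa_allocate_extended(state->dsa, size,
                                              DSA_ALLOC_HUGE | DSA_ALLOC_ZERO);
    return dsa_get_address(state->dsa, state->entries_dp);
}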

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 14:57:49
Message-ID: CA+TgmobBiHK9huYxA6ErKwhNiUAMWvWCdxN74fb=2uo=diSOvw@mail.gmail.com

On Tue, May 30, 2017 at 2:45 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> On Tue, May 30, 2017 at 1:09 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>>> * Perhaps simplehash + an LWLock would be better than dht, but I
>>> haven't looked into that. Can it be convinced to work in DSA memory
>>> and to grow on demand?
>
> Simplehash provides an option to supply your own allocator function.
> In the allocator function you can allocate memory from DSA. After the
> table reaches some threshold it doubles in size, and it will call the
> allocator function again to allocate the bigger memory. You can refer
> to pagetable_allocate in tidbitmap.c.

That only allows the pagetable to be shared, not the hash table itself.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 15:16:34
Message-ID: CAFiTN-vmRpxzQdHDeW1iMQZjTYtd3aHkRxOxJ_xbZe-FS5Hu_A@mail.gmail.com

On Wed, May 31, 2017 at 10:57 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Simplehash provides an option to supply your own allocator function.
>> In the allocator function you can allocate memory from DSA. After the
>> table reaches some threshold it doubles in size, and it will call the
>> allocator function again to allocate the bigger memory. You can refer
>> to pagetable_allocate in tidbitmap.c.
>
> That only allows the pagetable to be shared, not the hash table itself.

I agree with you. But if I understand the use case correctly, we need
to store the TupleDesc for the RECORD in a shared hash table so that it
can be shared across multiple processes. I think this can be achieved
with simplehash as well.

To get this done, we would need some fixed shared memory for holding
the static members of SH_TYPE, and the process that creates the
simplehash would be responsible for copying those static members to the
shared location so that other processes can access the SH_TYPE. The
dynamic part (the actual hash entries) could then be allocated using
DSA by registering an SH_ALLOCATE function.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 16:53:49
Message-ID: CA+TgmobcKsMw4wQK+0Rz96zSVnDzsUo2x+RPOEYqaVFg9ejBCw@mail.gmail.com

On Wed, May 31, 2017 at 11:16 AM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> I agree with you. But if I understand the use case correctly, we need
> to store the TupleDesc for the RECORD in a shared hash table so that it
> can be shared across multiple processes. I think this can be achieved
> with simplehash as well.
>
> To get this done, we would need some fixed shared memory for holding
> the static members of SH_TYPE, and the process that creates the
> simplehash would be responsible for copying those static members to the
> shared location so that other processes can access the SH_TYPE. The
> dynamic part (the actual hash entries) could then be allocated using
> DSA by registering an SH_ALLOCATE function.

Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
are not going to work in DSM, because they are pointers. You can
doubtless come up with a way around that problem, but I guess the
question is whether that's actually any better than just using DHT.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 17:27:28
Message-ID: CAFiTN-vSnkC59ffRa17pSV--gymRbX7WzNU2=mW2tjSedvyGJw@mail.gmail.com

On Wed, May 31, 2017 at 12:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
> are not going to work in DSM, because they are pointers. You can
> doubtless come up with a way around that problem, but I guess the
> question is whether that's actually any better than just using DHT.

I probably misunderstood the question. I assumed that we needed to
bring in DHT only to achieve this goal. But if the question is simply
the comparison of DHT vs simplehash for this particular case, then I
agree that DHT is the more appropriate choice.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


From: Andres Freund <andres(at)anarazel(dot)de>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 17:46:36
Message-ID: 20170531174636.53sgh7thgfqeybqb@alap3.anarazel.de

On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
> On Wed, May 31, 2017 at 12:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
> > are not going to work in DSM, because they are pointers. You can
> > doubtless come up with a way around that problem, but I guess the
> > question is whether that's actually any better than just using DHT.
>
> I probably misunderstood the question. I assumed that we needed to
> bring in DHT only to achieve this goal. But if the question is simply
> the comparison of DHT vs simplehash for this particular case, then I
> agree that DHT is the more appropriate choice.

Yea, I don't think simplehash is the best choice here. It's worthwhile
to use it for performance critical bits, but using it for everything
would just increase code size without much benefit. I'd tentatively
assume that anonymous record type aren't going to be super common, and
that this is going to be the biggest bottleneck if you use them.

- Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 18:28:18
Message-ID: CA+TgmoabQPECSL0eZV=t6iHy8M53Y=ZW+-31-akjykM02Cq=_g@mail.gmail.com

On Wed, May 31, 2017 at 1:46 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
>> On Wed, May 31, 2017 at 12:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
>> > are not going to work in DSM, because they are pointers. You can
>> > doubtless come up with a way around that problem, but I guess the
>> > question is whether that's actually any better than just using DHT.
>>
>> I probably misunderstood the question. I assumed that we needed to
>> bring in DHT only to achieve this goal. But if the question is simply
>> the comparison of DHT vs simplehash for this particular case, then I
>> agree that DHT is the more appropriate choice.
>
> Yea, I don't think simplehash is the best choice here. It's worthwhile
> to use it for performance critical bits, but using it for everything
> would just increase code size without much benefit. I'd tentatively
> assume that anonymous record type aren't going to be super common, and
> that this is going to be the biggest bottleneck if you use them.

Did you mean "not going to be"?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>,Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>,Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-05-31 18:29:12
Message-ID: E1EA0227-ED80-4806-BF99-AA2A6645E286@anarazel.de

On May 31, 2017 11:28:18 AM PDT, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>On Wed, May 31, 2017 at 1:46 PM, Andres Freund <andres(at)anarazel(dot)de>
>wrote:
>> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
>>> On Wed, May 31, 2017 at 12:53 PM, Robert Haas
>>> <robertmhaas(at)gmail(dot)com> wrote:
>>> > Well, SH_TYPE's members SH_ELEMENT_TYPE *data and void *private_data
>>> > are not going to work in DSM, because they are pointers. You can
>>> > doubtless come up with a way around that problem, but I guess the
>>> > question is whether that's actually any better than just using DHT.
>>>
>>> I probably misunderstood the question. I assumed that we needed to
>>> bring in DHT only to achieve this goal. But if the question is simply
>>> the comparison of DHT vs simplehash for this particular case, then I
>>> agree that DHT is the more appropriate choice.
>>
>> Yea, I don't think simplehash is the best choice here. It's worthwhile
>> to use it for performance critical bits, but using it for everything
>> would just increase code size without much benefit. I'd tentatively
>> assume that anonymous record type aren't going to be super common, and
>> that this is going to be the biggest bottleneck if you use them.
>
>Did you mean "not going to be"?

Err, yes. Thanks
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-07-10 09:39:09
Message-ID: CAEepm=1Z+GEAg=LaKPe02FmzWsLGMS14wkcT=EyQ129hv5xPxg@mail.gmail.com

On Thu, Jun 1, 2017 at 6:29 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On May 31, 2017 11:28:18 AM PDT, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>> On 2017-05-31 13:27:28 -0400, Dilip Kumar wrote:
[ ... various discussion in support of using DHT ... ]

Ok, good.

Here's a new version that introduces a per-session DSM segment to hold
the shared record typmod registry (and maybe more things later). The
per-session segment is created the first time you run a parallel query
(though there is handling for failure to allocate that allows the
parallel query to continue with no workers) and lives until your
leader backend exits. When parallel workers start up, they see its
handle in the per-query segment and attach to it, which puts
typcache.c into write-through cache mode so their idea of record
typmods stays in sync with the leader (and each other).

I also noticed that I could delete even more of tqueue.c than before:
it doesn't seem to have any remaining reason to need to know the
TupleDesc.

One way to test this code is to apply just
0003-rip-out-tqueue-remapping-v3.patch and then try the example from
the first message in this thread to see it break, and then try again
with the other two patches applied. By adding debugging trace you can
see that the worker pushes a bunch of TupleDescs into shmem, they get
pulled out by the leader when it sees the tuples, and then on a second
invocation the (new) worker can reuse them: it finds matches already
in shmem from the first invocation.

I used a DSM segment with a TOC and a DSA area inside that, like the
existing per-query DSM segment, but obviously you could spin it
various different ways. One example: just have a DSA area and make a
new kind of TOC thing that deals in dsa_pointers. Better ideas?
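
For concreteness, a sketch of the layout just described, using the
existing dsm/shm_toc/dsa APIs; segsize, dsa_size and tranche_id are the
caller's choice, and the key constants are illustrative (they happen to
match the patch quoted later in this thread):

/* Sketch: per-session segment = DSM segment + TOC + in-place DSA area. */
dsm_segment *seg = dsm_create(segsize, 0);
shm_toc *toc = shm_toc_create(PARALLEL_SESSION_MAGIC,
                              dsm_segment_address(seg), segsize);
char *place = shm_toc_allocate(toc, dsa_size);
dsa_area *area = dsa_create_in_place(place, dsa_size, tranche_id, seg);

shm_toc_insert(toc, PARALLEL_KEY_SESSION_DSA, place);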

I believe combo CIDs should also go in there, to enable parallel
write, but I'm not 100% sure: that's neither per-session nor per-query
data, that's per-transaction. So perhaps the per-session DSM could
hold a per-session DSA and a per-transaction DSA, where the latter is
reset for each transaction, just like TopTransactionContext (though
dsa.c doesn't have a 'reset thyself' function currently). That seems
like a good place to store a shared combo CID hash table using DHT.
Thoughts?

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmod-registry-v3.patchset.tgz application/x-gzip 32.0 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-07-25 10:09:06
Message-ID: 20170725100906.rroaddzbhxge7ubt@alap3.anarazel.de

On 2017-07-10 21:39:09 +1200, Thomas Munro wrote:
> Here's a new version that introduces a per-session DSM segment to hold
> the shared record typmod registry (and maybe more things later).

You like to switch it up. *.patchset.tgz??? ;)

It does concern me that we're growing yet another somewhat different
hashtable implementation. Yet I don't quite see how we could avoid
it. dynahash relies on proper pointers, simplehash doesn't do locking
(and shouldn't) and also relies on pointers, although to a much lesser
degree. All the open coded tables aren't a good match either. So I
don't quite see an alternative, but I'd love one.

Regards,

Andres

diff --git a/src/backend/lib/dht.c b/src/backend/lib/dht.c
new file mode 100644
index 00000000000..2fec70f7742
--- /dev/null
+++ b/src/backend/lib/dht.c

FWIW, not a big fan of dht as a filename (nor of dsa.c). For one, DHT
usually refers to distributed hash tables, which this is not, and for
another the abbreviation is so short it's not immediately
understandable, and likely to conflict further. I think it'd possibly
be ok to have dht as symbol prefixes, but rename the file to be longer.

+ * To deal with currency, it has a fixed size set of partitions, each of which
+ * is independently locked.

s/currency/concurrency/ I presume.

+ * Each bucket maps to a partition; so insert, find
+ * and iterate operations normally only acquire one lock. Therefore, good
+ * concurrency is achieved whenever they don't collide at the lock partition

s/they/operations/?

+ * level. However, when a resize operation begins, all partition locks must
+ * be acquired simultaneously for a brief period. This is only expected to
+ * happen a small number of times until a stable size is found, since growth is
+ * geometric.

I'm a bit doubtful that we need partitioning at this point, and that it
doesn't actually *degrade* performance for your typmod case.

+ * Resizing is done incrementally so that no individual insert operation pays
+ * for the potentially large cost of splitting all buckets.

I'm not sure this is a reasonable tradeoff for the use-case suggested so
far, it doesn't exactly make things simpler. We're not going to grow
much.

+/* The opaque type used for tracking iterator state. */
+struct dht_iterator;
+typedef struct dht_iterator dht_iterator;

Isn't it actually the iterator state? Rather than tracking it? Also, why
is it opaque given you're actually defining it below? Guess you'd moved
it at some point.

+/*
+ * The set of parameters needed to create or attach to a hash table. The
+ * members tranche_id and tranche_name do not need to be initialized when
+ * attaching to an existing hash table.
+ */
+typedef struct
+{
+ Size key_size; /* Size of the key (initial bytes of entry) */
+ Size entry_size; /* Total size of entry */

Let's use size_t, like we kind of concluded in the thread you started:
http://archives.postgresql.org/message-id/25076.1489699457%40sss.pgh.pa.us
:)

+ dht_compare_function compare_function; /* Compare function */
+ dht_hash_function hash_function; /* Hash function */

Might be worth explaining that these need to be provided when attaching
because they're possibly process local. Did you test this with
EXEC_BACKEND?

+ int tranche_id; /* The tranche ID to use for locks. */
+} dht_parameters;

+struct dht_iterator
+{
+ dht_hash_table *hash_table; /* The hash table we are iterating over. */
+ bool exclusive; /* Whether to lock buckets exclusively. */
+ Size partition; /* The index of the next partition to visit. */
+ Size bucket; /* The index of the next bucket to visit. */
+ dht_hash_table_item *item; /* The most recently returned item. */
+ dsa_pointer last_item_pointer; /* The last item visited. */
+ Size table_size_log2; /* The table size when we started iterating. */
+ bool locked; /* Whether the current partition is locked. */

Haven't gotten to the actual code yet, but this kinda suggest we leave a
partition locked when iterating? Hm, that seems likely to result in a
fair bit of pain...

+/* Iterating over the whole hash table. */
+extern void dht_iterate_begin(dht_hash_table *hash_table,
+ dht_iterator *iterator, bool exclusive);
+extern void *dht_iterate_next(dht_iterator *iterator);
+extern void dht_iterate_delete(dht_iterator *iterator);

s/delete/delete_current/? Otherwise it looks like it's part of
manipulating just the iterator.

+extern void dht_iterate_release(dht_iterator *iterator);

I'd add lock to the name.

+/*
+ * An item in the hash table. This wraps the user's entry object in an
+ * envelop that holds a pointer back to the bucket and a pointer to the next
+ * item in the bucket.
+ */
+struct dht_hash_table_item
+{
+ /* The hashed key, to avoid having to recompute it. */
+ dht_hash hash;
+ /* The next item in the same bucket. */
+ dsa_pointer next;
+ /* The user's entry object follows here. */
+ char entry[FLEXIBLE_ARRAY_MEMBER];

What's the point of using FLEXIBLE_ARRAY_MEMBER here? And isn't using a
char going to lead to alignment problems?

+/* The number of partitions for locking purposes. */
+#define DHT_NUM_PARTITIONS_LOG2 7

Could use some justification.

+/*
+ * The head object for a hash table. This will be stored in dynamic shared
+ * memory.
+ */
+typedef struct
+{

Why anonymous? Not that it hurts much, but seems weird to deviate just
here.

+/*
+ * Create a new hash table backed by the given dynamic shared area, with the
+ * given parameters.
+ */
+dht_hash_table *
+dht_create(dsa_area *area, const dht_parameters *params)
+{
+ dht_hash_table *hash_table;
+ dsa_pointer control;
+
+ /* Allocate the backend-local object representing the hash table. */
+ hash_table = palloc(sizeof(dht_hash_table));

Should be documented that this uses caller's MemoryContext.

+ /* Set up the array of lock partitions. */
+ {
+ int i;
+
+ for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
+ {
+ LWLockInitialize(PARTITION_LOCK(hash_table, i),
+ hash_table->control->lwlock_tranche_id);
+ hash_table->control->partitions[i].count = 0;
+ }

I'd store hash_table->control->lwlock_tranche_id and partitions[i] in
local vars. Possibly hash_table->control too.

+/*
+ * Detach from a hash table. This frees backend-local resources associated
+ * with the hash table, but the hash table will continue to exist until it is
+ * either explicitly destroyed (by a backend that is still attached to it), or
+ * the area that backs it is returned to the operating system.
+ */
+void
+dht_detach(dht_hash_table *hash_table)
+{
+ /* The hash table may have been destroyed. Just free local memory. */
+ pfree(hash_table);
+}

Somewhat inclined to add debugging refcount - seems like bugs around
that might be annoying to find. Maybe also add an assert ensuring that
no locks are held?

+/*
+ * Look up an entry, given a key. Returns a pointer to an entry if one can be
+ * found with the given key. Returns NULL if the key is not found. If a
+ * non-NULL value is returned, the entry is locked and must be released by
+ * calling dht_release. If an error is raised before dht_release is called,
+ * the lock will be released automatically, but the caller must take care to
+ * ensure that the entry is not left corrupted. The lock mode is either
+ * shared or exclusive depending on 'exclusive'.

This API seems a bit fragile.

+/*
+ * Returns a pointer to an exclusively locked item which must be released with
+ * dht_release. If the key is found in the hash table, 'found' is set to true
+ * and a pointer to the existing entry is returned. If the key is not found,
+ * 'found' is set to false, and a pointer to a newly created entry is related.

"is related"?

+ */
+void *
+dht_find_or_insert(dht_hash_table *hash_table,
+ const void *key,
+ bool *found)
+{
+ size_t hash;
+ size_t partition_index;
+ dht_partition *partition;
+ dht_hash_table_item *item;
+
+ hash = hash_table->params.hash_function(key, hash_table->params.key_size);
+ partition_index = PARTITION_FOR_HASH(hash);
+ partition = &hash_table->control->partitions[partition_index];
+
+ Assert(hash_table->control->magic == DHT_MAGIC);
+ Assert(!hash_table->exclusively_locked);

Why just exclusively locked? Why'd shared be ok?

+/*
+ * Unlock an entry which was locked by dht_find or dht_find_or_insert.
+ */
+void
+dht_release(dht_hash_table *hash_table, void *entry)
+{
+ dht_hash_table_item *item = ITEM_FROM_ENTRY(entry);
+ size_t partition_index = PARTITION_FOR_HASH(item->hash);
+ bool deferred_resize_work = false;
+
+ Assert(hash_table->control->magic == DHT_MAGIC);

Assert lock held (LWLockHeldByMe())

+/*
+ * Begin iterating through the whole hash table. The caller must supply a
+ * dht_iterator object, which can then be used to call dht_iterate_next to get
+ * values until the end is reached.
+ */
+void
+dht_iterate_begin(dht_hash_table *hash_table,
+ dht_iterator *iterator,
+ bool exclusive)
+{
+ Assert(hash_table->control->magic == DHT_MAGIC);
+
+ iterator->hash_table = hash_table;
+ iterator->exclusive = exclusive;
+ iterator->partition = 0;
+ iterator->bucket = 0;
+ iterator->item = NULL;
+ iterator->last_item_pointer = InvalidDsaPointer;
+ iterator->locked = false;
+
+ /* Snapshot the size (arbitrary lock to prevent size changing). */
+ LWLockAcquire(PARTITION_LOCK(hash_table, 0), LW_SHARED);
+ ensure_valid_bucket_pointers(hash_table);
+ iterator->table_size_log2 = hash_table->size_log2;
+ LWLockRelease(PARTITION_LOCK(hash_table, 0));

Hm. So we're introducing some additional contention on partition 0 -
probably ok.

+/*
+ * Move to the next item in the hash table. Returns a pointer to an entry, or
+ * NULL if the end of the hash table has been reached. The item is locked in
+ * exclusive or shared mode depending on the argument given to
+ * dht_iterate_begin. The caller can optionally release the lock by calling
+ * dht_iterate_release, and then call dht_iterate_next again to move to the
+ * next entry. If the iteration is in exclusive mode, client code can also
+ * call dht_iterate_delete. When the end of the hash table is reached, or at
+ * any time, the client may call dht_iterate_end to abandon iteration.
+ */

I'd just shorten the end to "at any time the client may call
dht_iterate_end to ..."

+/*
+ * Release the most recently obtained lock. This can optionally be called in
+ * between calls to dht_iterator_next to allow other processes to access the
+ * same partition of the hash table.
+ */
+void
+dht_iterate_release(dht_iterator *iterator)
+{
+ Assert(iterator->locked);
+ LWLockRelease(PARTITION_LOCK(iterator->hash_table, iterator->partition));
+ iterator->locked = false;
+}
+
+/*
+ * Terminate iteration. This must be called after iteration completes,
+ * whether or not the end was reached. The iterator object may then be reused
+ * for another iteration.
+ */
+void
+dht_iterate_end(dht_iterator *iterator)
+{
+ Assert(iterator->hash_table->control->magic == DHT_MAGIC);
+ if (iterator->locked)
+ LWLockRelease(PARTITION_LOCK(iterator->hash_table,
+ iterator->partition));
+}
+
+/*
+ * Print out debugging information about the internal state of the hash table.
+ */
+void
+dht_dump(dht_hash_table *hash_table)
+{
+ size_t i;
+ size_t j;
+
+ Assert(hash_table->control->magic == DHT_MAGIC);
+
+ for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
+ LWLockAcquire(PARTITION_LOCK(hash_table, i), LW_SHARED);

Should probably assert & document that no locks are held - otherwise
there's going to be ugly deadlocks. And that's an unlikely thing to try.

+ ensure_valid_bucket_pointers(hash_table);
+
+ fprintf(stderr,
+ "hash table size = %zu\n", (size_t) 1 << hash_table->size_log2);
+ for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
+ {
+ dht_partition *partition = &hash_table->control->partitions[i];
+ size_t begin = BUCKET_INDEX_FOR_PARTITION(i, hash_table->size_log2);
+ size_t end = BUCKET_INDEX_FOR_PARTITION(i + 1, hash_table->size_log2);
+
+ fprintf(stderr, " partition %zu\n", i);
+ fprintf(stderr,
+ " active buckets (key count = %zu)\n", partition->count);
+
+ for (j = begin; j < end; ++j)
+ {
+ size_t count = 0;
+ dsa_pointer bucket = hash_table->buckets[j];
+
+ while (DsaPointerIsValid(bucket))
+ {
+ dht_hash_table_item *item;
+
+ item = dsa_get_address(hash_table->area, bucket);
+
+ bucket = item->next;
+ ++count;
+ }
+ fprintf(stderr, " bucket %zu (key count = %zu)\n", j, count);
+ }
+ if (RESIZE_IN_PROGRESS(hash_table))
+ {
+ size_t begin;
+ size_t end;
+
+ begin = BUCKET_INDEX_FOR_PARTITION(i, hash_table->size_log2 - 1);
+ end = BUCKET_INDEX_FOR_PARTITION(i + 1,
+ hash_table->size_log2 - 1);
+
+ fprintf(stderr, " old buckets (key count = %zu)\n",
+ partition->old_count);
+
+ for (j = begin; j < end; ++j)
+ {
+ size_t count = 0;
+ dsa_pointer bucket = hash_table->old_buckets[j];
+
+ while (DsaPointerIsValid(bucket))
+ {
+ dht_hash_table_item *item;
+
+ item = dsa_get_address(hash_table->area, bucket);
+
+ bucket = item->next;
+ ++count;
+ }
+ fprintf(stderr,
+ " bucket %zu (key count = %zu)\n", j, count);
+ }
+ }
+ }
+
+ for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
+ LWLockRelease(PARTITION_LOCK(hash_table, i));
+}

I'd put this below actual "production" code.

- Andres


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-07-31 21:08:44
Message-ID: 20170731210844.3cwrkmsmbbpt4rjc@alap3.anarazel.de

Hi,

diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
index 9fd7b4e019b..97c0125a4ba 100644
--- a/src/backend/access/common/tupdesc.c
+++ b/src/backend/access/common/tupdesc.c
@@ -337,17 +337,75 @@ DecrTupleDescRefCount(TupleDesc tupdesc)
{
Assert(tupdesc->tdrefcount > 0);

- ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
+ if (CurrentResourceOwner != NULL)
+ ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
if (--tupdesc->tdrefcount == 0)
FreeTupleDesc(tupdesc);
}

What's this about? CurrentResourceOwner should always be valid here, no?
If so, why did that change? I don't think it's good to detach this from
the resowner infrastructure...

/*
- * Compare two TupleDesc structures for logical equality
+ * Compare two TupleDescs' attributes for logical equality
*
* Note: we deliberately do not check the attrelid and tdtypmod fields.
* This allows typcache.c to use this routine to see if a cached record type
* matches a requested type, and is harmless for relcache.c's uses.
+ */
+bool
+equalTupleDescAttrs(Form_pg_attribute attr1, Form_pg_attribute attr2)
+{

comment not really accurate, this routine afaik isn't used by
typcache.c?

/*
- * Magic numbers for parallel state sharing. Higher-level code should use
- * smaller values, leaving these very large ones for use by this module.
+ * Magic numbers for per-context parallel state sharing. Higher-level code
+ * should use smaller values, leaving these very large ones for use by this
+ * module.
*/
#define PARALLEL_KEY_FIXED UINT64CONST(0xFFFFFFFFFFFF0001)
#define PARALLEL_KEY_ERROR_QUEUE UINT64CONST(0xFFFFFFFFFFFF0002)
@@ -63,6 +74,16 @@
#define PARALLEL_KEY_ACTIVE_SNAPSHOT UINT64CONST(0xFFFFFFFFFFFF0007)
#define PARALLEL_KEY_TRANSACTION_STATE UINT64CONST(0xFFFFFFFFFFFF0008)
#define PARALLEL_KEY_ENTRYPOINT UINT64CONST(0xFFFFFFFFFFFF0009)
+#define PARALLEL_KEY_SESSION_DSM UINT64CONST(0xFFFFFFFFFFFF000A)
+
+/* Magic number for per-session DSM TOC. */
+#define PARALLEL_SESSION_MAGIC 0xabb0fbc9
+
+/*
+ * Magic numbers for parallel state sharing in the per-session DSM area.
+ */
+#define PARALLEL_KEY_SESSION_DSA UINT64CONST(0xFFFFFFFFFFFF0001)
+#define PARALLEL_KEY_RECORD_TYPMOD_REGISTRY UINT64CONST(0xFFFFFFFFFFFF0002)

Not this patch's fault, but this infrastructure really isn't great. We
should really replace it with a shmem.h style infrastructure, using a
dht hashtable as backing...

+/* The current per-session DSM segment, if attached. */
+static dsm_segment *current_session_segment = NULL;
+

I think it'd be better if we had a proper 'SessionState' and
'BackendSessionState' infrastructure that then contains the dsm segment
etc. I think we'll otherwise just end up with a bunch of parallel
infrastructures.

+/*
+ * A mechanism for sharing record typmods between backends.
+ */
+struct SharedRecordTypmodRegistry
+{
+ dht_hash_table_handle atts_index_handle;
+ dht_hash_table_handle typmod_index_handle;
+ pg_atomic_uint32 next_typmod;
+};
+

I think the code needs to explain better how these are intended to be
used. IIUC, atts_index is used to find typmods by "identity", and
typmod_index by the typmod, right? And we need both to avoid
all workers generating different tupledescs, right? Kinda guessable by
reading typcache.c, but that shouldn't be needed.

+/*
+ * A flattened/serialized representation of a TupleDesc for use in shared
+ * memory. Can be converted to and from regular TupleDesc format. Doesn't
+ * support constraints and doesn't store the actual type OID, because this is
+ * only for use with RECORD types as created by CreateTupleDesc(). These are
+ * arranged into a linked list, in the hash table entry corresponding to the
+ * OIDs of the first 16 attributes, so we'd expect to get more than one entry
+ * in the list when named and other properties differ.
+ */
+typedef struct SerializedTupleDesc
+{
+ dsa_pointer next; /* next with the same same attribute OIDs */
+ int natts; /* number of attributes in the tuple */
+ int32 typmod; /* typmod for tuple type */
+ bool hasoid; /* tuple has oid attribute in its header */
+
+ /*
+ * The attributes follow. We only ever access the first
+ * ATTRIBUTE_FIXED_PART_SIZE bytes of each element, like the code in
+ * tupdesc.c.
+ */
+ FormData_pg_attribute attributes[FLEXIBLE_ARRAY_MEMBER];
+} SerializedTupleDesc;

Not a fan of a separate tupledesc representation, that's just going to
lead to divergence over time. I think we should rather change the normal
tupledesc representation to be compatible with this, and 'just' have a
wrapper struct for the parallel case (with next and such).

+/*
+ * An entry in SharedRecordTypmodRegistry's attribute index. The key is the
+ * first REC_HASH_KEYS attribute OIDs. That means that collisions are
+ * possible, but that's OK because SerializedTupleDesc objects are arranged
+ * into a list.
+ */

+/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
+const static dht_parameters srtr_atts_index_params = {
+ sizeof(Oid) * REC_HASH_KEYS,
+ sizeof(SRTRAttsIndexEntry),
+ memcmp,
+ tag_hash,
+ LWTRANCHE_SHARED_RECORD_ATTS_INDEX
+};
+
+/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
+const static dht_parameters srtr_typmod_index_params = {
+ sizeof(uint32),
+ sizeof(SRTRTypmodIndexEntry),
+ memcmp,
+ tag_hash,
+ LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
+};
+

I'm very much not a fan of this representation. I know you copied the
logic, but I think it's a bad idea. I think the key should just be a
dsa_pointer, and then we can have a proper tag_hash that hashes the
whole thing, and a proper comparator too. Just have

/*
* Combine two hash values, resulting in another hash value, with decent bit
* mixing.
*
* Similar to boost's hash_combine().
*/
static inline uint32
hash_combine(uint32 a, uint32 b)
{
a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
return a;
}

and then hash everything.
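
For illustration, "hash everything" might look like this sketch
(function name hypothetical; hash_uint32() and DatumGetUInt32() are the
existing routines):

static uint32
serialized_tupledesc_hash(const SerializedTupleDesc *stup)
{
    uint32 h = DatumGetUInt32(hash_uint32((uint32) stup->natts));
    int    i;

    for (i = 0; i < stup->natts; ++i)
        h = hash_combine(h, DatumGetUInt32(
                        hash_uint32((uint32) stup->attributes[i].atttypid)));
    return h;
}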

+/*
+ * Make sure that RecordCacheArray is large enough to store 'typmod'.
+ */
+static void
+ensure_record_cache_typmod_slot_exists(int32 typmod)
+{
+ if (RecordCacheArray == NULL)
+ {
+ RecordCacheArray = (TupleDesc *)
+ MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(TupleDesc));
+ RecordCacheArrayLen = 64;
+ }
+
+ if (typmod >= RecordCacheArrayLen)
+ {
+ int32 newlen = RecordCacheArrayLen * 2;
+
+ while (typmod >= newlen)
+ newlen *= 2;
+
+ RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray,
+ newlen * sizeof(TupleDesc));
+ memset(RecordCacheArray + RecordCacheArrayLen, 0,
+ (newlen - RecordCacheArrayLen) * sizeof(TupleDesc *));
+ RecordCacheArrayLen = newlen;
+ }
+}

Do we really want to keep this? Could just have an equivalent dynahash
for the non-parallel case?

/*
* lookup_rowtype_tupdesc_internal --- internal routine to lookup a rowtype
@@ -1229,15 +1347,49 @@ lookup_rowtype_tupdesc_internal(Oid type_id, int32 typmod, bool noError)
/*
* It's a transient record type, so look in our record-type table.
*/
- if (typmod < 0 || typmod >= NextRecordTypmod)
+ if (typmod >= 0)
{
- if (!noError)
- ereport(ERROR,
- (errcode(ERRCODE_WRONG_OBJECT_TYPE),
- errmsg("record type has not been registered")));
- return NULL;
+ /* It is already in our local cache? */
+ if (typmod < RecordCacheArrayLen &&
+ RecordCacheArray[typmod] != NULL)
+ return RecordCacheArray[typmod];
+
+ /* Are we attached to a SharedRecordTypmodRegistry? */
+ if (CurrentSharedRecordTypmodRegistry.shared != NULL)

Why do we want to do lookups in both? I don't think it's a good idea to
have a chance that you could have the same typmod in both the local
registry (because it'd been created before the shared one) and in the
shared (because it was created in a worker). Ah, that's for caching
purposes? If so, see my above point that we shouldn't have a serialized
version of typdesc (yesyes, constraints will be a bit ugly).

+/*
+ * If we are attached to a SharedRecordTypmodRegistry, find or create a
+ * SerializedTupleDesc that matches 'tupdesc', and return its typmod.
+ * Otherwise return -1.
+ */
+static int32
+find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
+{
+ SRTRAttsIndexEntry *atts_index_entry;
+ SRTRTypmodIndexEntry *typmod_index_entry;
+ SerializedTupleDesc *serialized;
+ dsa_pointer serialized_dp;
+ Oid hashkey[REC_HASH_KEYS];
+ bool found;
+ int32 typmod;
+ int i;
+
+ /* If not even attached, nothing to do. */
+ if (CurrentSharedRecordTypmodRegistry.shared == NULL)
+ return -1;
+
+ /* Try to find a match. */
+ memset(hashkey, 0, sizeof(hashkey));
+ for (i = 0; i < tupdesc->natts; ++i)
+ hashkey[i] = tupdesc->attrs[i]->atttypid;
+ atts_index_entry = (SRTRAttsIndexEntry *)
+ dht_find_or_insert(CurrentSharedRecordTypmodRegistry.atts_index,
+ hashkey,
+ &found);
+ if (!found)
+ {
+ /* Making a new entry. */
+ memcpy(atts_index_entry->leading_attr_oids,
+ hashkey,
+ sizeof(hashkey));
+ atts_index_entry->serialized_tupdesc = InvalidDsaPointer;
+ }
+
+ /* Scan the list we found for a matching serialized one. */
+ serialized_dp = atts_index_entry->serialized_tupdesc;
+ while (DsaPointerIsValid(serialized_dp))
+ {
+ serialized =
+ dsa_get_address(CurrentSharedRecordTypmodRegistry.area,
+ serialized_dp);
+ if (serialized_tupledesc_matches(serialized, tupdesc))
+ {
+ /* Found a match, we are finished. */
+ typmod = serialized->typmod;
+ dht_release(CurrentSharedRecordTypmodRegistry.atts_index,
+ atts_index_entry);
+ return typmod;
+ }
+ serialized_dp = serialized->next;
+ }
+
+ /* We didn't find a matching entry, so let's allocate a new one. */
+ typmod = (int)
+ pg_atomic_fetch_add_u32(&CurrentSharedRecordTypmodRegistry.shared->next_typmod,
+ 1);
+
+ /* Allocate shared memory and serialize the TupleDesc. */
+ serialized_dp = serialize_tupledesc(CurrentSharedRecordTypmodRegistry.area,
+ tupdesc);
+ serialized = (SerializedTupleDesc *)
+ dsa_get_address(CurrentSharedRecordTypmodRegistry.area, serialized_dp);
+ serialized->typmod = typmod;
+
+ /*
+ * While we still hold the atts_index entry locked, add this to
+ * typmod_index. That's important because we don't want anyone to be able
+ * to find a typmod via the former that can't yet be looked up in the
+ * latter.
+ */
+ typmod_index_entry =
+ dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
+ &typmod, &found);
+ if (found)
+ elog(ERROR, "cannot create duplicate shared record typmod");
+ typmod_index_entry->typmod = typmod;
+ typmod_index_entry->serialized_tupdesc = serialized_dp;
+ dht_release(CurrentSharedRecordTypmodRegistry.typmod_index,
+ typmod_index_entry);

What if we fail to allocate memory for the entry in typmod_index?

- Andres


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-11 08:39:13
Message-ID: CAEepm=0R4t-_Jrk9hWaOj9JLdQc2kzdkVZXXCkqd=XFGs41hzw@mail.gmail.com

Hi,

Please find attached a new patch series. I apologise in advance for
0001 and note that the patchset now weighs in at ~75kB compressed.
Here are my in-line replies to your two reviews:

On Tue, Jul 25, 2017 at 10:09 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> It does concern me that we're growing yet another somewhat different
> hashtable implementation. Yet I don't quite see how we could avoid
> it. dynahash relies on proper pointers, simplehash doesn't do locking
> (and shouldn't) and also relies on pointers, although to a much lesser
> degree. All the open coded tables aren't a good match either. So I
> don't quite see an alternative, but I'd love one.

Yeah, I agree. To deal with data structures with different pointer
types, locking policy, inlined hash/eq functions etc, perhaps there is
a way we could eventually do 'policy based design' using the kind of
macro trickery you started where we generate N different hash table
variations but only have to maintain source code for one chaining hash
table implementation? Or perl scripts that effectively behave as a
cfront^H^H^H nevermind. I'm not planning to investigate that for this
cycle.

>
> diff --git a/src/backend/lib/dht.c b/src/backend/lib/dht.c
> new file mode 100644
> index 00000000000..2fec70f7742
> --- /dev/null
> +++ b/src/backend/lib/dht.c
>
> FWIW, not a big fan of dht as a filename (nor of dsa.c). For one, DHT
> usually refers to distributed hash tables, which this is not, and for
> another the abbreviation is so short it's not immediately
> understandable, and likely to conflict further. I think it'd possibly
> be ok to have dht as symbol prefixes, but rename the file to be longer.

OK. Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
shared". Better? If we were to do other data structures in DSA
memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
etc.

Do you want to see a separate patch to rename dsa.c? Got a better
name? You could have spoken up earlier :-) It does sound a bit like
the thing from crypto or perhaps a scary secret government department.

> + * To deal with currency, it has a fixed size set of partitions, each of which
> + * is independently locked.
>
> s/currency/concurrency/ I presume.

Fixed.

> + * Each bucket maps to a partition; so insert, find
> + * and iterate operations normally only acquire one lock. Therefore, good
> + * concurrency is achieved whenever they don't collide at the lock partition
>
> s/they/operations/?

Fixed.

> + * level. However, when a resize operation begins, all partition locks must
> + * be acquired simultaneously for a brief period. This is only expected to
> + * happen a small number of times until a stable size is found, since growth is
> + * geometric.
>
> I'm a bit doubtful that we need partitioning at this point, and that it
> doesn't actually *degrade* performance for your typmod case.

Yeah, partitioning isn't needed for this case, but this is supposed to
be more generally useful. I thought about making the number of
partitions a construction parameter, but it doesn't really hurt, does
it?

> + * Resizing is done incrementally so that no individual insert operation pays
> + * for the potentially large cost of splitting all buckets.
>
> I'm not sure this is a reasonable tradeoff for the use-case suggested so
> far, it doesn't exactly make things simpler. We're not going to grow
> much.

Yeah, designed to be more generally useful. Are you saying you would
prefer to see the DHT patch split into an initial submission that does
the simplest thing possible, so that the unlucky guy who causes the
hash table to grow has to do all the work of moving buckets to a
bigger hash table? Then we could move the more complicated
incremental growth stuff to a later patch.

> +/* The opaque type used for tracking iterator state. */
> +struct dht_iterator;
> +typedef struct dht_iterator dht_iterator;
>
> Isn't it actually the iterator state? Rather than tracking it? Also, why
> is it opaque given you're actually defining it below? Guess you'd moved
> it at some point.

Improved comment. The iterator state is defined below in the .h, but
with a warning that client code mustn't access it; it exists in the
header only because it's very useful to be able to put a dht_iterator
on the stack, which requires the client code to have its definition,
but I want to reserve the right to change it arbitrarily in future.

> +/*
> + * The set of parameters needed to create or attach to a hash table. The
> + * members tranche_id and tranche_name do not need to be initialized when
> + * attaching to an existing hash table.
> + */
> +typedef struct
> +{
> + Size key_size; /* Size of the key (initial bytes of entry) */
> + Size entry_size; /* Total size of entry */
>
> Let's use size_t, like we kind of concluded in the thread you started:
> http://archives.postgresql.org/message-id/25076.1489699457%40sss.pgh.pa.us
> :)

Sold.

> + dht_compare_function compare_function; /* Compare function */
> + dht_hash_function hash_function; /* Hash function */
>
> Might be worth explaining that these need to be provided when attaching
> because they're possibly process local. Did you test this with
> EXEC_BACKEND?

Added explanation. I haven't personally tested with EXEC_BACKEND but
I believe one of my colleagues had something else that uses DHT
running on a Windows box and didn't shout at me, and I see no reason
to think it shouldn't work: as explained in the new comment, every
attacher has to supply the function pointers from their own process
space (and standard footgun rules apply if you don't supply compatible
functions).
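
To illustrate, attach-time setup might look like this sketch: the
parameter struct, REC_HASH_KEYS, SRTRAttsIndexEntry, memcmp and tag_hash
all appear in the patch, but dht_attach() and its exact signature are
assumed here:

dht_parameters params;
dht_hash_table *table;

memset(&params, 0, sizeof(params));
params.key_size = sizeof(Oid) * REC_HASH_KEYS;
params.entry_size = sizeof(SRTRAttsIndexEntry);
params.compare_function = memcmp;   /* pointer in *this* process's space */
params.hash_function = tag_hash;    /* ditto */
/* tranche fields may be left unset when attaching */

table = dht_attach(area, &params, handle);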

> + int tranche_id; /* The tranche ID to use for locks. */
> +} dht_parameters;
>
>
> +struct dht_iterator
> +{
> + dht_hash_table *hash_table; /* The hash table we are iterating over. */
> + bool exclusive; /* Whether to lock buckets exclusively. */
> + Size partition; /* The index of the next partition to visit. */
> + Size bucket; /* The index of the next bucket to visit. */
> + dht_hash_table_item *item; /* The most recently returned item. */
> + dsa_pointer last_item_pointer; /* The last item visited. */
> + Size table_size_log2; /* The table size when we started iterating. */
> + bool locked; /* Whether the current partition is locked. */
>
> Haven't gotten to the actual code yet, but this kinda suggest we leave a
> partition locked when iterating? Hm, that seems likely to result in a
> fair bit of pain...

By default yes, but you can release the lock with
dht_iterate_release_lock() and it'll be reacquired when you call
dht_iterate_next(). If you do that, then you'll continue iterating
after where you left off without visiting any item that you've already
visited, because the pointers are stored in pointer order (even though
the most recently visited item may have been freed rendering the
pointer invalid, we can still use its pointer to skip everything
already visited by numerical comparison without dereferencing it, and
it's indeterminate whether anything added while you were unlocked is
visible to you).
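
In code, the protocol described above looks like this sketch (entry
type and loop body hypothetical; the dht_iterate_* names are from the
patch, after the renames discussed below):

dht_iterator iter;
MyEntry *entry;                            /* hypothetical entry type */

dht_iterate_begin(table, &iter, false);    /* shared partition locks */
while ((entry = (MyEntry *) dht_iterate_next(&iter)) != NULL)
{
    process(entry);                        /* hypothetical work */
    dht_iterate_release_lock(&iter);       /* let others intervene */
}
dht_iterate_end(&iter);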

Maintaining linked lists in a certain order sucks, but DHT doesn't
allow duplicate keys and grows when load factor exceeds X so unless
your hash function is busted...

This is complicated, and in the category that I would normally want a
stack of heavy unit tests for. If you don't feel like making
decisions about this now, perhaps iteration (and incremental resize?)
could be removed, leaving only the most primitive get/put hash table
facilities -- just enough for this purpose? Then a later patch could
add them back, with a set of really convincing unit tests...

> +/* Iterating over the whole hash table. */
> +extern void dht_iterate_begin(dht_hash_table *hash_table,
> + dht_iterator *iterator, bool exclusive);
> +extern void *dht_iterate_next(dht_iterator *iterator);
> +extern void dht_iterate_delete(dht_iterator *iterator);
>
> s/delete/delete_current/? Otherwise it looks like it's part of
> manipulating just the iterator.

Done.

> +extern void dht_iterate_release(dht_iterator *iterator);
>
> I'd add lock to the name.

Done.

> +/*
> + * An item in the hash table. This wraps the user's entry object in an
> + * envelop that holds a pointer back to the bucket and a pointer to the next
> + * item in the bucket.
> + */
> +struct dht_hash_table_item
> +{
> + /* The hashed key, to avoid having to recompute it. */
> + dht_hash hash;
> + /* The next item in the same bucket. */
> + dsa_pointer next;
> + /* The user's entry object follows here. */
> + char entry[FLEXIBLE_ARRAY_MEMBER];
>
> What's the point of using FLEXIBLE_ARRAY_MEMBER here? And isn't using a
> char going to lead to alignment problems?

Fixed. No longer using a member 'entry', just a comment that user
data follows and a macro to find it based on
MAXALIGN(sizeof(dht_hash_table_item)).
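
For illustration, such a macro might look like this (name hypothetical):

#define DHT_ENTRY_FROM_ITEM(item) \
    ((void *) (((char *) (item)) + MAXALIGN(sizeof(dht_hash_table_item))))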

> +/* The number of partitions for locking purposes. */
> +#define DHT_NUM_PARTITIONS_LOG2 7
>
> Could use some justification.

Added. Short version: if it's good enough for the buffer pool...

> +/*
> + * The head object for a hash table. This will be stored in dynamic shared
> + * memory.
> + */
> +typedef struct
> +{
>
> Why anonymous? Not that it hurts much, but seems weird to deviate just
> here.

Fixed.

> +/*
> + * Create a new hash table backed by the given dynamic shared area, with the
> + * given parameters.
> + */
> +dht_hash_table *
> +dht_create(dsa_area *area, const dht_parameters *params)
> +{
> + dht_hash_table *hash_table;
> + dsa_pointer control;
> +
> + /* Allocate the backend-local object representing the hash table. */
> + hash_table = palloc(sizeof(dht_hash_table));
>
> Should be documented that this uses caller's MemoryContext.

Done.

> + /* Set up the array of lock partitions. */
> + {
> + int i;
> +
> + for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
> + {
> + LWLockInitialize(PARTITION_LOCK(hash_table, i),
> + hash_table->control->lwlock_tranche_id);
> + hash_table->control->partitions[i].count = 0;
> + }
>
> I'd store hash_table->control->lwlock_tranche_id and partitions[i] in
> local vars. Possibly hash_table->control too.

Tidied up. I made local vars for partitions and tranche_id.

> +/*
> + * Detach from a hash table. This frees backend-local resources associated
> + * with the hash table, but the hash table will continue to exist until it is
> + * either explicitly destroyed (by a backend that is still attached to it), or
> + * the area that backs it is returned to the operating system.
> + */
> +void
> +dht_detach(dht_hash_table *hash_table)
> +{
> + /* The hash table may have been destroyed. Just free local memory. */
> + pfree(hash_table);
> +}
>
> Somewhat inclined to add debugging refcount - seems like bugs around
> that might be annoying to find. Maybe also add an assert ensuring that
> no locks are held?

Added an assert that no locks are held.

In an earlier version I had reference counts. Then I realised that it
wasn't really helping anything. The state of being 'attached' to a
dht_hash_table isn't really the same as holding a heavyweight resource
like a DSM segment or a file which is backed by kernel resources.
'Attaching' is just something you have to do to get a backend-local
palloc()-ated object required to interact with the hash table, and
since it's just a bit of memory there is no strict requirement to
detach from it, if you're happy to let MemoryContext do that for you.
To put it in GC terms, there is no important finalizer here. Here I
am making the same distinction that we make between stuff managed by
resowner.c (files etc) and stuff managed by MemoryContext (memory); in
the former case it's an elog()-gable offence not to close things
explicitly in non-error paths, but in the latter you're free to do
that, or pfree earlier. If in future we create more things that can
live in DSA memory, I'd like them to be similarly free-and-easy. Make
sense?

In any case, this user of DHT remains attached for the backend's lifetime.

> +/*
> + * Look up an entry, given a key. Returns a pointer to an entry if one can be
> + * found with the given key. Returns NULL if the key is not found. If a
> + * non-NULL value is returned, the entry is locked and must be released by
> + * calling dht_release. If an error is raised before dht_release is called,
> + * the lock will be released automatically, but the caller must take care to
> + * ensure that the entry is not left corrupted. The lock mode is either
> + * shared or exclusive depending on 'exclusive'.
>
> This API seems a bit fragile.

Do you mean "... the caller must take care to ensure that the entry is
not left corrupted"? This is the same as anything protected by
LWLocks including shared buffers. If you error out, locks are
released and you had better not have left things in a bad state. I
guess this comment is really just about what C++ people call "basic
exception safety".

Or something else?

> +/*
> + * Returns a pointer to an exclusively locked item which must be released with
> + * dht_release. If the key is found in the hash table, 'found' is set to true
> + * and a pointer to the existing entry is returned. If the key is not found,
> + * 'found' is set to false, and a pointer to a newly created entry is related.
>
> "is related"?

Fixed.

> + */
> +void *
> +dht_find_or_insert(dht_hash_table *hash_table,
> + const void *key,
> + bool *found)
> +{
> + size_t hash;
> + size_t partition_index;
> + dht_partition *partition;
> + dht_hash_table_item *item;
> +
> + hash = hash_table->params.hash_function(key, hash_table->params.key_size);
> + partition_index = PARTITION_FOR_HASH(hash);
> + partition = &hash_table->control->partitions[partition_index];
> +
> + Assert(hash_table->control->magic == DHT_MAGIC);
> + Assert(!hash_table->exclusively_locked);
>
> Why just exclusively locked? Why'd shared be ok?

It wouldn't be OK. I just didn't have the state required to assert
that. Fixed.

I think in future it should be allowed to lock more than one partition
(conceptually more than one entry) at a time, but only after figuring
out a decent API to support doing that in deadlock-avoiding order. I
don't have a need or a plan for that yet. For the same reason it's
not OK to use dht_find[_or_insert] while any iterator has locked a
partition, which wasn't documented (is now) and isn't currently
assertable.
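
(For what it's worth, the standard deadlock-avoiding recipe would be to
acquire partition locks in ascending index order, something like the
following hypothetical sketch -- not part of the patch:)

size_t		a = PARTITION_FOR_HASH(hash1);
size_t		b = PARTITION_FOR_HASH(hash2);

if (a > b)
{
	size_t		tmp = a;

	a = b;
	b = tmp;
}
LWLockAcquire(PARTITION_LOCK(hash_table, a), LW_EXCLUSIVE);
if (b != a)
	LWLockAcquire(PARTITION_LOCK(hash_table, b), LW_EXCLUSIVE);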

> +/*
> + * Unlock an entry which was locked by dht_find or dht_find_or_insert.
> + */
> +void
> +dht_release(dht_hash_table *hash_table, void *entry)
> +{
> + dht_hash_table_item *item = ITEM_FROM_ENTRY(entry);
> + size_t partition_index = PARTITION_FOR_HASH(item->hash);
> + bool deferred_resize_work = false;
> +
> + Assert(hash_table->control->magic == DHT_MAGIC);
>
> Assert lock held (LWLockHeldByMe())

Added this, and a couple more.

> +/*
> + * Begin iterating through the whole hash table. The caller must supply a
> + * dht_iterator object, which can then be used to call dht_iterate_next to get
> + * values until the end is reached.
> + */
> +void
> +dht_iterate_begin(dht_hash_table *hash_table,
> + dht_iterator *iterator,
> + bool exclusive)
> +{
> + Assert(hash_table->control->magic == DHT_MAGIC);
> +
> + iterator->hash_table = hash_table;
> + iterator->exclusive = exclusive;
> + iterator->partition = 0;
> + iterator->bucket = 0;
> + iterator->item = NULL;
> + iterator->last_item_pointer = InvalidDsaPointer;
> + iterator->locked = false;
> +
> + /* Snapshot the size (arbitrary lock to prevent size changing). */
> + LWLockAcquire(PARTITION_LOCK(hash_table, 0), LW_SHARED);
> + ensure_valid_bucket_pointers(hash_table);
> + iterator->table_size_log2 = hash_table->size_log2;
> + LWLockRelease(PARTITION_LOCK(hash_table, 0));
>
> Hm. So we're introducing some additional contention on partition 0 -
> probably ok.

It would be cute to use MyProcPid % DHT_NUM_PARTITIONS, but that might
be a deadlock hazard if you have multiple iterators on the go at once.
Otherwise iterators only ever lock partitions in order.

> +/*
> + * Move to the next item in the hash table. Returns a pointer to an entry, or
> + * NULL if the end of the hash table has been reached. The item is locked in
> + * exclusive or shared mode depending on the argument given to
> + * dht_iterate_begin. The caller can optionally release the lock by calling
> + * dht_iterate_release, and then call dht_iterate_next again to move to the
> + * next entry. If the iteration is in exclusive mode, client code can also
> + * call dht_iterate_delete. When the end of the hash table is reached, or at
> + * any time, the client may call dht_iterate_end to abandon iteration.
> + */
>
> I'd just shorten the end to "at any time the client may call
> dht_iterate_end to ..."

Done.

> [snip]
> +
> +/*
> + * Print out debugging information about the internal state of the hash table.
> + */
> +void
> +dht_dump(dht_hash_table *hash_table)
> +{
> + size_t i;
> + size_t j;
> +
> + Assert(hash_table->control->magic == DHT_MAGIC);
> +
> + for (i = 0; i < DHT_NUM_PARTITIONS; ++i)
> + LWLockAcquire(PARTITION_LOCK(hash_table, i), LW_SHARED);
>
> Should probably assert & document that no locks are held - otherwise
> there's going to be ugly deadlocks. And that's an unlikely thing to try.

OK.

> [snip]
> +}
>
> I'd put this below actual "production" code.

Done.

On Tue, Aug 1, 2017 at 9:08 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Hi,
>
> diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
> index 9fd7b4e019b..97c0125a4ba 100644
> --- a/src/backend/access/common/tupdesc.c
> +++ b/src/backend/access/common/tupdesc.c
> @@ -337,17 +337,75 @@ DecrTupleDescRefCount(TupleDesc tupdesc)
> {
> Assert(tupdesc->tdrefcount > 0);
>
> - ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
> + if (CurrentResourceOwner != NULL)
> + ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
> if (--tupdesc->tdrefcount == 0)
> FreeTupleDesc(tupdesc);
> }
>
> What's this about? CurrentResourceOwner should always be valid here, no?
> If so, why did that change? I don't think it's good to detach this from
> the resowner infrastructure...

The reason is that I install a detach hook
shared_record_typmod_registry_detach() in worker processes to clear
out their typmod registry. It runs at a time when there is no
CurrentResourceOwner. It's a theoretical concern only today, because
workers are not reused. If a worker lingered in a waiting room and
then attached to a new session DSM from a different leader, it must
remember nothing of the previous leader's typmods.

> /*
> - * Compare two TupleDesc structures for logical equality
> + * Compare two TupleDescs' attributes for logical equality
> *
> * Note: we deliberately do not check the attrelid and tdtypmod fields.
> * This allows typcache.c to use this routine to see if a cached record type
> * matches a requested type, and is harmless for relcache.c's uses.
> + */
> +bool
> +equalTupleDescAttrs(Form_pg_attribute attr1, Form_pg_attribute attr2)
> +{
>
> comment not really accurate, this routine afaik isn't used by
> typcache.c?

I removed this whole hunk and left equalTupleDescs() alone, because I
no longer needed to make that change in this new version. See below.

> /*
> - * Magic numbers for parallel state sharing. Higher-level code should use
> - * smaller values, leaving these very large ones for use by this module.
> + * Magic numbers for per-context parallel state sharing. Higher-level code
> + * should use smaller values, leaving these very large ones for use by this
> + * module.
> */
> #define PARALLEL_KEY_FIXED UINT64CONST(0xFFFFFFFFFFFF0001)
> #define PARALLEL_KEY_ERROR_QUEUE UINT64CONST(0xFFFFFFFFFFFF0002)
> @@ -63,6 +74,16 @@
> #define PARALLEL_KEY_ACTIVE_SNAPSHOT UINT64CONST(0xFFFFFFFFFFFF0007)
> #define PARALLEL_KEY_TRANSACTION_STATE UINT64CONST(0xFFFFFFFFFFFF0008)
> #define PARALLEL_KEY_ENTRYPOINT UINT64CONST(0xFFFFFFFFFFFF0009)
> +#define PARALLEL_KEY_SESSION_DSM UINT64CONST(0xFFFFFFFFFFFF000A)
> +
> +/* Magic number for per-session DSM TOC. */
> +#define PARALLEL_SESSION_MAGIC 0xabb0fbc9
> +
> +/*
> + * Magic numbers for parallel state sharing in the per-session DSM area.
> + */
> +#define PARALLEL_KEY_SESSION_DSA UINT64CONST(0xFFFFFFFFFFFF0001)
> +#define PARALLEL_KEY_RECORD_TYPMOD_REGISTRY UINT64CONST(0xFFFFFFFFFFFF0002)
>
> Not this patch's fault, but this infrastructure really isn't great. We
> should really replace it with a shmem.h style infrastructure, using a
> dht hashtable as backing...

Well, I am trying to use the established programming style. We
already have a per-query DSM with a TOC indexed by magic numbers (and
executor node IDs). I add a per-session DSM with a TOC indexed by a
different set of magic numbers. We could always come up with
something better and fix it in both places later?
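
For context, the pattern in both places is just magic-number lookups in
a TOC; in a worker it comes out roughly like this (a sketch, error
handling omitted and signatures quoted from memory):

shm_toc    *toc;
dsa_area   *area;
void	   *registry;

toc = shm_toc_attach(PARALLEL_SESSION_MAGIC, dsm_segment_address(seg));
area = dsa_attach_in_place(shm_toc_lookup(toc, PARALLEL_KEY_SESSION_DSA,
										  false),
						   seg);
registry = shm_toc_lookup(toc, PARALLEL_KEY_RECORD_TYPMOD_REGISTRY, false);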

> +/* The current per-session DSM segment, if attached. */
> +static dsm_segment *current_session_segment = NULL;
> +
>
> I think it'd be better if we had a proper 'SessionState' and
> 'BackendSessionState' infrastructure that then contains the dsm segment
> etc. I think we'll otherwise just end up with a bunch of parallel
> infrastructures.

I'll have to come back to you on this one.

> +/*
> + * A mechanism for sharing record typmods between backends.
> + */
> +struct SharedRecordTypmodRegistry
> +{
> + dht_hash_table_handle atts_index_handle;
> + dht_hash_table_handle typmod_index_handle;
> + pg_atomic_uint32 next_typmod;
> +};
> +
>
> I think the code needs to explain better how these are intended to be
> used. IIUC, atts_index is used to find typmods by "identity", and
> typmod_index by the typmod, right? And we need both to avoid
> all workers generating different tupledescs, right? Kinda guessable by
> reading typcache.c, but that shouldn't be needed.

Fixed.

> +/*
> + * A flattened/serialized representation of a TupleDesc for use in shared
> + * memory. Can be converted to and from regular TupleDesc format. Doesn't
> + * support constraints and doesn't store the actual type OID, because this is
> + * only for use with RECORD types as created by CreateTupleDesc(). These are
> + * arranged into a linked list, in the hash table entry corresponding to the
> + * OIDs of the first 16 attributes, so we'd expect to get more than one entry
> + * in the list when named and other properties differ.
> + */
> +typedef struct SerializedTupleDesc
> +{
> + dsa_pointer next; /* next with the same attribute OIDs */
> + int natts; /* number of attributes in the tuple */
> + int32 typmod; /* typmod for tuple type */
> + bool hasoid; /* tuple has oid attribute in its header */
> +
> + /*
> + * The attributes follow. We only ever access the first
> + * ATTRIBUTE_FIXED_PART_SIZE bytes of each element, like the code in
> + * tupdesc.c.
> + */
> + FormData_pg_attribute attributes[FLEXIBLE_ARRAY_MEMBER];
> +} SerializedTupleDesc;
>
> Not a fan of a separate tupledesc representation, that's just going to
> lead to divergence over time. I think we should rather change the normal
> tupledesc representation to be compatible with this, and 'just' have a
> wrapper struct for the parallel case (with next and such).

OK. I killed this. Instead I flattened tupleDesc to make it usable
directly in shared memory as long as there are no constraints. There
is still a small wrapper SharedTupleDesc, but that's just to bolt a
'next' pointer to them so we can chain together TupleDescs with the
same OIDs.

The new 0001 patch replaces tupdesc->attrs[i]->foo with
TupleDescAttr(tupdesc, i)->foo everywhere in the tree, so that the
change from attrs[i] to &attrs[i] can be hidden.
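
Presumably the macro just goes from something like

#define TupleDescAttr(tupdesc, i) ((tupdesc)->attrs[(i)])

in 0001 to

#define TupleDescAttr(tupdesc, i) (&(tupdesc)->attrs[(i)])

once 0002 flattens the attributes into the struct (sketches; the actual
definitions may differ in detail).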

> +/*
> + * An entry in SharedRecordTypmodRegistry's attribute index. The key is the
> + * first REC_HASH_KEYS attribute OIDs. That means that collisions are
> + * possible, but that's OK because SerializedTupleDesc objects are arranged
> + * into a list.
> + */
>
> +/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
> +const static dht_parameters srtr_atts_index_params = {
> + sizeof(Oid) * REC_HASH_KEYS,
> + sizeof(SRTRAttsIndexEntry),
> + memcmp,
> + tag_hash,
> + LWTRANCHE_SHARED_RECORD_ATTS_INDEX
> +};
> +
> +/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
> +const static dht_parameters srtr_typmod_index_params = {
> + sizeof(uint32),
> + sizeof(SRTRTypmodIndexEntry),
> + memcmp,
> + tag_hash,
> + LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
> +};
> +
>
> I'm very much not a fan of this representation. I know you copied the
> logic, but I think it's a bad idea. I think the key should just be a
> dsa_pointer, and then we can have a proper tag_hash that hashes the
> whole thing, and a proper comparator too. Just have
>
> /*
> * Combine two hash values, resulting in another hash value, with decent bit
> * mixing.
> *
> * Similar to boost's hash_combine().
> */
> static inline uint32
> hash_combine(uint32 a, uint32 b)
> {
> a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
> return a;
> }
>
> and then hash everything.

Hmm. I'm not sure I understand. I know what hash_combine is for but
what do you mean when you say the key should just be a dsa_pointer?
What's wrong with providing the key size, whole entry size, compare
function and hash function like this?

> +/*
> + * Make sure that RecordCacheArray is large enough to store 'typmod'.
> + */
> +static void
> +ensure_record_cache_typmod_slot_exists(int32 typmod)
> +{
> + if (RecordCacheArray == NULL)
> + {
> + RecordCacheArray = (TupleDesc *)
> + MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(TupleDesc));
> + RecordCacheArrayLen = 64;
> + }
> +
> + if (typmod >= RecordCacheArrayLen)
> + {
> + int32 newlen = RecordCacheArrayLen * 2;
> +
> + while (typmod >= newlen)
> + newlen *= 2;
> +
> + RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray,
> + newlen * sizeof(TupleDesc));
> + memset(RecordCacheArray + RecordCacheArrayLen, 0,
> + (newlen - RecordCacheArrayLen) * sizeof(TupleDesc *));
> + RecordCacheArrayLen = newlen;
> + }
> +}
>
> Do we really want to keep this? Could just have an equivalent dynahash
> for the non-parallel case?

Hmm. Well the plain old array makes a lot of sense in the
non-parallel case, since we allocate typmods starting from zero. What
don't you like about it? The reason for using an array for
backend-local lookup (aside from "that's how it is already") is that
it's actually the best data structure for the job; the reason for
using a hash table in the shared case is that it gives you locking and
coordinates growth for free. (For the OID index it has to be a hash
table in both cases.)

> /*
> * lookup_rowtype_tupdesc_internal --- internal routine to lookup a rowtype
> @@ -1229,15 +1347,49 @@ lookup_rowtype_tupdesc_internal(Oid type_id, int32 typmod, bool noError)
> /*
> * It's a transient record type, so look in our record-type table.
> */
> - if (typmod < 0 || typmod >= NextRecordTypmod)
> + if (typmod >= 0)
> {
> - if (!noError)
> - ereport(ERROR,
> - (errcode(ERRCODE_WRONG_OBJECT_TYPE),
> - errmsg("record type has not been registered")));
> - return NULL;
> + /* It is already in our local cache? */
> + if (typmod < RecordCacheArrayLen &&
> + RecordCacheArray[typmod] != NULL)
> + return RecordCacheArray[typmod];
> +
> + /* Are we attached to a SharedRecordTypmodRegistry? */
> + if (CurrentSharedRecordTypmodRegistry.shared != NULL)
>
> Why do we want to do lookups in both? I don't think it's a good idea to
> have a chance that you could have the same typmod in both the local
> registry (because it'd been created before the shared one) and in the
> shared (because it was created in a worker). Ah, that's for caching
> purposes? If so, see my above point that we shouldn't have a serialized
> version of typdesc (yesyes, constraints will be a bit ugly).

Right, that's what I've now done. It's basically a write-through
cache: we'll try to find it in the backend local structures and then
fall back to the shared one. But if we find it in shared memory,
we'll just copy the pointer into our local data structures.

In the last version I'd build a new TupleDesc from the serialized
form, but now there is no serialized form, just TupleDesc objects
which are now shmem-friendly (except for constraints, which do not
survive the matter transfer into shmem; see TupleDescCopy).
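
In pseudo-C, the lookup path now goes roughly like this (a sketch;
find_shared_record_typmod is an invented name):

/* Local cache first. */
if (typmod < RecordCacheArrayLen && RecordCacheArray[typmod] != NULL)
	return RecordCacheArray[typmod];

/* Fall back to the shared registry, caching the pointer locally. */
if (CurrentSharedRecordTypmodRegistry.shared != NULL)
{
	TupleDesc	tupdesc = find_shared_record_typmod(typmod);

	if (tupdesc != NULL)
	{
		ensure_record_cache_typmod_slot_exists(typmod);
		RecordCacheArray[typmod] = tupdesc;		/* write-through */
		return tupdesc;
	}
}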

> + /*
> + * While we still hold the atts_index entry locked, add this to
> + * typmod_index. That's important because we don't want anyone to be able
> + * to find a typmod via the former that can't yet be looked up in the
> + * latter.
> + */
> + typmod_index_entry =
> + dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
> + &typmod, &found);
> + if (found)
> + elog(ERROR, "cannot create duplicate shared record typmod");
> + typmod_index_entry->typmod = typmod;
> + typmod_index_entry->serialized_tupdesc = serialized_dp;
> + dht_release(CurrentSharedRecordTypmodRegistry.typmod_index,
> + typmod_index_entry);
>
> What if we fail to allocate memory for the entry in typmod_index?

Well, I was careful to make sure that it was not pushed onto the list
in the atts_index entry until after we'd successfully allocated
entries in both indexes, so there was no way to exit from this
function leaving a TupleDesc in one index but not the other. In other
words, it might have created an atts_index entry, but it'd have an
empty list. But yeah, on reflection we shouldn't leak shared_dp in
that case. In this version I added PG_CATCH() to dsa_free() it and
PG_RE_THROW().
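
Shaped like this (a sketch; the 'area' member name is an assumption):

PG_TRY();
{
	/* ... create the typmod_index entry, as quoted above ... */
}
PG_CATCH();
{
	/* Don't leak the shared TupleDesc if the insertion fails. */
	dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
	PG_RE_THROW();
}
PG_END_TRY();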

More testing and review needed.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v4.patchset.tgz application/x-gzip 74.3 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-11 15:14:44
Message-ID: CA+TgmoYFA5ojiYdWx_K4ToGvZec1MyYDMREZwTXo+GqyvPdMvQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Aug 11, 2017 at 4:39 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> OK. Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
> shared". Better? If we were to do other data structures in DSA
> memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
> ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
> etc.
>
> Do you want to see a separate patch to rename dsa.c? Got a better
> name? You could have spoken up earlier :-) It does sound like a bit
> like the thing from crypto or perhaps a scary secret government
> department.

I doubt that we really want to have accessor functions with names like
dynamic_shared_hash_table_insert or ds_hash_table_insert. Long names
are fine, even desirable, for APIs that aren't too widely used,
because they're relatively self-documenting, but a 30-character
function name gets annoying in a hurry if you have to call it very
often, and this is intended to be reusable for other things that want
a dynamic shared memory hash table. I think we should (a) pick some
reasonably short prefix for all the function names, like dht or dsht
or ds_hash, but not ds_hash_table or dynamic_shared_hash_table and (b)
also use that prefix as the name for the .c and .h files.

Right now, we've got a situation where the most widely-used hash table
implementation uses dynahash.c for the code, hsearch.h for the
interface, and "hash" as the prefix for the names, and that's really
hard to remember. I think having a consistent naming scheme
throughout would be a lot better.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-12 00:17:05
Message-ID: 20170812001705.npckbi2ebd4ifevw@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-11 11:14:44 -0400, Robert Haas wrote:
> On Fri, Aug 11, 2017 at 4:39 AM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> > OK. Now it's ds_hash_table.{c,h}, where "ds" stands for "dynamic
> > shared". Better? If we were to do other data structures in DSA
> > memory they could follow that style: ds_red_black_tree.c, ds_vector.c,
> > ds_deque.c etc and their identifier prefix would be drbt_, dv_, dd_
> > etc.
> >
> > Do you want to see a separate patch to rename dsa.c? Got a better
> > name? You could have spoken up earlier :-) It does sound like a bit
> > like the thing from crypto or perhaps a scary secret government
> > department.

I, and I bet a lot of other people, kind of missed dsa being merged for
a while...

> I doubt that we really want to have accessor functions with names like
> dynamic_shared_hash_table_insert or ds_hash_table_insert. Long names
> are fine, even desirable, for APIs that aren't too widely used,
> because they're relatively self-documenting, but a 30-character
> function name gets annoying in a hurry if you have to call it very
> often, and this is intended to be reusable for other things that want
> a dynamic shared memory hash table. I think we should (a) pick some
> reasonably short prefix for all the function names, like dht or dsht
> or ds_hash, but not ds_hash_table or dynamic_shared_hash_table and (b)
> also use that prefix as the name for the .c and .h files.

Yea, I agree with this. Something dsmhash_{insert,...}... seems like
it'd kinda work without being too ambiguous like dht imo is, while still
being reasonably short.

> Right now, we've got a situation where the most widely-used hash table
> implementation uses dynahash.c for the code, hsearch.h for the
> interface, and "hash" as the prefix for the names, and that's really
> hard to remember. I think having a consistent naming scheme
> throughout would be a lot better.

Yea, that situation still occasionally confuses me, a good 10 years
after starting to look at pg... There's even a dynahash.h, except
it's useless. And dynahash.c doesn't even include hsearch.h directly
(included via shmem.h)! Personally I'd actually be in favor of moving
hsearch.h stuff into dynahash.h and leaving hsearch as a wrapper.

- Andres


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-12 01:55:04
Message-ID: 20170812015504.rgq7ljlwanhslrll@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2017-08-11 20:39:13 +1200, Thomas Munro wrote:
> Please find attached a new patch series. I apologise in advance for
> 0001 and note that the patchset now weighs in at ~75kB compressed.
> Here are my in-line replies to your two reviews:

Replying to a few points here, then I'll do a pass through your
submission...

> On Tue, Jul 25, 2017 at 10:09 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > It does concern me that we're growing yet another somewhat different
> > hashtable implementation. Yet I don't quite see how we could avoid
> > it. dynahash relies on proper pointers, simplehash doesn't do locking
> > (and shouldn't) and also relies on pointers, although to a much lesser
> > degree. All the open coded tables aren't a good match either. So I
> > don't quite see an alternative, but I'd love one.
>
> Yeah, I agree. To deal with data structures with different pointer
> types, locking policy, inlined hash/eq functions etc, perhaps there is
> a way we could eventually do 'policy based design' using the kind of
> macro trickery you started where we generate N different hash table
> variations but only have to maintain source code for one chaining hash
> table implementation? Or perl scripts that effectively behave as a
> cfront^H^H^H nevermind. I'm not planning to investigate that for this
> cycle.

Whaaa, what have I done!!!! But more seriously, I'm doubtful it's worth
going there.

> > + * level. However, when a resize operation begins, all partition locks must
> > + * be acquired simultaneously for a brief period. This is only expected to
> > + * happen a small number of times until a stable size is found, since growth is
> > + * geometric.
> >
> > I'm a bit doubtful that we need partitioning at this point, and that it
> > doesn't actually *degrade* performance for your typmod case.
>
> Yeah, partitioning not needed for this case, but this is supposed to
> be more generally useful. I thought about making the number of
> partitions a construction parameter, but it doesn't really hurt does
> it?

Well, using multiple locks and such certainly isn't free. An exclusively
owned cacheline mutex is nearly an order of magnitude faster than one
that's currently shared, not to speak of modified. Also it does increase
the size overhead, which might end up happening for a few other cases.

> > + * Resizing is done incrementally so that no individual insert operation pays
> > + * for the potentially large cost of splitting all buckets.
> >
> > I'm not sure this is a reasonable tradeoff for the use-case suggested so
> > far, it doesn't exactly make things simpler. We're not going to grow
> > much.
>
> Yeah, designed to be more generally useful. Are you saying you would
> prefer to see the DHT patch split into an initial submission that does
> the simplest thing possible, so that the unlucky guy who causes the
> hash table to grow has to do all the work of moving buckets to a
> bigger hash table? Then we could move the more complicated
> incremental growth stuff to a later patch.

Well, most of the potential use cases for dsmhash I've heard about so
far don't actually benefit much from incremental growth. In nearly all
the implementations I've seen incremental move ends up requiring more
total cycles than doing it at once, and for parallelism type usecases
the stall isn't really an issue. So yes, I think this is something
worth considering. If we were to actually use DHT for shared caches or
such, this'd be different, but that seems darned far off.

> This is complicated, and in the category that I would normally want a
> stack of heavy unit tests for. If you don't feel like making
> decisions about this now, perhaps iteration (and incremental resize?)
> could be removed, leaving only the most primitive get/put hash table
> facilities -- just enough for this purpose? Then a later patch could
> add them back, with a set of really convincing unit tests...

I'm inclined to go for that, yes.

> > +/*
> > + * Detach from a hash table. This frees backend-local resources associated
> > + * with the hash table, but the hash table will continue to exist until it is
> > + * either explicitly destroyed (by a backend that is still attached to it), or
> > + * the area that backs it is returned to the operating system.
> > + */
> > +void
> > +dht_detach(dht_hash_table *hash_table)
> > +{
> > + /* The hash table may have been destroyed. Just free local memory. */
> > + pfree(hash_table);
> > +}
> >
> > Somewhat inclined to add debugging refcount - seems like bugs around
> > that might be annoying to find. Maybe also add an assert ensuring that
> > no locks are held?
>
> Added an assert that no locks are held.
>
> In an earlier version I had reference counts. Then I realised that it
> wasn't really helping anything. The state of being 'attached' to a
> dht_hash_table isn't really the same as holding a heavyweight resource
> like a DSM segment or a file which is backed by kernel resources.
> 'Attaching' is just something you have to do to get a backend-local
> palloc()-ated object required to interact with the hash table, and
> since it's just a bit of memory there is no strict requirement to
> detach from it, if you're happy to let MemoryContext do that for you.
> To put it in GC terms, there is no important finalizer here. Here I
> am making the same distinction that we make between stuff managed by
> resowner.c (files etc) and stuff managed by MemoryContext (memory); in
> the former case it's an elog()-gable offence not to close things
> explicitly in non-error paths, but in the latter you're free to do
> that, or pfree earlier. If in future we create more things that can
> live in DSA memory, I'd like them to be similarly free-and-easy. Make
> sense?

I don't quite follow. You're saying that just because there could be
local bugs (which'd easily be found via the mcxt/aset debugging stuff
and/or valgrind) making sure about the shared resources still being
there isn't useful? I don't quite find that convincing...

> > +/*
> > + * Look up an entry, given a key. Returns a pointer to an entry if one can be
> > + * found with the given key. Returns NULL if the key is not found. If a
> > + * non-NULL value is returned, the entry is locked and must be released by
> > + * calling dht_release. If an error is raised before dht_release is called,
> > + * the lock will be released automatically, but the caller must take care to
> > + * ensure that the entry is not left corrupted. The lock mode is either
> > + * shared or exclusive depending on 'exclusive'.
> >
> > This API seems a bit fragile.
>
> Do you mean "... the caller must take care to ensure that the entry is
> not left corrupted"?

Yes.

> This is the same as anything protected by LWLocks including shared
> buffers. If you error out, locks are released and you had better not
> have left things in a bad state. I guess this comment is really just
> about what C++ people call "basic exception safety".

Kind of. Although it's not impossible to make this bit less error
prone, e.g. by zeroing the entry before returning.

Now that I think about it, it's possibly also worthwhile to note that
any iterators and such are invalid after errors (given that ->locked is
going to be wrong)?

> > diff --git a/src/backend/access/common/tupdesc.c b/src/backend/access/common/tupdesc.c
> > index 9fd7b4e019b..97c0125a4ba 100644
> > --- a/src/backend/access/common/tupdesc.c
> > +++ b/src/backend/access/common/tupdesc.c
> > @@ -337,17 +337,75 @@ DecrTupleDescRefCount(TupleDesc tupdesc)
> > {
> > Assert(tupdesc->tdrefcount > 0);
> >
> > - ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
> > + if (CurrentResourceOwner != NULL)
> > + ResourceOwnerForgetTupleDesc(CurrentResourceOwner, tupdesc);
> > if (--tupdesc->tdrefcount == 0)
> > FreeTupleDesc(tupdesc);
> > }
> >
> > What's this about? CurrentResourceOwner should always be valid here, no?
> > If so, why did that change? I don't think it's good to detach this from
> > the resowner infrastructure...
>
> The reason is that I install a detach hook
> shared_record_typmod_registry_detach() in worker processes to clear
> out their typmod registry. It runs at a time when there is no
> CurrentResourceOwner. It's a theoretical concern only today, because
> workers are not reused. If a worker lingered in a waiting room and
> then attached to a new session DSM from a different leader, it must
> remember nothing of the previous leader's typmods.

Hm. I'm not sure I like ad hoc code like this. Wouldn't it be better to
have a 'per worker' rather than 'per transaction' resowner for things
like this? Otherwise we end up with various data structures to keep track
of things.

> > /*
> > - * Magic numbers for parallel state sharing. Higher-level code should use
> > - * smaller values, leaving these very large ones for use by this module.
> > + * Magic numbers for per-context parallel state sharing. Higher-level code
> > + * should use smaller values, leaving these very large ones for use by this
> > + * module.
> > */
> > #define PARALLEL_KEY_FIXED UINT64CONST(0xFFFFFFFFFFFF0001)
> > #define PARALLEL_KEY_ERROR_QUEUE UINT64CONST(0xFFFFFFFFFFFF0002)
> > @@ -63,6 +74,16 @@
> > #define PARALLEL_KEY_ACTIVE_SNAPSHOT UINT64CONST(0xFFFFFFFFFFFF0007)
> > #define PARALLEL_KEY_TRANSACTION_STATE UINT64CONST(0xFFFFFFFFFFFF0008)
> > #define PARALLEL_KEY_ENTRYPOINT UINT64CONST(0xFFFFFFFFFFFF0009)
> > +#define PARALLEL_KEY_SESSION_DSM UINT64CONST(0xFFFFFFFFFFFF000A)
> > +
> > +/* Magic number for per-session DSM TOC. */
> > +#define PARALLEL_SESSION_MAGIC 0xabb0fbc9
> > +
> > +/*
> > + * Magic numbers for parallel state sharing in the per-session DSM area.
> > + */
> > +#define PARALLEL_KEY_SESSION_DSA UINT64CONST(0xFFFFFFFFFFFF0001)
> > +#define PARALLEL_KEY_RECORD_TYPMOD_REGISTRY UINT64CONST(0xFFFFFFFFFFFF0002)
> >
> > Not this patch's fault, but this infrastructure really isn't great. We
> > should really replace it with a shmem.h style infrastructure, using a
> > dht hashtable as backing...
>
> Well, I am trying to use the established programming style. We
> already have a per-query DSM with a TOC indexed by magic numbers (and
> executor node IDs). I add a per-session DSM with a TOC indexed by a
> different set of magic numbers. We could always come up with
> something better and fix it in both places later?

I guess so. I know Robert is a bit tired of me harping about this, but I
really don't think this is great...

> > +/*
> > + * A flattened/serialized representation of a TupleDesc for use in shared
> > + * memory. Can be converted to and from regular TupleDesc format. Doesn't
> > + * support constraints and doesn't store the actual type OID, because this is
> > + * only for use with RECORD types as created by CreateTupleDesc(). These are
> > + * arranged into a linked list, in the hash table entry corresponding to the
> > + * OIDs of the first 16 attributes, so we'd expect to get more than one entry
> > + * in the list when named and other properties differ.
> > + */
> > +typedef struct SerializedTupleDesc
> > +{
> > + dsa_pointer next; /* next with the same attribute OIDs */
> > + int natts; /* number of attributes in the tuple */
> > + int32 typmod; /* typmod for tuple type */
> > + bool hasoid; /* tuple has oid attribute in its header */
> > +
> > + /*
> > + * The attributes follow. We only ever access the first
> > + * ATTRIBUTE_FIXED_PART_SIZE bytes of each element, like the code in
> > + * tupdesc.c.
> > + */
> > + FormData_pg_attribute attributes[FLEXIBLE_ARRAY_MEMBER];
> > +} SerializedTupleDesc;
> >
> > Not a fan of a separate tupledesc representation, that's just going to
> > lead to divergence over time. I think we should rather change the normal
> > tupledesc representation to be compatible with this, and 'just' have a
> > wrapper struct for the parallel case (with next and such).
>
> OK. I killed this. Instead I flattened tupleDesc to make it usable
> directly in shared memory as long as there are no constraints. There
> is still a small wrapper SharedTupleDesc, but that's just to bolt a
> 'next' pointer to them so we can chain together TupleDescs with the
> same OIDs.

Yep, that makes sense.

> > +/*
> > + * An entry in SharedRecordTypmodRegistry's attribute index. The key is the
> > + * first REC_HASH_KEYS attribute OIDs. That means that collisions are
> > + * possible, but that's OK because SerializedTupleDesc objects are arranged
> > + * into a list.
> > + */
> >
> > +/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
> > +const static dht_parameters srtr_atts_index_params = {
> > + sizeof(Oid) * REC_HASH_KEYS,
> > + sizeof(SRTRAttsIndexEntry),
> > + memcmp,
> > + tag_hash,
> > + LWTRANCHE_SHARED_RECORD_ATTS_INDEX
> > +};
> > +
> > +/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
> > +const static dht_parameters srtr_typmod_index_params = {
> > + sizeof(uint32),
> > + sizeof(SRTRTypmodIndexEntry),
> > + memcmp,
> > + tag_hash,
> > + LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
> > +};
> > +
> >
> > I'm very much not a fan of this representation. I know you copied the
> > logic, but I think it's a bad idea. I think the key should just be a
> > dsa_pointer, and then we can have a proper tag_hash that hashes the
> > whole thing, and a proper comparator too. Just have
> >
> > /*
> > * Combine two hash values, resulting in another hash value, with decent bit
> > * mixing.
> > *
> > * Similar to boost's hash_combine().
> > */
> > static inline uint32
> > hash_combine(uint32 a, uint32 b)
> > {
> > a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
> > return a;
> > }
> >
> > and then hash everything.
>
> Hmm. I'm not sure I understand. I know what hash_combine is for but
> what do you mean when you say the key should just be a dsa_pointer?

> What's wrong with providing the key size, whole entry size, compare
> function and hash function like this?

Well, right now the key is "sizeof(Oid) * REC_HASH_KEYS" which imo is
fairly ugly. Both because it wastes space for narrow cases, and because
it leads to conflicts for wide ones. By having a dsa_pointer as a key
and custom hash/compare functions there's no need for that, you can just
compute the hash based on all keys, and compare based on all keys.

> > +/*
> > + * Make sure that RecordCacheArray is large enough to store 'typmod'.
> > + */
> > +static void
> > +ensure_record_cache_typmod_slot_exists(int32 typmod)
> > +{
> > + if (RecordCacheArray == NULL)
> > + {
> > + RecordCacheArray = (TupleDesc *)
> > + MemoryContextAllocZero(CacheMemoryContext, 64 * sizeof(TupleDesc));
> > + RecordCacheArrayLen = 64;
> > + }
> > +
> > + if (typmod >= RecordCacheArrayLen)
> > + {
> > + int32 newlen = RecordCacheArrayLen * 2;
> > +
> > + while (typmod >= newlen)
> > + newlen *= 2;
> > +
> > + RecordCacheArray = (TupleDesc *) repalloc(RecordCacheArray,
> > + newlen * sizeof(TupleDesc));
> > + memset(RecordCacheArray + RecordCacheArrayLen, 0,
> > + (newlen - RecordCacheArrayLen) * sizeof(TupleDesc *));
> > + RecordCacheArrayLen = newlen;
> > + }
> > +}
> >
> > Do we really want to keep this? Could just have an equivalent dynahash
> > for the non-parallel case?
>
> Hmm. Well the plain old array makes a lot of sense in the
> non-parallel case, since we allocate typmods starting from zero. What
> don't you like about it? The reason for using an array for
> backend-local lookup (aside from "that's how it is already") is that
> it's actually the best data structure for the job; the reason for
> using a hash table in the shared case is that it gives you locking and
> coordinates growth for free. (For the OID index it has to be a hash
> table in both cases.)

Well, that reason kinda vanished after the parallelism introduction, no?
There's no guarantee at all anymore that it's gapless - it's perfectly
possible that each worker ends up with a distinct set of ids.

Greetings,

Andres Freund


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-12 22:14:09
Message-ID: CAEepm=34GVhOL+arUx56yx7OPk7=qpGsv3CpO54feqjAwQKm5g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thanks for your feedback. Here are two parts that jumped out at me.
I'll address the other parts in a separate email.

On Sat, Aug 12, 2017 at 1:55 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> This is complicated, and in the category that I would normally want a
>> stack of heavy unit tests for. If you don't feel like making
>> decisions about this now, perhaps iteration (and incremental resize?)
>> could be removed, leaving only the most primitive get/put hash table
>> facilities -- just enough for this purpose? Then a later patch could
>> add them back, with a set of really convincing unit tests...
>
> I'm inclined to go for that, yes.

I will make it so.

>> > +/*
>> > + * An entry in SharedRecordTypmodRegistry's attribute index. The key is the
>> > + * first REC_HASH_KEYS attribute OIDs. That means that collisions are
>> > + * possible, but that's OK because SerializedTupleDesc objects are arranged
>> > + * into a list.
>> > + */
>> >
>> > +/* Parameters for SharedRecordTypmodRegistry's attributes hash table. */
>> > +const static dht_parameters srtr_atts_index_params = {
>> > + sizeof(Oid) * REC_HASH_KEYS,
>> > + sizeof(SRTRAttsIndexEntry),
>> > + memcmp,
>> > + tag_hash,
>> > + LWTRANCHE_SHARED_RECORD_ATTS_INDEX
>> > +};
>> > +
>> > +/* Parameters for SharedRecordTypmodRegistry's typmod hash table. */
>> > +const static dht_parameters srtr_typmod_index_params = {
>> > + sizeof(uint32),
>> > + sizeof(SRTRTypmodIndexEntry),
>> > + memcmp,
>> > + tag_hash,
>> > + LWTRANCHE_SHARED_RECORD_TYPMOD_INDEX
>> > +};
>> > +
>> >
>> > I'm very much not a fan of this representation. I know you copied the
>> > logic, but I think it's a bad idea. I think the key should just be a
>> > dsa_pointer, and then we can have a proper tag_hash that hashes the
>> > whole thing, and a proper comparator too. Just have
>> >
>> > /*
>> > * Combine two hash values, resulting in another hash value, with decent bit
>> > * mixing.
>> > *
>> > * Similar to boost's hash_combine().
>> > */
>> > static inline uint32
>> > hash_combine(uint32 a, uint32 b)
>> > {
>> > a ^= b + 0x9e3779b9 + (a << 6) + (a >> 2);
>> > return a;
>> > }
>> >
>> > and then hash everything.
>>
>> Hmm. I'm not sure I understand. I know what hash_combine is for but
>> what do you mean when you say the key should just be a dsa_pointer?
>
>> What's wrong with providing the key size, whole entry size, compare
>> function and hash function like this?
>
> Well, right now the key is "sizeof(Oid) * REC_HASH_KEYS" which imo is
> fairly ugly. Both because it wastes space for narrow cases, and because
> it leads to conflicts for wide ones. By having a dsa_pointer as a key
> and custom hash/compare functions there's no need for that, you can just
> compute the hash based on all keys, and compare based on all keys.

Ah, that. Yeah, it is ugly, both in the pre-existing code and in my
patch. Stepping back from this a bit more, the true key here is not an
array of Oid at all (whether fixed-size or variable). It's actually
a whole TupleDesc, because this is really a TupleDesc intern pool:
given a TupleDesc, please give me the canonical TupleDesc equal to
this one. You might call it a hash set rather than a hash table
(key->value associative).

Ideally, we'd get rid of the ugly REC_HASH_KEYS-sized key and the ugly
extra conflict chain, and tupdesc.c would have a hashTupleDesc()
function that is compatible with equalTupleDescs(). Then the hash
table entry would simply be a TupleDesc (that is, a pointer).

There is an extra complication when we use DSA memory though: If you
have a hash table (set) full of dsa_pointer to struct tupleDesc but
want to be able to search it given a TupleDesc (= backend local
pointer) then you have to do some extra work. I think that work is:
the hash table entries should be a small struct that has a union {
dsa_pointer, TupleDesc } and a discriminator field to say which it is,
and the hash + eq functions should be wrappers that follow dsa_pointer
if needed and then forward to hashTupleDesc() (a function that does
hash_combine() over the Oids) and equalTupleDescs().
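
Concretely, I'm imagining something like this (a sketch; all names
hypothetical):

/*
 * A hash table entry: either a pointer to a TupleDesc in DSA memory, or
 * a backend-local TupleDesc used transiently when probing.
 */
typedef struct SharedTupleDescEntry
{
	bool		is_shared;		/* discriminator */
	union
	{
		dsa_pointer shared;		/* canonical TupleDesc in DSA memory */
		TupleDesc	local;		/* backend-local TupleDesc, for lookups */
	}			u;
} SharedTupleDescEntry;

/* The hash/eq wrappers would resolve the union to a TupleDesc... */
static TupleDesc
resolve_tupledesc(const SharedTupleDescEntry *entry, dsa_area *area)
{
	if (entry->is_shared)
		return (TupleDesc) dsa_get_address(area, entry->u.shared);
	return entry->u.local;
}

/* ... and then forward to hashTupleDesc()/equalTupleDescs(), where
 * hashTupleDesc() does hash_combine() over the attribute OIDs. */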

(That complication might not exist if tupleDesc were fixed size and
could be directly in the hash table entry, but in the process of
flattening it (= holding the attributes in it) I made it variable
size, so we have to use a pointer to it in the hash table since both
DynaHash and DHT work with fixed size entries).

Thoughts?

--
Thomas Munro
http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-13 02:52:57
Message-ID: CA+Tgmob2uHcZZiBocOzjbZGdgwmL4KOvzaBvt6w0zga-JXZbEg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Well, most of the potential use cases for dsmhash I've heard about so
> far don't actually benefit much from incremental growth. In nearly all
> the implementations I've seen incremental move ends up requiring more
> total cycles than doing it at once, and for parallelism type usecases
> the stall isn't really an issue. So yes, I think this is something
> worth considering. If we were to actually use DHT for shared caches or
> such, this'd be different, but that seems darned far off.

I think it'd be pretty interesting to look at replacing parts of the
stats collector machinery with something DHT-based.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-13 03:30:29
Message-ID: 20170813033029.h7puphqj7nz5t5sg@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-12 22:52:57 -0400, Robert Haas wrote:
> On Fri, Aug 11, 2017 at 9:55 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > Well, most of the potential use cases for dsmhash I've heard about so
> > far don't actually benefit much from incremental growth. In nearly all
> > the implementations I've seen incremental move ends up requiring more
> > total cycles than doing it at once, and for parallelism type usecases
> > the stall isn't really an issue. So yes, I think this is something
> > worth considering. If we were to actually use DHT for shared caches or
> > such, this'd be different, but that seems darned far off.
>
> I think it'd be pretty interesting to look at replacing parts of the
> stats collector machinery with something DHT-based.

That seems to involve a lot more than this though, given that currently
the stats collector data doesn't entirely have to be in memory. I've
seen sites with a lot of databases with quite some per-database stats
data. Don't think we can just require that to be in memory :(

- Andres


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-13 03:32:45
Message-ID: CA+TgmobX6ziAzK=LhZmce0nBD2_XQU-hWQaGRa4fgf15Hdek=w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> That seems to involve a lot more than this though, given that currently
> the stats collector data doesn't entirely have to be in memory. I've
> seen sites with a lot of databases with quite some per-database stats
> data. Don't think we can just require that to be in memory :(

Hmm. I'm not sure it wouldn't end up being *less* memory. Don't we
end up caching 1 copy of it per backend, at least for the database to
which that backend is connected? Accessing a shared copy would avoid
that sort of thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-13 03:37:27
Message-ID: 18214.1502595447@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Sat, Aug 12, 2017 at 11:30 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> That seems to involve a lot more than this though, given that currently
>> the stats collector data doesn't entirely have to be in memory. I've
>> seen sites with a lot of databases with quite some per-database stats
>> data. Don't think we can just require that to be in memory :(

> Hmm. I'm not sure it wouldn't end up being *less* memory. Don't we
> end up caching 1 copy of it per backend, at least for the database to
> which that backend is connected? Accessing a shared copy would avoid
> that sort of thing.

Yeah ... the collector itself has got all that in memory anyway.
We do need to think about synchronization issues if we make that
memory globally available, but I find it hard to see how that would
lead to more memory consumption overall than what happens now.

regards, tom lane


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-14 00:32:21
Message-ID: 20170814003221.ujslxthyeuukwfxn@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2017-08-11 20:39:13 +1200, Thomas Munro wrote:
> Please find attached a new patch series.

Review for 0001:

I think you made a few long lines even longer, like:

@@ -1106,11 +1106,11 @@ pltcl_trigger_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state,
Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
for (i = 0; i < tupdesc->natts; i++)
{
- if (tupdesc->attrs[i]->attisdropped)
+ if (TupleDescAttr(tupdesc, i)->attisdropped)
Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
else
Tcl_ListObjAppendElement(NULL, tcl_trigtup,
- Tcl_NewStringObj(utf_e2u(NameStr(tupdesc->attrs[i]->attname)), -1));
+ Tcl_NewStringObj(utf_e2u(NameStr(TupleDescAttr(tupdesc, i)->attname)), -1));

as it's not particularly pretty to access tupdesc->attrs[i] repeatedly,
it'd be good if you instead had a local variable for the individual
attribute.

Similar:
if (OidIsValid(get_base_element_type(TupleDescAttr(tupdesc, i)->atttypid)))
sv = plperl_ref_from_pg_array(attr, TupleDescAttr(tupdesc, i)->atttypid);
else if ((funcid = get_transform_fromsql(TupleDescAttr(tupdesc, i)->atttypid, current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes)))
sv = (SV *) DatumGetPointer(OidFunctionCall1(funcid, attr));

@@ -150,7 +148,7 @@ ValuesNext(ValuesScanState *node)
*/
values[resind] = MakeExpandedObjectReadOnly(values[resind],
isnull[resind],
- att[resind]->attlen);
+ TupleDescAttr(slot->tts_tupleDescriptor, resind)->attlen);

@@ -158,9 +158,9 @@ convert_tuples_by_position(TupleDesc indesc,
* must agree.
*/
if (attrMap[i] == 0 &&
- indesc->attrs[i]->attisdropped &&
- indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
- indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
+ TupleDescAttr(indesc, i)->attisdropped &&
+ TupleDescAttr(indesc, i)->attlen == TupleDescAttr(outdesc, i)->attlen &&
+ TupleDescAttr(indesc, i)->attalign == TupleDescAttr(outdesc, i)->attalign)
continue;

I think you get the drift, there's more....
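
In other words, something like this for the pltcl hunk above (sketch):

Form_pg_attribute att = TupleDescAttr(tupdesc, i);

if (att->attisdropped)
	Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
else
	Tcl_ListObjAppendElement(NULL, tcl_trigtup,
							 Tcl_NewStringObj(utf_e2u(NameStr(att->attname)),
											  -1));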

Otherwise this seems fairly boring...

Review for 0002:

@@ -71,17 +71,17 @@ typedef struct tupleConstr
typedef struct tupleDesc
{
int natts; /* number of attributes in the tuple */
- Form_pg_attribute *attrs;
- /* attrs[N] is a pointer to the description of Attribute Number N+1 */
TupleConstr *constr; /* constraints, or NULL if none */
Oid tdtypeid; /* composite type ID for tuple type */
int32 tdtypmod; /* typmod for tuple type */
bool tdhasoid; /* tuple has oid attribute in its header */
int tdrefcount; /* reference count, or -1 if not counting */
+ /* attrs[N] is the description of Attribute Number N+1 */
+ FormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];
} *TupleDesc;

sorry if I'm beating on my hobby horse, but if we're re-ordering anyway,
can you move TupleConstr to the second-to-last? That a) seems more
consistent but b) (hobby horse, sorry) avoids unnecessary alignment
padding.

@@ -734,13 +708,13 @@ BuildDescForRelation(List *schema)
/* Override TupleDescInitEntry's settings as requested */
TupleDescInitEntryCollation(desc, attnum, attcollation);
if (entry->storage)
- desc->attrs[attnum - 1]->attstorage = entry->storage;
+ desc->attrs[attnum - 1].attstorage = entry->storage;

/* Fill in additional stuff not handled by TupleDescInitEntry */
- desc->attrs[attnum - 1]->attnotnull = entry->is_not_null;
+ desc->attrs[attnum - 1].attnotnull = entry->is_not_null;
has_not_null |= entry->is_not_null;
- desc->attrs[attnum - 1]->attislocal = entry->is_local;
- desc->attrs[attnum - 1]->attinhcount = entry->inhcount;
+ desc->attrs[attnum - 1].attislocal = entry->is_local;
+ desc->attrs[attnum - 1].attinhcount = entry->inhcount;

This'd be a lot more readable if we'd just stored desc->attrs[attnum - 1]
in a local variable. Also think it'd be good if it just used
TupleDescAttr() for that. Probably as part of previous commit.

@@ -366,8 +340,8 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2)

for (i = 0; i < tupdesc1->natts; i++)
{
- Form_pg_attribute attr1 = tupdesc1->attrs[i];
- Form_pg_attribute attr2 = tupdesc2->attrs[i];
+ Form_pg_attribute attr1 = &tupdesc1->attrs[i];
+ Form_pg_attribute attr2 = &tupdesc2->attrs[i];

I'd convert all these as part of the previous commit.

@@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)

/*
* CreateTupleDesc
- * This function allocates a new TupleDesc pointing to a given
+ * This function allocates a new TupleDesc by copying a given
* Form_pg_attribute array.
*
- * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
- * will not be freed thereby.
- *

I'm leaning towards no, but you could argue that we should just change
that remark to be about constr?

@@ -51,39 +51,12 @@ CreateTemplateTupleDesc(int natts, bool hasoid)

/*
* Allocate enough memory for the tuple descriptor, including the
- * attribute rows, and set up the attribute row pointers.
- *
- * Note: we assume that sizeof(struct tupleDesc) is a multiple of the
- * struct pointer alignment requirement, and hence we don't need to insert
- * alignment padding between the struct and the array of attribute row
- * pointers.
- *
- * Note: Only the fixed part of pg_attribute rows is included in tuple
- * descriptors, so we only need ATTRIBUTE_FIXED_PART_SIZE space per attr.
- * That might need alignment padding, however.
+ * attribute rows.
*/
- attroffset = sizeof(struct tupleDesc) + natts * sizeof(Form_pg_attribute);
- attroffset = MAXALIGN(attroffset);
- stg = palloc(attroffset + natts * MAXALIGN(ATTRIBUTE_FIXED_PART_SIZE));
+ attroffset = offsetof(struct tupleDesc, attrs);
+ stg = palloc0(attroffset + natts * sizeof(FormData_pg_attribute));
desc = (TupleDesc) stg;

note that attroffset isn't used anymore after this...

We have two mildly different places allocating a tupledesc struct for a
number of elements. Seems to make sense to put them into an AllocTupleDesc()?
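
Something like this hypothetical helper, say (name and shape are only a
sketch):

static TupleDesc
AllocTupleDesc(int natts)
{
    TupleDesc   desc;

    /* Allocate the header and the natts attribute rows in one chunk. */
    desc = (TupleDesc) palloc0(offsetof(struct tupleDesc, attrs) +
                               natts * sizeof(FormData_pg_attribute));
    desc->natts = natts;

    return desc;
}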

Review of 0003:

I'm not doing too detailed a review, given that I think there are some
changes in the pipeline.

@@ -0,0 +1,124 @@
+/*-------------------------------------------------------------------------
+ *
+ * ds_hash_table.h
+ * Concurrent hash tables backed by dynamic shared memory areas.

...

+/*
+ * The opaque type representing a hash table. While the struct is defined
+ * before, client code should consider it to be be an opaque and deal only in
+ * pointers to it.
+ */
+struct dht_hash_table;
+typedef struct dht_hash_table dht_hash_table;

"defined before"?

+/*
+ * The opaque type used for iterator state. While the struct is actually
+ * defined below so it can be used on the stack, client code should deal only
+ * in pointers to it rather than accessing its members.
+ */
+struct dht_iterator;
+typedef struct dht_iterator dht_iterator;

s/used/allocated/?

+
+/*
+ * The set of parameters needed to create or attach to a hash table. The
+ * members tranche_id and tranche_name do not need to be initialized when
+ * attaching to an existing hash table. The functions do need to be supplied
+ * even when attaching because we can't safely share function pointers between
+ * backends in general.
+ */
+typedef struct
+{
+ size_t key_size; /* Size of the key (initial bytes of entry) */
+ size_t entry_size; /* Total size of entry */
+ dht_compare_function compare_function; /* Compare function */
+ dht_hash_function hash_function; /* Hash function */
+ int tranche_id; /* The tranche ID to use for locks. */
+} dht_parameters;

Wonder if it'd make sense to say that the key/entry sizes are only
minimums? That means we could increase them to the proper aligned
size?

/*
* Unlock an entry which was locked by dht_find or dht_find_or_insert.
*/
void
dht_release(dht_hash_table *hash_table, void *entry)

/*
* Release the most recently obtained lock. This can optionally be called in
* between calls to dht_iterate_next to allow other processes to access the
* same partition of the hash table.
*/
void
dht_iterate_release_lock(dht_iterator *iterator)

I'd add lock to the first too.

FWIW, I'd be perfectly fine with abbreviating iterate/iterator to "iter"
or something. We already have that elsewhere and it's pretty clear.

+/*
+ * Print out debugging information about the internal state of the hash table.
+ * No locks must be held by the caller.
+ */
+void

Should specify where the information is printed.

+ * scan. (We don't actually expect them to have more than 1 item unless
+ * the hash function is of low quality.)

That, uh, seems like a hasty remark. Even with a good hash function
you're going to get collisions occasionally.

Review of 0004:

Ignoring aspects related to REC_HASH_KEYS and related discussion, since
we're already discussing that in another email.

+static int32
+find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
+{

+ /*
+ * While we still hold the atts_index entry locked, add this to
+ * typmod_index. That's important because we don't want anyone to be able
+ * to find a typmod via the former that can't yet be looked up in the
+ * latter.
+ */
+ PG_TRY();
+ {
+ typmod_index_entry =
+ dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
+ &typmod, &found);
+ if (found)
+ elog(ERROR, "cannot create duplicate shared record typmod");
+ }
+ PG_CATCH();
+ {
+ /*
+ * If we failed to allocate or elog()ed, we have to be careful not to
+ * leak the shared memory. Note that we might have created a new
+ * atts_index entry above, but we haven't put anything in it yet.
+ */
+ dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
+ PG_RE_THROW();
+ }

Not entirely related, but I do wonder if we don't need a better solution
to this. Something like dsa pointers that register appropriate memory
context callbacks to get deleted in case of errors?

Codewise this seems ok aside from the already discussed issues.

But architecturally I'm still not sure I quite like the somewhat ad-hoc
manner in which session state is defined here. I think we should move
much more towards a PGPROC-like PGSESSION array that PGPROCs reference.
That'd also be preallocated in "normal" shmem. From there, things like
the handle for a dht typmod table could be referenced. I think we should
slowly go towards a world where session state isn't in a lot of file
local static variables. I don't know if this is the right moment to
start doing so, but I think it's quite soon.

Reviewing 0005:

Yay!

- Andres


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-15 05:44:55
Message-ID: CAEepm=3eNfnF_7GMMLs12=O_YvO0OtDn85XwZkE1__YR_7CWPg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Aug 14, 2017 at 12:32 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Review for 0001:
>
> I think you made a few long lines even longer, like:
>
> @@ -1106,11 +1106,11 @@ pltcl_trigger_handler(PG_FUNCTION_ARGS, pltcl_call_state *call_state,
> Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
> for (i = 0; i < tupdesc->natts; i++)
> {
> - if (tupdesc->attrs[i]->attisdropped)
> + if (TupleDescAttr(tupdesc, i)->attisdropped)
> Tcl_ListObjAppendElement(NULL, tcl_trigtup, Tcl_NewObj());
> else
> Tcl_ListObjAppendElement(NULL, tcl_trigtup,
> - Tcl_NewStringObj(utf_e2u(NameStr(tupdesc->attrs[i]->attname)), -1));
> + Tcl_NewStringObj(utf_e2u(NameStr(TupleDescAttr(tupdesc, i)->attname)), -1));
>
>
> as it's not particularly pretty to access tupdesc->attrs[i] repeatedly,
> it'd be good if you instead had a local variable for the individual
> attribute.

Done.

> Similar:
> if (OidIsValid(get_base_element_type(TupleDescAttr(tupdesc, i)->atttypid)))
> sv = plperl_ref_from_pg_array(attr, TupleDescAttr(tupdesc, i)->atttypid);
> else if ((funcid = get_transform_fromsql(TupleDescAttr(tupdesc, i)->atttypid, current_call_data->prodesc->lang_oid, current_call_data->prodesc->trftypes)))
> sv = (SV *) DatumGetPointer(OidFunctionCall1(funcid, attr));

Done.

> @@ -150,7 +148,7 @@ ValuesNext(ValuesScanState *node)
> */
> values[resind] = MakeExpandedObjectReadOnly(values[resind],
> isnull[resind],
> - att[resind]->attlen);
> + TupleDescAttr(slot->tts_tupleDescriptor, resind)->attlen);
>
> @@ -158,9 +158,9 @@ convert_tuples_by_position(TupleDesc indesc,
> * must agree.
> */
> if (attrMap[i] == 0 &&
> - indesc->attrs[i]->attisdropped &&
> - indesc->attrs[i]->attlen == outdesc->attrs[i]->attlen &&
> - indesc->attrs[i]->attalign == outdesc->attrs[i]->attalign)
> + TupleDescAttr(indesc, i)->attisdropped &&
> + TupleDescAttr(indesc, i)->attlen == TupleDescAttr(outdesc, i)->attlen &&
> + TupleDescAttr(indesc, i)->attalign == TupleDescAttr(outdesc, i)->attalign)
> continue;

Done.

> I think you get the drift, there's more....

Done in some more places too.

> Review for 0002:
>
> @@ -71,17 +71,17 @@ typedef struct tupleConstr
> typedef struct tupleDesc
> {
> int natts; /* number of attributes in the tuple */
> - Form_pg_attribute *attrs;
> - /* attrs[N] is a pointer to the description of Attribute Number N+1 */
> TupleConstr *constr; /* constraints, or NULL if none */
> Oid tdtypeid; /* composite type ID for tuple type */
> int32 tdtypmod; /* typmod for tuple type */
> bool tdhasoid; /* tuple has oid attribute in its header */
> int tdrefcount; /* reference count, or -1 if not counting */
> + /* attrs[N] is the description of Attribute Number N+1 */
> + FormData_pg_attribute attrs[FLEXIBLE_ARRAY_MEMBER];
> } *TupleDesc;
>
> sorry if I'm beating on my hobby horse, but if we're re-ordering anyway,
> can you move TupleConstr to the second-to-last? That a) seems more
> consistent but b) (hobby horse, sorry) avoids unnecessary alignment
> padding.

Done.

> @@ -734,13 +708,13 @@ BuildDescForRelation(List *schema)
> /* Override TupleDescInitEntry's settings as requested */
> TupleDescInitEntryCollation(desc, attnum, attcollation);
> if (entry->storage)
> - desc->attrs[attnum - 1]->attstorage = entry->storage;
> + desc->attrs[attnum - 1].attstorage = entry->storage;
>
> /* Fill in additional stuff not handled by TupleDescInitEntry */
> - desc->attrs[attnum - 1]->attnotnull = entry->is_not_null;
> + desc->attrs[attnum - 1].attnotnull = entry->is_not_null;
> has_not_null |= entry->is_not_null;
> - desc->attrs[attnum - 1]->attislocal = entry->is_local;
> - desc->attrs[attnum - 1]->attinhcount = entry->inhcount;
> + desc->attrs[attnum - 1].attislocal = entry->is_local;
> + desc->attrs[attnum - 1].attinhcount = entry->inhcount;
>
> This'd be a lot more readable if we'd just stored desc->attrs[attnum - 1]
> in a local variable. I also think it'd be good if it just used
> TupleDescAttr() for that. Probably as part of the previous commit.

Done.

> @@ -366,8 +340,8 @@ equalTupleDescs(TupleDesc tupdesc1, TupleDesc tupdesc2)
>
> for (i = 0; i < tupdesc1->natts; i++)
> {
> - Form_pg_attribute attr1 = tupdesc1->attrs[i];
> - Form_pg_attribute attr2 = tupdesc2->attrs[i];
> + Form_pg_attribute attr1 = &tupdesc1->attrs[i];
> + Form_pg_attribute attr2 = &tupdesc2->attrs[i];
>
> I'd convert all these as part of the previous commit.

Done.

> @@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
>
> /*
> * CreateTupleDesc
> - * This function allocates a new TupleDesc pointing to a given
> + * This function allocates a new TupleDesc by copying a given
> * Form_pg_attribute array.
> *
> - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
> - * will not be freed thereby.
> - *
>
> I'm leaning towards no, but you could argue that we should just change
> that remark to be about constr?

I don't see why.

> @@ -51,39 +51,12 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
>
> /*
> * Allocate enough memory for the tuple descriptor, including the
> - * attribute rows, and set up the attribute row pointers.
> - *
> - * Note: we assume that sizeof(struct tupleDesc) is a multiple of the
> - * struct pointer alignment requirement, and hence we don't need to insert
> - * alignment padding between the struct and the array of attribute row
> - * pointers.
> - *
> - * Note: Only the fixed part of pg_attribute rows is included in tuple
> - * descriptors, so we only need ATTRIBUTE_FIXED_PART_SIZE space per attr.
> - * That might need alignment padding, however.
> + * attribute rows.
> */
> - attroffset = sizeof(struct tupleDesc) + natts * sizeof(Form_pg_attribute);
> - attroffset = MAXALIGN(attroffset);
> - stg = palloc(attroffset + natts * MAXALIGN(ATTRIBUTE_FIXED_PART_SIZE));
> + attroffset = offsetof(struct tupleDesc, attrs);
> + stg = palloc0(attroffset + natts * sizeof(FormData_pg_attribute));
> desc = (TupleDesc) stg;
>
> note that attroffset isn't used anymore after this...

Tidied.

> We have two mildly different places allocating a tupledesc struct for a
> number of elements. Seems to make sense to put them into an AllocTupleDesc()?

Yeah. CreateTemplateTupleDesc() is already suitable. I changed
CreateTupleDesc() to call that instead of duplicating the allocation
code.

> Review of 0003:
>
> I'm not doing too detailed a review, given that I think there are some
> changes in the pipeline.

Yep. In the new patch set the hash table formerly known as DHT is now
in patch 0004 and I made the following changes based on your feedback:

1. Renamed it to "dshash". The files are named dshash.{c,h}, and the
prefix on identifiers is dshash_. You suggested dsmhash, but the "m"
didn't seem to make much sense. I considered dsahash, but dshash
seemed better. Thoughts?

2. Ripped out the incremental resizing and iterator support for now,
as discussed. I want to post patches to add those features when we
have a use case but I can see that it's no slam dunk so I want to keep
that stuff out of the dependency graph for parallel hash.

3. Added support for hash and compare functions with an extra
argument for user data, a bit like qsort_arg_comparator. This is
necessary for functions that need to be able to dereference a
dsa_pointer stored in the entry, since they need the dsa_area. (I
would normally call such an argument 'user_data' or 'context' or
something but 'arg' seemed to be established by qsort_arg.)
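
For example, the extra-argument signatures look roughly like this (a
sketch from memory; the patch is authoritative):

typedef uint32 (*dshash_hash_arg_function) (const void *v, void *arg);
typedef int (*dshash_compare_arg_function) (const void *a, const void *b,
                                            void *arg);

Here 'arg' would typically carry the dsa_area needed to dereference a
dsa_pointer stored in the entry.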

> @@ -0,0 +1,124 @@
> +/*-------------------------------------------------------------------------
> + *
> + * ds_hash_table.h
> + * Concurrent hash tables backed by dynamic shared memory areas.
>
> ...
>
> +/*
> + * The opaque type representing a hash table. While the struct is defined
> + * before, client code should consider it to be be an opaque and deal only in
> + * pointers to it.
> + */
> +struct dht_hash_table;
> +typedef struct dht_hash_table dht_hash_table;
>
> "defined before"?

Bleugh. Fixed.

> +/*
> + * The opaque type used for iterator state. While the struct is actually
> + * defined below so it can be used on the stack, client code should deal only
> + * in pointers to it rather than accessing its members.
> + */
> +struct dht_iterator;
> +typedef struct dht_iterator dht_iterator;
>
> s/used/allocated/?

Removed for now, see above.

> +
> +/*
> + * The set of parameters needed to create or attach to a hash table. The
> + * members tranche_id and tranche_name do not need to be initialized when
> + * attaching to an existing hash table. The functions do need to be supplied
> + * even when attaching because we can't safely share function pointers between
> + * backends in general.
> + */
> +typedef struct
> +{
> + size_t key_size; /* Size of the key (initial bytes of entry) */
> + size_t entry_size; /* Total size of entry */
> + dht_compare_function compare_function; /* Compare function */
> + dht_hash_function hash_function; /* Hash function */
> + int tranche_id; /* The tranche ID to use for locks. */
> +} dht_parameters;
>
> Wonder if it'd make sense to say that the key/entry sizes are only
> minimums? That means we could increase them to the proper aligned
> size?

I don't understand. You mean explicitly saying that there are
overheads? Doesn't that go without saying?

> /*
> * Unlock an entry which was locked by dht_find or dht_find_or_insert.
> */
> void
> dht_release(dht_hash_table *hash_table, void *entry)
>
> /*
> * Release the most recently obtained lock. This can optionally be called in
> * between calls to dht_iterate_next to allow other processes to access the
> * same partition of the hash table.
> */
> void
> dht_iterate_release_lock(dht_iterator *iterator)
>
> I'd add lock to the first too.

Done.

> FWIW, I'd be perfectly fine with abbreviating iterate/iterator to "iter"
> or something. We already have that elsewhere and it's pretty clear.

Will return to this question in a future submission that adds iterators.

> +/*
> + * Print out debugging information about the internal state of the hash table.
> + * No locks must be held by the caller.
> + */
> +void
>
> Should specify where the information is printed.

Done.

> + * scan. (We don't actually expect them to have more than 1 item unless
> + * the hash function is of low quality.)
>
> That, uh, seems like a hasty remark. Even with a good hash function
> you're going to get collisions occasionally.
>
>
> Review of 0004:
>
> Ignoring aspects related to REC_HASH_KEYS and related discussion, since
> we're already discussing that in another email.

This version includes new refactoring patches 0003, 0004 to get rid of
REC_HASH_KEYS by teaching the hash table how to use a TupleDesc as a
key directly. Then the shared version does approximately the same
thing, with a couple of extra hoops to jump through. Thoughts?

> +static int32
> +find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
> +{
>
> + /*
> + * While we still hold the atts_index entry locked, add this to
> + * typmod_index. That's important because we don't want anyone to be able
> + * to find a typmod via the former that can't yet be looked up in the
> + * latter.
> + */
> + PG_TRY();
> + {
> + typmod_index_entry =
> + dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
> + &typmod, &found);
> + if (found)
> + elog(ERROR, "cannot create duplicate shared record typmod");
> + }
> + PG_CATCH();
> + {
> + /*
> + * If we failed to allocate or elog()ed, we have to be careful not to
> + * leak the shared memory. Note that we might have created a new
> + * atts_index entry above, but we haven't put anything in it yet.
> + */
> + dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
> + PG_RE_THROW();
> + }
>
> Not entirely related, but I do wonder if we don't need a better solution
> to this. Something like dsa pointers that register appropriate memory
> context callbacks to get deleted in case of errors?

Huh, scope guards. I have had some ideas about some kind of
destructor mechanism that might replace what we're doing with DSM
detach hooks in various places and also work in containers like hash
tables (ie entries could have destructors), but doing it with the
stack is another level...

> Codewise this seems ok aside from the already discussed issues.
>
> But architecturally I'm still not sure I quite like the somewhat ad-hoc
> manner in which session state is defined here. I think we should move
> much more towards a PGPROC-like PGSESSION array that PGPROCs reference.
> That'd also be preallocated in "normal" shmem. From there, things like
> the handle for a dht typmod table could be referenced. I think we should
> slowly go towards a world where session state isn't in a lot of file
> local static variables. I don't know if this is the right moment to
> start doing so, but I think it's quite soon.

No argument from me about that general idea. All our global state is
an obstacle for testability, multi-threading, new CPU scheduling
architectures etc. I had been trying to avoid getting too adventurous
here, but here goes nothing... In this version there is an honest
Session struct. There is still a single global variable --
CurrentSession -- which I guess could be a candidate to become a
thread-local variable in the future (or alternatively an argument to
every function that needs session access). Is this better? Haven't
tested this much yet but seems like better code layout to me.

> Reviewing 0005:
>
> Yay!

That's the kind of review I like! Thanks.

I will also post some testing code soon.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v5.patchset.tgz application/x-gzip 72.9 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-15 21:17:04
Message-ID: CAEepm=1vUNNBUvTfP+J7wgSqtEbb5NAg01VoZ2hPVyKG2qo8Qw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Aug 15, 2017 at 5:44 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Mon, Aug 14, 2017 at 12:32 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> But architecturally I'm still not sure I quite like the somewhat ad-hoc
>> manner in which session state is defined here. I think we should move
>> much more towards a PGPROC-like PGSESSION array that PGPROCs reference.
>> That'd also be preallocated in "normal" shmem. From there, things like
>> the handle for a dht typmod table could be referenced. I think we should
>> slowly go towards a world where session state isn't in a lot of file
>> local static variables. I don't know if this is the right moment to
>> start doing so, but I think it's quite soon.
>
> No argument from me about that general idea. All our global state is
> an obstacle for testability, multi-threading, new CPU scheduling
> architectures etc. I had been trying to avoid getting too adventurous
> here, but here goes nothing... In this version there is an honest
> Session struct. There is still a single global variable --
> CurrentSession -- which I guess could be a candidate to become a
> thread-local variable in the future (or alternatively an argument to
> every function that needs session access). Is this better? Haven't
> tested this much yet but seems like better code layout to me.

> 0006-Introduce-a-shared-memory-record-typmod-registry.patch

+/*
+ * A struct encapsulating some elements of a user's session. For now this
+ * manages state that applies to parallel query, but in principle it could
+ * include other things that are currently global variables.
+ */
+typedef struct Session
+{
+ dsm_segment *segment; /* The session-scoped DSM segment. */
+ dsa_area *area; /* The session-scoped DSA area. */
+
+ /* State managed by typcache.c. */
+ SharedRecordTypmodRegistry *typmod_registry;
+ dshash_table *record_table; /* Typmods indexed by tuple descriptor */
+ dshash_table *typmod_table; /* Tuple descriptors indexed by typmod */
+} Session;

Upon reflection, these members should probably be called
shared_record_table etc. Presumably later refactoring would introduce
(for example) local_record_table, which would replace the following
variable in typcache.c:

static HTAB *RecordCacheHash = NULL;

... and likewise for NextRecordTypmod and RecordCacheArray which
together embody this session's local typmod registry and ability to
make more.

The idea here is eventually to move all state that is tied to a
session into this structure, though I'm not proposing to do any more
of that than is necessary as part of *this* patchset. For now I'm
just looking for a decent place to put the minimal shared session
state, but in a way that allows us "slowly [to] go towards a world
where session state isn't in a lot of file local static variables" as
you put it.

There's a separate discussion to be had about whether things like
assign_record_type_typmod() should take a Session pointer or access
the global variable (and perhaps in future thread-local)
CurrentSession, but the path of least resistance for now is, I think,
as I have it.

On another topic, I probably need to study and test some failure paths better.

Thoughts?

--
Thomas Munro
http://www.enterprisedb.com


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-15 22:06:53
Message-ID: 20170815220653.bhz3s2zo7jo5dtjj@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-15 17:44:55 +1200, Thomas Munro wrote:
> > @@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
> >
> > /*
> > * CreateTupleDesc
> > - * This function allocates a new TupleDesc pointing to a given
> > + * This function allocates a new TupleDesc by copying a given
> > * Form_pg_attribute array.
> > *
> > - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
> > - * will not be freed thereby.
> > - *
> >
> > I'm leaning towards no, but you could argue that we should just change
> > that remark to be about constr?
>
> I don't see why.

Because for that the freeing bit is still true, ie. it's still
separately allocated.

> > Review of 0003:
> >
> > I'm not doing too detailed a review, given that I think there are some
> > changes in the pipeline.
>
> Yep. In the new patch set the hash table formerly known as DHT is now
> in patch 0004 and I made the following changes based on your feedback:
>
> 1. Renamed it to "dshash". The files are named dshash.{c,h}, and the
> prefix on identifiers is dshash_. You suggested dsmhash, but the "m"
> didn't seem to make much sense. I considered dsahash, but dshash
> seemed better. Thoughts?

WFM. Just curious, why didn't the "m" make sense? I was referring to dynamic
shared memory hash - seems right. Whether there's an intermediary dsa
layer or not...

> 2. Ripped out the incremental resizing and iterator support for now,
> as discussed. I want to post patches to add those features when we
> have a use case but I can see that it's no slam dunk so I want to keep
> that stuff out of the dependency graph for parallel hash.

Cool.

> 3. Added support for hash and compare functions with an extra
> argument for user data, a bit like qsort_arg_comparator. This is
> necessary for functions that need to be able to dereference a
> dsa_pointer stored in the entry, since they need the dsa_area. (I
> would normally call such an argument 'user_data' or 'context' or
> something but 'arg' seemed to be established by qsort_arg.)

Good.

> > +/*
> > + * The set of parameters needed to create or attach to a hash table. The
> > + * members tranche_id and tranche_name do not need to be initialized when
> > + * attaching to an existing hash table. The functions do need to be supplied
> > + * even when attaching because we can't safely share function pointers between
> > + * backends in general.
> > + */
> > +typedef struct
> > +{
> > + size_t key_size; /* Size of the key (initial bytes of entry) */
> > + size_t entry_size; /* Total size of entry */
> > + dht_compare_function compare_function; /* Compare function */
> > + dht_hash_function hash_function; /* Hash function */
> > + int tranche_id; /* The tranche ID to use for locks. */
> > +} dht_parameters;
> >
> > Wonder if it'd make sense to say that the key/entry sizes are only
> > minimums? That means we could increase them to the proper aligned
> > size?
>
> I don't understand. You mean explicitly saying that there are
> overheads? Doesn't that go without saying?

I was thinking that we could do the MAXALIGN style calculations once
instead of repeatedly, by including them in the key and entry sizes.
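
I.e. conceptually something like this (a sketch, not actual dshash code):

    /* Round the caller's sizes up once, at create/attach time... */
    hash_table->key_size = MAXALIGN(params->key_size);
    hash_table->entry_size = MAXALIGN(params->entry_size);

    /* ...so later address arithmetic can use the stored sizes directly. */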

> > Ignoring aspects related to REC_HASH_KEYS and related discussion, since
> > we're already discussing that in another email.
>
> This version includes new refactoring patches 0003, 0004 to get rid of
> REC_HASH_KEYS by teaching the hash table how to use a TupleDesc as a
> key directly. Then the shared version does approximately the same
> thing, with a couple of extra hoops to jump through. Thoughts?

Will look.

> > +static int32
> > +find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
> > +{
> >
> > + /*
> > + * While we still hold the atts_index entry locked, add this to
> > + * typmod_index. That's important because we don't want anyone to be able
> > + * to find a typmod via the former that can't yet be looked up in the
> > + * latter.
> > + */
> > + PG_TRY();
> > + {
> > + typmod_index_entry =
> > + dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
> > + &typmod, &found);
> > + if (found)
> > + elog(ERROR, "cannot create duplicate shared record typmod");
> > + }
> > + PG_CATCH();
> > + {
> > + /*
> > + * If we failed to allocate or elog()ed, we have to be careful not to
> > + * leak the shared memory. Note that we might have created a new
> > + * atts_index entry above, but we haven't put anything in it yet.
> > + */
> > + dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
> > + PG_RE_THROW();
> > + }
> >
> > Not entirely related, but I do wonder if we don't need a better solution
> > to this. Something like dsa pointers that register appropriate memory
> > context callbacks to get deleted in case of errors?
>
> Huh, scope guards. I have had some ideas about some kind of
> destructor mechanism that might replace what we're doing with DSM
> detach hooks in various places and also work in containers like hash
> tables (ie entries could have destructors), but doing it with the
> stack is another level...

Not sure what you mean by 'stack'?

shared-record-typmods-v5.patchset/0004-Refactor-typcache.c-s-record-typmod-hash-table.patch

+ * hashTupleDesc
+ * Compute a hash value for a tuple descriptor.
+ *
+ * If two tuple descriptors would be considered equal by equalTupleDescs()
+ * then their hash value will be equal according to this function.
+ */
+uint32
+hashTupleDesc(TupleDesc desc)
+{
+ uint32 s = 0;
+ int i;
+
+ for (i = 0; i < desc->natts; ++i)
+ s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid));
+
+ return s;
+}

Hm, is it right not to include tdtypeid, tdtypmod, tdhasoid here?
equalTupleDescs() does compare them...

+/*
+ * Hash function for the hash table of RecordCacheEntry.
+ */
+static uint32
+record_type_typmod_hash(const void *data, size_t size)
+{
+ return hashTupleDesc(((RecordCacheEntry *) data)->tupdesc);
+}
+
+/*
+ * Match function for the hash table of RecordCacheEntry.
+ */
+static int
+record_type_typmod_compare(const void *a, const void *b, size_t size)
+{
+ return equalTupleDescs(((RecordCacheEntry *) a)->tupdesc,
+ ((RecordCacheEntry *) b)->tupdesc) ? 0 : 1;
+}

I'd rather have local vars for the casted params, but it's not
important.

MemSet(&ctl, 0, sizeof(ctl));
- ctl.keysize = REC_HASH_KEYS * sizeof(Oid);
+ ctl.keysize = 0; /* unused */
ctl.entrysize = sizeof(RecordCacheEntry);

Hm, keysize 0? Is that right? Wouldn't it be more correct to have both
of the same size, given dynahash includes the key size in the entry, and
the pointer really is the key?

Otherwise looks pretty good.

shared-record-typmods-v5.patchset/0006-Introduce-a-shared-memory-record-typmod-registry.patch

Hm, name & comment don't quite describe this accurately anymore.

+/*
+ * A struct encapsulating some elements of a user's session. For now this
> + * manages state that applies to parallel query, but in principle it could
+ * include other things that are currently global variables.
+ */
+typedef struct Session
+{
+ dsm_segment *segment; /* The session-scoped DSM segment. */
+ dsa_area *area; /* The session-scoped DSA area. */
+
+ /* State managed by typcache.c. */
+ SharedRecordTypmodRegistry *typmod_registry;
+ dshash_table *record_table; /* Typmods indexed by tuple descriptor */
+ dshash_table *typmod_table; /* Tuple descriptors indexed by typmod */
+} Session;

Interesting. I was apparently thinking slightly differently. I'd have
thought we'd have Session struct in statically allocated shared
memory. Which'd then have dsa_handle, dshash_table_handle, ... members.

+extern void EnsureCurrentSession(void);
+extern void EnsureCurrentSession(void);

duplicated.

+/*
+ * We want to create a DSA area to store shared state that has the same extent
+ * as a session. So far, it's only used to hold the shared record type
+ * registry. We don't want it to have to create any DSM segments just yet in
+ * common cases, so we'll give it enough space to hold a very small
+ * SharedRecordTypmodRegistry.
+ */
+#define SESSION_DSA_SIZE 0x30000

Same "extent"? Maybe lifetime?

+
+/*
+ * Make sure that there is a CurrentSession.
+ */
+void EnsureCurrentSession(void)
+{

linebreak.

+{
+ if (CurrentSession == NULL)
+ {
+ MemoryContext old_context = MemoryContextSwitchTo(TopMemoryContext);
+
+ CurrentSession = palloc0(sizeof(Session));
+ MemoryContextSwitchTo(old_context);
+ }
+}

Isn't MemoryContextAllocZero easier?
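
I.e. roughly (a sketch):

void
EnsureCurrentSession(void)
{
    if (CurrentSession == NULL)
        CurrentSession = (Session *)
            MemoryContextAllocZero(TopMemoryContext, sizeof(Session));
}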

Greetings,

Andres Freund


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-16 00:30:16
Message-ID: CA+TgmoZ9-kagi1Y8Z28HspbEMB4+fy3DeeByT-yw-uwk_yY3LQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Interesting. I was apparently thinking slightly differently. I'd have
> thought we'd have Session struct in statically allocated shared
> memory. Which'd then have dsa_handle, dshash_table_handle, ... members.

Sounds an awful lot like what we're already doing with PGPROC.

I am not sure that inventing a Session thing that should have 500
things in it but actually has the 3 that are relevant to this patch is
really a step forward. In fact, it sounds like something that will
just create confusion down the road.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-16 00:34:54
Message-ID: 20170816003454.kvtzrh2kquaiorxi@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-15 20:30:16 -0400, Robert Haas wrote:
> On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > Interesting. I was apparently thinking slightly differently. I'd have
> > thought we'd have Session struct in statically allocated shared
> > memory. Which'd then have dsa_handle, dshash_table_handle, ... members.
>
> Sounds an awful lot like what we're already doing with PGPROC.

Except it'd be shared between leader and workers. So no, not really.

Greetings,

Andres Freund


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-16 00:42:20
Message-ID: CAEepm=0H176UhBedyS455ABJvQFeB+qA4WOAPP=D2oc9k8pqKQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Will respond to the actionable code review points separately with a
new patch set, but first:

On Wed, Aug 16, 2017 at 10:06 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-08-15 17:44:55 +1200, Thomas Munro wrote:
>> > @@ -99,12 +72,9 @@ CreateTemplateTupleDesc(int natts, bool hasoid)
>> >
>> > /*
>> > * CreateTupleDesc
>> > - * This function allocates a new TupleDesc pointing to a given
>> > + * This function allocates a new TupleDesc by copying a given
>> > * Form_pg_attribute array.
>> > *
>> > - * Note: if the TupleDesc is ever freed, the Form_pg_attribute array
>> > - * will not be freed thereby.
>> > - *
>> >
>> > I'm leaning towards no, but you could argue that we should just change
>> > that remark to be about constr?
>>
>> I don't see why.
>
> Because for that the freeing bit is still true, ie. it's still
> separately allocated.

It's true of struct tupleDesc in general, but not true of objects
returned by this function with respect to the arguments to the function.
In master, that comment is a useful warning that the object will hold
onto but never free the attrs array you pass in. The same doesn't
apply to constr, so I don't think we need to say anything.

>> > Review of 0003:
>> >
>> > I'm not doing too detailed a review, given that I think there are some
>> > changes in the pipeline.
>>
>> Yep. In the new patch set the hash table formerly known as DHT is now
>> in patch 0004 and I made the following changes based on your feedback:
>>
>> 1. Renamed it to "dshash". The files are named dshash.{c,h}, and the
>> prefix on identifiers is dshash_. You suggested dsmhash, but the "m"
>> didn't seem to make much sense. I considered dsahash, but dshash
>> seemed better. Thoughts?
>
> WFM. Just curious, why didn't the "m" make sense? I was referring to dynamic
> shared memory hash - seems right. Whether there's an intermediary dsa
> layer or not...

I think of DSA as a defining characteristic that dshash exists to work
with (it's baked into dshash's API), but DSM as an implementation
detail which dshash doesn't directly depend on. Therefore I don't
like the "m".

I speculate that in future we might have build modes where DSA doesn't
use DSM anyway: it could use native pointers and maybe even a
different allocator in a build that either uses threads or
non-portable tricks to carve out a huge amount of virtual address
space so that it can map memory in at the same location in each
backend. In that universe DSA would still be providing the service of
grouping allocations together into a scope for "rip cord" cleanup
(possibly by forwarding to MemoryContext stuff) but otherwise compile
away to nearly nothing.

>> > +static int32
>> > +find_or_allocate_shared_record_typmod(TupleDesc tupdesc)
>> > +{
>> >
>> > + /*
>> > + * While we still hold the atts_index entry locked, add this to
>> > + * typmod_index. That's important because we don't want anyone to be able
>> > + * to find a typmod via the former that can't yet be looked up in the
>> > + * latter.
>> > + */
>> > + PG_TRY();
>> > + {
>> > + typmod_index_entry =
>> > + dht_find_or_insert(CurrentSharedRecordTypmodRegistry.typmod_index,
>> > + &typmod, &found);
>> > + if (found)
>> > + elog(ERROR, "cannot create duplicate shared record typmod");
>> > + }
>> > + PG_CATCH();
>> > + {
>> > + /*
>> > + * If we failed to allocate or elog()ed, we have to be careful not to
>> > + * leak the shared memory. Note that we might have created a new
>> > + * atts_index entry above, but we haven't put anything in it yet.
>> > + */
>> > + dsa_free(CurrentSharedRecordTypmodRegistry.area, shared_dp);
>> > + PG_RE_THROW();
>> > + }
>> >
>> > Not entirely related, but I do wonder if we don't need a better solution
>> > to this. Something like dsa pointers that register appropriate memory
>> > context callbacks to get deleted in case of errors?
>>
>> Huh, scope guards. I have had some ideas about some kind of
>> destructor mechanism that might replace what we're doing with DSM
>> detach hooks in various places and also work in containers like hash
>> tables (ie entries could have destructors), but doing it with the
>> stack is another level...
>
> Not sure what you mean by 'stack'?

I probably read too much into your words. I was imagining something
conceptually like the following, since the "appropriate memory
context" in the code above is actually a stack frame:

dsa_pointer p = ...;

ON_ERROR_SCOPE_EXIT(dsa_free, area, p); /* yeah, I know, no variadic macros */

elog(ERROR, "boo"); /* this causes p to be freed */

The point being that if the caller of this function catches the error
then ITS on-error-cleanup stack mustn't run, but this one's must.
Hence the requirement for awareness of the stack. I'm not sure it's
actually any easier to use this than the existing try/catch macros and
I'm not proposing it, but I thought perhaps you were.

> +/*
> + * A struct encapsulating some elements of a user's session. For now this
> + * manages state that applies to parallel query, but in principle it could
> + * include other things that are currently global variables.
> + */
> +typedef struct Session
> +{
> + dsm_segment *segment; /* The session-scoped DSM segment. */
> + dsa_area *area; /* The session-scoped DSA area. */
> +
> + /* State managed by typcache.c. */
> + SharedRecordTypmodRegistry *typmod_registry;
> + dshash_table *record_table; /* Typmods indexed by tuple descriptor */
> + dshash_table *typmod_table; /* Tuple descriptors indexed by typmod */
> +} Session;
>
>
> Interesting. I was apparently thinking slightly differently. I'd have
> thought we'd have Session struct in statically allocated shared
> memory. Which'd then have dsa_handle, dshash_table_handle, ... members.

A session needs (1) some backend-private state to hold the addresses
of stuff in this process's memory map and (2) some shared state worth
pointing to. My patch introduces both of those things, but doesn't
need to make the shared state part 'discoverable'. Workers get their
hands on it by receiving it explicitly from a leader.

I think that you're right about a cluster-wide PGSESSION array being
useful if we divorce sessions from processes completely and write some
kind of scheduler. But for the purposes of this patch set we don't
seem to need to make any decisions about that. Leaders passing DSM handles
over to workers seems to be enough for now. Does this make sense?

--
Thomas Munro
http://www.enterprisedb.com


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-16 12:42:27
Message-ID: CA+TgmoYac8jgrcsiTuqVzwndP-iZn-N6sFDtGppwnZeswQSSZA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Tue, Aug 15, 2017 at 8:34 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-08-15 20:30:16 -0400, Robert Haas wrote:
>> On Tue, Aug 15, 2017 at 6:06 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> > Interesting. I was apparently thinking slightly differently. I'd have
>> > thought we'd have Session struct in statically allocated shared
>> > memory. Which'd then have dsa_handle, dshash_table_handle, ... members.
>>
>> Sounds an awful lot like what we're already doing with PGPROC.
>
> Except it'd be shared between leader and workers. So no, not really.

There's precedent for using it that way, though - cf. group locking.
And in practice you're going to need an array of the same length as
the procarray. It's maybe not quite the same thing, but it smells
pretty similar.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-20 18:17:23
Message-ID: 20170820181723.tdswdinzptbcwhrr@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

Pushing 0001, 0002 now.

- rebased after conflicts
- fixed a significant number of too long lines
- removed a number of now superfluous linebreaks

I think it'd be a good idea to backpatch the addition of
TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
others think?

Thomas, prepare yourself for some hate from extension and fork authors /
maintainers ;)

Regards,

Andres


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-20 23:02:52
Message-ID: CAEepm=04LM87Ya_Avgw40934Wh3G4Oyy+mmthYHuMb9m5WZwaQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Pushing 0001, 0002 now.
>
> - rebased after conflicts
> - fixed a significant number of too long lines
> - removed a number of now superfluous linebreaks

Thanks! Please find attached a rebased version of the rest of the patch set.

> Thomas, prepare yourself for some hate from extension and fork authors /
> maintainers ;)

/me hides

The attached version also fixes a couple of small details you
complained about last week:

On Wed, Aug 16, 2017 at 10:06 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> > + size_t key_size; /* Size of the key (initial bytes of entry) */
>> > + size_t entry_size; /* Total size of entry */
>> >
>> > Wonder if it'd make sense to say that the key/entry sizes are only
>> > minimums? That means we could increase them to the proper aligned
>> > size?
>>
>> I don't understand. You mean explicitly saying that there are
>> overheads? Doesn't that go without saying?
>
> I was thinking that we could do the MAXALIGN style calculations once
> instead of repeatedly, by including them in the key and entry sizes.

I must be missing something -- where do we do it repeatedly? The only
place we use MAXALIGN is in a compile-time constant expression (see the
expansion of the macros ENTRY_FROM_ITEM and ITEM_FROM_ENTRY, and also in
one place MAXALIGN(sizeof(dshash_table_item))).

> shared-record-typmods-v5.patchset/0004-Refactor-typcache.c-s-record-typmod-hash-table.patch
>
> + * hashTupleDesc
> + * Compute a hash value for a tuple descriptor.
> + *
> + * If two tuple descriptors would be considered equal by equalTupleDescs()
> + * then their hash value will be equal according to this function.
> + */
> +uint32
> +hashTupleDesc(TupleDesc desc)
> +{
> + uint32 s = 0;
> + int i;
> +
> + for (i = 0; i < desc->natts; ++i)
> + s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid));
> +
> + return s;
> +}
>
> Hm, is it right not to include tdtypeid, tdtypmod, tdhasoid here?
> equalTupleDescs() does compare them...

OK, now adding natts (just for consistency), tdtypeid and tdhasoid to
be exactly like equalTupleDescs(). Note that tdtypmod is deliberately
*not* included.
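
So the function now looks roughly like this (a sketch; see the patch for
the real thing):

uint32
hashTupleDesc(TupleDesc desc)
{
    uint32      s;
    int         i;

    s = hash_combine(0, hash_uint32(desc->natts));
    s = hash_combine(s, hash_uint32(desc->tdtypeid));
    s = hash_combine(s, hash_uint32(desc->tdhasoid));
    for (i = 0; i < desc->natts; ++i)
        s = hash_combine(s, hash_uint32(TupleDescAttr(desc, i)->atttypid));

    return s;
}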

> + return hashTupleDesc(((RecordCacheEntry *) data)->tupdesc);
> ...
> + return equalTupleDescs(((RecordCacheEntry *) a)->tupdesc,
> + ((RecordCacheEntry *) b)-
>
> I'd rather have local vars for the casted params, but it's not
> important.

Done.

> MemSet(&ctl, 0, sizeof(ctl));
> - ctl.keysize = REC_HASH_KEYS * sizeof(Oid);
> + ctl.keysize = 0; /* unused */
> ctl.entrysize = sizeof(RecordCacheEntry);
>
> Hm, keysize 0? Is that right? Wouldn't it be more correct to have both
> of the same size, given dynahash includes the key size in the entry, and
> the pointer really is the key?

Done.
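
Presumably along these lines (a sketch):

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(TupleDesc);    /* the pointer itself is the key */
    ctl.entrysize = sizeof(RecordCacheEntry);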

> shared-record-typmods-v5.patchset/0006-Introduce-a-shared-memory-record-typmod-registry.patch
>
> Hm, name & comment don't quite describe this accurately anymore.

Updated commit message.

> +extern void EnsureCurrentSession(void);
> +extern void EnsureCurrentSession(void);
>
> duplicated.

Fixed.

> +/*
> + * We want to create a DSA area to store shared state that has the same extent
> + * as a session. So far, it's only used to hold the shared record type
> + * registry. We don't want it to have to create any DSM segments just yet in
> + * common cases, so we'll give it enough space to hold a very small
> + * SharedRecordTypmodRegistry.
> + */
> +#define SESSION_DSA_SIZE 0x30000
>
> Same "extent"? Maybe lifetime?

Done.

> +
> +/*
> + * Make sure that there is a CurrentSession.
> + */
> +void EnsureCurrentSession(void)
> +{
>
> linebreak.

Fixed.

> +{
> + if (CurrentSession == NULL)
> + {
> + MemoryContext old_context = MemoryContextSwitchTo(TopMemoryContext);
> +
> + CurrentSession = palloc0(sizeof(Session));
> + MemoryContextSwitchTo(old_context);
> + }
> +}
>
> Isn't MemoryContextAllocZero easier?

Done.

I also stopped saying "const TupleDesc" in a few places, which was a
thinko (I wanted pointer to const tupleDesc, not const pointer to
tupleDesc...), and made sure that the shmem TupleDescs always have
tdtypmod actually set.

So as I understand it the remaining issues (aside from any
undiscovered bugs...) are:

1. Do we like "Session", "CurrentSession" etc? Robert seems to be
suggesting that this is likely to get in the way when we try to tackle
this area more thoroughly. Andres is suggesting that this is a good
time to take steps in this direction.

2. Andres didn't like what I did to DecrTupleDescRefCount, namely
allowing it to run when there is no ResourceOwner. I now see that this
is probably an indication of a different problem; even if there were a
worker ResourceOwner as he suggested (or perhaps a session-scoped one,
which a worker would reset before being reused), it wouldn't be the
one that was active when the TupleDesc was created. I think I have
failed to understand the contracts here and will think/read about it
some more.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v6.patchset.tgz application/x-gzip 31.7 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-21 01:18:35
Message-ID: CAEepm=3wk1AiAy=_gobkOVnzCwGdrw1Magq1g76oWCfN+mRuRw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> I think it'd be a good idea to backpatch the addition of
> TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
> others think?

+1

That would also provide a way for extension developers to write code
that compiles against PG11 and also against earlier releases without
having to do ugly conditional macro stuff.
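
Without the backpatch, extensions would presumably have to carry a shim
along these lines (a sketch):

#ifndef TupleDescAttr
#define TupleDescAttr(a, i) ((a)->attrs[(i)])
#endif

With TupleDescAttr() available in all branches, the same source would
just compile everywhere.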

--
Thomas Munro
http://www.enterprisedb.com


From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-21 01:33:00
Message-ID: CAB7nPqS=FQiJnjXRrbzdfBixgp_ZN9mxP_m4FvpzEdJPr+b=cA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Mon, Aug 21, 2017 at 10:18 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> I think it'd be a good idea to backpatch the addition of
>> TupleDescAttr(tupledesc, n) to make future backpatching easier. What do
>> others think?
>
> +1
>
> That would also provide a way for extension developers to write code
> that compiles against PG11 and also against earlier releases without
> having to do ugly conditional macro stuff.

Updating only tupdesc.h is harmless, so no real objection to your argument.
--
Michael


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-22 23:41:23
Message-ID: 20170822234123.3nmge5x7hbsbms5o@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > Pushing 0001, 0002 now.
> >
> > - rebased after conflicts
> > - fixed a significant number of too long lines
> > - removed a number of now superfluous linebreaks
>
> Thanks! Please find attached a rebased version of the rest of the patch set.

Pushed 0001, 0002. Looking at later patches.

Greetings,

Andres Freund


From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 05:46:44
Message-ID: 20170823054644.efuzftxjpfi6wwqs@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-22 16:41:23 -0700, Andres Freund wrote:
> On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> > On Mon, Aug 21, 2017 at 6:17 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > > Pushing 0001, 0002 now.
> > >
> > > - rebased after conflicts
> > > - fixed a significant number of too long lines
> > > - removed a number of now superfluous linebreaks
> >
> > Thanks! Please find attached a rebased version of the rest of the patch set.
>
> Pushed 0001, 0002. Looking at later patches.

Committing 0003. This'll probably need further adjustment, but I think
it's good to make progress here.

Changes:
- pgindent'ed after adding the necessary typedefs to typedefs.list
- replaced INT64CONST w UINT64CONST
- moved count assertion in delete_item to before decrementing - as count
is unsigned, it'd just wrap around on underflow not triggering the assertion.
- documented and asserted resize is called without partition lock held
- removed reference to iterator in dshash_find comments
- removed stray references to dshash_release (rather than dshash_release_lock)
- reworded dshash_find_or_insert reference to dshash_find to also
mention error handling.

Notes for possible followup commits of the dshash API:
- nontrivial portions of dshash are essentially critical sections, lest
dynamic shared memory be leaked. Should we, short term, introduce
actual critical section markers to make that more obvious? Should we,
longer term, make this more failsafe / easier to use, by
extending/emulating memory contexts for dsa memory?
- I'm very unconvinced of supporting both {compare,hash}_arg_function
and the non-arg version. Why not solely support the _arg_ version, but
add the size argument? On all relevant platforms that should still be
register arg callable, and the branch isn't free either.
- might be worthwhile to try to reduce duplication between
delete_item_from_bucket, delete_key_from_bucket, delete_item and
dshash_delete_key.

For later commits in the series:
- Afaict the whole shared tupledesc stuff, as tqueue.c before, is
entirely untested. This baffles me. See also [1]. I can force the code
to be reached with force_parallel_mode=regress/1, but this absolutely
really totally needs to be reached by the default tests. Robert?
- gcc wants static before const (0004).
- Afaict GetSessionDsmHandle() uses the current rather than
TopMemoryContext. Try running the regression tests under
force_parallel_mode - crashes immediately for me without fixing that.
- SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
which calls EnsureCurrentSession(), but
SharedRecordTypmodRegistryInit() does so again - sprinkling those
around liberally seems like it could hide bugs.

Regards,

Andres

[1] https://coverage.postgresql.org/src/backend/executor/tqueue.c.gcov.html


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 11:58:04
Message-ID: CAEepm=3wiFVqhm0zUAbzbzg6um3Wc+=w8bM=shr-NqJSdJj=dg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Committing 0003. This'll probably need further adjustment, but I think
> it's good to make progress here.

Thanks!

> Changes:
> - pgindent'ed after adding the necessary typedefs to typedefs.list
> - replaced INT64CONST w UINT64CONST
> - moved count assertion in delete_item to before decrementing - as count
> is unsigned, it'd just wrap around on underflow not triggering the assertion.
> - documented and asserted resize is called without partition lock held
> - removed reference to iterator in dshash_find comments
> - removed stray references to dshash_release (rather than dshash_release_lock)
> - reworded dshash_find_or_insert reference to dshash_find to also
> mention error handling.

Doh. Thanks.

> Notes for possible followup commits of the dshash API:
> - nontrivial portions of dshash are essentially critical sections, lest
> dynamic shared memory be leaked. Should we, short term, introduce
> actual critical section markers to make that more obvious? Should we,
> longer term, make this more failsafe / easier to use, by
> extending/emulating memory contexts for dsa memory?

Hmm. I will look into this.

> - I'm very unconvinced of supporting both {compare,hash}_arg_function
> and the non-arg version. Why not solely support the _arg_ version, but
> add the size argument? On all relevant platforms that should still be
> register arg callable, and the branch isn't free either.

Well, the idea was that both versions were compatible with existing
functions: one with DynaHash's hash and compare functions and the
other with qsort_arg's compare function type. In the attached version
I've done as you suggested in 0001. Since I guess many users will
finish up wanting raw memory compare and hash I've provided
dshash_memcmp() and dshash_memhash(). Thoughts?

Since there is no attempt to be compatible with anything else, I was
slightly tempted to make equal functions return true for a match,
rather than the memcmp-style return value, but figured it was still
better to be consistent.
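
To make the intended usage concrete, here's a minimal sketch of a
caller wanting raw memory semantics (the entry type and the tranche ID
below are invented for illustration):

#include "postgres.h"

#include "lib/dshash.h"

/* hypothetical entry type; the key is the initial bytes of the entry */
typedef struct MyEntry
{
    uint32      key;
    uint32      value;
} MyEntry;

static const dshash_parameters my_params = {
    sizeof(uint32),             /* key_size */
    sizeof(MyEntry),            /* entry_size */
    dshash_memcmp,              /* compare_function(a, b, size, arg) */
    dshash_memhash,             /* hash_function(v, size, arg) */
    MY_TRANCHE_ID               /* tranche_id, invented for this sketch */
};

static dshash_table *
make_my_table(dsa_area *area)
{
    /* the third argument is the user arg passed to the callbacks */
    return dshash_create(area, &my_params, NULL);
}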

> - might be worthwhile to try to reduce duplication between
> delete_item_from_bucket, delete_key_from_bucket, delete_item and
> dshash_delete_key.

Yeah. I will try this and send a separate refactoring patch.

> For later commits in the series:
> - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
> entirely untested. This baffles me. See also [1]. I can force the code
> to be reached with force_parallel_mode=regress/1, but this absolutely
> really totally needs to be reached by the default tests. Robert?

A fair point. 0002 is a simple patch to push some blessed records
through a TupleQueue in select_parallel.sql. It doesn't do ranges and
arrays (special cases in the tqueue.c code that 0004 rips out), but
for exercising the new shared code I believe this is enough. If you
apply just 0002 and 0004 then this test fails, as expected, with a
confusing record-decoding error.

> - gcc wants static before const (0004).

Fixed.

> - Afaict GetSessionDsmHandle() uses the current memory context rather
> than TopMemoryContext. Try running the regression tests under
> force_parallel_mode - crashes immediately for me without fixing that.

Gah, right. Fixed.

> - SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
> which calls EnsureCurrentSession(), but
> SharedRecordTypmodRegistryInit() does so again - sprinkling those
> around liberally seems like it could hide bugs.

Yeah. Will look into this.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v7.patchset.tgz application/x-gzip 23.5 KB

From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 12:04:42
Message-ID: CAEepm=37RvJC3U1v5S6Pj+Xb2sE5DxL_+YSvoHzgP1btC8Hs5w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 23, 2017 at 11:58 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> - Afaict GetSessionDsmHandle() uses the current memory context rather
>> than TopMemoryContext. Try running the regression tests under
>> force_parallel_mode - crashes immediately for me without fixing that.
>
> Gah, right. Fixed.

That version missed an early return case where dsm_create failed.
Here's a version that restores the caller's memory context in that
case too.
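
For clarity, the function now has roughly this shape (heavily abridged
sketch: SESSION_DSM_SIZE stands in for the real size computation, and
the shm_toc/DSA setup is elided -- the point is the context handling):

dsm_handle
GetSessionDsmHandle(void)
{
    MemoryContext oldcontext;
    dsm_segment *seg;
    dsm_handle  handle;

    /* Session-lifetime objects must not go into a query context. */
    oldcontext = MemoryContextSwitchTo(TopMemoryContext);
    seg = dsm_create(SESSION_DSM_SIZE, DSM_CREATE_NULL_IF_MAXSEGMENTS);
    if (seg == NULL)
    {
        /* restore the caller's context on this early return too */
        MemoryContextSwitchTo(oldcontext);
        return DSM_HANDLE_INVALID;
    }
    handle = dsm_segment_handle(seg);
    MemoryContextSwitchTo(oldcontext);
    return handle;
}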

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v8.patchset.tgz application/x-gzip 23.5 KB

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 13:45:38
Message-ID: CA+TgmoawEK_JWbPATxtQu1jcwxjRu8HhvOAQofD-SpNZ3s_EOw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 23, 2017 at 1:46 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> For later commits in the series:
> - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
> entirely untested. This baffles me. See also [1]. I can force the code
> to be reached with force_parallel_mode=regress/1, but this absolutely
> really totally needs to be reached by the default tests. Robert?

force_parallel_mode=regress is a good way of testing this because it
keeps the leader from doing the work, which would likely dodge any
bugs that happened to exist. If you want to test something in the
regular regression tests, using force_parallel_mode=on is probably a
good way to do it.

Also note that there are 3 buildfarm members that test with
force_parallel_mode=regress on a regular basis, so it's not like there
is no automated coverage of this area.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 16:42:47
Message-ID: 20170823164247.ml7dkxjp2petob3q@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-23 09:45:38 -0400, Robert Haas wrote:
> On Wed, Aug 23, 2017 at 1:46 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> > For later commits in the series:
> > - Afaict the whole shared tupledesc stuff, as tqueue.c before, is
> > entirely untested. This baffles me. See also [1]. I can force the code
> > to be reached with force_parallel_mode=regress/1, but this absolutely
> > really totally needs to be reached by the default tests. Robert?
>
> force_parallel_mode=regress is a good way of testing this because it
> keeps the leader from doing the work, which would likely dodge any
> bugs that happened to exist. If you want to test something in the
> regular regression tests, using force_parallel_mode=on is probably a
> good way to do it.
>
> Also note that there are 3 buildfarm members that test with
> force_parallel_mode=regress on a regular basis, so it's not like there
> is no automated coverage of this area.

I don't think that's sufficient. make, and especially check-world,
should have a decent coverage of the code locally. Without having to
know about options like force_parallel_mode=regress. As e.g. evidenced
by the fact that Thomas's latest version crashed if you ran the tests
that way. If there are a few lines that aren't covered by the plain
tests, or more than a few node + parallelism combinations, I'm not
bothered much. But this is (soon hopefully was) a fairly complicated
piece of infrastructure - that should be exercised. If necessary that
can just be a BEGIN; SET LOCAL force_parallel_mode=on; query with
blessed descs; COMMIT or whatnot - it's not like we need something hugely
complicated here.
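
Untested sketch of the kind of thing I mean -- any parallel-safe query
whose target list produces a transient record type should do:

BEGIN;
SET LOCAL force_parallel_mode = on;
-- ROW(...) blesses a transient record type in the worker, which the
-- leader then has to decode on its end of the tuple queue
SELECT ROW(x, x + 1) FROM generate_series(1, 3) AS x;
COMMIT;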

Greetings,

Andres Freund


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-23 19:13:30
Message-ID: CA+TgmoZEvMAJQ+mJPQjwmW=oD4xu-D8S+r2qkxGiRJycD63_-w@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 23, 2017 at 12:42 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> I don't think that's sufficient. make, and especially check-world,
> should have a decent coverage of the code locally. Without having to
> know about options like force_parallel_mode=regress. As e.g. evidenced
> by the fact that Thomas's latest version crashed if you ran the tests
> that way. If there are a few lines that aren't covered by the plain
> tests, or more than a few node + parallelism combinations, I'm not
> bothered much. But this is (soon hopefully was) a fairly complicated
> piece of infrastructure - that should be exercised. If necessary that
> can just be a BEGIN; SET LOCAL force_parallel_mode=on; query with
> blessed descs; COMMIT or whatnot - it's not like we need something hugely
> complicated here.

Yeah, we've been bitten before by changes that seemed OK when run
without force_parallel_mode but misbehaved with that option, so it
would be nice to improve things. Now, I'm not totally convinced that
just adding a test around blessed tupledescs is really going to help
very much - that option exercises a lot of code, and this is only one
relatively small bit of it. But I'm certainly not objecting to the
idea.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-24 12:08:08
Message-ID: CAEepm=18WWQs4uX9YSb_LR5nQk1rFd9V+OAjjGhbYbYU2gADLA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Wed, Aug 23, 2017 at 11:58 PM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> On Wed, Aug 23, 2017 at 5:46 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> Notes for possible followup commits of the dshash API:
>> - nontrivial portions of dshash are essentially critical sections lest
>> dynamic shared memory be leaked. Should we, short term, introduce
>> actual critical section markers to make that more obvious? Should we,
>> longer term, make this more failsafe / easier to use, by
>> extending/emulating memory contexts for dsa memory?
>
> Hmm. I will look into this.

Yeah, dshash_create() leaks the control object if the later allocation
of the initial hash table array raises an error. I think that should
be fixed -- please see 0001 in the new patch set attached.
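
The approach in 0001 is to allocate the initial bucket array in
no-throw mode, so that the control object can be freed before we raise
the error ourselves. Roughly:

hash_table->control->buckets =
    dsa_allocate_extended(area,
                          sizeof(dsa_pointer) * DSHASH_NUM_PARTITIONS,
                          DSA_ALLOC_NO_OOM | DSA_ALLOC_ZERO);
if (!DsaPointerIsValid(hash_table->control->buckets))
{
    dsa_free(area, control);    /* don't leak the control object */
    ereport(ERROR,
            (errcode(ERRCODE_OUT_OF_MEMORY),
             errmsg("out of memory")));
}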

The other two places where shared memory is allocated are resize() and
insert_into_bucket(), and both of those seem exception-safe to me: if
dsa_allocate() elogs then nothing is changed, and the code after that
point is no-throw. Am I missing something?

>> - SharedRecordTypmodRegistryInit() is called from GetSessionDsmHandle()
>> which calls EnsureCurrentSession(), but
>> SharedRecordTypmodRegistryInit() does so again - sprinkling those
>> around liberally seems like it could hide bugs.
>
> Yeah. Will look into this.

One idea is to run InitializeSession() in InitPostgres() instead, so
that CurrentSession is initialized at startup, but initially empty.
See attached. (I realised that that terminology is a bit like a large
volume called FRENCH CUISINE which turns out to have just one recipe
for an omelette in it, but you have to start somewhere...) Better
ideas?

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v9.patchset.tgz application/x-gzip 24.3 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-08-25 01:46:20
Message-ID: 20170825014620.rs5jwfgzsrd4aqg7@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
> 2. Andres didn't like what I did to DecrTupleDescRefCount, namely
> allowing it to run when there is no ResourceOwner. I now see that this
> is probably an indication of a different problem; even if there were a
> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
> which a worker would reset before being reused), it wouldn't be the
> one that was active when the TupleDesc was created. I think I have
> failed to understand the contracts here and will think/read about it
> some more.

Maybe I'm missing something, but isn't the issue here that using
DecrTupleDescRefCount() simply is wrong, because we're not actually
necessarily tracking the TupleDesc via the resowner mechanism?

If you look at the code, in the case of a previously unknown tupledesc,
it's registered with:

entDesc = CreateTupleDescCopy(tupDesc);
...
/* mark it as a reference-counted tupdesc */
entDesc->tdrefcount = 1;
...
RecordCacheArray[newtypmod] = entDesc;
...

Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
ResourceOwnerRememberTupleDesc() managing the reference from the
array. Nor was there one before.

We have other code managing TupleDesc lifetimes similarly, and look at
how they're freeing it:
/* Delete tupdesc if we have it */
if (typentry->tupDesc != NULL)
{
    /*
     * Release our refcount, and free the tupdesc if none remain.
     * (Can't use DecrTupleDescRefCount because this reference is not
     * logged in current resource owner.)
     */
    Assert(typentry->tupDesc->tdrefcount > 0);
    if (--typentry->tupDesc->tdrefcount == 0)
        FreeTupleDesc(typentry->tupDesc);
    typentry->tupDesc = NULL;
}

This also made me think about how we're managing the lookup from the
shared array:

/*
 * Our local array can now point directly to the TupleDesc
 * in shared memory.
 */
RecordCacheArray[typmod] = tupdesc;

Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
that case manipulate shared memory in a concurrency unsafe manner.

Greetings,

Andres Freund


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-04 06:14:39
Message-ID: CAEepm=2Vs5iR4MO4LWe3Ap0jiQngoT3jrSAGPV3BiGd3i3n6ig@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thanks for the review and commits so far. Here's a rebased, debugged
and pgindented version of the remaining patches. I ran pgindent with
--list-of-typedefs="SharedRecordTableKey,SharedRecordTableEntry,SharedTypmodTableEntry,SharedRecordTypmodRegistry,Session"
to fix some weirdness around these new typenames.

While rebasing the 0002 patch (removal of tqueue.c's remapping logic),
I modified the interface of the newly added
ExecParallelCreateReaders() function from commit 51daa7bd because it
no longer has any reason to take a TupleDesc.

On Fri, Aug 25, 2017 at 1:46 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-08-21 11:02:52 +1200, Thomas Munro wrote:
>> 2. Andres didn't like what I did to DecrTupleDescRefCount, namely
>> allowing it to run when there is no ResourceOwner. I now see that this
>> is probably an indication of a different problem; even if there were a
>> worker ResourceOwner as he suggested (or perhaps a session-scoped one,
>> which a worker would reset before being reused), it wouldn't be the
>> one that was active when the TupleDesc was created. I think I have
>> failed to understand the contracts here and will think/read about it
>> some more.
>
> Maybe I'm missing something, but isn't the issue here that using
> DecrTupleDescRefCount() simply is wrong, because we're not actually
> necessarily tracking the TupleDesc via the resowner mechanism?

Yeah. Thanks.

> If you look at the code, in the case of a previously unknown tupledesc,
> it's registered with:
>
> entDesc = CreateTupleDescCopy(tupDesc);
> ...
> /* mark it as a reference-counted tupdesc */
> entDesc->tdrefcount = 1;
> ...
> RecordCacheArray[newtypmod] = entDesc;
> ...
>
> Note that there's no PinTupleDesc(), IncrTupleDescRefCount() or
> ResourceOwnerRememberTupleDesc() managing the reference from the
> array. Nor was there one before.
>
> We have other code managing TupleDesc lifetimes similarly, and look at
> how they're freeing it:
> /* Delete tupdesc if we have it */
> if (typentry->tupDesc != NULL)
> {
>     /*
>      * Release our refcount, and free the tupdesc if none remain.
>      * (Can't use DecrTupleDescRefCount because this reference is not
>      * logged in current resource owner.)
>      */
>     Assert(typentry->tupDesc->tdrefcount > 0);
>     if (--typentry->tupDesc->tdrefcount == 0)
>         FreeTupleDesc(typentry->tupDesc);
>     typentry->tupDesc = NULL;
> }

Right. I have changed shared_record_typmod_registry_worker_detach()
to be more like that, with an explanation.

> This also made me think about how we're managing the lookup from the
> shared array:
>
> /*
>  * Our local array can now point directly to the TupleDesc
>  * in shared memory.
>  */
> RecordCacheArray[typmod] = tupdesc;
>
> Uhm. Isn't that highly highly problematic? E.g. tdrefcount manipulations
> which are done by all lookups (cf. lookup_rowtype_tupdesc()) would in
> that case manipulate shared memory in a concurrency unsafe manner.

No. See this change, in that and similar code paths:

- IncrTupleDescRefCount(tupDesc);
+ PinTupleDesc(tupDesc);

The difference between IncrTupleDescRefCount() and PinTupleDesc() is
that the latter recognises non-refcounted tuple descriptors
(tdrefcount == -1) and does nothing. Shared tuple descriptors are not
reference counted (see TupleDescCopy() which initialises
dst->tdrefcount to -1). It was for foolish symmetry that I was trying
to use ReleaseTupleDesc() in shared_record_typmod_registry_detach()
before, since it also knows about non-refcounted tuple descriptors,
but that's not appropriate: it calls DecrTupleDescRefCount() which
assumes that we're using resource owners. We're not.
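
For anyone following along, the two macros in tupdesc.h spell out the
difference:

#define PinTupleDesc(tupdesc) \
    do { \
        if ((tupdesc)->tdrefcount >= 0) \
            IncrTupleDescRefCount(tupdesc); \
    } while (0)

#define ReleaseTupleDesc(tupdesc) \
    do { \
        if ((tupdesc)->tdrefcount >= 0) \
            DecrTupleDescRefCount(tupdesc); \
    } while (0)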

To summarise the object lifetime management situation created by this
patch: shared TupleDesc objects accumulate in per-session DSM memory
until eventually the session ends and the DSM memory goes away. A bit
like CacheMemoryContext: there is no retail cleanup of shared
TupleDesc objects. BUT: the DSM detach callback is used to clear out
backend-local pointers to that stuff (and any non-shared reference
counted TupleDesc objects that might be found), in anticipation of
being able to reuse a worker process one day (which will involve
attaching to a new session, so we mustn't retain any traces of the
previous session in our local state). Maybe I'm trying to be a little
too clairvoyant there...

I improved the cleanup code: now it frees RecordCacheArray and
RecordCacheHash and reinstalls NULL pointers. Also it deals with
errors in GetSessionDsmHandle() better.
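
In outline, the worker detach hook now does something like this
(simplified sketch, not the exact code):

static void
shared_record_typmod_registry_worker_detach(dsm_segment *segment,
                                            Datum datum)
{
    /* Forget all local state referring to the session's registry. */
    if (RecordCacheArray != NULL)
    {
        pfree(RecordCacheArray);
        RecordCacheArray = NULL;
    }
    if (RecordCacheHash != NULL)
    {
        hash_destroy(RecordCacheHash);
        RecordCacheHash = NULL;
    }
    /* plus detaching the shared tables and clearing Session pointers */
}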

I renamed the members of Session to include a "shared_" prefix, which
seems a bit clearer.

I refactored it so that it never makes needless local copies of
TupleDesc objects (previously assign_record_type_typmod() would create
an extra local copy and cache that, which was wasteful). That
actually makes much of the discussion above moot: on detach, a worker
should now ONLY find shared non-refcounted TupleDesc objects in the
local caches, so the FreeTupleDesc() case is unreachable...

The leader on the other hand can finish up with a mixture of local and
shared TupleDesc objects in its cache, if it had some before it ran a
parallel query. Its detach hook doesn't try to free those so it
doesn't matter.

--
Thomas Munro
http://www.enterprisedb.com

Attachment Content-Type Size
shared-record-typmods-v10.patchset.tgz application/x-gzip 22.6 KB

From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 03:03:35
Message-ID: 20170915030335.tnzxircl5vpjstvj@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Hi,

On 2017-09-04 18:14:39 +1200, Thomas Munro wrote:
> Thanks for the review and commits so far. Here's a rebased, debugged
> and pgindented version of the remaining patches.

I've pushed this with minor modifications:
- added typedefs to typedefs.list
- re-pgindented, there were some missing reindents in headers
- added a very brief intro to session.c, moved some content repeated
in various places to the header - some of it was bound to become
out-of-date due to future uses of the facility.
- moved NULL setting in detach hook directly after the respective
resource deallocation, for the improbable case of it being
reinvoked due to an error in a later dealloc function

Two remarks:
- I'm not sure I like the order in which things are added to the typmod
hashes, I wonder if some more careful organization could get rid of
the races. Doesn't seem critical, but would be a bit nicer.

- I'm not yet quite happy with the Session facility. I think it'd be
nicer if we had a cleaner split between the shared memory notion of a
session and the local memory version of it. The shared memory version
would live in a ~max_connections sized array, referenced from
PGPROC. In a lot of cases it'd completely obsolete the need for a
shm_toc, because you could just store handles etc in there. The local
memory version then would just store local pointers etc into that.

But I think we can get there incrementally.
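
To sketch what I mean (purely illustrative -- none of these types
exist):

/* one slot per backend, in a max_connections sized shmem array */
typedef struct SharedSession
{
    dsm_handle  segment_handle;         /* instead of a shm_toc */
    dsa_handle  area_handle;
    dshash_table_handle record_table_handle;
} SharedSession;

/* backend-local view, holding attached pointers */
typedef struct LocalSession
{
    SharedSession *shared;      /* this backend's slot, via PGPROC */
    dsa_area   *area;
    dshash_table *record_table;
} LocalSession;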

It's very nice to push commits that have stats like
6 files changed, 27 insertions(+), 1110 deletions(-)
even if it essentially has been paid forward by a lot of previous work
;)

Thanks for the work on this!

Regards,

Andres


From: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 03:16:54
Message-ID: CAEepm=2cN+ci72LzKsOCFU-JAGyn+ST3Gk2FE33P-Pqf1PBfwg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2017-09-04 18:14:39 +1200, Thomas Munro wrote:
>> Thanks for the review and commits so far. Here's a rebased, debugged
>> and pgindented version of the remaining patches.
>
> I've pushed this with minor modifications:

Thank you!

> - added typedefs to typedefs.list

Should I do this manually with future patches?

> - re-pgindented, there were some missing reindents in headers
> - added a very brief intro into session.c, moved some content repeated
> in various places to the header - some of them were bound to become
> out-of-date due to future uses of the facility.
> - moved NULL setting in detach hook directly after the respective
> resource deallocation, for the not really probable case of it being
> reinvoked due to an error in a later dealloc function
>
> Two remarks:
> - I'm not sure I like the order in which things are added to the typmod
> hashes, I wonder if some more careful organization could get rid of
> the races. Doesn't seem critical, but would be a bit nicer.

I will have a think about whether I can improve that. In an earlier
version I did things in a different order and had different problems.
The main hazard to worry about here is that you can't let any typmod
number escape into shmem where it might be read by others (for example
a concurrent session that wants a typmod for a TupleDesc that happens
to match) until the typmod number is resolvable back to a TupleDesc
(meaning you can look it up in shared_typmod_table). Not
wasting/leaking memory in various failure cases is a secondary (but
obviously important) concern.
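
In outline, with invented helper names, the safe order inside the
registration path is:

/* 1. Make the typmod resolvable first: typmod -> TupleDesc. */
insert_shared_typmod_entry(registry, typmod, shared_tupdesc);

/*
 * 2. Only then publish TupleDesc -> typmod.  From this point a
 * concurrent backend may find and reuse the typmod, and by now it can
 * always map it back to a TupleDesc.
 */
insert_shared_record_entry(registry, shared_tupdesc, typmod);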

> - I'm not yet quite happy with the Session facility. I think it'd be
> nicer if we had a cleaner split between the shared memory notion of a
> session and the local memory version of it. The shared memory version
> would live in a ~max_connections sized array, referenced from
> PGPROC. In a lot of cases it'd completely obsolete the need for a
> shm_toc, because you could just store handles etc in there. The local
> memory version then would just store local pointers etc into that.
>
> But I think we can get there incrementally.

+1 to all of the above. I fully expect this to get changed around quite a lot.

I'll keep an eye out for problem reports.

--
Thomas Munro
http://www.enterprisedb.com


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 03:29:05
Message-ID: 14352.1505446145@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
> On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>> - added typedefs to typedefs.list

> Should I do this manually with future patches?

FWIW, I'm not on board with that. I think the version of typedefs.list
in the tree should reflect the last official pgindent run. There's also
a problem that it only works well if *every* committer faithfully updates
typedefs.list, which isn't going to happen.

For local pgindent'ing, I pull down

https://buildfarm.postgresql.org/cgi-bin/typedefs.pl

and then add any typedefs created by the patch I'm working on to that.
But I don't put the result into the commit. Maybe we need a bit better
documentation and/or tool support for using an unofficial typedef list.

regards, tom lane


From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 18:01:31
Message-ID: 20170915180131.rswlqo3iyxmoru2m@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
> Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
> > On Fri, Sep 15, 2017 at 3:03 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> >> - added typedefs to typedefs.list
>
> > Should I do this manually with future patches?

I think there's sort of a circuit split on that one. Robert and I do so
regularly; most others don't.

> FWIW, I'm not on board with that. I think the version of typedefs.list
> in the tree should reflect the last official pgindent run.

Why? I see pretty much no upside to that. You can't reindent anyway, due
to unindented changes. You can get the used typedefs.list trivially from
git.

> There's also a problem that it only works well if *every* committer
> faithfully updates typedefs.list, which isn't going to happen.
>
> For local pgindent'ing, I pull down
>
> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl
>
> and then add any typedefs created by the patch I'm working on to that.
> But I don't put the result into the commit. Maybe we need a bit better
> documentation and/or tool support for using an unofficial typedef list.

That's a mighty manual process - I want to be able to reindent files,
especially new ones where it's still reasonably possible, without having
to download files, then move changes out of the way, so I can rebase,
...

Greetings,

Andres Freund


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 19:39:49
Message-ID: 630.1505504389@sss.pgh.pa.us
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Andres Freund <andres(at)anarazel(dot)de> writes:
> On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
>> FWIW, I'm not on board with that. I think the version of typedefs.list
>> in the tree should reflect the last official pgindent run.

> Why? I see pretty much no upside to that. You can't reindent anyway, due
> to unindented changes. You can get the used typedefs.list trivially from
> git.

Perhaps, but the real problem is still this:

>> There's also a problem that it only works well if *every* committer
>> faithfully updates typedefs.list, which isn't going to happen.

We can't even get everybody to pgindent patches before commit, let alone
update typedefs.list. So sooner or later your process is going to need
to involve getting a current list from the buildfarm.

>> For local pgindent'ing, I pull down
>> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl

> That's a mighty manual process - I want to be able to reindent files,
> especially new ones where it's still reasonably possible, without having
> to download files, then move changes out of the way, so I can rebase,

Well, that just shows you don't know how to use it. You can tell pgindent
to use an out-of-tree copy of typedefs.list. I have the curl fetch and
using the out-of-tree list all nicely scripted ;-)

There might be something to be said for removing the typedefs list from
git altogether, and adjusting the standard wrapper script to pull it from
the buildfarm into a .gitignore'd location if there's not a copy there
already.

regards, tom lane


From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: POC: Sharing record typmods between backends
Date: 2017-09-15 19:50:33
Message-ID: 20170915195033.vsxm46mvfzxrwfuy@alap3.anarazel.de
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 2017-09-15 15:39:49 -0400, Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> > On 2017-09-14 23:29:05 -0400, Tom Lane wrote:
> >> FWIW, I'm not on board with that. I think the version of typedefs.list
> >> in the tree should reflect the last official pgindent run.
>
> > Why? I see pretty much no upside to that. You can't reindent anyway, due
> > to unindented changes. You can get the used typedefs.list trivially from
> > git.
>
> Perhaps, but the real problem is still this:
>
> >> There's also a problem that it only works well if *every* committer
> >> faithfully updates typedefs.list, which isn't going to happen.
>
> We can't even get everybody to pgindent patches before commit, let alone
> update typedefs.list.

Well, that's partially because right now it's really painful to do, and
we've not tried to push people to do so. You essentially have to:
1) Pull down a new typedefs.list (how many people know where from?)
2) Add new typedefs that have been added in the commit-to-be
3) Run pgindent only on the changed files, because there's bound to be
thousands of unrelated reindents
4) Revert reindents in changed files that are unrelated to the commit.

1) is undocumented, 2) is painful (add an option to generate the list
automatically?), 3) is painful (add a command-line tool?), and 4) is
painful. So it's not particularly surprising that many don't bother.

> >> For local pgindent'ing, I pull down
> >> https://buildfarm.postgresql.org/cgi-bin/typedefs.pl
>
> > That's a mighty manual process - I want to be able to reindent files,
> > especially new ones where it's still reasonably possible, without having
> > to download files, then move changes out of the way, so I can rebase,
>
> Well, that just shows you don't know how to use it. You can tell pgindent
> to use an out-of-tree copy of typedefs.list. I have the curl fetch and
> using the out-of-tree list all nicely scripted ;-)

Not sure how that invalidates my statement. If you have to script it
locally, and still have to add typedefs manually, that's still plenty of
stuff every committer (and better even, every contributor!) has to learn.

> There might be something to be said for removing the typedefs list
> from git altogether, and adjusting the standard wrapper script to pull
> it from the buildfarm into a .gitignore'd location if there's not a
> copy there already.

I wonder if we could add a command that pulls down an up-to-date list *and*
regenerates a list for the local tree with the local settings. And then
runs pgindent with the combined list - in most cases that'd result in a
properly indented tree. The number of commits with platform-specific
changes that the author/committer doesn't compile/run isn't that high.

Greetings,

Andres Freund