From: Andres Freund <andres(at)anarazel(dot)de>
To: Arthur Zakirov <a(dot)zakirov(at)postgrespro(dot)ru>
Cc: Ildus Kurbangaliev <i(dot)kurbangaliev(at)postgrespro(dot)ru>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PROPOSAL] Shared Ispell dictionaries
Date: 2018-03-02 04:31:49
Message-ID: 20180302043149.tn2xjgt2vcigknhe@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2018-02-07 19:28:29 +0300, Arthur Zakirov wrote:
> + {
> + {"max_shared_dictionaries_size", PGC_POSTMASTER, RESOURCES_MEM,
> + gettext_noop("Sets the maximum size of all text search dictionaries loaded into shared memory."),
> + gettext_noop("Currently controls only loading of Ispell dictionaries. "
> + "If total size of simultaneously loaded dictionaries "
> + "reaches the maximum allowed size then a new dictionary "
> + "will be loaded into local memory of a backend."),
> + GUC_UNIT_KB,
> + },
> + &max_shared_dictionaries_size,
> + 100 * 1024, 0, MAX_KILOBYTES,
> + NULL, NULL, NULL
> + },
So this uses shared memory, allocated at server start? That doesn't
seem right. Wouldn't it make more sense to have a
'num_shared_dictionaries' GUC, and then allocate them with dsm? Or,
even better, not have any such limit and use a dshash table to point
to the individual loaded dictionaries?
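
To sketch what I mean - entirely untested, and the struct / tranche
names (TsearchDictEntry, LWTRANCHE_TSEARCH_DSA) are invented here -
something like:

#include "postgres.h"
#include "lib/dshash.h"
#include "utils/dsa.h"

/* Hash entry: maps a dictionary's OID to its body in the dsa area. */
typedef struct TsearchDictEntry
{
	Oid			dictid;			/* hash key: pg_ts_dict OID */
	dsa_pointer data;			/* dictionary body, allocated in the area */
	Size		size;			/* size of the body, for bookkeeping */
} TsearchDictEntry;

static const dshash_parameters dict_dshash_params = {
	sizeof(Oid),
	sizeof(TsearchDictEntry),
	dshash_memcmp,
	dshash_memhash,
	LWTRANCHE_TSEARCH_DSA		/* invented tranche id */
};

/* Ensure a dictionary is loaded, return its dsa_pointer. */
static dsa_pointer
lookup_shared_dict(dshash_table *dicts, Oid dictid)
{
	TsearchDictEntry *entry;
	bool		found;
	dsa_pointer result;

	entry = dshash_find_or_insert(dicts, &dictid, &found);
	if (!found)
	{
		/*
		 * First backend to get here builds the dictionary in the dsa
		 * area; elided in this sketch.
		 */
		entry->data = InvalidDsaPointer;
		entry->size = 0;
	}
	result = entry->data;
	dshash_release_lock(dicts, entry);
	return result;
}

The nice part is that there's no fixed limit at all: the dsa area
grows on demand, and each hash entry just carries a dsa_pointer to
the dictionary body.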
Is there any chance we can instead convert dictionaries into a form we
can just mmap() into memory? That'd scale a lot higher and more
dynamically.
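
I.e. preprocess each dictionary once into a flat, offset-based
(pointer-free) file, and then every backend maps it read-only - plain
POSIX, the on-disk format is obviously handwaving:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *
map_dict_file(const char *path, size_t *size)
{
	int			fd;
	struct stat st;
	void	   *base;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return NULL;
	if (fstat(fd, &st) < 0)
	{
		close(fd);
		return NULL;
	}

	/* read-only shared mapping, position independent by construction */
	base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	close(fd);					/* mapping stays valid after close */
	if (base == MAP_FAILED)
		return NULL;

	*size = st.st_size;
	return base;
}

The kernel then shares the pages between all backends mapping the
same file, and can evict them under memory pressure - something a
fixed shmem budget can't do.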
Regards,
Andres