Re: [PATCH] Add PQconninfoParseParams and PQconninfodefaultsMerge to libpq

From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Heikki Linnakangas'" <hlinnakangas(at)vmware(dot)com>
Cc: "'Phil Sorber'" <phil(at)omniti(dot)com>, "'Alvaro Herrera'" <alvherre(at)2ndquadrant(dot)com>, "'Magnus Hagander'" <magnus(at)hagander(dot)net>, "'PostgreSQL-development'" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Add PQconninfoParseParams and PQconninfodefaultsMerge to libpq
Date: 2013-02-19 12:52:40
Message-ID: 004f01ce0ea0$017eda90$047c8fb0$@kapila@huawei.com
Lists: pgsql-hackers

> -----Original Message-----
> From: pgsql-hackers-owner(at)postgresql(dot)org
> [mailto:pgsql-hackers-owner(at)postgresql(dot)org] On Behalf Of Amit Kapila
> Sent: Monday, February 18, 2013 6:38 PM
> To: 'Heikki Linnakangas'
> Cc: 'Phil Sorber'; 'Alvaro Herrera'; 'Magnus Hagander'; 'PostgreSQL-development'
> Subject: Re: [HACKERS] [PATCH] Add PQconninfoParseParams and
> PQconninfodefaultsMerge to libpq
>
> On Monday, February 18, 2013 1:41 PM Heikki Linnakangas wrote:
> > On 18.02.2013 06:07, Amit Kapila wrote:
> > > On Sunday, February 17, 2013 8:44 PM Phil Sorber wrote:
> > >> On Sun, Feb 17, 2013 at 1:35 AM, Amit Kapila
> > >> <amit(dot)kapila(at)huawei(dot)com> wrote:
> > >>> Now the patch of Phil Sorber provides 2 new APIs,
> > >>> PQconninfoParseParams() and PQconninfodefaultsMerge(). Using these
> > >>> APIs, I can think of the below way for the patch "pass a connection
> > >>> string to pg_basebackup, ...":
> > >>>
> > >>> 1. Call the existing function PQconninfoParse() with the connection
> > >>> string input by the user and get a PQconninfoOption.
> > >>>
> > >>> 2. Now take the existing keywords (individual options specified by
> > >>> the user), extract the keywords from the PQconninfoOption structure,
> > >>> and call the new API PQconninfoParseParams(), which will return a
> > >>> PQconninfoOption. The PQconninfoOption structure returned in this
> > >>> step will contain all keywords.
> > >>>
> > >>> 3. Call PQconninfodefaultsMerge() to merge any default values if
> > >>> they exist. Not sure if this step is required?
> > >>>
> > >>> 4. Extract individual keywords from the PQconninfoOption structure
> > >>> and call PQconnectdbParams().
> > >>>
> > >>> Is this in line with what you have in mind, or have you thought of
> > >>> some other simpler way of using the new APIs?
> >
> > Yep, that's roughly what I had in mind. I don't think it's necessary to
> > merge defaults in step 3, but it needs to add the "replication=true" and
> > "dbname=replication" options.
>
> The advantage I can see in calling PQconninfoParseParams() in step 2 is
> that it will remove duplicate values by overriding the values for
> conflicting keywords.
> This is done in the function conninfo_array_parse(), which is called from
> PQconninfoParseParams().
> Am I right, or is there some other advantage to calling
> PQconninfoParseParams()?
>
> If there is no other advantage, then this is also done in
> PQconnectdbParams(), so can't we avoid calling PQconninfoParseParams()?

----
> I note that pg_dumpall also has a similar issue as pg_basebackup and
> pg_receivexlog; there's no way to pass a connection string to it
> either.

I think we need to add it not only to pg_dumpall but also to pg_dump.
As -C is an already-used option in pg_dump, I need to use something different.
I am planning to use -K as the new option (the available letters were
d, g, j, k, l, m, p, q, y).

I am planning to keep the same option for pg_dumpall, as pg_dumpall internally
calls pg_dump with the options supplied by the user.
In fact, we could hack the string passed to pg_dump to change the option from
-C to -K, but I cannot see how that would be better than just using -K for
both.
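
Just to make the idea concrete, a rough sketch of the option handling I have
in mind (the -K letter is as above; the long option name and variable are only
placeholders, not a proposal):

#include <getopt.h>
#include <stdlib.h>
#include <string.h>

/* placeholder for the user-supplied connection string */
static char *connection_string = NULL;

/*
 * Sketch of the relevant part of the option loop; "K:" would simply be
 * appended to pg_dump's existing getopt string, and the same handling
 * reused in pg_dumpall, which then forwards the value to pg_dump.
 */
static void
parse_connstr_option(int argc, char **argv)
{
    static struct option long_options[] = {
        /* long name is a placeholder */
        {"connection-string", required_argument, NULL, 'K'},
        {NULL, 0, NULL, 0}
    };
    int         c;

    while ((c = getopt_long(argc, argv, "K:", long_options, NULL)) != -1)
    {
        switch (c)
        {
            case 'K':
                connection_string = strdup(optarg);
                break;
            default:
                exit(1);
        }
    }
}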

Suggestions?

With Regards,
Amit Kapila.
