From: markwkm(at)gmail(dot)com
To: pgsql-hackers(at)postgresql(dot)org
Cc: andrew(at)dunslane(dot)net
Subject: ideas for auto-processing patches
Date: 2007-01-04 19:38:01
Message-ID: 70c01d1d0701041138u49245da3i5ab42de9f9e45d35@mail.gmail.com
Lists: pgsql-hackers

OSDL had a tool called PLM whose primary goal was to test patches against
the Linux kernel. It applied them and built them on multiple
platforms. It's a pretty simple idea, and the systems appear to still
be up for the moment, so here are a couple of links to what it did.

Summary of build results:
http://plm.testing.osdl.org/patches/show/linux-2.6.20-rc3-git3

Summary of recent patches submitted into the system:
http://plm.testing.osdl.org/patches/search_result

It also provides an rss feed:
http://plm.testing.osdl.org/rss

There are a couple of initial things I wanted to change, which I
think are improvements:

1. Pull source directly from repositories (cvs, git, etc.). PLM
doesn't really track actual scm repositories. It requires
directories of source code to be traversed, which are set up by
creating mirrors.

2. Apply and build patches against daily updates from the
repositories, as opposed to only against a specified version of the
source code. (A rough sketch of this apply-and-build loop is below.)
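
To make that concrete, here is a minimal sketch of the apply-and-build
loop I have in mind; the paths and the patch queue location are just
placeholders, not PLM code:

#!/usr/bin/perl
# Rough sketch only: apply each submitted patch to a fresh copy of the
# day's checkout and record whether it applies and builds.
use strict;
use warnings;

my $srcdir  = '/srv/builds/pgsql-HEAD';          # updated daily from the repository
my @patches = glob('/srv/patch-queue/*.patch');  # patches submitted for testing

for my $patch (@patches) {
    system('rm', '-rf', "$srcdir.work");
    system('cp', '-a', $srcdir, "$srcdir.work") == 0 or die "copy failed";

    if (system("patch -p1 -d $srcdir.work < $patch") != 0) {
        print "$patch: does not apply\n";
        next;
    }
    my $rc = system("cd $srcdir.work && ./configure > build.log 2>&1 && make >> build.log 2>&1");
    print "$patch: ", ($rc == 0 ? 'builds' : 'build failed'), "\n";
}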

Thoughts?

Regards,
Mark


From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: markwkm(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org, andrew(at)dunslane(dot)net
Subject: Re: ideas for auto-processing patches
Date: 2007-01-04 23:34:03
Message-ID: Pine.LNX.4.58.0701051030370.2840@linuxworld.com.au
Lists: pgsql-hackers

On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:

> 1. Pull source directly from repositories (cvs, git, etc.) PLM
> doesn't really track actually scm repositories. It requires
> directories of source code to be traversed, which are set up by
> creating mirrors.

It seems to me that a better approach might be to mirror the CVS repo --
or at least make that an option -- and pull the sources locally. Having to
pull down >100MB of data for every build might be onerous to some build
farm members.

Thanks,

Gavin


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org, andrew(at)dunslane(dot)net
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 01:19:49
Message-ID: 20070105011949.GF3792@alvh.no-ip.org
Lists: pgsql-hackers

Gavin Sherry wrote:
> On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
>
> > 1. Pull source directly from repositories (cvs, git, etc.) PLM
> > doesn't really track actually scm repositories. It requires
> > directories of source code to be traversed, which are set up by
> > creating mirrors.
>
> It seems to me that a better approach might be to mirror the CVS repo --
> or at least make that an option -- and pull the sources locally. Having to
> pull down >100MB of data for every build might be onerous to some build
> farm members.

Another idea is using the git-cvs interoperability system, as described
here (albeit with SVN, but the idea is the same):

http://tw.apinc.org/weblog/2007/01/03#subverting-git

Now, if we were to use a distributed system like Monotone this sort of
thing would be completely a non-issue ...

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org, andrew(at)dunslane(dot)net
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 01:24:23
Message-ID: Pine.LNX.4.58.0701051221410.3624@linuxworld.com.au
Lists: pgsql-hackers

On Thu, 4 Jan 2007, Alvaro Herrera wrote:

> Gavin Sherry wrote:
> > On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
> >
> > > 1. Pull source directly from repositories (cvs, git, etc.) PLM
> > > doesn't really track actually scm repositories. It requires
> > > directories of source code to be traversed, which are set up by
> > > creating mirrors.
> >
> > It seems to me that a better approach might be to mirror the CVS repo --
> > or at least make that an option -- and pull the sources locally. Having to
> > pull down >100MB of data for every build might be onerous to some build
> > farm members.
>
> Another idea is using the git-cvs interoperability system, as described
> here (albeit with SVN, but the idea is the same):
>
> http://tw.apinc.org/weblog/2007/01/03#subverting-git

It seems like that will just add one more cog to the machinery for no
extra benefit. Am I missing something?

>
> Now, if we were to use a distributed system like Monotone this sort of
> thing would be completely a non-issue ...

Monotone is so 2006. The new new thing is mercurial!

Gavin


From: "Andrew Dunstan" <andrew(at)dunslane(dot)net>
To: "Gavin Sherry" <swm(at)linuxworld(dot)com(dot)au>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 01:44:20
Message-ID: 3674.24.211.165.134.1167961460.squirrel@www.dunslane.net
Lists: pgsql-hackers

Gavin Sherry wrote:
> On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
>
>> 1. Pull source directly from repositories (cvs, git, etc.) PLM
>> doesn't really track actually scm repositories. It requires
>> directories of source code to be traversed, which are set up by
>> creating mirrors.
>
> It seems to me that a better approach might be to mirror the CVS repo --
> or at least make that an option -- and pull the sources locally. Having to
> pull down >100MB of data for every build might be onerous to some build
> farm members.
>

I am not clear about what is being proposed. Currently buildfarm syncs
against (or pulls a fresh copy from, depending on configuration) either
the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
among other mechanisms). I can imagine a mechanism in which we pull
certain patches from a patch server (maybe using an RSS feed, or a SOAP
call?) which could be applied before the run. I wouldn't want to couple
things much more closely than that.

The patches would need to be vetted first, or no sane buildfarm owner will
want to use them.

cheers

andrew


From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 02:44:56
Message-ID: Pine.LNX.4.58.0701051324250.3948@linuxworld.com.au
Lists: pgsql-hackers

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

> Gavin Sherry wrote:
> > On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
> >
> >> 1. Pull source directly from repositories (cvs, git, etc.) PLM
> >> doesn't really track actually scm repositories. It requires
> >> directories of source code to be traversed, which are set up by
> >> creating mirrors.
> >
> > It seems to me that a better approach might be to mirror the CVS repo --
> > or at least make that an option -- and pull the sources locally. Having to
> > pull down >100MB of data for every build might be onerous to some build
> > farm members.
> >
>
>
> I am not clear about what is being proposed. Currently buildfarm syncs
> against (or pulls a fresh copy from, depending on configuration) either
> the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
> among other mechanisms). I can imagine a mechanism in which we pull
> certain patches from a patch server (maybe using an RSS feed, or a SOAP
> call?) which could be applied before the run. I wouldn't want to couple
> things much more closely than that.

With PLM, you could test patches against various code branches. I'd
guess Mark would want to provide this capability. Pulling branches from
anoncvs regularly might be burdensome bandwidth-wise. So, like you say, a
local mirror would be beneficial for patch testing.

> The patches would need to be vetted first, or no sane buildfarm owner will
> want to use them.

It would be nice if there could be a class of trusted users whose patches
would not have to be vetted.

Thanks,

Gavin


From: markwkm(at)gmail(dot)com
To: "Gavin Sherry" <swm(at)linuxworld(dot)com(dot)au>
Cc: "Andrew Dunstan" <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 03:25:41
Message-ID: 70c01d1d0701041925u6e9ca4f7xa51e42e99c9299bf@mail.gmail.com
Lists: pgsql-hackers

On 1/4/07, Gavin Sherry <swm(at)linuxworld(dot)com(dot)au> wrote:
> On Thu, 4 Jan 2007, Andrew Dunstan wrote:
>
> > Gavin Sherry wrote:
> > > On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
> > >
> > >> 1. Pull source directly from repositories (cvs, git, etc.) PLM
> > >> doesn't really track actually scm repositories. It requires
> > >> directories of source code to be traversed, which are set up by
> > >> creating mirrors.
> > >
> > > It seems to me that a better approach might be to mirror the CVS repo --
> > > or at least make that an option -- and pull the sources locally. Having to
> > > pull down >100MB of data for every build might be onerous to some build
> > > farm members.
> > >
> >
> >
> > I am not clear about what is being proposed. Currently buildfarm syncs
> > against (or pulls a fresh copy from, depending on configuration) either
> > the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
> > among other mechanisms). I can imagine a mechanism in which we pull
> > certain patches from a patch server (maybe using an RSS feed, or a SOAP
> > call?) which could be applied before the run. I wouldn't want to couple
> > things much more closely than that.
>
> With PLM, you could test patches against various code branches. I'd
> guessed Mark would want to provide this capability.

Yeah, that pretty much covers it.

> Pulling branches from
> anonvcvs regularly might be burdensome bandwidth-wise. So, like you say, a
> local mirror would be beneficial for patch testing.

Right, some sort of local mirror would definitely speed things up.

> > The patches would need to be vetted first, or no sane buildfarm owner will
> > want to use them.
>
> It would be nice if there could be a class of trusted users whose patches
> would not have to be vetted.

PLM's authentication is tied to OSDL's internal authentication system,
but I imagine setting up accounts and trusting specific users
would be an easy first try.

Regards,
Mark


From: "Andrew Dunstan" <andrew(at)dunslane(dot)net>
To: "Gavin Sherry" <swm(at)linuxworld(dot)com(dot)au>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 03:34:41
Message-ID: 4078.24.211.165.134.1167968081.squirrel@www.dunslane.net
Lists: pgsql-hackers

Gavin Sherry wrote:
>
> With PLM, you could test patches against various code branches. I'd
> guessed Mark would want to provide this capability. Pulling branches from
> anonvcvs regularly might be burdensome bandwidth-wise. So, like you say, a
> local mirror would be beneficial for patch testing.

I think you're missing the point. Buildfarm members already typically have
or can get very cheaply a copy of each branch they build (HEAD and/or
REL*_*_STABLE). As long as the patch feed is kept to just patches which
they can apply there should be no great bandwidth issues.

>
>> The patches would need to be vetted first, or no sane buildfarm owner
>> will
>> want to use them.
>
> It would be nice if there could be a class of trusted users whose patches
> would not have to be vetted.
>
>

Beyond committers?

cheers

andrew


From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 03:57:55
Message-ID: Pine.LNX.4.58.0701051456250.4447@linuxworld.com.au
Lists: pgsql-hackers

On Thu, 4 Jan 2007, Andrew Dunstan wrote:

> Gavin Sherry wrote:
> >
> > With PLM, you could test patches against various code branches. I'd
> > guessed Mark would want to provide this capability. Pulling branches from
> > anonvcvs regularly might be burdensome bandwidth-wise. So, like you say, a
> > local mirror would be beneficial for patch testing.
>
>
> I think you're missing the point. Buildfarm members already typically have
> or can get very cheaply a copy of each branch they build (HEAD and/or
> REL*_*_STABLE). As long as the patch feed is kept to just patches which
> they can apply there should be no great bandwidth issues.

Right... my comment was more for Mark.

> > It would be nice if there could be a class of trusted users whose patches
> > would not have to be vetted.
> >
> >
>
> Beyond committers?

Hmmm... good question. I think so. I imagine the group would be small
though.

Thanks,

Gavin


From: Tino Wildenhain <tino(at)wildenhain(dot)de>
To: markwkm(at)gmail(dot)com
Cc: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 07:14:07
Message-ID: 459DFABF.5010400@wildenhain.de
Lists: pgsql-hackers

markwkm(at)gmail(dot)com wrote:
> On 1/4/07, Gavin Sherry <swm(at)linuxworld(dot)com(dot)au> wrote:
>> On Thu, 4 Jan 2007, Andrew Dunstan wrote:
...
>> Pulling branches from
>> anonvcvs regularly might be burdensome bandwidth-wise. So, like you
>> say, a
>> local mirror would be beneficial for patch testing.
>
> Right some sort of local mirror would definitely speed things up.

An easier speedup in this regard would be using subversion instead
of cvs. It transfers only diffs to your working copy (or rather,
against your last checkout), so it's really saving on bandwidth.

Regards
Tino


From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, markwkm(at)gmail(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 09:59:46
Message-ID: 459E2192.7070201@kaltenbrunner.cc
Lists: pgsql-hackers

Andrew Dunstan wrote:
> Gavin Sherry wrote:
>> With PLM, you could test patches against various code branches. I'd
>> guessed Mark would want to provide this capability. Pulling branches from
>> anonvcvs regularly might be burdensome bandwidth-wise. So, like you say, a
>> local mirror would be beneficial for patch testing.
>
>
> I think you're missing the point. Buildfarm members already typically have
> or can get very cheaply a copy of each branch they build (HEAD and/or
> REL*_*_STABLE). As long as the patch feed is kept to just patches which
> they can apply there should be no great bandwidth issues.

yeah - another thing to consider is that switching to a different scm
repository would put quite a burden on the buildfarm admins (most of
the alternative tools are not that easily available on the more esoteric
platforms, for example).
I'm also not sure how useful it would be to test patches against
branches other than HEAD - new and complex patches will only get applied
to HEAD anyway ...

Stefan


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tino Wildenhain <tino(at)wildenhain(dot)de>
Cc: markwkm(at)gmail(dot)com, Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 15:24:13
Message-ID: 459E6D9D.1050204@dunslane.net
Lists: pgsql-hackers

Tino Wildenhain wrote:
> markwkm(at)gmail(dot)com schrieb:
>> On 1/4/07, Gavin Sherry <swm(at)linuxworld(dot)com(dot)au> wrote:
>>> On Thu, 4 Jan 2007, Andrew Dunstan wrote:
> ...
>>> Pulling branches from
>>> anonvcvs regularly might be burdensome bandwidth-wise. So, like you
>>> say, a
>>> local mirror would be beneficial for patch testing.
>>
>> Right some sort of local mirror would definitely speed things up.
>
> Easier speedup in this regard would be using subversion instead
> of cvs. It transfers only diffs to your working copy (or rather,
> to your last checkout) so its really saving on bandwidth.
>

cvs update isn't too bad either. I just did a substantial update on a
tree that had not been touched for nearly 6 months, and ethereal tells
me that total traffic was 7343004 bytes in 7188 packets. Individual
buildfarm updates are going to be much lower than that, by a couple of
orders of magnitude, I suspect.

If we were to switch to subversion we should do it for the right reason
- this isn't one.

cheers

andrew


From: Jim Nasby <decibel(at)decibel(dot)org>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tino Wildenhain <tino(at)wildenhain(dot)de>, markwkm(at)gmail(dot)com, Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-05 21:35:50
Message-ID: E020CEEC-9E37-4AFE-944B-150053C7BCF4@decibel.org
Lists: pgsql-hackers

On Jan 5, 2007, at 10:24 AM, Andrew Dunstan wrote:
> cvs update isn't too bad either. I just did a substantial update on
> a tree that had not been touched for nearly 6 months, and ethereal
> tells me that total traffic was 7343004 bytes in 7188 packets.
> Individual buildfarm updates are going to be much lower than that,
> by a couple of orders of magnitude, I suspect.

More important, I see no reason to tie applying patches to pulling
from CVS. In fact, I think it's a bad idea: you want to build just
what's in CVS first, to make sure that it's working, before you start
testing any patches against it. So if this were added to buildfarm,
presumably it would build plain CVS, then start testing patches. It
could try a CVS up between each patch to see if anything changed, and
possibly start back at the top at that point.
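
Roughly, the loop I am picturing is something like this (just a sketch
with invented helper names, not buildfarm code):

#!/usr/bin/perl
# Sketch of the flow described above: build what is in CVS first, then
# test each patch, doing a "cvs update" between patches and starting
# back at the top if anything changed.
use strict;
use warnings;

my $tree    = 'pgsql';     # existing CVS checkout
my @patches = @ARGV;       # vetted patch files to test

sub build_ok {             # configure + make in the plain tree, true on success
    return system("cd $tree && ./configure > build.log 2>&1 && make > make.log 2>&1") == 0;
}
sub cvs_changed {          # true if "cvs update" pulled anything new
    my $out = `cd $tree && cvs -q update -d 2>&1`;
    return $out =~ /^[UP] /m;
}
sub test_patch {           # hypothetical helper: apply one patch to a copy of the tree and build it
    my ($patch) = @_;
    warn "would apply and build $patch here\n";
}

# make sure plain CVS is working before blaming any patch
die "unpatched tree does not build\n" unless build_ok();

my $i = 0;
while ($i < @patches) {
    test_patch($patches[$i]);
    if (cvs_changed()) {             # the tree moved underneath us
        die "unpatched tree no longer builds\n" unless build_ok();
        $i = 0;                      # start back at the top
        next;
    }
    $i++;
}
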
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)


From: markwkm(at)gmail(dot)com
To: "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: "Gavin Sherry" <swm(at)linuxworld(dot)com(dot)au>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-11 17:04:21
Message-ID: 70c01d1d0701110904l1f71f1cdneaab4998522bc06e@mail.gmail.com
Lists: pgsql-hackers

On 1/4/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> Gavin Sherry wrote:
> > On Thu, 4 Jan 2007 markwkm(at)gmail(dot)com wrote:
> >
> >> 1. Pull source directly from repositories (cvs, git, etc.) PLM
> >> doesn't really track actually scm repositories. It requires
> >> directories of source code to be traversed, which are set up by
> >> creating mirrors.
> >
> > It seems to me that a better approach might be to mirror the CVS repo --
> > or at least make that an option -- and pull the sources locally. Having to
> > pull down >100MB of data for every build might be onerous to some build
> > farm members.
> >
>
>
> I am not clear about what is being proposed. Currently buildfarm syncs
> against (or pulls a fresh copy from, depending on configuration) either
> the main anoncvs repo or a mirror (which you can get using cvsup or rsync,
> among other mechanisms). I can imagine a mechanism in which we pull
> certain patches from a patch server (maybe using an RSS feed, or a SOAP
> call?) which could be applied before the run. I wouldn't want to couple
> things much more closely than that.

I'm thinking that a SOAP call might be easier to implement? The RSS
feed seems like it would be more interesting, as I imagine a
buildfarm system might be able to react to new patches being added to
the system. But maybe that's a trivial thing with either SOAP or an
RSS feed.

> The patches would need to be vetted first, or no sane buildfarm owner will
> want to use them.

Perhaps as a first go it could pull any patch that applies
without errors? The list of patches to test could eventually be
restricted by name and by who submitted them.
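
For "applies without errors" I was thinking of nothing fancier than a
dry run with GNU patch, along these lines (sketch only):

#!/usr/bin/perl
# Minimal sketch: report whether a patch would apply cleanly to the
# source tree in the current directory, without modifying anything.
use strict;
use warnings;

my $patchfile = shift or die "usage: $0 file.patch\n";

# GNU patch's --dry-run checks applicability without touching any files
if (system("patch -p1 --dry-run --silent < $patchfile") == 0) {
    print "$patchfile applies cleanly\n";
} else {
    print "$patchfile does not apply; skipping it\n";
    exit 1;
}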

Regards,
Mark


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: markwkm(at)gmail(dot)com
Cc: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-11 17:21:34
Message-ID: 45A6721E.90105@dunslane.net
Lists: pgsql-hackers

markwkm(at)gmail(dot)com wrote:
>>
>> I am not clear about what is being proposed. Currently buildfarm syncs
>> against (or pulls a fresh copy from, depending on configuration) either
>> the main anoncvs repo or a mirror (which you can get using cvsup or
>> rsync,
>> among other mechanisms). I can imagine a mechanism in which we pull
>> certain patches from a patch server (maybe using an RSS feed, or a SOAP
>> call?) which could be applied before the run. I wouldn't want to couple
>> things much more closely than that.
>
> I'm thinking that a SOAP call might be easier to implement? The RSS
> feed seems like it would be more interesting as I am imagining that a
> buildfarm system might be able to react to new patches being added to
> the system. But maybe that's a trivial thing for either SOAP or an
> RSS feed.

I'd be quite happy with SOAP. We can make SOAP::Lite an optional load
module, so if you don't want to run patches you don't need to have the
module available.
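
Something like this fragment is all I have in mind; the patch server
URL here is only a placeholder:

use strict;
use warnings;

# Placeholder: in practice this would come from the member's config,
# and would be unset for members that don't test patches.
my $patch_server = 'https://patches.example.org/soap';

my $soap;
if ($patch_server) {
    eval {
        require SOAP::Lite;                        # loaded only when needed
        $soap = SOAP::Lite->proxy($patch_server);  # transport endpoint
        1;
    } or warn "SOAP::Lite not available; patch testing disabled\n";
}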

>
>> The patches would need to be vetted first, or no sane buildfarm owner
>> will
>> want to use them.
>
> Perhaps as a first go it can pull any patch that can be applied
> without errors? The list of patches to test can be eventually
> restricted by name and who submitted them.
>
>

This reasoning seems unsafe. I am not prepared to test arbitrary patches
on my machine - that seems like a perfect recipe for a trojan horse. I
want to know that they have been vetted by someone I trust. That means
that in order to get into the feed in the first place there has to be a
group of trusted submitters. Obviously, current postgres core committers
should be in that group, and I can think of maybe 5 or 6 other people
that could easily be on it. Perhaps we should leave the selection to the
core team.

cheers

andrew


From: markwkm(at)gmail(dot)com
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-12 16:53:21
Message-ID: 70c01d1d0701120853l506de559kb363a6be279d09eb@mail.gmail.com
Lists: pgsql-hackers

On 1/11/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> markwkm(at)gmail(dot)com wrote:
> >>
> >> I am not clear about what is being proposed. Currently buildfarm syncs
> >> against (or pulls a fresh copy from, depending on configuration) either
> >> the main anoncvs repo or a mirror (which you can get using cvsup or
> >> rsync,
> >> among other mechanisms). I can imagine a mechanism in which we pull
> >> certain patches from a patch server (maybe using an RSS feed, or a SOAP
> >> call?) which could be applied before the run. I wouldn't want to couple
> >> things much more closely than that.
> >
> > I'm thinking that a SOAP call might be easier to implement? The RSS
> > feed seems like it would be more interesting as I am imagining that a
> > buildfarm system might be able to react to new patches being added to
> > the system. But maybe that's a trivial thing for either SOAP or an
> > RSS feed.
>
> I'd be quite happy with SOAP. We can make SOAP::Lite an optional load
> module, so if you don't want to run patches you don't need to have the
> module available.
>
> >
> >> The patches would need to be vetted first, or no sane buildfarm owner
> >> will
> >> want to use them.
> >
> > Perhaps as a first go it can pull any patch that can be applied
> > without errors? The list of patches to test can be eventually
> > restricted by name and who submitted them.
> >
> >
>
> This reasoning seems unsafe. I am not prepared to test arbitrary patches
> on my machine - that seems like a perfect recipe for a trojan horse. I
> want to know that they have been vetted by someone I trust. That means
> that in order to get into the feed in the first place there has to be a
> group of trusted submitters. Obviously, current postgres core committers
> should be in that group, and I can think of maybe 5 or 6 other people
> that could easily be on it. Perhaps we should leave the selection to the
> core team.

That's an excellent point; I didn't think of the trojan horse
scenario. What do you think about setting up the buildfarm clients
with the users they are willing to test patches for, as opposed to
having the patch system track who the trusted users are? My thought
is that the former is easier to implement and that it allows anyone to
use the buildfarm to test a patch for anyone, each buildfarm client
owner permitting.

Regards,
Mark


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: markwkm(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-12 17:08:20
Message-ID: 45A7C084.60407@dunslane.net
Lists: pgsql-hackers

markwkm(at)gmail(dot)com wrote:
> What do you think about setting up the buildfarm clients
> with the users they are willing to test patches for, as opposed to
> having the patch system track who is are trusted users? My thoughts
> are the former is easier to implement and that it allows anyone to use
> the buildfarm to test a patch for anyone, well each buildfarm client
> user permitting.

We can do this, but the utility will be somewhat limited. The submitters
will still have to be known and authenticated on the patch server. I
think you're also overlooking one of the virtues of the buildfarm,
namely that it does its thing unattended. If there is a preconfigured
set of submitters/vetters then we can rely on them all to do their
stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
patch every buildfarm owner that wanted to test it would need to go and
add him to their configured list of patch submitters. This doesn't seem
too workable.

cheers

andrew

>
> Regards,
> Mark
>


From: markwkm(at)gmail(dot)com
To: "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-15 20:52:49
Message-ID: 70c01d1d0701151252u5977f311odd01a256f82b95f8@mail.gmail.com
Lists: pgsql-hackers

On 1/12/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> markwkm(at)gmail(dot)com wrote:
> > What do you think about setting up the buildfarm clients
> > with the users they are willing to test patches for, as opposed to
> > having the patch system track who is are trusted users? My thoughts
> > are the former is easier to implement and that it allows anyone to use
> > the buildfarm to test a patch for anyone, well each buildfarm client
> > user permitting.
>
> We can do this, but the utility will be somewhat limited. The submitters
> will still have to be known and authenticated on the patch server. I
> think you're also overlooking one of the virtues of the buildfarm,
> namely that it does its thing unattended. If there is a preconfigured
> set of submitters/vetters then we can rely on them all to do their
> stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
> patch every buildfarm owner that wanted to test it would need to go and
> add him to their configured list of patch submitters. This doesn't seem
> too workable.

OK, so it really wasn't much work to put together a SOAP call that'll
return patches from everyone, a trusted group, or a specified
individual. I put together a short Perl example that illustrates some
of this:
http://folio.dyndns.org/example.pl.txt
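
The client side boils down to something like the following; the
endpoint and method name here are placeholders rather than what the
example actually uses:

#!/usr/bin/perl
use strict;
use warnings;
use SOAP::Lite;

my $service = SOAP::Lite
    ->uri('urn:PatchServer')                     # placeholder namespace
    ->proxy('http://patches.example.org/soap');  # placeholder endpoint

# e.g. patches from a single submitter; could also ask for the trusted
# group or for everyone
my $patches = $service->get_patches('markwkm')->result;

for my $p (@$patches) {
    printf "%s (patch %s, repository %s)\n",
        $p->{name}, $p->{id}, $p->{repository_id};
}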

How does that look?

Regards,
Mark


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: markwkm(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-17 19:50:25
Message-ID: 45AE7E01.1070000@dunslane.net
Lists: pgsql-hackers

markwkm(at)gmail(dot)com wrote:
> On 1/12/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
>> markwkm(at)gmail(dot)com wrote:
>> > What do you think about setting up the buildfarm clients
>> > with the users they are willing to test patches for, as opposed to
>> > having the patch system track who is are trusted users? My thoughts
>> > are the former is easier to implement and that it allows anyone to use
>> > the buildfarm to test a patch for anyone, well each buildfarm client
>> > user permitting.
>>
>> We can do this, but the utility will be somewhat limited. The submitters
>> will still have to be known and authenticated on the patch server. I
>> think you're also overlooking one of the virtues of the buildfarm,
>> namely that it does its thing unattended. If there is a preconfigured
>> set of submitters/vetters then we can rely on them all to do their
>> stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
>> patch every buildfarm owner that wanted to test it would need to go and
>> add him to their configured list of patch submitters. This doesn't seem
>> too workable.
>
> Ok so it really wasn't much work to put together a SOAP call that'll
> return patches from everyone, a trusted group, or a specified
> individual. I put together a short perl example that illustrates some
> of this:
> http://folio.dyndns.org/example.pl.txt
>
> How does that look?
>

Looks OK in general, although I would need to know a little more of the
semantics. I get back a structure that looks like what's below.

One thing: the patch server will have to run over HTTPS - that way we
can know that it is who it says it is.

cheers

andrew

$VAR1 = [
          bless( {
                   'repository_id' => '1',
                   'created_on' => '2007-01-15T19:40:09-08:00',
                   'diff' => 'dummied out',
                   'name' => 'copy_nowal.v1.patch',
                   'owner_id' => '1',
                   'id' => '1',
                   'updated_on' => '2007-01-15T11:40:10-08:00'
                 }, 'Patch' ),
          bless( {
                   'repository_id' => '1',
                   'created_on' => '2007-01-15T19:40:09-08:00',
                   'diff' => 'dummied out',
                   'name' => 'pgsql-bitmap-09-17.patch',
                   'owner_id' => '1',
                   'id' => '2',
                   'updated_on' => '2007-01-15T11:40:29-08:00'
                 }, 'Patch' )
        ];


From: markwkm(at)gmail(dot)com
To: "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-18 02:35:10
Message-ID: 70c01d1d0701171835q282146dei1b5bcb7d2b85220d@mail.gmail.com
Lists: pgsql-hackers

On 1/17/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> markwkm(at)gmail(dot)com wrote:
> > On 1/12/07, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> >> markwkm(at)gmail(dot)com wrote:
> >> > What do you think about setting up the buildfarm clients
> >> > with the users they are willing to test patches for, as opposed to
> >> > having the patch system track who is are trusted users? My thoughts
> >> > are the former is easier to implement and that it allows anyone to use
> >> > the buildfarm to test a patch for anyone, well each buildfarm client
> >> > user permitting.
> >>
> >> We can do this, but the utility will be somewhat limited. The submitters
> >> will still have to be known and authenticated on the patch server. I
> >> think you're also overlooking one of the virtues of the buildfarm,
> >> namely that it does its thing unattended. If there is a preconfigured
> >> set of submitters/vetters then we can rely on them all to do their
> >> stuff. If it's more ad hoc, then when Joe Bloggs submits a spiffy new
> >> patch every buildfarm owner that wanted to test it would need to go and
> >> add him to their configured list of patch submitters. This doesn't seem
> >> too workable.
> >
> > Ok so it really wasn't much work to put together a SOAP call that'll
> > return patches from everyone, a trusted group, or a specified
> > individual. I put together a short perl example that illustrates some
> > of this:
> > http://folio.dyndns.org/example.pl.txt
> >
> > How does that look?
> >
>
> Looks OK in general, although I would need to know a little more of the
> semantics. I get back a structure that looks like what's below.

There probably isn't a need to return all that data. I was being lazy
and returning the entire object. I'll annotate below.

> One thing: the patch server will have to run over HTTPS - that way we
> can know that it is who it says it is.

Right. I'm not sure the computer I'm proofing it on is the best
place for it, so I didn't bother with HTTPS, but it should be trivial
to add.

> cheers
>
> andrew
>
>
> $VAR1 = [
> bless( {
> 'repository_id' => '1',
ID of the repository the patch applies to.

> 'created_on' => '2007-01-15T19:40:09-08:00',
Timestamp of when the record was created.

> 'diff' => 'dummied out',
Actual patch, in plain text.

> 'name' => 'copy_nowal.v1.patch',
Name of the patch file.

> 'owner_id' => '1',
ID of the owner of the patch.

> 'id' => '1',
ID of the patch.

> 'updated_on' => '2007-01-15T11:40:10-08:00'
Timestamp of when patch was updated.

> }, 'Patch' ),
> bless( {
> 'repository_id' => '1',
> 'created_on' => '2007-01-15T19:40:09-08:00',
> 'diff' => 'dummied out',
> 'name' => 'pgsql-bitmap-09-17.patch',
> 'owner_id' => '1',
> 'id' => '2',
> 'updated_on' => '2007-01-15T11:40:29-08:00'
> }, 'Patch' )
> ];

Regards,
Mark


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: markwkm(at)gmail(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ideas for auto-processing patches
Date: 2007-01-18 14:22:32
Message-ID: 45AF82A8.8040102@dunslane.net
Lists: pgsql-hackers

markwkm(at)gmail(dot)com wrote:
>
>
>> One thing: the patch server will have to run over HTTPS - that way we
>> can know that it is who it says it is.
>
> Right, I'm not sure if the computer I'm proofing it on is the best
> place for it so I didn't bother with the HTTPS, but should be trivial
> to have it.
>

Yes, this was more by way of a "don't forget this" note. The
implementation can be happily oblivious to it, other than using an
https URL in the proxy for the SOAP::Lite dispatcher. From a buildfarm
point of view, we would add such SOAP params to the config file.
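
In other words, something along these lines; the config key names are
illustrative, not settled:

use strict;
use warnings;
use SOAP::Lite;

# Illustrative values; in practice these would come from the
# build-farm config file.
my %conf = (
    patch_server_uri   => 'urn:PatchServer',
    patch_server_proxy => 'https://patches.example.org/soap',
);

# The only HTTPS-specific part is the https:// proxy URL; the client
# machine also needs an SSL-capable transport (e.g. Crypt::SSLeay).
my $service = SOAP::Lite
    ->uri($conf{patch_server_uri})
    ->proxy($conf{patch_server_proxy});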

cheers

andrew