From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | greg(at)ngender(dot)net, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: 9.1beta1: ANYARRAY disallowed for DOMAIN types which happen to be arrays |
Date: | 2011-05-10 17:47:26 |
Message-ID: | BANLkTin=QnfD=WwnVpmEty9FG2PsgHpkzg@mail.gmail.com |
Lists: | pgsql-hackers |
On Mon, May 9, 2011 at 11:32 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "J. Greg Davidson" <greg(at)ngender(dot)net> writes:
>> * Tighten casting checks for domains based on arrays (Tom Lane)
>
>> When a domain is based on an array type,..., such a domain type
>> is no longer allowed to match an anyarray parameter of a
>> polymorphic function, except by explicitly downcasting it to the
>> base array type.
>
>> This will require me to add hundreds of casts to my code. I do not get
>> how this will "Tighten casting checks". It will certainly not tighten
>> my code! Could you explain how it is good to not be able to do array
>> operations with a type which is an array?
>
> The discussion that led up to that decision is in this thread:
> http://archives.postgresql.org/pgsql-hackers/2010-10/msg01362.php
> specifically here:
> http://archives.postgresql.org/pgsql-hackers/2010-10/msg01545.php
>
> The previous behavior was clearly broken. The new behavior is at least
> consistent. It might be more user-friendly if we did automatic
> downcasts in these cases, but we were not (and still are not) doing
> automatic downcasts for domains over scalar types in comparable cases,
> so it's not very clear why domains over array types should be treated
> differently.
>
> To be concrete, consider the function array_append(anyarray, anyelement)
> yielding anyarray. Suppose we have a domain D over int[] and the call
> array_append(var_of_type_D, 42). If we automatically downcast the
> variable to int[], should the result of the function be considered to be
> of type D, or type int[]? This isn't a trivial distinction because
> choosing to consider it of type D means we have to re-check D's domain
> constraints, which might or might not be satisfied by the modified
> array. Previous releases considered the result to be of type D,
> *without* rechecking the domain constraints, which was flat out wrong.
>
> So we basically had three alternatives to make it better:
> * downcast to the array type, which would possibly silently
> break applications that were relying on the function result
> being considered of the domain type
> * re-apply domain checks on the function result, which would be
> a performance hit and possibly again result in unobvious
> breakage
> * explicitly break it by throwing a parse error until you
> downcast (and then upcast the function result if you want)
> I realize that #3 is a bit unpleasant, but is either of the other two
> better? At least #3 shows you where you need to check for problems.
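To make the scenario concrete, here is a minimal SQL sketch of the behavior described above; the domain name and check constraint are illustrative, not taken from the thread:

```sql
-- A domain over an array type, carrying a constraint the base type lacks:
CREATE DOMAIN d3 AS int[] CHECK (array_length(VALUE, 1) <= 3);

CREATE TABLE t (v d3);
INSERT INTO t VALUES (ARRAY[1, 2, 3]);

-- Under alternative #3 (the 9.1 behavior), matching a d3 value directly
-- to an anyarray parameter raises a parse error; the caller must
-- downcast explicitly to the base array type:
SELECT array_append(v::int[], 42) FROM t;  -- result is of type int[]

-- Upcasting the result back to the domain re-checks its constraint,
-- which fails here because the appended array has 4 elements:
-- SELECT array_append(v::int[], 42)::d3 FROM t;  -- ERROR: check violation
```

This illustrates why the result type matters: treating the result as d3 without the re-check (the pre-9.1 behavior) would let a constraint-violating array masquerade as a valid domain value.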
Aren't any applications that would be broken by #1 broken already?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company