Re: Duplicate JSON Object Keys

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: "David E(dot) Wheeler" <david(at)justatheory(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Duplicate JSON Object Keys
Date: 2013-03-08 20:39:06
Message-ID: CA+Tgmob8c1-Knz7JwWgsXXqf-XR4=jkME5ftVpJhPkrH5R4qCw@mail.gmail.com
Lists: pgsql-hackers

On Thu, Mar 7, 2013 at 2:48 PM, David E. Wheeler <david(at)justatheory(dot)com> wrote:
> In the spirit of being liberal about what we accept but strict about what we store, it seems to me that JSON object key uniqueness should be enforced either by throwing an error on duplicate keys, or by flattening so that the latest key wins (as happens in JavaScript). I realize that tracking keys will slow parsing down, and potentially make it more memory-intensive, but such is the price for correctness.
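
For concreteness, a minimal illustration of the behavior under discussion (an
editorial sketch, assuming the current text-based json type, which validates
syntax only and stores its input verbatim):

    SELECT '{"key": 1, "key": 2}'::json;
    -- Accepted today and stored exactly as written.
    -- The proposal is either to raise an error here, or to collapse the
    -- value to {"key": 2} -- last key wins, as JavaScript's JSON.parse() does.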

I'm with Andrew. That's a rathole I emphatically don't want to go
down. I wrote this code originally, and I had the thought clearly in
mind that I wanted to accept JSON that was syntactically well-formed,
not JSON that met certain semantic constraints. We could add
functions like json_is_non_stupid(json) so that people can easily add
a CHECK constraint that enforces this if they so desire. But
enforcing it categorically seems like a bad plan, especially since at
this point it would require a compatibility break with previous
releases.
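
A minimal sketch of what such an opt-in constraint might look like, assuming
the json_object_keys() accessor is available and checking only top-level keys
(json_is_non_stupid is the name floated above, not an existing function):

    -- Hypothetical helper: with the text-based json type, json_object_keys()
    -- is assumed to return one row per key occurrence, so duplicates show up
    -- as repeated rows.  Arrays, scalars, and nested objects are not handled.
    CREATE FUNCTION json_is_non_stupid(j json) RETURNS boolean AS $$
        SELECT count(k) = count(DISTINCT k)
        FROM json_object_keys(j) AS k;
    $$ LANGUAGE sql IMMUTABLE;

    -- Users who want strictness can then opt in per column:
    CREATE TABLE docs (
        body json CHECK (json_is_non_stupid(body))
    );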

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
