Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)

From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-16 16:59:34
Message-ID: AANLkTinxBrLLx_bjNB0pT4kk3Qnt-Zwb_aU8ibz7m8Ea@mail.gmail.com
Lists: pgsql-hackers

Hi all -
I independently started some work on a capability similar to the one
contributed back in August by Joey Adams for a json datatype. Before
starting, I did a quick search but for some reason didn't turn up this
existing thread.

What I've been working on is out on github for now:
http://github.com/tlaurenzo/pgjson

When I started, I was actually aiming for something else, and got caught up
going down this rabbit hole. I took a different design approach, making the
internal form be an extended BSON stream and implementing event-driven
parsing and serializing to the different formats. There was some discussion
in the original thread around storing plain text vs a custom format. I have
to admit I've been back and forth a couple of times on this and have come to
like a BSON-like format for the data at rest.

Pros:
   - It is directly iterable without parsing and/or constructing an AST
   - It is its own representation. If iterating and you want to tear off a
value to be returned or used elsewhere, it's a simple buffer copy plus some
bit twiddling (see the sketch following the cons below).
   - It is conceivable that clients already know how to deal with BSON,
allowing them to work with the internal form directly (à la MongoDB)
   - It stores a wider range of primitive types than JSON-text. The most
important are Date and binary.

Cons:
   - The format appears to have been "grown". Some of the critical
decisions were made late in the game (i.e., why would your integral type
codes be last?)
   - Natively, the format falls victim to the artificial document vs element
distinction, which I never understood. I've worked around this with an
escape mechanism for representing root values, but it's not great.
   - The processor is not resilient in the face of unknown element types
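To make the iteration/tear-off point concrete, here is a minimal sketch of
walking a BSON-like buffer. This is not the actual pgjson code; the 1-byte
tag plus 4-byte length layout is a hypothetical stand-in for the real
format:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical element layout: 1-byte type tag, 4-byte payload
     * length, then the payload itself. */
    static uint32_t elem_payload_len(const uint8_t *e)
    {
        uint32_t len;
        memcpy(&len, e + 1, sizeof len);
        return len;
    }

    /* Step to the next sibling without descending into the payload. */
    static const uint8_t *next_sibling(const uint8_t *e)
    {
        return e + 1 + 4 + elem_payload_len(e);
    }

    /* Tear off the current element as a standalone value: a single
     * buffer copy, no AST construction. */
    static uint8_t *tear_off(const uint8_t *e, uint32_t *size_out)
    {
        uint32_t size = 1 + 4 + elem_payload_len(e);
        uint8_t *copy = malloc(size);

        if (copy != NULL)
            memcpy(copy, e, size);
        *size_out = size;
        return copy;
    }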

I'm leaning towards thinking that the determination comes down to the
following:
   - If you just want a "checkbox" item that the database has a json
datatype and some support functions, storing as text may make sense. It can
be much simpler; however, it becomes increasingly hard to do anything real
without adding a parse-to-AST, manipulate, dump-to-text cycle to every
function.
   - If you want a json datatype that is highly integrated and manipulable,
you want a binary data structure, and in the absence of any other contender
in this area, BSON is ok (not great, but ok).
- The addition of a JavaScript procedural language probably does not
bring its own format for data at rest. All of the engines I know of (I
haven't looked at what Guile is doing) do not have a static representation
for internal data structures. They are heap objects with liberal use of
internal and external pointers. Most do have a mechanism, however, for
injecting foreign objects into the runtime without resorting to making a
dumb copy. As such, the integration approach would probably be to determine
the best format for JSON data at rest and provide adapters to the chosen
JavaScript runtime to manipulate this at-rest format directly (potentially
using a copy on write approach). If the at-rest format is Text, then you
would need to do a parse-to-AST step for each JavaScript function
invocation.

Here's a few notes on my current implementation:
- Excessive use of lex/yacc: This was quick and easy but the grammars are
simple enough that I'd probably hand-code a parser for any final solution.
- When the choice between following the json.org spec to the letter and
implementing lenient parsing for valid JavaScript constructs arose, I chose
lenient.
   - Too much buffer copying: When I started, I was just doodling with
writing C code to manipulate JSON/BSON and not working with postgres in
particular. As such, it all uses straight malloc/free, and too many copies
are made to get things in and out of VARDATA structures. This would all be
eliminated in any real version (see the varlena sketch after this list).
   - UTF-8 is supported but not completely working. The support
functions that Joey wrote do a better job at this.
- My json path evaluation is crippled. Given the integration with the PG
type system, I thought I just wanted a simple property traversal mechanism,
punting higher level manipulation to native PG functions. Seeing real
JSONPath work, though, I'm not so sure. I like the simplicity of what I've
done but the features of the full bit are nice too.
- This is first-pass prototype code with the goal of seeing it all
working together.
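
For what it's worth, the extra copies mentioned above mostly disappear once
the serializer targets a palloc'd varlena directly. A minimal sketch using
the standard varlena macros (the surrounding serializer is assumed, not
shown):

    #include "postgres.h"

    /* Wrap an already-serialized image in a varlena so it can be
     * returned as a datum. A real version would have the serializer
     * write into the palloc'd buffer directly instead of copying. */
    static bytea *
    json_to_varlena(const char *data, Size len)
    {
        bytea *result = (bytea *) palloc(VARHDRSZ + len);

        SET_VARSIZE(result, VARHDRSZ + len);
        memcpy(VARDATA(result), data, len);
        return result;
    }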

While I had an end in mind, I did a lot of this for the fun of it and to
just scratch an itch, so I'm not really advocating for anything at this
point. I'm curious as to what others think the state of JSON and Postgres
should be. I've worked with JavaScript engines a good deal and would be
happy to help get us there, either using some of the work/approaches here or
going in a different direction.

Terry


From: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-16 20:18:03
Message-ID: AANLkTim_0jwnVEtgsDnT+meKqZfZFCtyMLRa8Uzc=tCP@mail.gmail.com
Lists: pgsql-hackers

2010/10/17 Terry Laurenzo <tj(at)laurenzo(dot)org>:
> Hi all -
> I independently started some work on a similar capability as was contributed
> back in August by Joey Adams for a json datatype.  Before starting, I did a
> quick search but for some reason didn't turn this existing thread up.
> What I've been working on is out on github for
> now: http://github.com/tlaurenzo/pgjson
> When I started, I was actually aiming for something else, and got caught up
> going down this rabbit hole.  I took a different design approach, making the
> internal form be an extended BSON stream and implementing event-driven
> parsing and serializing to the different formats.  There was some discussion
> in the original thread around storing plain text vs a custom format.  I have
> to admit I've been back and forth a couple of times on this and have come to
> like a BSON-like format for the data at rest.

Reading your proposal, I'm now +1 for the BSON-like style. Especially the
JS engines' capabilities to map external data to the language
representation are good news. I agree the mapping is the engine's task,
not the data format's task. I'm not sure if your BSON-like format is more
efficient in terms of space and time than plain text, though.

I like as simple a design as we can accept. ISTM the format, I/O interface,
simple get/set, mapping tuples from/to objects, and indexing are the
minimum requirements. Something like JSONPath, aggregates, hstore
conversion and so on sounds like too much.

Regards,

--
Hitoshi Harada


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-18 10:56:03
Message-ID: AANLkTi=vrmC=Z33qmsyDycFdCiUVM490VntJbNRZ6F5e@mail.gmail.com
Lists: pgsql-hackers

On Sun, Oct 17, 2010 at 5:18 AM, Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com> wrote:
> Reading your proposal, I'm now +1 for BSON-like style. Especially JS
> engine's capabilities to map external data to the language
> representation are good news.

Hmm, we could store postgres' data types as-is with their type oids.
I'm not sure whether it is efficient or worth doing, though.

> I like as simple design as we can accept. ISTM format, I/O interface,
> simple get/set, mapping tuple from/to object, and indexing are minimum
> requirement.

+1 to a small start, but even simple get/set is already debatable...
For example, text/json conversion:
A. SELECT '<json>'::json;
B. SELECT '<text>'::text::json;

In the git repo, A calls parse_json_to_bson_as_vardata(), so the input
should be in json format. OTOH, B calls pgjson_json_from_text(), so the
input can be any text. Those behaviors are surprising. I think we have
no other choice but to define the text-to-json cast as parsing. The same
can be said for json-to-text -- the type-output function vs. extracting
a text value from json.

I think casting text to/from json should behave the same way as type
input/output. The xml type works in the same manner. And if so, we might
not want any casts to/from json at all, for consistency, even though casts
for non-text types pose no problems.

I'll list the issues to settle before we start json types, even in the
simplest cases:
----
1. where to implement json core: external library vs. inner postgres
2. internal format: text vs. binary (*)
3. encoding: always UTF-8 vs. database encoding (*)
4. meaning of casts text to/from json: parse/stringify vs. get/set
5. parser implementation: flex/bison vs. hand-coded.
----
(*) Note that we will want to compare two json values in the future. So,
we might need to normalize the internal format even in a text
representation.

The most interesting parts of json types, including indexing and jsonpath,
would be built on this json core. We need conclusions about those issues.

--
Itagaki Takahiro


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Hitoshi Harada <umi(dot)tanuki(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-18 16:15:07
Message-ID: AANLkTi=uaD_M5vo491ohO6Gcmgkqi+02vVRYzF-aM+_0@mail.gmail.com
Lists: pgsql-hackers

> > I like as simple design as we can accept. ISTM format, I/O interface,
> > simple get/set, mapping tuple from/to object, and indexing are minimum
> > requirement.
>
> +1 to small start, but simple get/set are already debatable...
> For example, text/json conversion:
> A. SELECT '<json>'::json;
> B. SELECT '<text>'::text::json;
>
> In the git repo, A calls parse_json_to_bson_as_vardata(), so the input
> should be a json format. OTOH, B calls pgjson_json_from_text(), so the
> input can be any text. Those behaviors are surprising. I think we have
> no other choice but to define text-to-json cast as parsing. The same
> can be said for json-to-text -- type-output function vs. extracting
> text value from json.
>
> I think casting text to/from json should behave in the same way as type
> input/output. The xml type works in the same manner. And if so, we might
> not have any casts to/from json for consistency, even though there are
> no problems in casts for non-text types.
>

I just reworked some of this last night, so I'm not sure which version you
are referring to (the new version has a pgplugin/jsoncast.c source file). I
was basically circling around the same thing as you, trying to find
something that felt natural and not confusing. I agree that we have little
choice but to keep the in/out functions as parse/serialize, and that
introducing casts that behave differently creates confusion. When I was
playing with it, I was getting confused, and I wrote it. :)

An alternative to pg casting to extract postgres values could be to
introduce analogs to JavaScript constructors, which is the JavaScript way to
cast. For example: String(json), Number(json), Date(json). This would feel
natural to a JavaScript programmer and would be explicit and non-surprising:
A. SELECT String('{a: 1, b:2}'::json -> 'a') (Returns postgres text)
B. SELECT Number('1'::json) (Returns postgres decimal)

I think that the most common use case for this type of thing in the DB will
be to extract a JSON scalar as a postgres scalar.
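
As a sketch of how such a constructor might look on the C side,
Number(json) could reuse numeric's own input function. The fmgr calls are
the standard ones; json_scalar_cstring() is a hypothetical helper that
yields the scalar's text:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"

    /* Hypothetical: returns the scalar's text from a json datum. */
    extern char *json_scalar_cstring(bytea *json);

    PG_FUNCTION_INFO_V1(json_number);

    Datum
    json_number(PG_FUNCTION_ARGS)
    {
        bytea *json = PG_GETARG_BYTEA_P(0);
        char  *digits = json_scalar_cstring(json);

        /* Let numeric's input function do the parsing/validation. */
        return DirectFunctionCall3(numeric_in,
                                   CStringGetDatum(digits),
                                   ObjectIdGetDatum(InvalidOid),
                                   Int32GetDatum(-1));
    }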

The inverse, while probably less useful, is currently represented by the
json_from_* functions. We could collapse all of these down to one
overloaded function, say ToJson(...):
A. SELECT ToJson(1) (Would return a json type with an int32 "1" value)
B. SELECT ToJson('Some String') (Would return a json type with a string
value)

There might be some syntactic magic we could do by adding an intermediate
jsonscalar type, but based on trying real cases with this stuff, you always
end up having to be explicit about your conversions anyway. Having implicit
type coercion from this polymorphic type tends to make things confusing,
imo.

>
> I'll list issues before we start json types even in the simplest cases:
> ----
> 1. where to implement json core: external library vs. inner postgres
> 2. internal format: text vs. binary (*)
> 3. encoding: always UTF-8 vs. database encoding (*)
> 4. meaning of casts text to/from json: parse/stringify vs. get/set
> 5. parser implementation: flex/bison vs. hand-coded.
> ----
> (*) Note that we would have comparison two json values in the future. So,
> we might need to normalize the internal format even in text representation.
>
> The most interesting parts of json types, including indexing and jsonpath,
> would be made on the json core. We need conclusions about those issues.
>
>
My opinions or ramblings on the above:
1. There's a fair bit of code involved for something that many are going
to gloss over. I can think of pros/cons for external/internal/contrib and
I'm not sure which I would choose.
2. I'm definitely in the binary camp, but part of the reason for building
it out was to try it with some real world cases to get a feel for
performance implications end to end. We make heavy use of MongoDB at the
office and I was thinking it might make sense to strip some of those cases
down and see how they would be implemented in this context. I'll write up
more thoughts on how I think text/binary should perform for various cases
tonight.
   3. I think if we go with binary, we should always store UTF-8 in the
binary structure. Otherwise, we just have too much of the guts of the
binary left to the whim of the database encoding. As currently implemented,
all strings generated by the in/out functions should be escaped so that they
are pure ASCII (not quite working, but there in theory). JSON is by
definition UTF-8, and in this case, I think it trumps database encoding.
4. My thoughts on the casts are above.
   5. There seems to be a lot of runtime and code-size overhead inherent in
the flex/bison parsers, especially considering that they will most
frequently be invoked for very small streams. Writing a good hand-coded
parser for comparison is just a matter of which bottle of wine to choose
prior to spending the hours coding it, and I would probably defer the
decision until later. (A skeleton of such a scanner follows below.)
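
For a sense of scale, the scanner half of such a hand-coded parser is about
this big. A skeleton sketch only - lenient by design, with escape decoding
and number classification kept naive:

    #include <ctype.h>
    #include <string.h>

    typedef enum {
        TOK_LBRACE, TOK_RBRACE, TOK_LBRACKET, TOK_RBRACKET,
        TOK_COLON, TOK_COMMA, TOK_STRING, TOK_NUMBER,
        TOK_TRUE, TOK_FALSE, TOK_NULL, TOK_EOF, TOK_ERROR
    } json_token;

    typedef struct {
        const char *p;      /* current position */
        const char *end;    /* one past the end of the input */
    } json_lexer;

    /* Scan a string token; escape decoding elided for brevity. */
    static json_token scan_string(json_lexer *lx)
    {
        lx->p++;                          /* opening quote */
        while (lx->p < lx->end && *lx->p != '"')
        {
            if (*lx->p == '\\' && lx->p + 1 < lx->end)
                lx->p++;                  /* skip the escaped char */
            lx->p++;
        }
        if (lx->p >= lx->end)
            return TOK_ERROR;
        lx->p++;                          /* closing quote */
        return TOK_STRING;
    }

    /* Numbers and the true/false/null keywords. */
    static json_token scan_other(json_lexer *lx)
    {
        const char *start = lx->p;
        size_t n;

        while (lx->p < lx->end && (isalnum((unsigned char) *lx->p) ||
               *lx->p == '+' || *lx->p == '-' || *lx->p == '.'))
            lx->p++;
        n = (size_t) (lx->p - start);
        if (n == 0)
            return TOK_ERROR;
        if (n == 4 && memcmp(start, "true", 4) == 0)  return TOK_TRUE;
        if (n == 5 && memcmp(start, "false", 5) == 0) return TOK_FALSE;
        if (n == 4 && memcmp(start, "null", 4) == 0)  return TOK_NULL;
        return TOK_NUMBER;
    }

    static json_token next_token(json_lexer *lx)
    {
        while (lx->p < lx->end && isspace((unsigned char) *lx->p))
            lx->p++;
        if (lx->p >= lx->end)
            return TOK_EOF;

        switch (*lx->p)
        {
            case '{': lx->p++; return TOK_LBRACE;
            case '}': lx->p++; return TOK_RBRACE;
            case '[': lx->p++; return TOK_LBRACKET;
            case ']': lx->p++; return TOK_RBRACKET;
            case ':': lx->p++; return TOK_COLON;
            case ',': lx->p++; return TOK_COMMA;
            case '"': return scan_string(lx);
            default:  return scan_other(lx);
        }
    }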

Terry


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 14:44:44
Message-ID: AANLkTi=Lha2-PtnKNT3OTjg=HzEVWraubT523PemxJZB@mail.gmail.com
Lists: pgsql-hackers

On Sat, Oct 16, 2010 at 12:59 PM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
>    - It is directly iterable without parsing and/or constructing an AST
>    - It is its own representation.  If iterating and you want to tear-off a
> value to be returned or used elsewhere, its a simple buffer copy plus some
> bit twiddling.
>    - It is conceivable that clients already know how to deal with BSON,
> allowing them to work with the internal form directly (ala MongoDB)
>    - It stores a wider range of primitive types than JSON-text.  The most
> important are Date and binary.

When last I looked at that, it appeared to me that what BSON could
represent was a subset of what JSON could represent - in particular,
that it had things like a 32-bit limit on integers, or something along
those lines. Sounds like it may be neither a superset nor a subset,
in which case I think it's a poor choice for an internal
representation of JSON.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 14:57:30
Message-ID: 4CBDB1DA.8090003@dunslane.net
Lists: pgsql-hackers

On 10/19/2010 10:44 AM, Robert Haas wrote:
> On Sat, Oct 16, 2010 at 12:59 PM, Terry Laurenzo<tj(at)laurenzo(dot)org> wrote:
>> - It is directly iterable without parsing and/or constructing an AST
>> - It is its own representation. If iterating and you want to tear-off a
>> value to be returned or used elsewhere, its a simple buffer copy plus some
>> bit twiddling.
>> - It is conceivable that clients already know how to deal with BSON,
>> allowing them to work with the internal form directly (ala MongoDB)
>> - It stores a wider range of primitive types than JSON-text. The most
>> important are Date and binary.
> When last I looked at that, it appeared to me that what BSON could
> represent was a subset of what JSON could represent - in particular,
> that it had things like a 32-bit limit on integers, or something along
> those lines. Sounds like it may be neither a superset nor a subset,
> in which case I think it's a poor choice for an internal
> representation of JSON.

Yeah, if it can't handle arbitrary precision numbers, as has previously
been stated, then it's dead in the water for our purposes, I think.

cheers

andrew


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 15:22:20
Message-ID: AANLkTinLd1K_kN8i1dms1evbFXtzW8yWfWWcg_38sZaC@mail.gmail.com
Lists: pgsql-hackers

Agreed. BSON was born out of implementations that either lacked arbitrary
precision numbers or had a strong affinity to an int/floating point way of
thinking about numbers. I believe that if BSON had an arbitrary precision
number type, it would be a proper superset of JSON.

As an aside, the max size of an int in BSON is 64 bits. Back to my original
comment that BSON was "grown" instead of designed, it looks like both the
32-bit and 64-bit integers were added late in the game and that the original
designers perhaps were just going to store all numbers as doubles.

Perhaps we should enumerate the attributes of what would make a good binary
encoding?

Terry

On Tue, Oct 19, 2010 at 8:57 AM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:

>
>
> On 10/19/2010 10:44 AM, Robert Haas wrote:
>
>> On Sat, Oct 16, 2010 at 12:59 PM, Terry Laurenzo<tj(at)laurenzo(dot)org> wrote:
>>
>>> - It is directly iterable without parsing and/or constructing an AST
>>> - It is its own representation. If iterating and you want to tear-off
>>> a
>>> value to be returned or used elsewhere, its a simple buffer copy plus
>>> some
>>> bit twiddling.
>>> - It is conceivable that clients already know how to deal with BSON,
>>> allowing them to work with the internal form directly (ala MongoDB)
>>> - It stores a wider range of primitive types than JSON-text. The most
>>> important are Date and binary.
>>>
>> When last I looked at that, it appeared to me that what BSON could
>> represent was a subset of what JSON could represent - in particular,
>> that it had things like a 32-bit limit on integers, or something along
>> those lines. Sounds like it may be neither a superset nor a subset,
>> in which case I think it's a poor choice for an internal
>> representation of JSON.
>>
>
> Yeah, if it can't handle arbitrary precision numbers as has previously been
> stated it's dead in the water for our purposes, I think.
>
> cheers
>
> andrew
>


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 19:17:09
Message-ID: AANLkTikt=BLqXLU-sO23WcXV=szBrwJJ1cYjGbhJJZ_H@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 11:22 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
> Agreed.  BSON was born out of implementations that either lacked arbitrary
> precision numbers or had a strong affinity to an int/floating point way of
> thinking about numbers.  I believe that if BSON had an arbitrary precision
> number type, it would be a proper superset of JSON.
>
> As an aside, the max range of an int in BSON 64bits.  Back to my original
> comment that BSON was "grown" instead of designed, it looks like both the
> 32bit and 64bit integers were added late in the game and that the original
> designers perhaps were just going to store all numbers as double.
>
> Perhaps we should enumerate the attributes of what would make a good binary
> encoding?

I think we should take a few steps back and ask why we think that
binary encoding is the way to go. We store XML as text, for example,
and I can't remember any complaints about that on -bugs or
-performance, so why do we think JSON will be different? Binary
encoding is a trade-off. A well-designed binary encoding should make
it quicker to extract a small chunk of a large JSON object and return
it; however, it will also make it slower to return the whole object
(because you're adding serialization overhead). I haven't seen any
analysis of which of those use cases is more important and why.

I am also wondering how this proposed binary encoding scheme will
interact with TOAST. If the datum is compressed on disk, you'll have
to decompress it anyway to do anything with it; at that point, is
there still going to be a noticeable speed-up from using the binary
encoding?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: "David E(dot) Wheeler" <david(at)kineticode(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 19:36:20
Message-ID: 042F2ECE-7320-45EF-A721-B4F830EEC126@kineticode.com
Lists: pgsql-hackers

On Oct 19, 2010, at 12:17 PM, Robert Haas wrote:

> I think we should take a few steps back and ask why we think that
> binary encoding is the way to go. We store XML as text, for example,
> and I can't remember any complaints about that on -bugs or
> -performance, so why do we think JSON will be different? Binary
> encoding is a trade-off. A well-designed binary encoding should make
> it quicker to extract a small chunk of a large JSON object and return
> it; however, it will also make it slower to return the whole object
> (because you're adding serialization overhead). I haven't seen any
> analysis of which of those use cases is more important and why.

Maybe someone has numbers on that for the XML type?

Best,

David


From: Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>, robertmhaas(at)gmail(dot)com
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 19:40:28
Message-ID: AANLkTikWiqbDSdAoDv73noBV9irYO6LyhCB_oSS95OVS@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 11:22 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
> Perhaps we should enumerate the attributes of what would make a good binary
> encoding?

Not sure if we're discussing the internal storage format or the binary
send/recv format, but in my humble opinion, some attributes of a good
internal format are:

1. Lightweight - it'd be really nice for the JSON datatype to be
available in core (even if extra features like JSONPath aren't).
2. Efficiency - Retrieval and storage of JSON datums should be
efficient. The internal format should probably closely resemble the
binary send/recv format so there's a good reason to use it.

A good attribute of the binary send/recv format would be
compatibility. For instance, if MongoDB (which I know very little
about) has binary send/receive, perhaps the JSON data type's binary
send/receive should use it.

Efficient retrieval and update of values in a large JSON tree would be
cool, but would be rather complex, and IMHO, overkill. JSON's main
advantage is that it's sort of a least common denominator of the type
systems of many popular languages, making it easy to transfer
information between them. Having hierarchical key/value store support
would be pretty cool, but I don't think it's what PostgreSQL's JSON
data type should do.

On Tue, Oct 19, 2010 at 3:17 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I think we should take a few steps back and ask why we think that
> binary encoding is the way to go. We store XML as text, for example,
> and I can't remember any complaints about that on -bugs or
> -performance, so why do we think JSON will be different? Binary
> encoding is a trade-off. A well-designed binary encoding should make
> it quicker to extract a small chunk of a large JSON object and return
> it; however, it will also make it slower to return the whole object
> (because you're adding serialization overhead). I haven't seen any
> analysis of which of those use cases is more important and why.

Speculation: the overhead involved with retrieving/sending and
receiving/storing JSON (not to mention TOAST
compression/decompression) will be far greater than that of
serializing/unserializing.


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 20:46:03
Message-ID: AANLkTi=BWnKi+5ZxhNN1KHGvNS0ODYC5JikQdEXTYXxV@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 3:40 PM, Joseph Adams
<joeyadams3(dot)14159(at)gmail(dot)com> wrote:
> On Tue, Oct 19, 2010 at 3:17 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I think we should take a few steps back and ask why we think that
>> binary encoding is the way to go.  We store XML as text, for example,
>> and I can't remember any complaints about that on -bugs or
>> -performance, so why do we think JSON will be different?  Binary
>> encoding is a trade-off.  A well-designed binary encoding should make
>> it quicker to extract a small chunk of a large JSON object and return
>> it; however, it will also make it slower to return the whole object
>> (because you're adding serialization overhead).  I haven't seen any
>> analysis of which of those use cases is more important and why.
>
> Speculation: the overhead involved with retrieving/sending and
> receiving/storing JSON (not to mention TOAST
> compression/decompression) will be far greater than that of
> serializing/unserializing.

I speculate that your speculation is incorrect. AIUI, we, unlike
$COMPETITOR, tend to be CPU-bound rather than IO-bound on COPY. But
perhaps less speculation and more benchmarking is in order.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 21:39:53
Message-ID: AANLkTi=Lc2wOp1XMguj_OMQoWYTW5V507gbtirZur-xe@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 2:46 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Tue, Oct 19, 2010 at 3:40 PM, Joseph Adams
> <joeyadams3(dot)14159(at)gmail(dot)com> wrote:
> > On Tue, Oct 19, 2010 at 3:17 PM, Robert Haas <robertmhaas(at)gmail(dot)com>
> wrote:
> >> I think we should take a few steps back and ask why we think that
> >> binary encoding is the way to go. We store XML as text, for example,
> >> and I can't remember any complaints about that on -bugs or
> >> -performance, so why do we think JSON will be different? Binary
> >> encoding is a trade-off. A well-designed binary encoding should make
> >> it quicker to extract a small chunk of a large JSON object and return
> >> it; however, it will also make it slower to return the whole object
> >> (because you're adding serialization overhead). I haven't seen any
> >> analysis of which of those use cases is more important and why.
> >
> > Speculation: the overhead involved with retrieving/sending and
> > receiving/storing JSON (not to mention TOAST
> > compression/decompression) will be far greater than that of
> > serializing/unserializing.
>
> I speculate that your speculation is incorrect. AIUI, we, unlike
> $COMPETITOR, tend to be CPU-bound rather than IO-bound on COPY. But
> perhaps less speculation and more benchmarking is in order.
>
>
After spending a week in the morass of this, I have to say that I am less
certain than I was on any front regarding the text/binary distinction. I'll
take some time and benchmark different cases. My hypothesis is that a
well-implemented binary structure and conversions will add minimal overhead
in the IO + Validate case, which would be the typical in/out flow. It could
be substantially faster for binary send/receive because the validation step
could be eliminated/reduced. Further, storing as binary reduces the
overhead of random access to the data by database functions.

I'm envisioning staging this up as follows:
1. Create a "jsontext". jsontext uses text as its internal
representation. in/out functions are essentially a straight copy or a copy
+ validate.
2. Create a "jsonbinary" type. This uses an optimized binary format for
internal rep and send/receive. in/out is a parse/transcode operation to
standard JSON text.
3. Internal data access functions and JSON Path require a jsonbinary.
4. There are implicit casts to/from jsontext and jsonbinary.
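
For concreteness, here is a sketch of what the jsonbinary in/out pair from
step 2 might look like under the standard V1 calling convention.
parse_to_binary() and render_to_text() are hypothetical placeholders for
the transcoder, not existing functions:

    #include "postgres.h"
    #include "fmgr.h"

    /* Hypothetical transcoder entry points. */
    extern bytea *parse_to_binary(const char *json_text);
    extern char *render_to_text(bytea *json_binary);

    PG_FUNCTION_INFO_V1(jsonbinary_in);
    PG_FUNCTION_INFO_V1(jsonbinary_out);

    Datum
    jsonbinary_in(PG_FUNCTION_ARGS)
    {
        char *input = PG_GETARG_CSTRING(0);

        /* Parse/transcode standard JSON text to the binary rep;
         * validation happens as a side effect of parsing. */
        PG_RETURN_BYTEA_P(parse_to_binary(input));
    }

    Datum
    jsonbinary_out(PG_FUNCTION_ARGS)
    {
        bytea *binary = PG_GETARG_BYTEA_P(0);

        /* Render standard JSON text back out of the binary rep. */
        PG_RETURN_CSTRING(render_to_text(binary));
    }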

I've got a grammar in mind for the binary structure that I'll share later
when I've got some more time. It's inspired by $COMPETITOR's format but a
little more sane, using type tags that implicitly define the size of the
operands, simplifying parsing.

I'll then define the various use cases and benchmark using the different
types. Some examples include IO No Validate, IO+Validate, Store and
Index, Internal Processing, Internal Composition, etc.

The answer may be to have both a jsontext and jsonbinary type as each will
be optimized for a different case.

Make sense? It may be a week before I get through this.
Terry


From: Greg Stark <gsstark(at)mit(dot)edu>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 21:57:41
Message-ID: AANLkTi=+u_g=RL_J5wYvtB728_HmmTJFY7QF=YqF7V-t@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 12:17 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> I think we should take a few steps back and ask why we think that
> binary encoding is the way to go.  We store XML as text, for example,
> and I can't remember any complaints about that on -bugs or
> -performance, so why do we think JSON will be different?  Binary
> encoding is a trade-off.  A well-designed binary encoding should make
> it quicker to extract a small chunk of a large JSON object and return
> it; however, it will also make it slower to return the whole object
> (because you're adding serialization overhead).  I haven't seen any
> analysis of which of those use cases is more important and why.
>

The elephant in the room is that if the binary-encoded form is smaller, then
it occupies less RAM and disk bandwidth to copy it around. If your
database is large, that alone is the dominant factor. Doubling the size
of all the objects in your database means halving the portion of the
database that fits in RAM and doubling the amount of I/O required to
complete any given operation. If your database fits entirely in RAM
either way, then it still means less RAM bandwidth used, which is often
the limiting factor; but depending on how much cpu effort it takes to
serialize and deserialize, the balance could shift either way.

--
greg


From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 21:59:55
Message-ID: AANLkTimEcNATebYGkHMEiQksfie4xhufe-XS+mK==jLQ@mail.gmail.com
Lists: pgsql-hackers

>
> After spending a week in the morass of this, I have to say that I am less
> certain than I was on any front regarding the text/binary distinction.  I'll
> take some time and benchmark different cases.  My hypothesis is that a well
> implemented binary structure and conversions will add minimal overhead in
> the IO + Validate case which would be the typical in/out flow.  It could be
> substantially faster for binary send/receive because the validation step
> could be eliminated/reduced.  Further storing as binary reduces the overhead
> of random access to the data by database functions.
>
> I'm envisioning staging this up as follows:
>    1. Create a "jsontext".  jsontext uses text as its internal
> representation.  in/out functions are essentially a straight copy or a copy
> + validate.
>    2. Create a "jsonbinary" type.  This uses an optimized binary format for
> internal rep and send/receive.  in/out is a parse/transcode operation to
> standard JSON text.
>    3. Internal data access functions and JSON Path require a jsonbinary.
>    4. There are implicit casts to/from jsontext and jsonbinary.

some years ago I solved similar problems with the xml type. I think
you have to account for a few factors:

a) all varlena types are compressed - you cannot get at an interesting
fragment or update an interesting fragment; every time, pg works with
the complete document

b) access to fragments of a JSON or XML document is not really
important, because fast access to data is solved via indexes.

c) only a few APIs allow binary communication between server and client.
Almost all interfaces use only a text-based API. I see a possibly
interesting direction for a binary protocol when someone uses a
javascript driver, or uses pg in some javascript server-side
environment, but that isn't often the case now.

Regards

Pavel

>
> I've got a grammar in mind for the binary structure that I'll share later
> when I've got some more time.  It's inspired by $COMPETITOR's format but a
> little more sane, using type tags that implicitly define the size of the
> operands, simplifying parsing.
>
> I'll then define the various use cases and benchmark using the different
> types.  Some examples include such as IO No Validate, IO+Validate, Store and
> Index, Internal Processing, Internal Composition, etc.
>
> The answer may be to have both a jsontext and jsonbinary type as each will
> be optimized for a different case.
>
> Make sense?  It may be a week before I get through this.
> Terry
>
>


From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 22:04:16
Message-ID: AANLkTinyFd37tSPGKZMwbv7ofQR32YjEB3QrJXeSGmNw@mail.gmail.com
Lists: pgsql-hackers

2010/10/19 Greg Stark <gsstark(at)mit(dot)edu>:
> On Tue, Oct 19, 2010 at 12:17 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> I think we should take a few steps back and ask why we think that
>> binary encoding is the way to go.  We store XML as text, for example,
>> and I can't remember any complaints about that on -bugs or
>> -performance, so why do we think JSON will be different?  Binary
>> encoding is a trade-off.  A well-designed binary encoding should make
>> it quicker to extract a small chunk of a large JSON object and return
>> it; however, it will also make it slower to return the whole object
>> (because you're adding serialization overhead).  I haven't seen any
>> analysis of which of those use cases is more important and why.
>>
>
> The elephant in the room is if the binary encoded form is smaller then
> it occupies less ram and disk bandwidth to copy it around. If your
> database is large that alone is the dominant factor. Doubling the size
> of all the objects in your database means halving the portion of the
> database that fits in RAM and doubling the amount of I/O required to
> complete any given operation. If your database fits entirely in RAM
> either way then it still means less RAM bandwidth used which is often
> the limiting factor but depending on how much cpu effort it takes to
> serialize and deserialize the balance could shift either way.

I am not sure if this argument is important for json. The json format
has little overhead and json documents are pretty small. Moreover, from
9.0 TOAST uses relatively aggressive compression. I would like to have
some standardised format for json inside pg too, but without using
some external library I don't see an advantage to using a format
other than text.

Regards

Pavel

>
> --
> greg
>


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 22:51:45
Message-ID: 10741.1287528705@sss.pgh.pa.us
Lists: pgsql-hackers

Terry Laurenzo <tj(at)laurenzo(dot)org> writes:
> After spending a week in the morass of this, I have to say that I am less
> certain than I was on any front regarding the text/binary distinction. I'll
> take some time and benchmark different cases. My hypothesis is that a well
> implemented binary structure and conversions will add minimal overhead in
> the IO + Validate case which would be the typical in/out flow. It could be
> substantially faster for binary send/receive because the validation step
> could be eliminated/reduced.

I think that arguments proceeding from speed of binary send/receive
aren't worth the electrons they're written on, because there is nothing
anywhere that says what the binary format ought to be. In the case of
XML we're just using the text representation as the binary format too,
and nobody's complained about that. If we were to choose to stick with
straight text internally for a JSON type, we'd do the same thing, and
again nobody would complain.

So, if you want to make a case for using some binary internal format or
other, make it without that consideration.

> I'm envisioning staging this up as follows:
> 1. Create a "jsontext". jsontext uses text as its internal
> representation. in/out functions are essentially a straight copy or a copy
> + validate.
> 2. Create a "jsonbinary" type. This uses an optimized binary format for
> internal rep and send/receive. in/out is a parse/transcode operation to
> standard JSON text.

Ugh. Please don't. JSON should be JSON, and nothing else. Do you see
any other datatypes in Postgres that expose such internal considerations?

regards, tom lane


From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 22:56:51
Message-ID: 10843.1287529011@sss.pgh.pa.us
Lists: pgsql-hackers

Greg Stark <gsstark(at)mit(dot)edu> writes:
> The elephant in the room is if the binary encoded form is smaller then
> it occupies less ram and disk bandwidth to copy it around.

It seems equally likely that a binary-encoded form could be larger
than the text form (that's often true for our other datatypes).
Again, this is an argument that would require experimental evidence
to back it up.

regards, tom lane


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-19 23:12:52
Message-ID: AANLkTinDKygPC39Dvf7xuRWrQwg5Z=wpTAxA5nYMygNv@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 4:51 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

> Terry Laurenzo <tj(at)laurenzo(dot)org> writes:
> > After spending a week in the morass of this, I have to say that I am less
> > certain than I was on any front regarding the text/binary distinction.
> I'll
> > take some time and benchmark different cases. My hypothesis is that a
> well
> > implemented binary structure and conversions will add minimal overhead in
> > the IO + Validate case which would be the typical in/out flow. It could
> be
> > substantially faster for binary send/receive because the validation step
> > could be eliminated/reduced.
>
> I think that arguments proceeding from speed of binary send/receive
> aren't worth the electrons they're written on, because there is nothing
> anywhere that says what the binary format ought to be. In the case of
> XML we're just using the text representation as the binary format too,
> and nobody's complained about that. If we were to choose to stick with
> straight text internally for a JSON type, we'd do the same thing, and
> again nobody would complain.
>
> So, if you want to make a case for using some binary internal format or
> other, make it without that consideration.
>
> > I'm envisioning staging this up as follows:
> > 1. Create a "jsontext". jsontext uses text as its internal
> > representation. in/out functions are essentially a straight copy or a
> copy
> > + validate.
> > 2. Create a "jsonbinary" type. This uses an optimized binary format
> for
> > internal rep and send/receive. in/out is a parse/transcode operation to
> > standard JSON text.
>
> Ugh. Please don't. JSON should be JSON, and nothing else. Do you see
> any other datatypes in Postgres that expose such internal considerations?
>
> regards, tom lane
>

I don't think anyone here was really presenting arguments as yet. We're
several layers deep on speculation and everyone is saying that
experimentation is needed.

I've got my own reasons for going down this path for a solution I have in
mind. I had thought that some part of that might have been applicable to pg
core, but if not, that's no problem. For my own edification, I'm going to
proceed down this path and see where it leads. I'll let the list know what
I find out.

I can understand the sentiment that JSON should be JSON and nothing else
from a traditional database server's point of view, but there is nothing
sacrosanct about it in the broader context.

Terry


From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: David E(dot) Wheeler <david(at)kineticode(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 00:34:30
Message-ID: 1287534832-sup-1986@alvh.no-ip.org
Lists: pgsql-hackers

Excerpts from David E. Wheeler's message of mar oct 19 16:36:20 -0300 2010:
> On Oct 19, 2010, at 12:17 PM, Robert Haas wrote:
>
> > I think we should take a few steps back and ask why we think that
> > binary encoding is the way to go. We store XML as text, for example,
> > and I can't remember any complaints about that on -bugs or
> > -performance, so why do we think JSON will be different? Binary
> > encoding is a trade-off. A well-designed binary encoding should make
> > it quicker to extract a small chunk of a large JSON object and return
> > it; however, it will also make it slower to return the whole object
> > (because you're adding serialization overhead). I haven't seen any
> > analysis of which of those use cases is more important and why.
>
> Maybe someone has numbers on that for the XML type?

Like these?
http://exificient.sourceforge.net/?id=performance

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Terry Laurenzo <tj(at)laurenzo(dot)org>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 00:36:56
Message-ID: AANLkTim7htDPripP15JsQEjLGpyMNuvMJurb0aZOpKpO@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 6:56 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
>> The elephant in the room is if the binary encoded form is smaller then
>> it occupies less ram and disk bandwidth to copy it around.
>
> It seems equally likely that a binary-encoded form could be larger
> than the text form (that's often true for our other datatypes).
> Again, this is an argument that would require experimental evidence
> to back it up.

That's exactly what I was thinking when I read Greg's email. I
designed something vaguely (very vaguely) like this many years ago and
the binary format that I worked so hard to create was enormous
compared to the text format, mostly because I had a lot of small
integers in the data I was serializing, and as it turns out,
representing {0,1,2} in less than 7 bytes is not very easy. It can
certainly be done if you set out to optimize for precisely those kinds
of cases, but I ended up with something awful like:

<4 byte type = list> <4 byte list length = 3> <4 byte type = integer>
<4 byte integer = 0> <4 byte type = integer> <4 byte integer = 1> <4
byte type = integer> <4 byte integer = 2>

= 32 bytes. Even if you were a little smarter than I was and used 2
byte integers (with some escape hatch allowing larger numbers to be
represented) it's still more than twice the size of the text
representation. Even if you use 1 byte integers it's still bigger.
To get it down to being smaller, you've got to do something like make
the high nibble of each byte a type field and the low nibble the first
4 payload bits. You can certainly do all of this but you could also
just store it as text and let the TOAST compression algorithm worry
about making it smaller.
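
The nibble scheme referred to above would look something like this
(hypothetical tag value; shown only to make the size arithmetic concrete):

    #include <stdint.h>

    #define TAG_SMALL_INT 0x1          /* hypothetical type tag */

    /* High nibble = type tag, low nibble = 4 payload bits, so a
     * small integer in 0..15 costs exactly one byte. */
    static uint8_t encode_small_int(unsigned v)
    {
        return (uint8_t) ((TAG_SMALL_INT << 4) | (v & 0x0F));
    }

    /* {0,1,2} then encodes as 0x10 0x11 0x12 -- three bytes, plus
     * whatever the enclosing list header costs. */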

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Terri Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <gsstark(at)mit(dot)edu>, AndrewDunstan <andrew(at)dunslane(dot)net>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 01:15:52
Message-ID: 7C954B81-026B-40D0-9E84-3467062B9532@laurenzo.org
Lists: pgsql-hackers

I hear ya. It might be a premature optimization, but I still think there
may be benefit for the case of large-scale extraction and in-database
transformation of large JSON data structures. We have terabytes of this
stuff, and I'd like something between the hip nosql options and a fully
structured SQL datastore.

Terry

Sent from my iPhone

On Oct 19, 2010, at 6:36 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Tue, Oct 19, 2010 at 6:56 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Greg Stark <gsstark(at)mit(dot)edu> writes:
>>> The elephant in the room is if the binary encoded form is smaller then
>>> it occupies less ram and disk bandwidth to copy it around.
>>
>> It seems equally likely that a binary-encoded form could be larger
>> than the text form (that's often true for our other datatypes).
>> Again, this is an argument that would require experimental evidence
>> to back it up.
>
> That's exactly what I was thinking when I read Greg's email. I
> designed something vaguely (very vaguely) like this many years ago and
> the binary format that I worked so hard to create was enormous
> compared to the text format, mostly because I had a lot of small
> integers in the data I was serializing, and as it turns out,
> representing {0,1,2} in less than 7 bytes is not very easy. It can
> certainly be done if you set out to optimize for precisely those kinds
> of cases, but I ended up with something awful like:
>
> <4 byte type = list> <4 byte list length = 3> <4 byte type = integer>
> <4 byte integer = 0> <4 byte type = integer> <4 byte integer = 1> <4
> byte type = integer> <4 byte integer = 2>
>
> = 32 bytes. Even if you were a little smarter than I was and used 2
> byte integers (with some escape hatch allowing larger numbers to be
> represented) it's still more than twice the size of the text
> representation. Even if you use 1 byte integers it's still bigger.
> To get it down to being smaller, you've got to do something like make
> the high nibble of each byte a type field and the low nibble the first
> 4 payload bits. You can certainly do all of this but you could also
> just store it as text and let the TOAST compression algorithm worry
> about making it smaller.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terri Laurenzo <tj(at)laurenzo(dot)org>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <gsstark(at)mit(dot)edu>, AndrewDunstan <andrew(at)dunslane(dot)net>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 01:37:48
Message-ID: AANLkTik5b+wkL0aOmXq4GNosgurXDPwF5J_tXaprA9g+@mail.gmail.com
Lists: pgsql-hackers

On Tue, Oct 19, 2010 at 9:15 PM, Terri Laurenzo <tj(at)laurenzo(dot)org> wrote:
> I hear ya.  It might be a premature optimization but I still think there may be benefit for the case of large scale extraction and in- database transformation of large JSON datastructures.  We have terabytes of this stuff and I'd like something between the hip nosql options and a fully structured SQL datastore.

Well, keep hacking on it!

I'm interested to hear what you find out.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 06:46:18
Message-ID: AANLkTimOaJT8jbx7yuNxAx90_Gz_opaL6AsKHZ8xfhbi@mail.gmail.com
Lists: pgsql-hackers

On Wed, Oct 20, 2010 at 6:39 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
> The answer may be to have both a jsontext and jsonbinary type as each will
> be optimized for a different case.

I want to choose one format for JSON rather than having two types.
It should be more efficient than the other format in many cases,
and not so bad in the other cases.

I think the discussion started with
"what BSON could represent was a subset of what JSON could represent".
So, any binary format that has enough representational power
compared with the text format could be acceptable.

For example, a sequence of <byte-length> <text> could reduce
CPU cycles for reparsing and hold all of the input as-is except for
ignorable white-space. It is not BSON, but it is a binary format.

Or, if we want to store numbers in binary form, I think the
format should be postgres' numeric type. It has high precision,
and we don't need any precision higher than that to compare two
numbers eventually. Even if we use the BSON format, we would need to
extend it to store all numeric values, whose precision is up to 10^1000.
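
A sketch of the <byte-length> <text> idea, to show how it avoids reparsing
while keeping the input bytes intact. The 4-byte length prefix and helper
names are assumptions, not an existing format:

    #include <stdint.h>
    #include <string.h>

    /* Append one length-prefixed chunk; returns bytes written. */
    static size_t emit_chunk(uint8_t *out, const char *text,
                             uint32_t len)
    {
        memcpy(out, &len, sizeof len);        /* 4-byte length */
        memcpy(out + sizeof len, text, len);  /* the text, as-is */
        return sizeof len + len;
    }

    /* Skip one chunk without inspecting its contents. */
    static const uint8_t *skip_chunk(const uint8_t *in)
    {
        uint32_t len;

        memcpy(&len, in, sizeof len);
        return in + sizeof len + len;
    }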

--
Itagaki Takahiro


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 06:58:34
Message-ID: AANLkTiknwbEFW-gFyndRfCXsYoyq8pRm=NKwHecQRyJw@mail.gmail.com
Lists: pgsql-hackers

Good points. In addition, any binary format needs to support object
property traversal without having to do a deep scan of all descendants.
BSON handles this with explicit lengths for document types (objects and
arrays) so that entire parts of the tree can be skipped during sibling
traversal.

It would also be nice to make sure that we store fully parsed strings.
There are lots of escape options that simply do not need to be preserved (C
escapes, Unicode, octal, and hex sequences) and that hinder the ability to
do direct comparisons (a small sketch follows below). BSON also makes a
small extra effort to ensure that object property names are encoded in a
way that is easily comparable, as these will be the most frequently
compared items.
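
As a minimal sketch of what "fully parsed" means here - resolve escapes
once at parse time so stored strings compare with plain memcmp. Only the
simple two-character escapes are shown; \uXXXX handling is elided, and this
is not the pgjson code:

    #include <stddef.h>

    /* Decode escapes in src into dst; returns the decoded length,
     * which is never longer than the input. */
    static size_t unescape_into(char *dst, const char *src, size_t len)
    {
        size_t di = 0, si = 0;

        while (si < len)
        {
            char c = src[si++];

            if (c == '\\' && si < len)
            {
                switch (src[si++])
                {
                    case 'n':  dst[di++] = '\n'; break;
                    case 't':  dst[di++] = '\t'; break;
                    case 'r':  dst[di++] = '\r'; break;
                    case '"':  dst[di++] = '"';  break;
                    case '\\': dst[di++] = '\\'; break;
                    default:   dst[di++] = src[si - 1]; break;
                }
            }
            else
                dst[di++] = c;
        }
        return di;
    }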

I'm still going to write up a proposed grammar that takes these items into
account - just ran out of time tonight.

Terry

On Wed, Oct 20, 2010 at 12:46 AM, Itagaki Takahiro <
itagaki(dot)takahiro(at)gmail(dot)com> wrote:

> On Wed, Oct 20, 2010 at 6:39 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
> > The answer may be to have both a jsontext and jsonbinary type as each
> will
> > be optimized for a different case.
>
> I want to choose one format for JSON rather than having two types.
> It should be more efficient than the other formats in many cases,
> and not so bad in the rest.
>
> I think the discussion started with the observation that what BSON
> could represent was a subset of what JSON could represent. So, any
> binary format that has enough representational power compared with
> the text format could be acceptable.
>
> For example, a sequence of <byte-length> <text> pairs could reduce
> the CPU cycles spent on reparsing and hold all of the input as-is
> except for ignorable whitespace. It is not BSON, but it is a binary
> format.
>
> Or, if we want to store numbers in binary form, I think the format
> should be the numeric type in Postgres. It has high precision, and
> we don't need any higher precision than that to compare two numbers
> eventually. Even if we use the BSON format, we need to extend it to
> store all numeric values, where the precision can reach 10^1000.
>
> --
> Itagaki Takahiro
>


From: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 17:15:59
Message-ID: 87vd4wbx9s.fsf@mid.deneb.enyo.de
Lists: pgsql-hackers

* Terry Laurenzo:

> Agreed. BSON was born out of implementations that either lacked
> arbitrary precision numbers or had a strong affinity to an
> int/floating point way of thinking about numbers. I believe that if
> BSON had an arbitrary precision number type, it would be a proper
> superset of JSON.

But JSON has only double-precision numbers!?


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Florian Weimer <fw(at)deneb(dot)enyo(dot)de>
Cc: Terry Laurenzo <tj(at)laurenzo(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-20 17:26:12
Message-ID: 4CBF2634.80107@dunslane.net
Lists: pgsql-hackers

On 10/20/2010 01:15 PM, Florian Weimer wrote:
> * Terry Laurenzo:
>
>> Agreed. BSON was born out of implementations that either lacked
>> arbitrary precision numbers or had a strong affinity to an
>> int/floating point way of thinking about numbers. I believe that if
>> BSON had an arbitrary precision number type, it would be a proper
>> superset of JSON.
> But JSON has only double-precision numbers!?

AFAICT the JSON spec says nothing at all about the precision of numbers.
It just provides a syntax for them. We should not confuse what can be
allowed in JSON with what can be handled by some consumers of JSON such
as ECMAScript.

However, since we would quite reasonably require that any JSON
implementation be able to handle arbitrary precision numbers, that
apparently rules out BSON as a storage engine for it, since BSON cannot
handle such things.

cheers

andrew


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-24 00:33:36
Message-ID: AANLkTi=-L0d1GVXu0vY5JDRMSWqWLro2rK6+B4WuYYcK@mail.gmail.com
Lists: pgsql-hackers

>
>
> I'm still going to write up a proposed grammar that takes these items into
> account - just ran out of time tonight.
>
>
The binary format I was thinking of is here:

http://github.com/tlaurenzo/pgjson/blob/master/pgjson/shared/include/json/jsonbinary.h

This was just a quick brain dump and I haven't done a lot of due
diligence verifying it, but I think it should be more compact than most
JSON text payloads, and quick to iterate over/update in sibling traversal
order versus the depth-first traversal that we would get out of JSON text.

Thoughts?

Terry


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-24 04:57:05
Message-ID: AANLkTikn=YjzJ0vEQPOr19eQSypr=0JuPTVgGUKkZ1bv@mail.gmail.com
Lists: pgsql-hackers

On Sat, Oct 23, 2010 at 8:33 PM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
>>
>> I'm still going to write up a proposed grammar that takes these items into
>> account - just ran out of time tonight.
>>
>
> The binary format I was thinking of is here:
>
>   http://github.com/tlaurenzo/pgjson/blob/master/pgjson/shared/include/json/jsonbinary.h
> This was just a quick brain dump and I haven't done a lot of due
> diligence verifying it, but I think it should be more compact than most
> JSON text payloads, and quick to iterate over/update in sibling traversal
> order versus the depth-first traversal that we would get out of JSON text.
> Thoughts?
> Terry

It doesn't do particularly well on my previous example of [1,2,3]. It
comes out slightly shorter on ["a","b","c"] and better if the strings
need any escaping. I don't think the float4 and float8 formats are
very useful; how could you be sure that the output was going to look
the same as the input? Or alternatively that dumping a particular
object to text and reloading it will produce the same internal
representation? I think it would be simpler to represent integers
using a string of digits; that way you can be assured of going from
text -> binary -> text without change.

Perhaps it would be enough to define the high two bits as follows: 00
= array, 01 = object, 10 = string, 11 = number/true/false/null. The
next 2 bits specify how the length is stored. 00 = remaining 4 bits
store a length of up to 15 bytes, 01 = remaining 4 bits + 1 additional
byte store a 12-bit length of up to 4K, 10 = remaining 4 bits + 2
additional bytes store a 20-bit length of up to 1MB, 11 = 4 additional
bytes store a full 32-bit length word. Then, the array, object, and
string representations can work as you've specified them. Anything
else can be represented by itself, or perhaps we should say that
numbers represent themselves and true/false/null are represented by a
1-byte sequence, t/f/n (or perhaps we could define 111100{00,01,10} to
mean those values, since there's no obvious reason for the low bits to
be non-zero if a 4-bit length word ostensibly follows).

So [1,2,3] = 06 C1 '1' C1 '2' C1 '3' and ["a","b","c"] = 06 81 'a' 81 'b' 81 'c'
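
In C, the packing could look something like the following. This is
just a sketch of the scheme above; the names and the byte order of
the extended lengths are arbitrary.

    #include <stddef.h>
    #include <stdint.h>

    enum jtype { JT_ARRAY = 0, JT_OBJECT = 1, JT_STRING = 2, JT_SCALAR = 3 };

    /* Pack type (high 2 bits), length mode (next 2 bits), and length
     * (low 4 bits plus 0/1/2/4 trailing bytes); returns header size. */
    static size_t write_header(uint8_t *out, enum jtype t, uint32_t len)
    {
        if (len <= 0xF) {                    /* mode 00: up to 15 bytes */
            out[0] = (uint8_t) ((t << 6) | len);
            return 1;
        } else if (len <= 0xFFF) {           /* mode 01: 12 bits, up to 4K */
            out[0] = (uint8_t) ((t << 6) | (1 << 4) | (len >> 8));
            out[1] = (uint8_t) (len & 0xFF);
            return 2;
        } else if (len <= 0xFFFFF) {         /* mode 10: 20 bits, up to 1MB */
            out[0] = (uint8_t) ((t << 6) | (2 << 4) | (len >> 16));
            out[1] = (uint8_t) ((len >> 8) & 0xFF);
            out[2] = (uint8_t) (len & 0xFF);
            return 3;
        } else {                             /* mode 11: full 32-bit length */
            out[0] = (uint8_t) ((t << 6) | (3 << 4));
            out[1] = (uint8_t) (len >> 24);
            out[2] = (uint8_t) ((len >> 16) & 0xFF);
            out[3] = (uint8_t) ((len >> 8) & 0xFF);
            out[4] = (uint8_t) (len & 0xFF);
            return 5;
        }
    }

Checking it against the examples: write_header(buf, JT_ARRAY, 6)
yields 06, write_header(buf, JT_SCALAR, 1) yields C1, and
write_header(buf, JT_STRING, 1) yields 81.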

(I am still worried about the serialization/deserialization overhead
but that's a different issue.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-24 06:21:27
Message-ID: AANLkTinCTALsqJTDkQprBHzYztD5k+KaFnOq4pYXf9W_@mail.gmail.com
Lists: pgsql-hackers

>
> It doesn't do particularly well on my previous example of [1,2,3]. It
> comes out slightly shorter on ["a","b","c"] and better if the strings
> need any escaping. I don't think the float4 and float8 formats are
> very useful; how could you be sure that the output was going to look
> the same as the input? Or alternatively that dumping a particular
> object to text and reloading it will produce the same internal
> representation? I think it would be simpler to represent integers
> using a string of digits; that way you can be assured of going from
> text -> binary -> text without change.
>
> Perhaps it would be enough to define the high two bits as follows: 00
> = array, 01 = object, 10 = string, 11 = number/true/false/null. The
> next 2 bits specify how the length is stored. 00 = remaining 4 bits
> store a length of up to 15 bytes, 01 = remaining 4 bits + 1 additional
> byte store a 12-bit length of up to 4K, 10 = remaining 4 bits + 2
> additional bytes store a 20-bit length of up to 1MB, 11 = 4 additional
> bytes store a full 32-bit length word. Then, the array, object, and
> string representations can work as you've specified them. Anything
> else can be represented by itself, or perhaps we should say that
> numbers represent themselves and true/false/null are represented by a
> 1-byte sequence, t/f/n (or perhaps we could define 111100{00,01,10} to
> mean those values, since there's no obvious reason for the low bits to
> be non-zero if a 4-bit length word ostensibly follows).
>
> So [1,2,3] = 06 C1 '1' C1 '2' C1 '3' and ["a","b","c"] = 06 81 'a' 81 'b'
> 81 'c'
>
> (I am still worried about the serialization/deserialization overhead
> but that's a different issue.)
>
>
Thanks. I'll play around with the bit and numeric encodings you've
recommended. Arrays are certainly the toughest to economize on, as a text
encoding has a minimum of 2 + (n - 1) overhead bytes (the two brackets
plus a comma between each of the n elements; [1,2,3] carries only four
bytes of overhead). Text encoding for objects has some more wiggle room,
with 2 + (n-1) + (n*3) extra bytes (braces, commas, and the quotes and
colon around each of the n keys). I was admittedly thinking about more
complicated objects.

I'm still worried about transcoding overhead as well. If comparing to a
simple blind storage of JSON text with no validation or normalization,
there is obviously no way to beat a straight copy. However, it's not
outside the realm of reason to think that it may be possible to match or
beat the clock when comparing against text-to-text normalization, or
perhaps while adding only slight overhead to a validator. The advantage
of the binary structure is the ability to scan it hierarchically in
sibling order and to provide mutation operations with simple memcpy's, as
opposed to parse_to_ast -> modify_ast -> serialize_ast.

Terry


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-24 12:12:02
Message-ID: AANLkTimV=t-xZrXs+cGK=c7QGq3XoYo53CKT7jnJB_m2@mail.gmail.com
Lists: pgsql-hackers

On Sun, Oct 24, 2010 at 2:21 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
>> It doesn't do particularly well on my previous example of [1,2,3].  It
>> comes out slightly shorter on ["a","b","c"] and better if the strings
>> need any escaping.  I don't think the float4 and float8 formats are
>> very useful; how could you be sure that the output was going to look
>> the same as the input?  Or alternatively that dumping a particular
>> object to text and reloading it will produce the same internal
>> representation?  I think it would be simpler to represent integers
>> using a string of digits; that way you can be assured of going from
>> text -> binary -> text without change.
>>
>> Perhaps it would be enough to define the high two bits as follows: 00
>> = array, 01 = object, 10 = string, 11 = number/true/false/null.  The
>> next 2 bits specify how the length is stored.  00 = remaining 4 bits
>> store a length of up to 15 bytes, 01 = remaining 4 bits + 1 additional
>> byte store a 12-bit length of up to 4K, 10 = remaining 4 bits + 2
>> additional bytes store a 20-bit length of up to 1MB, 11 = 4 additional
>> bytes store a full 32-bit length word.  Then, the array, object, and
>> string representations can work as you've specified them.  Anything
>> else can be represented by itself, or perhaps we should say that
>> numbers represent themselves and true/false/null are represented by a
>> 1-byte sequence, t/f/n (or perhaps we could define 111100{00,01,10} to
>> mean those values, since there's no obvious reason for the low bits to
>> be non-zero if a 4-bit length word ostensibly follows).
>>
>> So [1,2,3] = 06 C1 '1' C1 '2' C1 '3' and ["a","b","c"] = 06 81 'a' 81 'b'
>> 81 'c'
>>
>> (I am still worried about the serialization/deserialization overhead
>> but that's a different issue.)
>>
>
> Thanks.  I'll play around with the bit and numeric encodings you've
> recommended.  Arrays are certainly the toughest to economize on, as a text
> encoding has a minimum of 2 + (n - 1) overhead bytes (the two brackets
> plus a comma between each of the n elements; [1,2,3] carries only four
> bytes of overhead).  Text encoding for objects has some more wiggle room,
> with 2 + (n-1) + (n*3) extra bytes (braces, commas, and the quotes and
> colon around each of the n keys).  I was admittedly thinking about more
> complicated objects.
> I'm still worried about transcoding overhead as well.  If comparing to a
> simple blind storage of JSON text with no validation or normalization,
> there is obviously no way to beat a straight copy.  However, it's not
> outside the realm of reason to think that it may be possible to match or
> beat the clock when comparing against text-to-text normalization, or
> perhaps while adding only slight overhead to a validator.  The advantage
> of the binary structure is the ability to scan it hierarchically in
> sibling order and to provide mutation operations with simple memcpy's, as
> opposed to parse_to_ast -> modify_ast -> serialize_ast.

Yeah, my concern is not whether the overhead will be zero; it's
whether it will be small, yet allow large gains on other operations.
Like, how much slower will it be to pull a moderately complex 1MB
JSON blob (not just a big string) out of a single-row, single-column
table? If it's 5% slower, that's probably OK, since this is a
reasonable approximation of a worst-case scenario. If it's 50%
slower, that sounds painful. It would also be worth testing with a
much smaller size, such as a 1K object with lots of internal
structure. In both cases, all data cached in shared_buffers, etc.

Then on the flip side how do we do on val[37]["whatever"]? You'd like
to hope that this will be significantly faster than the text encoding
on both large and small objects. If it's not, there's probably not
much point.
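
For concreteness, here is roughly what that lookup could look like
against a length-prefixed form. The node layout (one type byte plus a
4-byte payload length) is only an assumption for illustration, and
bounds checks are omitted:

    #include <stdint.h>
    #include <string.h>

    /* Assumed layout: [1 type byte][4-byte payload length][payload].
     * Array payloads are concatenated child nodes; object payloads are
     * concatenated key-node/value-node pairs. */
    static uint32_t node_len(const uint8_t *n)
    {
        uint32_t len;
        memcpy(&len, n + 1, sizeof len);
        return len;
    }
    static const uint8_t *node_payload(const uint8_t *n) { return n + 5; }
    static const uint8_t *node_end(const uint8_t *n) { return n + 5 + node_len(n); }

    /* arr[idx]: hop over idx siblings; cost is idx header reads,
     * never a deep scan of the skipped subtrees. */
    static const uint8_t *array_index(const uint8_t *arr, int idx)
    {
        const uint8_t *child = node_payload(arr);
        while (idx-- > 0)
            child = node_end(child);
        return child;
    }

    /* obj["key"]: compare key nodes, skipping each paired value node. */
    static const uint8_t *object_get(const uint8_t *obj, const char *key)
    {
        const uint8_t *p = node_payload(obj), *end = node_end(obj);
        size_t klen = strlen(key);

        while (p < end) {
            const uint8_t *val = node_end(p);  /* value follows its key */
            if (node_len(p) == klen && memcmp(node_payload(p), key, klen) == 0)
                return val;
            p = node_end(val);                 /* next key/value pair */
        }
        return NULL;
    }

The whole path is then object_get(array_index(doc, 37), "whatever"),
touching only the headers along the way, whereas the text encoding has
to lex every character up to element 37.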

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-10-24 16:36:00
Message-ID: AANLkTin3vKF3Z6YPp=A7zYMNGSnRJ8r3ViKcmgxOqy+c@mail.gmail.com
Lists: pgsql-hackers

>
> Yeah, my concern is not whether the overhead will be zero; it's
> whether it will be small, yet allow large gains on other operations.
> Like, how much slower will it be to pull a moderately complex 1MB
> JSON blob (not just a big string) out of a single-row, single-column
> table? If it's 5% slower, that's probably OK, since this is a
> reasonable approximation of a worst-case scenario. If it's 50%
> slower, that sounds painful. It would also be worth testing with a
> much smaller size, such as a 1K object with lots of internal
> structure. In both cases, all data cached in shared_buffers, etc.
>
> Then on the flip side how do we do on val[37]["whatever"]? You'd like
> to hope that this will be significantly faster than the text encoding
> on both large and small objects. If it's not, there's probably not
> much point.
>
>
We're on the same page. I'm implementing the basic cases now and then will
come up with some benchmarks.

Terry


From: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-09 02:52:05
Message-ID: AANLkTi=z=Wa4bGUCxF7DTEBxyn+v9Zed1cowaF_pfTiR@mail.gmail.com
Lists: pgsql-hackers

Are there any activities in JSON data types for the next commitfest?
I'd like to help if you are working on the topic,
or I can pick up the previous work and discussions myself.

On Mon, Oct 25, 2010 at 1:36 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
>> Yeah, my concern is not whether the overhead will be zero; it's
>> whether it will be small, yet allow large gains on other operations.
>> Like, how much slower will it be to pull a moderately complex 1MB
>> JSON blob (not just a big string) out of a single-row, single-column
>> table?  If it's 5% slower, that's probably OK, since this is a
>> reasonable approximation of a worst-case scenario.  If it's 50%
>> slower, that sounds painful.  It would also be worth testing with a
>> much smaller size, such as a 1K object with lots of internal
>> structure.  In both cases, all data cached in shared_buffers, etc.
>>
>> Then on the flip side how do we do on val[37]["whatever"]?  You'd like
>> to hope that this will be significantly faster than the text encoding
>> on both large and small objects.  If it's not, there's probably not
>> much point.
>
> We're on the same page.  I'm implementing the basic cases now and then will
> come up with some benchmarks.

--
Itagaki Takahiro


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-09 22:48:08
Message-ID: AANLkTikUNvZdE5m1OoMDL-qKp2dN2kXbRTVhYs92=bAy@mail.gmail.com
Lists: pgsql-hackers

On Mon, Nov 8, 2010 at 9:52 PM, Itagaki Takahiro
<itagaki(dot)takahiro(at)gmail(dot)com> wrote:
> Are there any activities in JSON data types for the next commitfest?

I'm leaning toward the view that we shouldn't commit a JSON
implementation to core (or contrib) for 9.1. We have at least three
competing proposals on the table. I thought of picking it up and
hacking on it myself, but then we'd have four competing proposals on
the table. Even if we could come to some consensus on which of those
proposals is technically superior, the rate at which new ideas are
being proposed suggests to me that it would be premature to anoint any
single implementation as our canonical one. I'd like to see some of
these patches finished and put up on pgfoundry or github, and then
consider moving one of them into core when we have a clear and stable
consensus that one of them is technically awesome and the best thing
we're going to get.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-09 23:11:44
Message-ID: AANLkTimhrUfXa3Fp5pkK14LojSVQ3OpUvSNX61G3nKNe@mail.gmail.com
Lists: pgsql-hackers

Robert,
I think I agree. At a minimum, I would like to see the "chosen one" of
the competing implementations live on as an outside module for use by
previous versions. Even having proposed one, and soon to be two, of the
competing implementations, it makes me nervous to commit to one at this
juncture.

I'm wrapping some items up this week but expect to have some time over the
next two weeks to complete my implementation.

Here's a quick status on where I'm at:
- Binary format has been implemented as specified here:
https://github.com/tlaurenzo/pgjson/blob/master/pgjson/jsonlib/BINARY-README.txt
- Hand coded a JSON-text lexer/parser and JSON-binary parser and
transcoders
- Ran out of time to do exhaustive tests, but typical payloads yield
significant space savings
- Based on an admittedly small number of test cases, I identified that
the process of encoding a string literal is the most expensive operation in
the general case, accounting for 1/2 to 2/3 of the time spent in a
transcoding operation. This is fairly obvious but good to know. I did not
spend any time looking into this further.
- Drastically simplified the code in preparation to build a stand-alone
module

As soon as I get a bit of time I was going to do the following:
- Create a simple PGXS based build, stripping out the rest of the bits I
was doodling on
- Re-implement the PG module based on the new jsonlib binary format and
parser
- Add JSONPath and some encoding bits back in from the original patch
- Do some holistic profiling between the JSON-as-Text approach and the
JSON-as-Binary approach

This is still a bit of a fishing expedition, imo, and I would have a
hard time getting this ready for commit on Monday. If getting something
in right now is critical, Joey's original patch is the most complete at
this point.

Terry

On Tue, Nov 9, 2010 at 3:48 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:

> On Mon, Nov 8, 2010 at 9:52 PM, Itagaki Takahiro
> <itagaki(dot)takahiro(at)gmail(dot)com> wrote:
> > Are there any activities in JSON data types for the next commitfest?
>
> I'm leaning toward the view that we shouldn't commit a JSON
> implementation to core (or contrib) for 9.1. We have at least three
> competing proposals on the table. I thought of picking it up and
> hacking on it myself, but then we'd have four competing proposals on
> the table. Even if we could come to some consensus on which of those
> proposals is technically superior, the rate at which new ideas are
> being proposed suggests to me that it would be premature to anoint any
> single implementation as our canonical one. I'd like to see some of
> these patches finished and put up on pgfoundry or github, and then
> consider moving one of them into core when we have a clear and stable
> consensus that one of them is technically awesome and the best thing
> we're going to get.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


From: Terry Laurenzo <tj(at)laurenzo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-21 05:31:17
Message-ID: AANLkTikL3dfjHsw1vpgHW7Cq=e6EUCDfoQXGYzUhpF+c@mail.gmail.com
Lists: pgsql-hackers

I've got a new stripped down version of the binary json plugin on github:
https://github.com/tlaurenzo/pgjson

With all due warning of contrived benchmarks, I wrote some tests to see
where things stand. The test script is here:
https://github.com/tlaurenzo/pgjson/blob/master/testdocs/runbenchmark.sh
The results from my laptop (first-gen MacBook Pro, 32-bit, with Snow
Leopard and PostgreSQL 9.0) are here and copied at the end of this email:

https://github.com/tlaurenzo/pgjson/blob/master/testdocs/benchmark-results-2010-11-20-mbp32.txt

And for some commentary...
I copied the 5 sample documents from json.org's example section for these
tests. These are loaded into a table with a varchar column 1000 times each
(so the test table has 5000 rows in it). In all situations, the binary
encoding was smaller than the normalized text form (between 9 and 23%
smaller). I think there are cases where the binary form will be larger than
the corresponding text form, but I don't think they would be very common.

For the timings, various operations are performed on the 5000-row test
table for 100 iterations. In all situations, the query returns the length
of the transformed document instead of the document itself, so as to
factor out variable client IO between the test runs. The Null Parse case
is essentially just a select from the table and therefore represents the
baseline. The times varied a little bit between runs but did not change
materially.

What we see from this is that parsing JSON text and generating a binary
representation is cheap, adding approximately 10% to the base case time.
Conversely, anything that involves generating JSON text is expensive,
adding 30-40% on top of the base case time. Some incidental profiling
shows that while the entire operation is expensive, the process of
generating string literals dominates this time. There is likely room for
optimization in this method, but it should be noted that most of these
documents are lightly escaped (if escaped at all), which represents the
happy path through the string literal output function.
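
One speculative optimization I have not tried: scan ahead for the next
character that needs escaping and flush the clean run with a single
write. A sketch, not code from the module:

    #include <stdio.h>
    #include <string.h>

    /* Emit s as a JSON string literal, copying runs of bytes that need
     * no escaping with a single fwrite.  Escapes are simplified to the
     * \u00XX form, which is always valid JSON. */
    static void emit_json_string(FILE *out, const char *s, size_t n)
    {
        size_t start = 0, run = 0;

        fputc('"', out);
        while (start + run < n) {
            unsigned char c = (unsigned char) s[start + run];
            if (c == '"' || c == '\\' || c < 0x20) {
                fwrite(s + start, 1, run, out);       /* flush the clean run */
                fprintf(out, "\\u%04x", (unsigned) c); /* slow path: one escape */
                start += run + 1;
                run = 0;
            } else {
                run++;                                /* extend the clean run */
            }
        }
        fwrite(s + start, 1, run, out);               /* trailing clean run */
        fputc('"', out);
    }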

While I have not profiled any advanced manipulation of the binary
structure within the server, it stands to reason that manipulating the
binary structure should be significantly faster than an approach that
requires (perhaps multiple) transcodings between text representations in
order to complete a sequence of operations.

Assuming that the JSON datatype (at a minimum) normalizes text for
storage, the text storage option takes roughly the most expensive path
above, with none of the benefits of an internal binary form (smaller
size, the ability to cheaply perform non-trivial manipulation within the
database server).

Of course, just having a JSON datatype that blindly stores text will beat
everything, but I'm getting closer to thinking that the binary option is
worth the tradeoff.

Comments?
Terry

Running benchmark with 100 iterations per step
Loading data...
Data loaded.
=== DOCUMENT SIZE STATISTICS ===
    Document Name     | Original Size | Binary Size | Normalized Size | Percentage Savings
----------------------+---------------+-------------+-----------------+--------------------
 jsonorg_sample1.json |           582 |         311 |             360 |   13.6111111111111
 jsonorg_sample2.json |           241 |         146 |             183 |   20.2185792349727
 jsonorg_sample3.json |           601 |         326 |             389 |   16.1953727506427
 jsonorg_sample4.json |          3467 |        2466 |            2710 |   9.00369003690037
 jsonorg_sample5.json |           872 |         469 |             613 |   23.4910277324633
(5 rows)

=== TEST PARSE AND SERIALIZATION ===
Null Parse:
12.12 real 2.51 user 0.13 sys
Parse to Binary:
13.38 real 2.51 user 0.14 sys
Serialize from Binary:
16.65 real 2.51 user 0.13 sys
Normalize Text:
18.99 real 2.51 user 0.13 sys
Roundtrip (parse to binary and serialize):
18.58 real 2.51 user 0.14 sys


From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-21 13:11:18
Message-ID: AANLkTikUiG5qk9ZVh6Mnh0b8CwKfzQkjHuSd=i=zVc_8@mail.gmail.com
Lists: pgsql-hackers

On Sun, Nov 21, 2010 at 12:31 AM, Terry Laurenzo <tj(at)laurenzo(dot)org> wrote:
> What we see from this is that parsing JSON text and generating a binary
> representation is cheap, adding approximately 10% to the base case
> time.  Conversely, anything that involves generating JSON text is expensive,
> adding 30-40% on top of the base case time.  Some incidental profiling
> shows that while the entire operation is expensive, the process of
> generating string literals dominates this time.  There is likely room for
> optimization in this method, but it should be noted that most of these
> documents are lightly escaped (if escaped at all), which represents the
> happy path through the string literal output function.

Ouch! That's kind of painful. But certainly for some use cases it
will work out to a huge speedup, if you're doing subscripting or
similar.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


From: "David E(dot) Wheeler" <david(at)kineticode(dot)com>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-21 17:40:07
Message-ID: 75E6BD8C-E861-4AA3-B447-74EFADA756EE@kineticode.com
Lists: pgsql-hackers

On Nov 20, 2010, at 9:31 PM, Terry Laurenzo wrote:

> Assuming that the JSON datatype (at a minimum) normalizes text for storage, the text storage option takes roughly the most expensive path above, with none of the benefits of an internal binary form (smaller size, the ability to cheaply perform non-trivial manipulation within the database server).
>
> Of course, just having a JSON datatype that blindly stores text will beat everything, but I'm getting closer to thinking that the binary option is worth the tradeoff.
>
> Comments?

benchmarks++

Nice to have some data points for this discussion.

Best,

David, still hoping for the JSON data type in 9.1…


From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Terry Laurenzo <tj(at)laurenzo(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
Date: 2010-11-21 18:27:03
Message-ID: 4CE96477.5060706@dunslane.net
Lists: pgsql-hackers

On 11/21/2010 12:31 AM, Terry Laurenzo wrote:
>
> I copied the 5 sample documents from json.org's
> example section for these tests. These are loaded into a table with a
> varchar column 1000 times each (so the test table has 5000 rows in
> it). In all situations, the binary encoding was smaller than the
> normalized text form (between 9 and 23% smaller). I think there are
> cases where the binary form will be larger than the corresponding text
> form, but I don't think they would be very common.
>

Is that a pre-toast or post-toast comparison?

Even if it's post-toast, that doesn't seem like enough of a saving to
convince me that simply storing as text, just as we do for XML, isn't a
sensible way to go, especially when the cost of reproducing the text for
delivery to clients (including, say, pg_dump) is likely to be quite high.

cheers

andrew