Re: Performance Improvement by reducing WAL for Update Operation

From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-11-14 11:26:26
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C38285495B0@szxeml509-mbx
Lists: pgsql-hackers

On Thu, 8 Nov 2012 17:33:54 +0000 Amit Kapila wrote:
On Mon, 29 Oct 2012 20:02:11 +0530 Amit Kapila wrote:
On Sunday, October 28, 2012 12:28 AM Heikki Linnakangas wrote:
>>> One idea is to use the LZ format in the WAL record, but use your
>>> memcmp() code to construct it. I believe the slow part in LZ compression
>>> is in trying to locate matches in the "history", so if you just replace
>>> that with your code that's aware of the column boundaries and uses
>>> simple memcmp() to detect what parts changed, you could create LZ
>>> compressed output just as quickly as the custom encoded format. It would
>>> leave the door open for making the encoding smarter or to do actual
>>> compression in the future, without changing the format and the code to
>>> decode it.

>>This is good idea. I shall try it.

>>In the existing algorithm, storing new data that is not present in the
>>history needs one control byte for every 8 bytes of new data, which can
>>increase the size of the compressed output compared to our delta-encoding
>>approach.

>>Approach-2
>>---------------
>>Use only one bit of control data per entry [0 - length and new data, 1 - pick
>>from history based on OFFSET-LENGTH].
>>The modified bit value (0) handles the new field data as a continuous stream
>>of data, instead of treating every byte as new data.
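
For illustration only, a decoder for such a one-bit-control stream could look like the sketch below. The field widths assumed here (a one-byte literal length, a two-byte history offset) and the function name are invented for the example and need not match the encoding actually used in the attached patches:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Reconstruct the new tuple from an encoded delta stream.  Each control byte
 * is consumed one bit at a time, LSB first:
 *   bit = 0: a literal run of new data, encoded as <1-byte length><data bytes>
 *   bit = 1: a copy from the old tuple, encoded as <2-byte offset><1-byte length>
 * Returns the length of the reconstructed new tuple.
 */
static size_t
delta_decode(const uint8_t *enc, size_t enclen,
             const uint8_t *oldtup,     /* the "history": the old tuple */
             uint8_t *newtup)
{
    size_t      ip = 0;
    size_t      op = 0;

    while (ip < enclen)
    {
        uint8_t     ctrl = enc[ip++];

        for (int bit = 0; bit < 8 && ip < enclen; bit++)
        {
            if (ctrl & (1 << bit))
            {
                /* pick len bytes starting at off from the old tuple */
                uint16_t    off = (uint16_t) (enc[ip] | (enc[ip + 1] << 8));
                uint8_t     len = enc[ip + 2];

                ip += 3;
                memcpy(newtup + op, oldtup + off, len);
                op += len;
            }
            else
            {
                /* a continuous run of new data, not treated byte by byte */
                uint8_t     len = enc[ip++];

                memcpy(newtup + op, enc + ip, len);
                ip += len;
                op += len;
            }
        }
    }
    return op;
}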

> Attached are the patches:
> 1. wal_update_changes_lz_v4 - uses the LZ approach with memcmp to construct the WAL record
> 2. wal_update_changes_modified_lz_v5 - uses the modified LZ approach described above as Approach-2

> The main changes compared to the previous patch are as follows:
> 1. In heap_delta_encode, use LZ encoding instead of the custom encoding.
> 2. Instead of get_tup_info(), introduced the heap_getattr_with_len() macro, based on a suggestion from Noah.
> 3. LZ macros moved from .c to .h, as they need to be used for encoding.
> 4. Changed the format of the function arguments for heap_delta_encode()/heap_delta_decode(), based on a suggestion from Noah.

Please find the updated patches attached to this mail.

Modifications in these patches, apart from the above:

1. Traverse the tuple only once (previously it needed three traversals) to check whether a particular attribute matches and to get the offset used to generate the encoded tuple.

To achieve this I have modified the function heap_tuple_attr_equals() into heap_attr_get_length_and_check_equals(), so that it can also return the length of the tuple attribute,

which can be used to calculate the offset. A separate function could also be written to achieve the same; a minimal sketch of the idea appears right after this list.

2. Improve the comments in the code.
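
A minimal, self-contained sketch of that single-pass idea follows. The structures and helper names here are illustrative stand-ins only (the patch itself works on HeapTuple/TupleDesc), so this is a sketch of the technique rather than the patch's code:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct
{
    const char *data;               /* attribute bytes */
    size_t      len;                /* attribute length */
} attr_value;

/* Return whether the two values are equal, and hand back both lengths. */
static bool
attr_get_length_and_check_equals(const attr_value *a, const attr_value *b,
                                 size_t *a_len, size_t *b_len)
{
    *a_len = a->len;
    *b_len = b->len;
    return (a->len == b->len) && memcmp(a->data, b->data, a->len) == 0;
}

/*
 * Walk the old and new tuples exactly once.  Because every call returns the
 * attribute lengths as well, the running offsets into both tuples can be
 * maintained on the fly, with no second or third traversal.
 */
static size_t
delta_encode_one_pass(const attr_value *oldatts, const attr_value *newatts,
                      int natts)
{
    size_t      old_off = 0;
    size_t      new_off = 0;

    for (int i = 0; i < natts; i++)
    {
        size_t      old_len;
        size_t      new_len;

        if (attr_get_length_and_check_equals(&oldatts[i], &newatts[i],
                                             &old_len, &new_len))
        {
            /* unchanged: emit a "copy from history" tag for (old_off, old_len) */
        }
        else
        {
            /* changed: emit the new_len bytes at newatts[i].data as new data */
        }
        old_off += old_len;
        new_off += new_len;
    }
    (void) old_off;                 /* would feed the emitted copy tags */
    return new_off;                 /* length of the new tuple */
}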

Performance Data:

1. Please refer to the test case in the attached file pgbench_250.c.

The function used to create the random string is given at the end of this mail.

2. The detailed data and configuration settings can be found in the attached files (pgbench_encode_withlz_ff100 & pgbench_encode_withlz_ff80).

Benchmark results with -F 100:

-Patch-            -tps@-c1-  -tps@-c2-  -tps@-c4-  -tps@-c8-  -WAL@-c8-
xlogscale          802        1453       2253       2643       13.99 GB
xlogscale+org lz   807        1602       3168       5140       9.50 GB
xlogscale+mod lz   796        1620       3216       5270       9.16 GB

Benchmark results with -F 80:

-Patch-            -tps@-c1-  -tps@-c2-  -tps@-c4-  -tps@-c8-  -WAL@-c8-
xlogscale          811        1455       2148       2704       13.6 GB
xlogscale+org lz   829        1684       3223       5325       9.13 GB
xlogscale+mod lz   801        1657       3263       5488       8.86 GB

> I shall write wal_update_changes_custom_delta_v6, and then we can compare the performance data of all three patches and decide which one to go with based on the results.

Its results are not better than those of the above two approaches, so I am not attaching it.

Function used to create a random string

--------------------------------------------------------

CREATE OR REPLACE FUNCTION random_text_md5_v2(INTEGER)
RETURNS TEXT
LANGUAGE SQL
AS $$

select upper(
substring(
(
SELECT string_agg(md5(random()::TEXT), '')
FROM generate_series(1, CEIL($1 / 32.)::integer)
),
1, $1)
);

$$;

Suggestions/Comments?

With Regards,

Amit Kapila.

Attachment Content-Type Size
wal_update_changes_lz_v4.patch application/octet-stream 45.9 KB
wal_update_changes_mod_lz_v5.patch application/octet-stream 47.7 KB
pgbench_250.c text/plain 63.5 KB
pgbench_encode_withlz_ff100.htm text/html 47.3 KB
pgbench_encode_withlz_ff80.htm text/html 47.3 KB

From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: amit(dot)kapila(at)huawei(dot)com
Cc: hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-07 08:58:26
Message-ID: 20121207.175826.210399757.horiguchi.kyotaro@lab.ntt.co.jp
Lists: pgsql-hackers

Hello, I looked into the patch and have some comments.

Because of the limited time available for this rather big patch, please
excuse that these comments cover only a part of it. The others will
follow in a few days.

==== heaptuple.c

nocachegetattr(_with_len):

- att_getlength may have to do strlen or VARSIZE_ANY in the worst case,
  which is heavier than doing one comparison, so I recommend adding
  'if (len)' as a guard around it, and passing NULL as &len to
  nocachegetattr_with_len from nocachegetattr.

heap_attr_get_length_and_check_equals:

- Size seems to be used conventionally as the type for memory-object
  lengths, so it might be better to use Size instead of int32 as the
  type of the *tup[12]_attr_len parameters.

- This function always returns false for attrnum <= 0, i.e. for whole-tuple
  or system-attribute comparisons, regardless of the real result, which is
  a bit different from what the name suggests. If you need to keep this
  optimization, it should have a name more specific to the purpose.

heap_delta_encode:

- Some misleading variable names (like match_not_found),
  some repetitions of similar codelets (att_align_pointer, pglz_out_tag),
  misleadingly slight differences in meaning between similarly named
  variables (old_off and new_off and the similar pairs),
  and a bit tricky use of pglz_out_add and pglz_out_tag with length = 0.

  These would be welcome to be modified for better readability.

==== heapam.c

fastgetattr_with_len

- Missing left paren in line 867 ('nocachegetattr_with_len(tup)...')

- Missing enclosing paren at heapam.c:879 (len; a style issue only)

- Allowing len = NULL would be good for better performance, as in
  nocachegetattr.

fastgetattr

- I suppose that the coding convention here is that a macro and its
  alternative C code are expected to look similar. fastgetattr looks
  quite different from the corresponding macro.

...

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-07 13:06:23
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEA1C90@szxeml509-mbx
Lists: pgsql-hackers


On Friday, December 07, 2012 2:28 PM Kyotaro HORIGUCHI wrote:

> Hello, I looked into the patch and have some comments.

Thank you for reviewing the patch.

> Because of the limited time available for this rather big patch, please
> excuse that these comments cover only a part of it. The others will
> follow in a few days.

It's perfectly fine.

>==== heaptuple.c
>
>nocachegetattr(_with_len):
>
>- att_getlength may have to do strlen or VARSIZE_ANY in the worst case,
>  which is heavier than doing one comparison, so I recommend adding
>  'if (len)' as a guard around it, and passing NULL as &len to
>  nocachegetattr_with_len from nocachegetattr.
Fixed.

>heap_attr_get_length_and_check_equals:
>
>- Size seems to be used conventionally as the type for memory-object
>  lengths, so it might be better to use Size instead of int32 as the
>  type of the *tup[12]_attr_len parameters.

Fixed.

>- This function always returns false for attrnum <= 0, i.e. for whole-tuple
>  or system-attribute comparisons, regardless of the real result, which is
>  a bit different from what the name suggests. If you need to keep this
>  optimization, it should have a name more specific to the purpose.

The heap_attr_get_length_and_check_equals function is similar to heap_tuple_attr_equals,
and the attrnum <= 0 check is required for heap_tuple_attr_equals.

>heap_delta_encode:
>
>- Some misleading variable names (like match_not_found),
>  some repetitions of similar codelets (att_align_pointer, pglz_out_tag),
>  misleadingly slight differences in meaning between similarly named
>  variables (old_off and new_off and the similar pairs),
>  and a bit tricky use of pglz_out_add and pglz_out_tag with length = 0.
>
>  These would be welcome to be modified for better readability.

The variable names have been modified; please check them once.

The (att_align_pointer, pglz_out_tag) repetition is there to take care of padding, and only in case the values are equal.
The use of pglz_out_add and pglz_out_tag with length = 0 was done for code readability.

>==== heapam.c
>
>fastgetattr_with_len
>
>- Missing left paren in line 867 ('nocachegetattr_with_len(tup)...')
>
>- Missing enclosing paren at heapam.c:879 (len; a style issue only)
>
>- Allowing len = NULL would be good for better performance, as in
>  nocachegetattr.

Fixed, except for len = NULL, because fastgetattr is modified as per the comment below.

>fastgetattr
>
>- I suppose that the coding convention here is that a macro and its
>  alternative C code are expected to look similar. fastgetattr looks
>  quite different from the corresponding macro.

Fixed.

Another change has also been made to handle a history size of 2 bytes, which is possible with the use of the LZ macros for delta encoding.

With Regards,
Amit Kapila.

Attachment Content-Type Size
wal_update_changes_lz_v5.patch application/octet-stream 47.3 KB
wal_update_changes_mod_lz_v6.patch application/octet-stream 51.3 KB

From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: amit(dot)kapila(at)huawei(dot)com
Cc: hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-10 09:11:05
Message-ID: 20121210.181105.256337385.horiguchi.kyotaro@lab.ntt.co.jp
Lists: pgsql-hackers

Thank you.

> >heap_attr_get_length_and_check_equals:
..
> >- This function always returns false for attrnum <= 0, i.e. for whole-tuple
> >  or system-attribute comparisons, regardless of the real result, which is
> >  a bit different from what the name suggests. If you need to keep this
> >  optimization, it should have a name more specific to the purpose.
>
> The heap_attr_get_length_and_check_equals function is similar to heap_tuple_attr_equals,
> and the attrnum <= 0 check is required for heap_tuple_attr_equals.

Sorry, you're right.

> >heap_delta_encode:
> >
> >- Some misleading variable names (like match_not_found),
> >  some repetitions of similar codelets (att_align_pointer, pglz_out_tag),
> >  misleadingly slight differences in meaning between similarly named
> >  variables (old_off and new_off and the similar pairs),
> >  and a bit tricky use of pglz_out_add and pglz_out_tag with length = 0.
> >
> >  These would be welcome to be modified for better readability.
>
> The variable names have been modified; please check them once.
>
> The (att_align_pointer, pglz_out_tag) repetition is there to take care of padding, and only in case the values are equal.
> The use of pglz_out_add and pglz_out_tag with length = 0 was done for code readability.

Oops! Sorry for my mistake. My point was about the bases for old_off
(or match_off) and dp, not new_off. It is not unnatural. The naming
had not been the problem and the function was fine as of the last
patch. I'd been confused by the asymmetry between match_off going to
pglz_out_tag and dp going to pglz_out_add.

> Another change has also been made to handle a history size of 2 bytes, which is possible with the use of the LZ macros for delta encoding.

Good catch. This seems to have been a potential bug, though one which does no
harm when called from pglz_compress.

==========

Looking into wal_update_changes_mod_lz_v6.patch, I understand
that this patch experimentally adds literal data segments of
more than a single byte to the PG-LZ format. According to
pglz_find_match, memcmp is slower than 'while (*s && *s == *d)' if
len < 16, and I suppose that is probably true at least for 4-byte
data. The same applies on the encoding side. If this modification
does no harm to performance, I would like to see it applied to
pglz_compress as well.
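
To make the trade-off concrete, here is a small illustrative helper (the 16-byte threshold is the value quoted from pglz_find_match above; the function itself is not code from either patch):

#include <stdbool.h>
#include <string.h>

/* Compare two buffers of known length; use a plain byte loop for short spans. */
static inline bool
bytes_equal(const char *s, const char *d, int len)
{
    if (len < 16)
    {
        while (len-- > 0)
        {
            if (*s++ != *d++)
                return false;
        }
        return true;
    }
    return memcmp(s, d, len) == 0;
}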

By the way, the comment at pg_lzcompress.c:690 seems to differ quite
a bit from what the code does.

regards,

*1: http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C38285495B0@szxeml509-mbx

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-10 12:34:52
Message-ID: 011401cdd6d2$c13ba440$43b2ecc0$@kapila@huawei.com
Lists: pgsql-hackers

On Monday, December 10, 2012 2:41 PM Kyotaro HORIGUCHI wrote:
> Thank you.
>
> > >heap_attr_get_length_and_check_equals:
> ..
> > >- This function returns always false for attrnum <= 0 as whole
> > > tuple or some system attrs comparison regardless of the real
> > > result, which is a bit different from the anticipation which
> > > the name gives. If you need to keep this optimization, it
> > > should have the name more specific to the purpose.
> >
> > The heap_attr_get_length_and_check_equals function is similar to
> heap_tuple_attr_equals,
> > the attrnum <= 0 check is required for heap_tuple_attr_equals.
>
> Sorry, you're right.
>
> > >heap_delta_encode:
> > >
> > >- Some misleading variable names (like match_not_found),
> > >  some repetitions of similar codelets (att_align_pointer, pglz_out_tag),
> > >  misleadingly slight differences in meaning between similarly named
> > >  variables (old_off and new_off and the similar pairs),
> > >  and a bit tricky use of pglz_out_add and pglz_out_tag with length = 0.
> > >
> > >  These would be welcome to be modified for better readability.
> >
> > The variable names have been modified; please check them once.
> >
> > The (att_align_pointer, pglz_out_tag) repetition is there to take care
> > of padding, and only in case the values are equal.
> > The use of pglz_out_add and pglz_out_tag with length = 0 was done for
> > code readability.
>
> Oops! Sorry for my mistake. My point was about the bases for old_off
> (or match_off) and dp, not new_off. It is not unnatural. The naming
> had not been the problem and the function was fine as of the
> last patch.

I think the new names I have used are more meaningful; do you think I should
revert to the previous patch's names?

> I'd been confused by the asymmetry between match_off
> going to pglz_out_tag and dp going to pglz_out_add.

If we look at the usage of pglz_out_tag and pglz_out_literal in pglz_compress(),
it is the same as what I have used.


> > Another change has also been made to handle a history size of 2 bytes,
> > which is possible with the use of the LZ macros for delta encoding.
>
> Good catch. This seems to have been a potential bug, though one which does no
> harm when called from pglz_compress.
>
> ==========
>
> Looking into wal_update_changes_mod_lz_v6.patch, I understand
> that this patch experimentally adds literal data segments of
> more than a single byte to the PG-LZ format. According to
> pglz_find_match, memcmp is slower than 'while (*s && *s == *d)' if
> len < 16, and I suppose that is probably true at least for 4-byte
> data. The same applies on the encoding side. If this modification
> does no harm to performance, I would like to see it applied to
> pglz_compress as well.

Where in pglz_compress() do you want to see similar usage?
Or do you want to see such use in the function
heap_attr_get_length_and_check_equals(), where it compares two attributes?

> By the way, the comment at pg_lzcompress.c:690 seems to differ quite
> a bit from what the code does.

I shall fix this.

With Regards,
Amit Kapila.


From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: amit(dot)kapila(at)huawei(dot)com
Cc: hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-14 09:02:07
Message-ID: 20121214.180207.208220355.horiguchi.kyotaro@lab.ntt.co.jp
Lists: pgsql-hackers

Hello, I took the performance figures for this patch.

CentOS6.3/Core i7
wal_level = archive, checkpoint_segments = 30 / 5min

A. Vanilla pgbench, postgres is HEAD
B. Vanilla pgbench, postgres is with this patch (wal_update_changes_lz_v5)
C. Modified pgbench(Long text), postgres is HEAD
D. Modified pgbench(Long text), postgres is with this patch

Runs were done with pgbench -s 10 -i followed by pgbench -c 20 -T 2400.

#trans/s WAL MB WAL kB/tran
1A 437 1723 1.68
1B 435 (<1% slower than A) 1645 1.61 (96% of A)
1C 149 5073 14.6
1D 174 (17% faster than C) 5232 12.8 (88% of C)

Restoring with the WAL archives produced during the first test.

Recv sec s/trans
2A 61 0.0581
2B 62 0.0594 (2% slower than A)
2C 287 0.805
2D 314 0.750 (7% faster than C)

For vanilla pgbench, WAL size shrinks slightly and performance
seems very slightly worse than unpatched postgres (1A vs. 1B). It
can safely be said that there is no harm to performance even outside
the effective range of this patch. On the other hand, the performance
gain reaches 17% within the effective range (1C vs. 1D).

Recovery performance shows the same tendency. It looks to
produce a very small loss outside of the effective range (2A
vs. 2B) and a significant gain within it (2C vs. 2D).

As a whole, this patch brings a very large gain in its effective
range - e.g. updates of relatively small portions of tuples - but
negligible loss of performance is observed outside of its
effective range.

I'll mark this patch as 'Ready for Committer' as soon as I get
finished confirming the mod patch.

==========
> I think the new names I have used are more meaningful; do you think I should
> revert to the previous patch's names?

The new naming is more meaningful, if a bit long. I don't think it
should be reverted.

> > Looking into wal_update_changes_mod_lz_v6.patch, I understand
> > that this patch experimentally adds literal data segments of
> > more than a single byte to the PG-LZ format. According to
> > pglz_find_match, memcmp is slower than 'while (*s && *s == *d)' if
> > len < 16, and I suppose that is probably true at least for 4-byte
> > data. The same applies on the encoding side. If this modification
> > does no harm to performance, I would like to see it applied to
> > pglz_compress as well.
>
> Where in pglz_compress() do you want to see similar usage?
> Or do you want to see such use in the function
> heap_attr_get_length_and_check_equals(), where it compares two attributes?

My point was about the format for literal segments. It seems to save
about an eighth on literal segments. But its effectiveness in a real
environment is not promising. Forget it; it was just a fancy.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-14 13:42:57
Message-ID: 003001cdda00$ee02f9c0$ca08ed40$@kapila@huawei.com
Lists: pgsql-hackers

On Friday, December 14, 2012 2:32 PM Kyotaro HORIGUCHI wrote:
> Hello, I took the performance figures for this patch.
>
> CentOS6.3/Core i7
> wal_level = archive, checkpoint_segments = 30 / 5min
>
> A. Vanilla pgbench, postgres is HEAD
> B. Vanilla pgbench, postgres is with this patch
> (wal_update_changes_lz_v5)
> C. Modified pgbench(Long text), postgres is HEAD
> D. Modified pgbench(Long text), postgres is with this patch
>
> Runs were done with pgbench -s 10 -i followed by pgbench -c 20 -T 2400.
>
> #trans/s WAL MB WAL kB/tran
> 1A 437 1723 1.68
> 1B 435 (<1% slower than A) 1645 1.61 (96% of A)
> 1C 149 5073 14.6
> 1D 174 (17% faster than C) 5232 12.8 (88% of C)
>
> Restoring with the WAL archives produced during the first test.
>
> Recv sec s/trans
> 2A 61 0.0581
> 2B 62 0.0594 (2% slower than A)
> 2C 287 0.805
> 2D 314 0.750 (7% faster than C)
>
> For vanilla pgbench, WAL size shrinks slightly and performance
> seems very slightly worse than unpatched postgres (1A vs. 1B). It
> can safely be said that there is no harm to performance even outside
> the effective range of this patch. On the other hand, the performance
> gain reaches 17% within the effective range (1C vs. 1D).
>
> Recovery performance shows the same tendency. It looks to
> produce a very small loss outside of the effective range (2A
> vs. 2B) and a significant gain within it (2C vs. 2D).
>
> As a whole, this patch brings a very large gain in its effective
> range - e.g. updates of relatively small portions of tuples - but
> negligible loss of performance is observed outside of its
> effective range.
>
> I'll mark this patch as 'Ready for Committer' as soon as I get
> finished confirming the mod patch.

Thank you very much.

With Regards,
Amit Kapila.


From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: amit(dot)kapila(at)huawei(dot)com
Cc: hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 08:07:48
Message-ID: 20121228.170748.90887322.horiguchi.kyotaro@lab.ntt.co.jp
Lists: pgsql-hackers

Hello, I saw this patch and confirmed that

- Coding style looks good.
- Applicable onto HEAD.
- Some mis-codings are fixed.

And I took the performance figures for 4 types of modification
versus 2 benchmarks.

I've seen a small performance gain (4-8% for execution, and 6-12% for
recovery) and a 16% WAL shrink for modified pgbench, which enhances the
benefit of this patch.

On the other hand, I've found no significant loss of performance
for execution and a 4% reduction of WAL for original pgbench, but
there might be a 4-8% performance loss for recovery.

The attached patches are listed below.

wal_update_changes_lz_v5.patch

A rather straightforward implementation of WAL compression using the
existing pg_lz compression format.

wal_update_changes_mod_lz_v6_2.patch

Modifies pg_lz to have a bulk literal segment format which is
available only for WAL compression. The misplaced comment is fixed.

The details of the performance tests follow.
=====
I've tested the mod patch and a 'modified' mod patch
as well.

CentOS6.3/Core i7
wal_level = archive, checkpoint_segments = 30 / 5min

wal_update_changes_mod_lz_v6+ is the version in which memcpy for
segments shorter than 16 bytes is replaced by copying with while (*s)
*d++ = *s++.

postgres pgbench
A. HEAD Original
B. wal_update_changes_lz_v5 Original
C. wal_update_changes_mod_lz_v6 Original
D. wal_update_changes_mod_lz_v6+ Original
E. HEAD attached with this patch
F. wal_update_changes_lz_v5 attached with this patch
G. wal_update_changes_mod_lz_v6 attached with this patch
H. wal_update_changes_mod_lz_v6+ attached with this patch

Runs were done with pgbench -s 10 -i followed by pgbench -c 10 -j 10 -T 1200.

#trans/s WAL MB WAL kB/tran
1A 346 760 1.87
1B 347 730 1.80 (96% of A)
1C 346 729 1.80 (96% of A)
1D 347 730 1.80 (96% of A)

1E 192 2790 6.20
1F 200 (4% faster than E) 2431 5.19 (84% of E)
1G 207 (8% faster than E) 2563 5.28 (85% of E)
1H 199 (4% faster than E) 2421 5.19 (84% of E)

Recovery time

Recv sec us/trans
2A 26 62.6
2B 27 64.8 (4% slower than A)
2C 28 67.4 (8% slower than A)
2D 26 62.4 (same as A)

2E 130 629
2F 149 579 ( 8% faster than E)
2G 128 592 ( 6% faster than E)
2H 130 553 (12% faster than E)

For vanilla pgbench, WAL size shrinks slightly and performance
seems the same as unpatched postgres (1A vs. 1B, 1C, 1D). For modified
pgbench, WAL size shrinks by about 17% and performance seems to
gain by several percent.

Recovery performance shows the same tendency. It looks to
produce a very small loss outside of the effective range (2A
vs. 2B, 2C) and a significant gain within it (2E vs. 2F, 2G, 2H).

As a whole, this patch brings a very large gain in its effective
range - e.g. updates of relatively small portions in a tuple - but
negligible loss of performance is observed outside of its
effective range on the test machine. I suppose the losses would be
emphasized by the higher sequential-write performance of the WAL
device.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

Attachment Content-Type Size
wal_update_changes_lz_v5.patch text/x-patch 47.3 KB
wal_update_changes_mod_lz_v6_2.patch text/x-patch 44.4 KB

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: amit(dot)kapila(at)huawei(dot)com, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 10:21:38
Message-ID: CA+U5nMLSrKZrpQVN-QPDEmtC1tNDSqWsUqobcyNHo+0aDGntig@mail.gmail.com
Lists: pgsql-hackers

On 28 December 2012 08:07, Kyotaro HORIGUCHI
<horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:

> Hello, I saw this patch and confirmed that
>
> - Coding style looks good.
> - Applicable onto HEAD.
> - Some mis-codings are fixed.

I've had a quick review of the patch to see how close we're getting.
The perf tests look to me like we're getting what we wanted from this
and I'm happy with the recovery performance trade-offs. Well done to
both author and testers.

My comments

* There is a fixed 75% heuristic in the patch. Can we document where
that came from? Can we have a parameter that sets that please? This
can be used to have further tests to confirm the useful setting of
this. I expect it to be removed before we release, but it will help
during beta.

* The compression algorithm depends completely upon new row length
savings. If the new row is short, it would seem easier to just skip
the checks and include it anyway. We can say if old and new vary in
length by > 50% of each other, just include new as-is, since the rows
very clearly differ in a big way. Also, if tuple is same length as
before, can we compare the whole tuple at once to save doing
per-column checks?

* If full page writes is on and the page is very old, we are just
going to copy the whole block. So why not check for that rather than
do all these push ups and then just copy the page anyway?

* TOAST is not handled at all. No comments about it, nothing. Does
that mean it hasn't been considered? Or did we decide not to care in
this release? Presumably that means we are comparing toast pointers
byte by byte to see if they are the same?

* I'd like to see a specific test in regression that is designed to
exercise the code here. That way we will be certain that the code is
getting regularly tested.

* The internal docs are completely absent. We need at least a whole
page of descriptive comment, discussing trade-offs and design
decisions. This is very important because it will help locate bugs
much faster if these things are clearly documented. It also helps
reviewers. This is a big timewaster for committers because you have to
read the whole patch and understand it before you can attempt to form
opinions. Commits happen quicker and easier with good comments.

* Lots of typos in comments. Many comments say nothing more than the
words already used in the function name itself

* "flags" variables are almost always int or uint in PG source.

* PGLZ_HISTORY_SIZE needs to be documented in the place it is defined,
not the place it's used. The test if (oldtuplen < PGLZ_HISTORY_SIZE)
really needs to be a test inside the compression module to maintain
better modularity, so the value itself needn't be exported

* Need to mention the WAL format change, or include the change within
the patch so we can see

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>, "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 11:27:23
Message-ID: 00a201cde4ee$4f759df0$ee60d9d0$@kapila@huawei.com
Lists: pgsql-hackers

On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
> On 28 December 2012 08:07, Kyotaro HORIGUCHI
> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>
> > Hello, I saw this patch and confirmed that
> >
> > - Coding style looks good.
> > - Applicable onto HEAD.
> > - Some mis-codings are fixed.
>
> I've had a quick review of the patch to see how close we're getting.
> The perf tests look to me like we're getting what we wanted from this
> and I'm happy with the recovery performance trade-offs. Well done to
> both author and testers.
>
> My comments
>
> * There is a fixed 75% heuristic in the patch. Can we document where
> that came from?

It comes from the LZ compression strategy; refer to PGLZ_Strategy.
I will add a comment for it.

> Can we have a parameter that sets that please? This
> can be used to have further tests to confirm the useful setting of
> this. I expect it to be removed before we release, but it will help
> during beta.

I shall add that for test purposes.

> * The compression algorithm depends completely upon new row length
> savings. If the new row is short, it would seem easier to just skip
> the checks and include it anyway. We can say if old and new vary in
> length by > 50% of each other, just include new as-is, since the rows
> very clearly differ in a big way.

I think it makes more sense. So I shall update the patch.

> Also, if tuple is same length as
> before, can we compare the whole tuple at once to save doing
> per-column checks?

I shall evaluate and discuss with you.

> * If full page writes is on and the page is very old, we are just
> going to copy the whole block. So why not check for that rather than
> do all these push ups and then just copy the page anyway?

I shall check once and update the patch.

> * TOAST is not handled at all. No comments about it, nothing. Does
> that mean it hasn't been considered? Or did we decide not to care in
> this release?

> Presumably that means we are comparing toast pointers
> byte by byte to see if they are the same?

Yes, currently this patch does a byte-by-byte comparison for TOAST
pointers. I shall add a comment.
In the future, we can evaluate whether further optimizations can be done.

> * I'd like to see a specific test in regression that is designed to
> exercise the code here. That way we will be certain that the code is
> getting regularly tested.

I shall add more specific tests.


> * The internal docs are completely absent. We need at least a whole
> page of descriptive comment, discussing trade-offs and design
> decisions. This is very important because it will help locate bugs
> much faster if these things are clearly documented. It also helps
> reviewers. This is a big timewaster for committers because you have to
> read the whole patch and understand it before you can attempt to form
> opinions. Commits happen quicker and easier with good comments.

Do you have any suggestions for where to put this information, any particular
README?

> * Lots of typos in comments. Many comments say nothing more than the
> words already used in the function name itself
>
> * "flags" variables are almost always int or uint in PG source.

> * PGLZ_HISTORY_SIZE needs to be documented in the place it is defined,
> not the place it's used. The test if (oldtuplen < PGLZ_HISTORY_SIZE)
> really needs to be a test inside the compression module to maintain
> better modularity, so the value itself needn't be exported

I shall update the patch to address it.


> * Need to mention the WAL format change, or include the change within
> the patch so we can see

Sure, I will update this in code comments and internals docs.

With Regards,
Amit Kapila.


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 11:28:41
Message-ID: 00a301cde4ee$7d7a5150$786ef3f0$@kapila@huawei.com
Lists: pgsql-hackers

On Friday, December 28, 2012 1:38 PM Kyotaro HORIGUCHI wrote:
> Hello, I saw this patch and confirmed that
>
> - Coding style looks good.
> - Applicable onto HEAD.
> - Some mis-codings are fixed.
>
> And took the performance figures for 4 types of modification versus 2
> benchmarks.
>
> As a whole, this patch brings a very large gain in its effective range -
> e.g. updates of relatively small portions in a tuple - but negligible
> loss of performance is observed outside of its effective range on the
> test machine. I suppose the losses would be emphasized by the higher
> sequential-write performance of the WAL device.

Thank you very much for the review.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 11:55:48
Message-ID: CA+U5nMLvM6MBVqDv=UDh_PKavqSN-=2D6admPLFaLofBAQhcVA@mail.gmail.com
Lists: pgsql-hackers

On 28 December 2012 11:27, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> * The internal docs are completely absent. We need at least a whole
>> page of descriptive comment, discussing trade-offs and design
>> decisions. This is very important because it will help locate bugs
>> much faster if these things are clearly documented. It also helps
>> reviewers. This is a big timewaster for committers because you have to
>> read the whole patch and understand it before you can attempt to form
>> opinions. Commits happen quicker and easier with good comments.
>
> Do you have any suggestion for where to put this information, any particular
> ReadMe?

Location is less relevant, since it will show up as additions in the patch.

Put it wherever makes most sense in comparison to existing related
comments/README. I have no preference myself.

If it's any consolation, I notice a common issue with patches is a lack
of *explanatory* comments, as opposed to line-by-line comments. So I've made the
same review comment on 50-75% of patches I've reviewed recently, which
is also likely why.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2012-12-28 12:03:20
Message-ID: CA+U5nMKpyR0Dsv4NQ3D5bR+0HGxWevSX242=YWYkam_jQ4qLRQ@mail.gmail.com
Lists: pgsql-hackers

On 28 December 2012 11:27, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> * TOAST is not handled at all. No comments about it, nothing. Does
>> that mean it hasn't been considered? Or did we decide not to care in
>> this release?
>
>> Presumably that means we are comparing toast pointers
>> byte by byte to see if they are the same?
>
> Yes, currently this patch is doing byte by byte comparison for toast
> pointers. I shall add comment.
> In future, we can evaluate if further optimizations can be done.

Just a comment to say that the comparison takes place after TOASTed
columns have been removed. TOAST is already optimised for whole value
UPDATE anyway, so that is the right place to produce the delta.

It does make me think that we can further optimise TOAST by updating
only the parts of a toasted datum that have changed. That will be
useful for JSON and XML applications that change only a portion of
large documents.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>, "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-04 13:53:58
Message-ID: 00a001cdea82$f29743a0$d7c5cae0$@kapila@huawei.com
Lists: pgsql-hackers

On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
> On 28 December 2012 08:07, Kyotaro HORIGUCHI
> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>
> > Hello, I saw this patch and confirmed that
> >
> > - Coding style looks good.
> > - Applicable onto HEAD.
> > - Some mis-codings are fixed.
>
> I've had a quick review of the patch to see how close we're getting.
> The perf tests look to me like we're getting what we wanted from this
> and I'm happy with the recovery performance trade-offs. Well done to
> both author and testers.
>
>
> * The compression algorithm depends completely upon new row length
> savings. If the new row is short, it would seem easier to just skip
> the checks and include it anyway. We can say if old and new vary in
> length by > 50% of each other, just include new as-is, since the rows
> very clearly differ in a big way.

> Also, if tuple is same length as
> before, can we compare the whole tuple at once to save doing
> per-column checks?

If we have to do a whole-tuple comparison, then storing the changed parts
might need to be done in a byte-by-byte way rather than at column offset
boundaries. This might not be possible with the current algorithm, as it
stores the information in WAL column-by-column and decodes it in a similar
way.

> The internal docs are completely absent. We need at least a whole page of
> descriptive comment, discussing trade-offs and design decisions.

Currently I plan to put it in transam/README, as most of the WAL
description is present there.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-04 14:33:56
Message-ID: CA+U5nMLNvgMrdRB4qCxR4O-+ruEL1oJGpjev_4fSwCg5=Xu4xQ@mail.gmail.com
Lists: pgsql-hackers

On 4 January 2013 13:53, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
>> On 28 December 2012 08:07, Kyotaro HORIGUCHI
>> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>>
>> > Hello, I saw this patch and confirmed that
>> >
>> > - Coding style looks good.
>> > - Applicable onto HEAD.
>> > - Some mis-codings are fixed.
>>
>> I've had a quick review of the patch to see how close we're getting.
>> The perf tests look to me like we're getting what we wanted from this
>> and I'm happy with the recovery performance trade-offs. Well done to
>> both author and testers.
>>
>>
>> * The compression algorithm depends completely upon new row length
>> savings. If the new row is short, it would seem easier to just skip
>> the checks and include it anyway. We can say if old and new vary in
>> length by > 50% of each other, just include new as-is, since the rows
>> very clearly differ in a big way.
>
>> Also, if tuple is same length as
>> before, can we compare the whole tuple at once to save doing
>> per-column checks?
>
> If we have to do a whole-tuple comparison, then storing the changed parts
> might need to be done in a byte-by-byte way rather than at column offset
> boundaries. This might not be possible with the current algorithm, as it
> stores the information in WAL column-by-column and decodes it in a similar
> way.

OK, please explain in comments.

>> The internal docs are completely absent. We need at least a whole page of
>> descriptive comment, discussing trade-offs and design decisions.
>
> Currently I plan to put it in transam/README, as most of the WAL
> description is present there.

But also in comments for each major function.

Thanks

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-09 08:05:02
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
Lists: pgsql-hackers


On Friday, January 04, 2013 8:03 PM Simon Riggs wrote:
On 4 January 2013 13:53, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Friday, December 28, 2012 3:52 PM Simon Riggs wrote:
>> On 28 December 2012 08:07, Kyotaro HORIGUCHI
>> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>>
>> > Hello, I saw this patch and confirmed that
>> >
>> > - Coding style looks good.
>> > - Applicable onto HEAD.
>> > - Some mis-codings are fixed.
>>
>> I've had a quick review of the patch to see how close we're getting.
>> The perf tests look to me like we're getting what we wanted from this
>> and I'm happy with the recovery performance trade-offs. Well done to
>> both author and testers.
>>
The updated patch contains handling of the below comments:
>
>* There is a fixed 75% heuristic in the patch. Can we document where
>that came from? Can we have a parameter that sets that please? This
>can be used to have further tests to confirm the useful setting of
>this. I expect it to be removed before we release, but it will help
>during beta.

Added a GUC variable wal_update_compression_ratio to set the compression ratio.
It can be removed during beta.

>* The compression algorithm depends completely upon new row length
>savings. If the new row is short, it would seem easier to just skip
>the checks and include it anyway. We can say if old and new vary in
>length by > 50% of each other, just include new as-is, since the rows
>very clearly differ in a big way.

Added a check in heap_delta_encode to identify whether the tuples differ in length by more than 50%.
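
A sketch of what such a guard could look like; the helper name and the exact comparison are illustrative, and heap_delta_encode in the patch may express the 50% test differently:

#include <stdbool.h>
#include <stddef.h>

/*
 * Give up on delta encoding when the old and new tuple lengths differ by
 * more than 50% of each other; the caller then WAL-logs the new tuple as-is.
 */
static bool
delta_encoding_worthwhile(size_t old_len, size_t new_len)
{
    if (new_len > old_len + old_len / 2)
        return false;
    if (old_len > new_len + new_len / 2)
        return false;
    return true;
}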

>* If full page writes is on and the page is very old, we are just
>going to copy the whole block. So why not check for that rather than
>do all these push ups and then just copy the page anyway?

Added a function that identifies whether the page needs a backup block or not;
based on the result, the optimization is applied.

>* I'd like to see a specific test in regression that is designed to
>exercise the code here. That way we will be certain that the code is
>getting regularly tested.

Added regression tests which cover all the changes done for the optimization, except recovery.

>* The internal docs are completely absent. We need at least a whole
>page of descriptive comment, discussing trade-offs and design
>decisions. This is very important because it will help locate bugs
>much faster if these things are clearly documented. It also helps
>reviewers. This is a big timewaster for committers because you have to
>read the whole patch and understand it before you can attempt to form
>opinions. Commits happen quicker and easier with good comments.
>* Need to mention the WAL format change, or include the change within
>the patch so we can see

backend/access/transam/README is updated with details.

>* Lots of typos in comments. Many comments say nothing more than the
>words already used in the function name itself

Corrected the typos and removed unnecessary comments.

>* "flags" variables are almost always int or uint in PG source.
>
>* PGLZ_HISTORY_SIZE needs to be documented in the place it is defined,
>not the place it's used. The test if (oldtuplen < PGLZ_HISTORY_SIZE)
>really needs to be a test inside the compression module to maintain
>better modularity, so the value itself needn't be exported

The (oldtuplen < PGLZ_HISTORY_SIZE) validation has been moved inside heap_delta_encode,
and the flags variable has been updated as well.

Test results with modified pgbench (1800 record size) on the latest patch:

-Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head              831        4.17 GB    1416       7.13 GB
WAL modification  846        2.36 GB    1712       3.31 GB

-Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head              2196       11.01 GB   2758       13.88 GB
WAL modification  3295       5.87 GB    5472       9.02 GB

With Regards,
Amit Kapila.

Attachment Content-Type Size
wal_update_changes_v7.patch application/octet-stream 60.4 KB

From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-09 11:26:51
Message-ID: CA+U5nM+E-+UV2X3qrfiFjXFbenxL+aWyNMW0Lfp24qVLnBJ+ug@mail.gmail.com
Lists: pgsql-hackers

On 9 January 2013 08:05, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

> Update patch contains handling of below Comments

Thanks

> Test results with modified pgbench (1800 record size) on the latest patch:
>
> -Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> Head              831        4.17 GB    1416       7.13 GB
> WAL modification  846        2.36 GB    1712       3.31 GB
>
> -Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> Head              2196       11.01 GB   2758       13.88 GB
> WAL modification  3295       5.87 GB    5472       9.02 GB

And test results on normal pgbench?

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>
Cc: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-09 12:12:38
Message-ID: 00b401cdee62$9e738930$db5a9b90$@kapila@huawei.com
Lists: pgsql-hackers

On Wednesday, January 09, 2013 4:57 PM Simon Riggs wrote:
> On 9 January 2013 08:05, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
> > Update patch contains handling of below Comments
>
> Thanks
>
>
> > Test results with modified pgbench (1800 record size) on the latest patch:
> >
> > -Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> > Head              831        4.17 GB    1416       7.13 GB
> > WAL modification  846        2.36 GB    1712       3.31 GB
> >
> > -Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> > Head              2196       11.01 GB   2758       13.88 GB
> > WAL modification  3295       5.87 GB    5472       9.02 GB
>
> And test results on normal pgbench?

As there was no gain for original pgbench, as shown in the earlier performance
readings, I thought it was not mandatory.
However, I shall run it for normal pgbench, as it should not lead to any further dip
for normal pgbench.
Thanks for pointing this out.

With Regards,
Amit Kapila.


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>
Cc: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 10:40:46
Message-ID: 004b01cdefe8$1e2cb530$5a861f90$@kapila@huawei.com
Lists: pgsql-hackers

On Wednesday, January 09, 2013 4:57 PM Simon Riggs wrote:
> On 9 January 2013 08:05, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
> > Update patch contains handling of below Comments
>
> Thanks
>
>
> > Test results with modified pgbench (1800 record size) on the latest patch:
> >
> > -Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> > Head              831        4.17 GB    1416       7.13 GB
> > WAL modification  846        2.36 GB    1712       3.31 GB
> >
> > -Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> > Head              2196       11.01 GB   2758       13.88 GB
> > WAL modification  3295       5.87 GB    5472       9.02 GB
>
> And test results on normal pgbench?

configuration:

shared_buffers = 4GB
wal_buffers = 16MB
checkpoint_segments = 256
checkpoint_interval = 15min
autovacuum = off
server_encoding = SQL_ASCII
client_encoding = UTF8
lc_collate = C
lc_ctype = C

init:

pgbench -s 75 -i -F 80

run:

pgbench -T 600

Test results with original pgbench (synccommit off) on the latest patch:

-Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head              1459       1.40 GB    2491       1.70 GB
WAL modification  1558       1.38 GB    2441       1.59 GB

-Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head              5139       2.49 GB    10651      4.72 GB
WAL modification  5224       2.28 GB    11329      3.96 GB

Test results with original pgbench (synccommit on) on the latest patch:

-Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head              146        0.45 GB    167        0.49 GB
WAL modification  144        0.44 GB    166        0.49 GB

-Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head              325        0.77 GB    603        1.03 GB
WAL modification  321        0.76 GB    604        1.01 GB

The results are similar to those noted by Kyotaro-san. The WAL size is reduced
even for original pgbench.
There is a slight performance dip in some of the cases for original pgbench.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 10:57:45
Message-ID: CA+U5nM+Smq3rMqPOUBg48-3oxcy4HWejroZOX8vYN31Ue=qLRA@mail.gmail.com
Lists: pgsql-hackers

On 11 January 2013 10:40, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

> Test results with original pgbench (synccommit off) on the latest patch:
>
>
> -Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> Head              1459       1.40 GB    2491       1.70 GB
> WAL modification  1558       1.38 GB    2441       1.59 GB
>
>
> -Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> Head              5139       2.49 GB    10651      4.72 GB
> WAL modification  5224       2.28 GB    11329      3.96 GB

> There is a slight performance dip in some of the cases for original pgbench.

Is this just one run? Can we see 3 runs please?

Can we investigate the performance dip at c=2?

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>
Cc: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 12:30:27
Message-ID: 006801cdeff7$705f8080$511e8180$@kapila@huawei.com
Lists: pgsql-hackers

On Friday, January 11, 2013 4:28 PM Simon Riggs wrote:
> On 11 January 2013 10:40, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
> > Test results with original pgbench (synccommit off) on the latest patch:
> >
> > -Patch-           -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
> > Head              1459       1.40 GB    2491       1.70 GB
> > WAL modification  1558       1.38 GB    2441       1.59 GB
> >
> > -Patch-           -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
> > Head              5139       2.49 GB    10651      4.72 GB
> > WAL modification  5224       2.28 GB    11329      3.96 GB
>
> > There is a slight performance dip in some of the cases for original pgbench.
>
> Is this just one run? Can we see 3 runs please?

This is the average of 3 runs.

-Patch-               -tps@-c1-  -WAL@-c1-  -tps@-c2-  -WAL@-c2-
Head-1                1648       1.47 GB    2491       1.69 GB
Head-2                1538       1.43 GB    2529       1.72 GB
Head-3                1192       1.31 GB    2453       1.70 GB

AvgHead               1459       1.40 GB    2491       1.70 GB

WAL modification-1    1618       1.40 GB    2351       1.56 GB
WAL modification-2    1623       1.40 GB    2411       1.59 GB
WAL modification-3    1435       1.34 GB    2562       1.61 GB

WAL modification-Avg  1558       1.38 GB    2441       1.59 GB

-Patch-               -tps@-c4-  -WAL@-c4-  -tps@-c8-  -WAL@-c8-
Head-1                5285       2.53 GB    11858      5.43 GB
Head-2                5105       2.47 GB    10724      4.98 GB
Head-3                5029       2.46 GB    9372       3.75 GB

AvgHead               5139       2.49 GB    10651      4.72 GB

WAL modification-1    5117       2.26 GB    12092      4.42 GB
WAL modification-2    5142       2.26 GB    9965       3.48 GB
WAL modification-3    5413       2.33 GB    11930      3.99 GB

WAL modification-Avg  5224       2.28 GB    11329      3.96 GB

> Can we investigate the performance dip at c=2?

Please consider the following points regarding this dip:
1. With synchronous commit = off, there is always some slight variation in the data.
2. The size of WAL is reduced.
3. For small rows (128 bytes), sometimes the performance difference
   created by this algorithm doesn't help much,
   as the size is not reduced significantly and there is an equivalent
   overhead for delta compression.
   We could add a check that this optimization is applied only if the row length
   is greater than some threshold (128 bytes, 200 bytes), but I feel that since
   the performance dip is not much and the WAL reduction gain is also there,
   it should be okay without any check as well.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 12:47:34
Message-ID: CA+U5nMLnGyVwmKfFiYyoKJgLi0hF44wP8_s=rHHX4jQFD2uEEw@mail.gmail.com
Lists: pgsql-hackers

On 11 January 2013 12:30, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> Is this just one run? Can we see 3 runs please?
>
> This average of 3 runs.

The results are so variable it's almost impossible to draw any
conclusions at all. I think if we did harder stats on those we'd get
nothing.

Can you do something to bring that in? Or just do more tests to get a
better view?

> -Patch-                -tps(at)-c1-  -WAL(at)-c1-  -tps(at)-c2-  -WAL(at)-c2-
> Head-1                 1648          1.47 GB       2491          1.69 GB
> Head-2                 1538          1.43 GB       2529          1.72 GB
> Head-3                 1192          1.31 GB       2453          1.70 GB
>
> AvgHead                1459          1.40 GB       2491          1.70 GB
>
> WAL modification-1     1618          1.40 GB       2351          1.56 GB
> WAL modification-2     1623          1.40 GB       2411          1.59 GB
> WAL modification-3     1435          1.34 GB       2562          1.61 GB
>
> WAL modification-Avg   1558          1.38 GB       2441          1.59 GB
>
>
> -Patch-                -tps(at)-c4-  -WAL(at)-c4-  -tps(at)-c8-  -WAL(at)-c8-
> Head-1                 5285          2.53 GB       11858         5.43 GB
> Head-2                 5105          2.47 GB       10724         4.98 GB
> Head-3                 5029          2.46 GB       9372          3.75 GB
>
> AvgHead                5139          2.49 GB       10651         4.72 GB
>
> WAL modification-1     5117          2.26 GB       12092         4.42 GB
> WAL modification-2     5142          2.26 GB       9965          3.48 GB
> WAL modification-3     5413          2.33 GB       11930         3.99 GB
>
> WAL modification-Avg   5224          2.28 GB       11329         3.96 GB
>
>
>> Can we investigate the performance dip at c=2?
> Please consider following points for this dip:
> 1. For synchronous commit = off, there is always slight variation in data.
> 2. The size of WAL is reduced.
> 3. For small rows (128 bytes), sometimes the performance difference
> created by this algorithm doesn't help much,
> as the size is not reduced significantly and there is equivalent
> overhead for delta compression.
> We can put check that this optimization should be applied if row length
> is greater than some
> threshold(128 bytes, 200 bytes), but I feel as performance dip is not
> much and WAL reduction gain is also
> there, so it should be okay without any check as well.
>
> With Regards,
> Amit Kapila.
>

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: amit(dot)kapila(at)huawei(dot)com, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 13:15:14
Message-ID: CA+U5nMKy+b6VpckodQDkt2iHXrx3aM3Cm7eZiS0okkg5M2H3sA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

> * There is a fixed 75% heuristic in the patch.

I'm concerned that we're doing extra work while holding the buffer
locked, which will exacerbate any block contention that exists.

We have a list of the columns that the UPDATE is touching since we use
that to check column permissions for the UPDATE. Which means we should
be able to use that list to check only the columns actually changing
in this UPDATE statement.

That will likely save us some time during the compression check.

Can you look into that please? I don't think it will be much work.
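
For illustration, something along these lines is what I have in mind (a rough
sketch only, not code from the patch; modifiedCols is the per-range-table-entry
column set already collected for the UPDATE permission checks):

	Bitmapset  *modifiedCols;
	int			attnum;

	modifiedCols = rt_fetch(resultRelInfo->ri_RangeTableIndex,
							estate->es_range_table)->modifiedCols;

	for (attnum = 1; attnum <= natts; attnum++)
	{
		/* modifiedCols is keyed by (attno - FirstLowInvalidHeapAttributeNumber) */
		if (!bms_is_member(attnum - FirstLowInvalidHeapAttributeNumber,
						   modifiedCols))
			continue;			/* not assigned by this UPDATE */

		/* compare / delta-encode only this column */
	}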

I've moved this to the next CF. I'm planning to review this one first.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>
Cc: "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 13:49:15
Message-ID: 006c01cdf002$726bf150$5743d3f0$@kapila@huawei.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Friday, January 11, 2013 6:18 PM Simon Riggs wrote:
> On 11 January 2013 12:30, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
> >> Is this just one run? Can we see 3 runs please?
> >
> > This average of 3 runs.
>
> The results are so variable its almost impossible to draw any
> conclusions at all. I think if we did harder stats on those we'd get
> nothing.
>
> Can you do something to bring that in? Or just do more tests to get a
> better view?

To be honest, I have tried this set of 3 readings twice, and there is
similar fluctuation with synchronous commit = off.
What I can do early next week is:
a. run this test 10 times to see the results.
b. run the tests with record length 256 instead of 128.

However, I think my results with synchronous commit = on match Kyotaro-san's.

Please suggest if you have anything else in mind.

The above is for sync mode = off; if you look at the results with sync mode = on,
they are comparatively consistent.
I think with synchronous commit = off there is always some fluctuation in the results.
The sync mode = on results are as below:

-Patch-                -tps(at)-c1-  -WAL(at)-c1-  -tps(at)-c2-  -WAL(at)-c2-
Head-1                 149           0.46 GB       160           0.48 GB
Head-2                 145           0.45 GB       180           0.52 GB
Head-3                 144           0.45 GB       161           0.48 GB

WAL modification-1     142           0.44 GB       161           0.48 GB
WAL modification-2     146           1.45 GB       162           0.48 GB
WAL modification-3     144           1.44 GB       175           0.51 GB

-Patch-                -tps(at)-c4-  -WAL(at)-c4-  -tps(at)-c8-  -WAL(at)-c8-
Head-1                 325           0.77 GB       602           1.03 GB
Head-2                 328           0.77 GB       606           1.03 GB
Head-3                 323           0.77 GB       603           1.03 GB

WAL modification-1     324           0.76 GB       604           1.01 GB
WAL modification-2     322           0.76 GB       604           1.01 GB
WAL modification-3     317           0.75 GB       604           1.01 GB
> >
> >
> >> Can we investigate the performance dip at c=2?
> > Please consider following points for this dip:
> > 1. For synchronous commit = off, there is always slight variation
> in data.
> > 2. The size of WAL is reduced.
> > 3. For small rows (128 bytes), sometimes the performance difference
> > created by this algorithm doesn't help much,
> > as the size is not reduced significantly and there is equivalent
> > overhead for delta compression.
> > We can put check that this optimization should be applied if row
> length
> > is greater than some
> > threshold(128 bytes, 200 bytes), but I feel as performance dip
> is not
> > much and WAL reduction gain is also
> > there, so it should be okay without any check as well.
> >
> > With Regards,
> > Amit Kapila.
> >

With Regards,
Amit Kapila.


From: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
To: "'Simon Riggs'" <simon(at)2ndQuadrant(dot)com>, "'Kyotaro HORIGUCHI'" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: <hlinnakangas(at)vmware(dot)com>, <noah(at)leadboat(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 14:24:38
Message-ID: 007001cdf007$6415a5b0$2c40f110$@kapila@huawei.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Friday, January 11, 2013 6:45 PM Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>
> > * There is a fixed 75% heuristic in the patch.
>
> I'm concerned that we're doing extra work while holding the buffer
> locked, which will exacerbate any block contention that exists.
>
> We have a list of the columns that the UPDATE is touching since we use
> that to check column permissions for the UPDATE. Which means we should
> be able to use that list to check only the columns actually changing
> in this UPDATE statement.
>
> That will likely save us some time during the compression check.
>
> Can you look into that please? I don't think it will be much work.

IIUC, I did it that way in the initial version of the patch, i.e. the encoding
was done only for the modified columns.
A reference to my initial patch is below:

modifiedCols = (rt_fetch(resultRelInfo->ri_RangeTableIndex,
                         estate->es_range_table)->modifiedCols);

http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51(at)szxeml509-mbs

1. However, as Heikki has pointed out, it has problems similar to those in the HOT
implementation, and that is the reason we do a memcmp for HOT.
2. Also, we found in initial readings that it makes no performance difference
compared to the current approach.

> I've moved this to the next CF. I'm planning to review this one first.

Thank you.

With Regards,
Amit Kapila.


From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, amit(dot)kapila(at)huawei(dot)com, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 14:29:49
Message-ID: 20130111142949.GA4208@alvh.no-ip.org
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>
> > * There is a fixed 75% heuristic in the patch.
>
> I'm concerned that we're doing extra work while holding the buffer
> locked, which will exacerbate any block contention that exists.
>
> We have a list of the columns that the UPDATE is touching since we use
> that to check column permissions for the UPDATE. Which means we should
> be able to use that list to check only the columns actually changing
> in this UPDATE statement.

But that doesn't include columns changed by triggers, AFAIR, so you
could only use that if there weren't any triggers.

I was also worried about the high variance in the results. Those
averages look rather meaningless. Which would be okay, I think, because
it'd mean that performance-wise the patch is a wash, but it is still
achieving a lower WAL volume, which is good.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, amit(dot)kapila(at)huawei(dot)com, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 14:34:33
Message-ID: CA+U5nMKVMJk=XbSOi34vTr_HYiLs+jkVGPjj2VHsv4Bp8qnB9g@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 14:29, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:

> But that doesn't include columns changed by triggers, AFAIR, so you
> could only use that if there weren't any triggers.

True, well spotted

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, hlinnakangas(at)vmware(dot)com, noah(at)leadboat(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 15:57:53
Message-ID: CA+U5nML4hzVvO0xtR_B6wUknY7TMz7OhQmDtyw0qqGbOiNyJhQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 14:24, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51(at)szxeml509-mbs
>
> 1. However Heikki has pointed, it has some problems similar to for HOT
> implementation and that is the reason we have done memcmp for HOT.
> 2. Also we have found in initial readings that this doesn't have any
> performance difference as compare to current Approach.

OK, forget that idea.

>> I've moved this to the next CF. I'm planning to review this one first.
>
> Thank you.

Just reviewing the patch now, making more sense with comments added.

In heap_delta_encode() do we store which columns have changed? Do we
store the whole new column value?

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 17:08:14
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEAEF91@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


On Friday, January 11, 2013 9:27 PM Simon Riggs wrote:
On 11 January 2013 14:24, Amit Kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C382852DE51(at)szxeml509-mbs
>
>> 1. However Heikki has pointed, it has some problems similar to for HOT
>> implementation and that is the reason we have done memcmp for HOT.
>> 2. Also we have found in initial readings that this doesn't have any
>> performance difference as compare to current Approach.

>OK, forget that idea.

>>> I've moved this to the next CF. I'm planning to review this one first.
>
>> Thank you.

> Just reviewing the patch now, making more sense with comments added.

>In heap_delta_encode() do we store which columns have changed?

Not the attribute bumberwise, but offsetwise it is stored.

> Do we store the whole new column value?

Yes, please refer else part of code

+ else
+ {
+ data_len = new_tup_off - change_off;
+ if ((bp + (2 * data_len)) - bstart >= result_max)
+ return false;
+
+ /* Copy the modified column data to the output buffer if present */
+ pglz_out_add(ctrlp, ctrlb, ctrl, bp, data_len, dp);
+

With Regards,
Amit Kapila.


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 17:30:16
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEAFFB1@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
Simon Riggs wrote:
> On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>

> I was also worried about the high variance in the results. Those
> averages look rather meaningless. Which would be okay, I think, because
> it'd mean that performance-wise the patch is a wash,

For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
Please refer to the performance results from me and Kyotaro-san in the links below:

http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyotaro@lab.ntt.co.jp

In fact, I believe that for all tuples with length between 200 and 1800 bytes, with around 15~20% of the values changed, there will be both a performance gain and a WAL reduction.
The reason for keeping the logic the same for smaller tuples (<= 128 bytes) as well is that there is not much performance difference, but the WAL reduction gain is still visible.

> but it is still achieving a lower WAL volume, which is good.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 17:39:24
Message-ID: CA+U5nMLna3CfqgEg=QvD-LO0Ct8BcxvdWMs1X7sVRPWwW+6MPg@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 17:08, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> Just reviewing the patch now, making more sense with comments added.
>
>>In heap_delta_encode() do we store which columns have changed?
>
> Not the attribute bumberwise, but offsetwise it is stored.

(Does that mean "numberwise"??)

Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?

>> Do we store the whole new column value?
>
> Yes, please refer else part of code
>
> + else
> + {
> + data_len = new_tup_off - change_off;
> + if ((bp + (2 * data_len)) - bstart >= result_max)
> + return false;
> +
> + /* Copy the modified column data to the output buffer if present */
> + pglz_out_add(ctrlp, ctrlb, ctrl, bp, data_len, dp);
> +
>

"modified column data" could mean either 1) (modified column) data
i.e. the data for the modified column, or 2) modified (column data)
i.e. the modified data in the column. I read that as (2) and didn't
look at the code. ;-)

Happy now that I know it's (1)

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 17:42:09
Message-ID: CA+U5nML6Zx6=fv5wRxFHYsfAm7YSX2TT-Oqc5wya-_hSRp89-A@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 17:30, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
> Simon Riggs wrote:
>> On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>
>
>> I was also worried about the high variance in the results. Those
>> averages look rather meaningless. Which would be okay, I think, because
>> it'd mean that performance-wise the patch is a wash,
>
> For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
> Please refer performance results by me and Kyotaro-san in below links:
>
> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
> http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyotaro@lab.ntt.co.jp

AFAICS your tests are badly variable, but as Alvaro says, they aren't
accurate enough to tell there's a regression.

I'll assume not and carry on.

(BTW the rejection of the null bitmap patch because of a performance
regression may also need to be reconsidered).

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 18:11:11
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEAFFE7@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


On Friday, January 11, 2013 11:09 PM Simon Riggs wrote:
On 11 January 2013 17:08, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>>> Just reviewing the patch now, making more sense with comments added.
>
>>>In heap_delta_encode() do we store which columns have changed?
>
>> Not the attribute bumberwise, but offsetwise it is stored.

> (Does that mean "numberwise"??)
Yes.

> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
As per the current algorithm we can't, as it is based on offsets.
What I mean is that the basic idea for reconstructing the tuple during recovery
is to copy data from the old tuple offset-wise (the offsets are stored in the encoded tuple) and to take the new data (modified column data)
directly from the encoded tuple, so we don't need exact column numbers. (A toy sketch of that decode step is below.)
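
For illustration only, here is a toy version of that reconstruction (this is not
the patch's encoding format or code; the entry layout below is made up):

#include <stdbool.h>
#include <string.h>

typedef struct DeltaEntry
{
	bool		from_history;	/* true: copy bytes from the old tuple */
	size_t		off;			/* offset into the old tuple when from_history */
	size_t		len;			/* number of bytes to copy */
} DeltaEntry;

/*
 * Rebuild the new tuple data: entries flagged as "history" copy (off, len)
 * ranges from the old tuple; all other entries take new column data verbatim,
 * in order, from the encoded stream.
 */
static size_t
toy_delta_decode(const char *old_data,
				 const DeltaEntry *entries, int nentries,
				 const char *new_data, char *dest)
{
	size_t		dlen = 0;

	for (int i = 0; i < nentries; i++)
	{
		if (entries[i].from_history)
			memcpy(dest + dlen, old_data + entries[i].off, entries[i].len);
		else
		{
			memcpy(dest + dlen, new_data, entries[i].len);
			new_data += entries[i].len;
		}
		dlen += entries[i].len;
	}
	return dlen;
}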

With Regards,
Amit Kapila.


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 18:15:10
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEAFFFC@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


On Friday, January 11, 2013 11:12 PM Simon Riggs wrote:
On 11 January 2013 17:30, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Friday, January 11, 2013 7:59 PM Alvaro Herrera wrote:
> Simon Riggs wrote:
>> On 28 December 2012 10:21, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>
>
>>> I was also worried about the high variance in the results. Those
>>> averages look rather meaningless. Which would be okay, I think, because
>>> it'd mean that performance-wise the patch is a wash,
>
>> For larger tuple sizes (>1000 && < 1800), the performance gain will be good.
>> Please refer performance results by me and Kyotaro-san in below links:
>
>> http://archives.postgresql.org/message-id/6C0B27F7206C9E4CA54AE035729E9C383BEAAE32@szxeml509-mbx
>> http://archives.postgresql.org/message-id/20121228.170748.90887322.horiguchi.kyotaro@lab.ntt.co.jp

>AFAICS your tests are badly variable, but as Alvaro says, they aren't
>accurate enough to tell there's a regression.

>I'll assume not and carry on.

> (BTW the rejection of the null bitmap patch because of a performance
> regression may also need to be reconsidered).

I can post detailed numbers during next commit fest.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-11 18:53:47
Message-ID: CA+U5nM+RTojWTwG0CeWJ16=-GG9wUdJ+BaRLJwxmn7ZrnFCCZQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 18:11, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
> As per current algorithm, we can't as it is based on offsets.
> What I mean to say is that the basic idea to reconstruct tuple during recovery
> is copy data from old tuple offset-wise (offsets stored in encoded tuple) and use new data (modified column data)
> from encoded tuple directly. So we don't need exact column numbers.

Another patch is going through next CF related to reassembling changes
from WAL records.

To do that efficiently, we would want to store a bitmap showing which
columns had changed in each update. Would that be an easy addition, or
is that blocked by some aspect of the current design?

The idea would be that we could re-construct an UPDATE statement that
would perform exactly the same change, yet without needing to refer to
a base tuple.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "hlinnakangas(at)vmware(dot)com" <hlinnakangas(at)vmware(dot)com>, "noah(at)leadboat(dot)com" <noah(at)leadboat(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-12 03:50:10
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEB00D5@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote:
On 11 January 2013 18:11, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:

>>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
>> As per current algorithm, we can't as it is based on offsets.
>> What I mean to say is that the basic idea to reconstruct tuple during recovery
>> is copy data from old tuple offset-wise (offsets stored in encoded tuple) and use new data (modified column data)
>> from encoded tuple directly. So we don't need exact column numbers.

> Another patch is going through next CF related to reassembling changes
> from WAL records.

> To do that efficiently, we would want to store a bitmap showing which
> columns had changed in each update. Would that be an easy addition, or
> is that blocked by some aspect of the current design?

I don't think it should be a problem: it can go through the current way of constructing the WAL tuple, as
we already do in this patch when the old and new buffers are different. This differentiation is done in
log_heap_update.

IMO, for now we can skip this optimization for the bitmap-storing patch (the way we already do when
the updated tuple is not on the same page), and later we can evaluate whether the optimization can
also be done for the feature of that patch.

> The idea would be that we could re-construct an UPDATE statement that
> would perform exactly the same change, yet without needing to refer to
> a base tuple.

I understand that such functionality would be needed by logical replication.

With Regards,
Amit Kapila.


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-12 10:15:43
Message-ID: CA+U5nMLk7tzsqAKRpF_62qs=RtkvftF0TcY+rXKVGShOdM=hWw@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 12 January 2013 03:50, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote:
> On 11 January 2013 18:11, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
>>>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
>>> As per current algorithm, we can't as it is based on offsets.
>>> What I mean to say is that the basic idea to reconstruct tuple during recovery
>>> is copy data from old tuple offset-wise (offsets stored in encoded tuple) and use new data (modified column data)
>>> from encoded tuple directly. So we don't need exact column numbers.
>
>> Another patch is going through next CF related to reassembling changes
>> from WAL records.
>
>> To do that efficiently, we would want to store a bitmap showing which
>> columns had changed in each update. Would that be an easy addition, or
>> is that blocked by some aspect of the current design?
>
> I don't think it should be a problem, as it can go in current way of WAL tuple construction as
> we do in this patch when old and new buf are different. This differentiation is done in
> log_heap_update.
>
> IMO, for now we can avoid this optimization (way we have done incase updated tuple is not on same page)
> for the bitmap storing patch and later we can evaluate if we can do this optimization for
> the feature of that patch.

Yes, we can simply disable this feature. But that is just bad planning
and we should give some thought to having new features play nicely
together.

I would like to work out how to modify this so it can work with wal
decoding enabled. I know we can do this, I want to look at how,
because we know we're going to do it.

>> The idea would be that we could re-construct an UPDATE statement that
>> would perform exactly the same change, yet without needing to refer to
>> a base tuple.
>
> I understood, that such a functionality would be needed by logical replication.

Yes, though the features being added are to allow decoding of WAL for
any purpose.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Amit Kapila <amit(dot)kapila(at)huawei(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-12 11:06:05
Message-ID: CA+U5nMLOZXTkPAhJnZv-WdwnQiY6sM_C4u3QAPBafT0BpDfxqQ@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On 11 January 2013 15:57, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

>>> I've moved this to the next CF. I'm planning to review this one first.
>>
>> Thank you.
>
> Just reviewing the patch now, making more sense with comments added.

Making more sense, but not yet making complete sense.

I'd like you to revisit the patch comments since some of them are
completely unreadable.

Examples

"Frames the original tuple which needs to be inserted into the heap by
decoding the WAL tuplewith the help of old Heap tuple."
"The delta tuples for update WAL is to eliminate copying the entire
the new record to WAL for the update operation."

I don't mind rewording the odd line here and there, that's just normal
editing, but this needs extensive work in terms of number of places
requiring change and the level of change at each place. That's not
easy for me to do when I'm trying to understand the patch in the first
place. My own written English isn't that great, so please read some of
the other comments in other parts of the code so you can see the level
of clarity that's needed in PostgreSQL.

Copying chunks of text from other comments doesn't help much either,
especially when you miss out parts of the explanation. You refer to a
"history tag" but don't define it that well, and don't explain why it
might sometimes be 3 bytes, or what that means. pg_lzcompress doesn't
call it that either, which is confusing. If you use a concept from
elsewhere you should either use the same name, or if you rename it,
rename it in both places.

/*
* Do only the delta encode when the update is going to the same page and
* buffer doesn't need a backup block in case of full-pagewrite is on.
*/
if ((oldbuf == newbuf) && !XLogCheckBufferNeedsBackup(newbuf))

The comment above says nothing. I can see that oldbuf and newbuf must
be the same and the call to XLogCheckBufferNeedsBackup is clear
because the function is already well named.

What I'd expect to see here is a discussion of why this test is being
applied and maybe why it is applied here. Such an important test
deserves a long discussion, perhaps 10-20 lines of comment.
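
For illustration, the kind of expanded comment that might go there (the wording
and the rationale below are my assumptions, to be confirmed against the patch):

	/*
	 * Attempt delta encoding only when the old and new tuple versions are on
	 * the same page: recovery rebuilds the new tuple from the old version it
	 * finds on the page the record is applied to, so the old tuple must be
	 * guaranteed to be there.  Also skip it when the buffer needs a backup
	 * block, because the full page is then written to WAL anyway and the
	 * per-tuple record data is not applied at redo, so a smaller tuple
	 * payload would buy nothing.
	 */
	if ((oldbuf == newbuf) && !XLogCheckBufferNeedsBackup(newbuf))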

Thanks

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-12 14:07:26
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEB016D@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers

On Saturday, January 12, 2013 3:45 PM Simon Riggs wrote:
On 12 January 2013 03:50, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
> On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote:
> On 11 January 2013 18:11, Amit kapila <amit(dot)kapila(at)huawei(dot)com> wrote:
>
>>>>> Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns?
>>>> As per current algorithm, we can't as it is based on offsets.
>>>> What I mean to say is that the basic idea to reconstruct tuple during recovery
>>>> is copy data from old tuple offset-wise (offsets stored in encoded tuple) and use new data (modified column data)
>>>> from encoded tuple directly. So we don't need exact column numbers.
>
>>> Another patch is going through next CF related to reassembling changes
>>> from WAL records.
>
>>> To do that efficiently, we would want to store a bitmap showing which
>>> columns had changed in each update. Would that be an easy addition, or
>>> is that blocked by some aspect of the current design?
>
>> I don't think it should be a problem, as it can go in current way of WAL tuple construction as
>> we do in this patch when old and new buf are different. This differentiation is done in
>> log_heap_update.
>
>> IMO, for now we can avoid this optimization (way we have done incase updated tuple is not on same page)
>> for the bitmap storing patch and later we can evaluate if we can do this optimization for
>> the feature of that patch.

> Yes, we can simply disable this feature. But that is just bad planning
> and we should give some thought to having new features play nicely
> together.

> I would like to work out how to modify this so it can work with wal
> decoding enabled. I know we can do this, I want to look at how,
> because we know we're going to do it.

I am sure this can be done, as for WAL decoding we mainly need the new values and the column numbers.
So if we include a bitmap in the WAL tuple and teach the WAL decoding method how to decode this new
WAL tuple format, it can be done. (A rough sketch is below.)
However, it will need changes to the algorithm for both patches, and that is a risk for one or both of them.
I am open to discussing how both can work together, but IMHO at this moment (as this will be the last CF)
it would be a little risky.
If there is some way to address this scenario with minor modifications, I will be happy to see both
working together.
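
For illustration, a rough sketch of the kind of change I mean (an assumption only,
not code from either patch; heap_tuple_attr_equals is the existing per-attribute
comparison used for the HOT checks, and how the bitmap would be serialized into
the WAL record is left out):

	Bitmapset  *changed_cols = NULL;
	int			attnum;
	int			natts = HeapTupleHeaderGetNatts(newtup->t_data);

	for (attnum = 1; attnum <= natts; attnum++)
	{
		/* same per-attribute comparison already done for the HOT checks */
		if (!heap_tuple_attr_equals(RelationGetDescr(relation), attnum,
									oldtup, newtup))
			changed_cols = bms_add_member(changed_cols, attnum);
	}

	/*
	 * changed_cols would then be stored in the WAL record so the decoding
	 * patch can read it back, independently of the offset-based delta.
	 */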

With Regards,
Amit Kapila.


From: Amit kapila <amit(dot)kapila(at)huawei(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date: 2013-01-12 14:10:45
Message-ID: 6C0B27F7206C9E4CA54AE035729E9C383BEB017E@szxeml509-mbs
Views: Raw Message | Whole Thread | Download mbox | Resend email
Lists: pgsql-hackers


On Saturday, January 12, 2013 4:36 PM Simon Riggs wrote:
On 11 January 2013 15:57, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:

>>>> I've moved this to the next CF. I'm planning to review this one first.
>>
>>> Thank you.
>
>> Just reviewing the patch now, making more sense with comments added.

> Making more sense, but not yet making complete sense.

> I'd like you to revisit the patch comments since some of them are
> completely unreadable.

I will once again review all the comments and make them more meaningful.

With Regards,
Amit Kapila.