Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]

From: Euler Taveira <euler(at)timbira(dot)com(dot)br>
To: Dilip kumar <dilip(dot)kumar(at)huawei(dot)com>
Cc: Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Date: 2014-01-16 14:23:02
Message-ID: 52D7EB46.6030504@timbira.com.br
Lists: pgsql-hackers

On 08-11-2013 06:20, Dilip kumar wrote:
> On 08 November 2013 13:38, Jan Lentfer wrote:
>
>
>> For this use case, would it make sense to queue work (tables) in order of their size, starting with the largest one?
>
>> For the case where you have tables of varying size, this would reduce overall processing time, as it prevents large (read: long processing time) tables from being processed in the last step. Processing large tables first, and filling up "processing slots/jobs" with smaller tables one after the other as the slots become free, would save overall execution time.
> Good point, I have made the change and attached the modified patch.
>
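A minimal sketch of the largest-first queuing idea discussed above (this is not the patch itself; the table names, sizes, and helper names are made-up placeholders for illustration):

/*
 * Sketch: sort tables by size, largest first, so that long-running
 * tables are handed to worker slots early instead of being left for
 * the final step.  Names and sizes below are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
    const char *name;
    long        size_kb;        /* e.g. relation size reported by the server */
} TableInfo;

static int
compare_size_desc(const void *a, const void *b)
{
    const TableInfo *ta = (const TableInfo *) a;
    const TableInfo *tb = (const TableInfo *) b;

    if (ta->size_kb < tb->size_kb)
        return 1;
    if (ta->size_kb > tb->size_kb)
        return -1;
    return 0;
}

int
main(void)
{
    TableInfo   tables[] = {
        {"small_a", 100}, {"huge", 900000}, {"medium", 5000}, {"small_b", 50}
    };
    int         ntables = sizeof(tables) / sizeof(tables[0]);
    int         i;

    qsort(tables, ntables, sizeof(TableInfo), compare_size_desc);

    /* This is the order in which tables would be handed to free worker slots. */
    for (i = 0; i < ntables; i++)
        printf("%d: %s (%ld kB)\n", i + 1, tables[i].name, tables[i].size_kb);

    return 0;
}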
You didn't submit it for a CF, did you? Is it too late for this CF?

--
Euler Taveira - Timbira - http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support, and Training
