Drop big index
From: Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz>
To: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Drop big index
Date: 2012-02-15 16:12:26
Message-ID: 4F3BD96A.9050406@seznam.cz
Lists: pgsql-general

Hi,

I need to drop some b-tree indexes because they are not used anymore.
The sizes of the indexes vary between 700 MB and 7 GB. I tried a plain
DROP INDEX..., but the query ran for so long and blocked the table that
I had to interrupt it. Is there any way to drop large indexes in a
non-blocking or /faster/ way?

Regards,
Vojta R.
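
(A minimal illustration of the situation described above; the index
name is hypothetical. Index sizes like those quoted can be checked with
pg_relation_size, and the plain DROP INDEX is the statement that
blocked, since it needs an ACCESS EXCLUSIVE lock on the parent table.)

    -- Check the size of the index in question (hypothetical name):
    SELECT pg_size_pretty(pg_relation_size('some_unused_idx'));

    -- The plain drop; it waits for an ACCESS EXCLUSIVE lock on the table:
    DROP INDEX some_unused_idx;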


From: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: Vojtěch Rylko *EXTERN* <vojta(dot)rylko(at)seznam(dot)cz>, "Postgres General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Drop big index
Date: 2012-02-16 08:38:17
Message-ID: D960CB61B694CF459DCFB4B0128514C2077EBFF3@exadv11.host.magwien.gv.at
Lists: pgsql-general

Vojtech Rylko wrote:
> I need to drop some b-tree indexes because they are not used anymore.
> The sizes of the indexes vary between 700 MB and 7 GB. I tried a plain
> DROP INDEX..., but the query ran for so long and blocked the table that
> I had to interrupt it. Is there any way to drop large indexes in a
> non-blocking or /faster/ way?

Unfortunately not (yet).
PostgreSQL 9.2 will have DROP INDEX CONCURRENTLY.

Yours,
Laurenz Albe
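
(For reference, a sketch of the syntax Laurenz mentions, as it appears
in PostgreSQL 9.2; the index name is hypothetical. The CONCURRENTLY
form avoids blocking concurrent reads and writes on the table, at the
cost of some restrictions, e.g. it cannot be run inside a transaction
block.)

    -- PostgreSQL 9.2 and later only:
    DROP INDEX CONCURRENTLY some_unused_idx;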


From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Drop big index
Date: 2012-02-16 08:53:59
Message-ID: CABRT9RDu3uK0BMR7vLWnmH2m4mn4aQfJvPCeMGfhPU4SnUpjUA@mail.gmail.com
Lists: pgsql-general

2012/2/15 Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz>:
> the query ran for so long and blocked the table that I had to interrupt it.
> Is there any way to drop large indexes in a non-blocking or /faster/ way?

Usually the problem is not the size of the index, but some other
running transactions that hold a read lock on the table and prevent
the DROP INDEX from getting an exclusive lock. Once the exclusive
lock is granted, the drop itself is usually very fast.

Run 'select * from pg_stat_activity' and see if there are any "<IDLE>
in transaction" connections. It's normal to have these for a second or
two, but longer idle transactions usually indicate an application bug
-- it started a transaction but "forgot" to roll back or commit. These
are problematic for exactly this reason: locks can't be released until
the transaction finishes.

Regards,
Marti
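
(A sketch of the check Marti describes, using the pre-9.2
pg_stat_activity layout that applies to the servers in this thread;
on 9.2 and later the relevant columns are "state" and "query" rather
than "current_query".)

    -- List sessions that are sitting idle inside an open transaction:
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY query_start;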


From: Andreas Kretschmer <akretschmer(at)spamfence(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Drop big index
Date: 2012-02-16 12:49:45
Message-ID: 20120216124945.GA24392@tux
Lists: pgsql-general

Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at> wrote:

> Vojtech Rylko wrote:
> > I need to drop some b-tree indexes because they are not used anymore.
> > The sizes of the indexes vary between 700 MB and 7 GB. I tried a plain
> > DROP INDEX..., but the query ran for so long and blocked the table that
> > I had to interrupt it. Is there any way to drop large indexes in a
> > non-blocking or /faster/ way?
>
> Unfortunately not (yet).
> PostgreSQL 9.2 will have DROP INDEX CONCURRENTLY.

Really? Great!

I have a use case for that: a partial index that is re-created daily
(the index is based on the date and covers the last N days), and
sometimes we have trouble with this.

Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly." (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°
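
(A sketch of the rolling partial index Andreas describes; table,
column, and index names, and the date literal, are hypothetical. The
index predicate has to use a constant date, which is why the index is
re-created every day, and dropping the previous day's index is where
DROP INDEX CONCURRENTLY would help.)

    -- Re-create today's partial index covering the last N days:
    CREATE INDEX CONCURRENTLY events_last_7d_20120216
        ON events (created_at)
        WHERE created_at >= DATE '2012-02-09';

    -- Then the previous day's index has to be dropped:
    DROP INDEX events_last_7d_20120215;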


From: Vojtěch Rylko <vojta(dot)rylko(at)seznam(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Drop big index
Date: 2012-02-16 14:56:13
Message-ID: 4F3D190D.9030103@seznam.cz
Lists: pgsql-general

On 16.2.2012 9:53, Marti Raudsepp wrote:
> 2012/2/15 Vojtěch Rylko<vojta(dot)rylko(at)seznam(dot)cz>:
>> the query ran for so long and blocked the table that I had to interrupt it.
>> Is there any way to drop large indexes in a non-blocking or /faster/ way?
> Usually the problem is not the size of the index, but some other
> running transactions that hold a read lock on the table and prevent
> the DROP INDEX from getting an exclusive lock. Once the exclusive
> lock is granted, the drop itself is usually very fast.
>
> Run 'select * from pg_stat_activity' and see if there are any "<IDLE>
> in transaction" connections. It's normal to have these for a second or
> two, but longer idle transactions usually indicate an application bug
> -- it started a transaction but "forgot" to roll back or commit. These
> are problematic for exactly this reason: locks can't be released until
> the transaction finishes.
>
> Regards,
> Marti
>

Thanks! It was caused by an "<IDLE> in transaction" connection. My
nightmare is solved. Dropping the 7 GB index took 2353 ms.

Regards,
Vojtěch R.