Re: Crash dumps

From: Radosław Smogura <rsmogura(at)softperience(dot)eu>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Crash dumps
Date: 2011-07-04 11:03:38
Message-ID: a6237ee4eb53019dac510df00954ae55@mail.softperience.eu
Lists: pgsql-hackers

On Mon, 04 Jul 2011 12:58:46 +0800, Craig Ringer wrote:
> On 15/06/2011 2:37 AM, Radosław Smogura wrote:
>> Hello,
>>
>> Because I work a little on the streaming protocol, I get crashes from
>> time to time. I want to ask whether you would want crash reporting
>> (this is one of the minor by-products of my mmap experiments). What I
>> have there includes the mmapped areas, call stacks, and some other
>> stuff.
>
> Core files already contain all that, don't they? They omit shared
> memory segments by default on most platforms, but should otherwise be
> quite complete.
>
> The usual approach on UNIXes and Linux is to use the built-in OS
> features to generate a core dump of the crashing process, then analyze
> it after the fact. That way the crash is over as fast as possible and
> you can get services back up and running before spending the time,
> CPU, and I/O required to analyze the core dump.
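Craig's point can be sketched in code; a minimal example (assuming Linux/glibc; the function name is mine) of what a process has to do so the OS will actually emit a useful core file:

```c
/* Minimal sketch, assuming Linux/glibc: core dumps must be enabled via
 * the resource limit, and shared memory segments are omitted unless the
 * coredump_filter bits say otherwise (see core(5)).  The function name
 * enable_core_dumps() is illustrative, not a PostgreSQL API. */
#include <stdio.h>
#include <sys/resource.h>

int enable_core_dumps(void)
{
    struct rlimit rl;

    /* Raise the soft core-file size limit to the hard limit. */
    if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        return -1;

    /* Ask the kernel to also dump shared mappings; by default most of
     * them are skipped.  Best effort: /proc may not be writable. */
    FILE *f = fopen("/proc/self/coredump_filter", "w");
    if (f != NULL)
    {
        fputs("0x3f\n", f);
        fclose(f);
    }
    return 0;
}
```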

Actually, what I was thinking about was to add dumping of GUCs, etc.
The list of mappings dates from when I tried to mmap PostgreSQL: because
of the many errors, which sometimes occurred in unexpected places, I
needed something that was useful to me and easy to analyse (I could
simply find a pointer and then check which region had failed). The idea
of evolving this further came later.

I think the report should look like:
This is a crash report of PostgreSQL database, generated on ...
Here is the list of GUC variables:
Here is the list of files:
Here is the backtrace:
Here is the detailed backtrace:
Here is the list of memory mappings (so you can see which libraries are linked in)
Here is your free memory
Here is your disk usage
Here is your custom addition
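The skeleton above could be sketched roughly like this (glibc assumed; the GUC and file sections are placeholders here, since filling them in would need PostgreSQL's internal state, but the backtrace part is real):

```c
/* Sketch of the crash-report skeleton; write_crash_report() is a
 * hypothetical name.  Only the backtrace section is functional here. */
#include <execinfo.h>
#include <stdio.h>

int write_crash_report(int fd)
{
    void *frames[64];
    int   nframes = backtrace(frames, 64);

    dprintf(fd, "This is a crash report of PostgreSQL database\n");
    dprintf(fd, "Here is the list of GUC variables: (placeholder)\n");
    dprintf(fd, "Here is the backtrace (%d frames):\n", nframes);

    /* backtrace_symbols_fd() writes straight to the fd and is safe to
     * call from a signal handler, unlike the printf family. */
    backtrace_symbols_fd(frames, nframes, fd);
    return nframes;
}
```

(Compiling with -rdynamic makes the symbol names in the output far more useful.)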

>> This report generation works on Linux with gdb, but there is a
>> pluggable architecture which hooks into the segfault handler.
>
> Which process does the debugging? Does the crashing process fork() a
> copy of gdb to debug itself?
>
> One thing I've been interested in is giving the postmaster (or more
> likely a helper for the postmaster) the ability to handle "backend is
> crashing" messages, attach a debugger to the crashing backend and
> generate a dump and/or backtrace. This might be workable in cases
> where in-process debugging can't be done due to a smashed stack, full
> heap causing malloc() failure, etc.

Currently I do everything in the segfault handler (no fork), but I like
the idea of forking from the handler; it may resolve some problems.

Regards,
Radosław Smogura
