Re: FUNC_MAX_ARGS benchmarks

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Rod Taylor <rbt(at)zort(dot)ca>
Cc: Neil Conway <nconway(at)klamath(dot)dyndns(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: FUNC_MAX_ARGS benchmarks
Date: 2002-08-02 14:39:47
Message-ID: 120.1028299187@sss.pgh.pa.us
Lists: pgsql-hackers

Rod Taylor <rbt(at)zort(dot)ca> writes:
> Perhaps I'm not remembering correctly, but don't SQL functions still
> have an abnormally high cost of execution compared to plpgsql?

> Want to try the same thing with a plpgsql function?

Actually, plpgsql is pretty expensive too. The thing to benchmark is
applications of plain old built-in C functions and operators.

Also, there are two components that I'd be worried about: one is the
parser's cost of operator/function lookup, and the other is runtime
overhead. Runtime overhead is most likely concentrated in the fmgr.c
interface functions, which tend to do MemSets to zero out function
call records. I've had a todo item to eliminate the memset in favor
of zeroing only what needs to be zeroed, at least in the one- and two-
argument cases, which are the most heavily trodden code paths. This will
become significantly more important if FUNC_MAX_ARGS increases.
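
To put a rough number on the memset cost, here is a minimal standalone
sketch (not PostgreSQL source: the struct is a simplified stand-in for
fmgr.h's FunctionCallInfoData, and the loop count and field names are
made up for illustration). It compares zeroing the whole record against
setting only the fields a one-argument call actually uses:

/*
 * Standalone micro-benchmark, NOT PostgreSQL code.  Compile without
 * optimization (cc -O0 sketch.c && ./a.out), or the compiler may
 * delete the dead stores being measured.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define FUNC_MAX_ARGS 32		/* try 16 vs. 32 to see the effect */

typedef unsigned long Datum;

typedef struct
{
	void	   *flinfo;
	void	   *context;
	void	   *resultinfo;
	int			isnull;
	short		nargs;
	Datum		arg[FUNC_MAX_ARGS];
	char		argnull[FUNC_MAX_ARGS];
} FcInfo;						/* simplified FunctionCallInfoData stand-in */

int
main(void)
{
	const long	n = 20000000;
	volatile Datum sink = 0;	/* keep the loops from being optimized out */
	clock_t		t0;
	long		i;

	t0 = clock();
	for (i = 0; i < n; i++)
	{
		FcInfo		fcinfo;

		/* current fmgr.c style: zeroes all of arg[] and argnull[] too */
		memset(&fcinfo, 0, sizeof(fcinfo));
		fcinfo.nargs = 1;
		fcinfo.arg[0] = (Datum) i;
		sink += fcinfo.arg[0];
	}
	printf("memset whole struct: %.2fs\n",
		   (double) (clock() - t0) / CLOCKS_PER_SEC);

	t0 = clock();
	for (i = 0; i < n; i++)
	{
		FcInfo		fcinfo;

		/* zero only what a one-argument call needs */
		fcinfo.flinfo = NULL;
		fcinfo.context = NULL;
		fcinfo.resultinfo = NULL;
		fcinfo.isnull = 0;
		fcinfo.nargs = 1;
		fcinfo.arg[0] = (Datum) i;
		fcinfo.argnull[0] = 0;
		sink += fcinfo.arg[0];
	}
	printf("zero needed fields:  %.2fs\n",
		   (double) (clock() - t0) / CLOCKS_PER_SEC);

	return (int) (sink & 1);
}

The gap between the two loops should widen as FUNC_MAX_ARGS grows,
since the memset cost scales with the array sizes while the per-field
version stays constant.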

regards, tom lane
