Position of rules definitions

Liviu Nicoara nicoara at roguewave.com
Thu May 3 14:07:51 UTC 2007

Robert Elz wrote:
>     Date:        Wed, 02 May 2007 18:12:16 -0400
>     From:        Liviu Nicoara <nicoara at roguewave.com>
>     Message-ID:  <46390CC0.5020605 at roguewave.com>
>   | I find it more logical to have the 
>   | definition of an entity before its first use.
> It isn't really - in the computer world we've gotten used to it that
> way because of the demands of early resource scarce parsers (and these days,
> lazy ones, plus those obeying established rules of course).
> In other fields, it's normal to see a reference to an object before you
> seek its definition, or you never have any idea why this object is
> being defined, and hence have no idea why it exists or why you'd
> bother caring about its definition - after you've seen a use for the
> object, you know why you need to know what it is.   Definition after
> use is more logical, it's just harder for the parser to cope with.

I must admit I am unfamiliar with such fields and, while they may be 
attractive to others, I prefer to think in the "C/C++" frame of mind.

But so far it all seems to me a matter of personal preference, with no 
strong arguments on either side. At least I haven't offered any, because 
I assumed the point would be obvious.

Allow me to offer a performance argument, which will probably show where I 
am heading; a two-step parser would have to deal with:

1. Failing to resolve an undefined reference, at a cost of O(log N) per 
search in a set of N rules.
2. Building a set of unresolved references, at a total cost of O(M) for 
M unresolved references if the set is unordered (plus a constant per 
reference if adornment of the dangling reference is needed, etc.).
3. Searching the [smaller] set of unresolved references and, for each of 
the M entries, performing a search at a minimum cost of O(log N) in the 
set of N defined rules.

(The above assumes fairly basic data structures and algorithms; further 
optimization may be possible.)

A one-step parser would only incur the penalty of step 1 because it 
would always find the [already] defined rule.
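The two approaches above can be sketched roughly as follows. This is a hypothetical illustration, not the actual tz parser: `lines` is a stand-in for a pre-tokenized rule file, with each entry either defining a rule name or using one, and a sorted list plays the role of the set of N defined rules.

```python
import bisect

def _found(defined, name):
    """O(log N) membership test in a sorted list of rule names."""
    i = bisect.bisect_left(defined, name)
    return i < len(defined) and defined[i] == name

def parse_two_pass(lines):
    """Two-step parser: definitions may appear after their first use.

    Pass 1 records definitions and collects unresolved references
    (steps 1 and 2); pass 2 re-checks the leftovers against the
    completed rule set (step 3).
    """
    defined = []     # sorted set of rule names
    unresolved = []  # O(1) append per dangling reference, O(M) total
    for kind, name in lines:
        if kind == "define":
            bisect.insort(defined, name)   # keep the set ordered
        elif not _found(defined, name):    # step 1: lookup failed
            unresolved.append(name)        # step 2: remember it
    for name in unresolved:                # step 3: one more O(log N)
        if not _found(defined, name):      # search per unresolved ref
            raise ValueError("undefined rule: " + name)

def parse_one_pass(lines):
    """One-step parser: every rule must be defined before its first use,
    so each reference is resolved by a single O(log N) search."""
    defined = []
    for kind, name in lines:
        if kind == "define":
            bisect.insort(defined, name)
        elif not _found(defined, name):
            raise ValueError("undefined rule: " + name)
```

With definition-before-use input, `parse_one_pass` settles every reference on its first lookup; `parse_two_pass` must additionally carry the unresolved set and scan it again once all definitions are in.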

All of the above may seem academic, and indeed the sets are small enough 
that we could ignore, e.g., the difference between linear and binary 
searches, but it still does not seem right. If all that is at stake is 
convenience, then I believe the issue is worth a second look regardless 
of the capabilities of existing parsers.

I hope I have not offended anyone by insisting on this relatively 
unimportant issue.
