HP Forums

Full Version: My own infix to prefix approach. What do you think?
Hi all. After using the Casio fx-115ES Plus, Sharp EL-W516X and TI-36X Pro, I noticed a pattern to their two-number functions, like polar/rectangular conversion. The form I noticed was func(a,b). In addition, after studying the two prefix examples in 'Everything You've Always Wanted To Know About RPN...,' I thought my ideas could be expanded.

The way I see it is: if I interpret each one- or two-number operation as func(arg1, arg2...) or func(arg), then, after writing the expression out in this form, I replace the parentheses and commas with spaces for readability and syntax, and the end result is a prefix expression.

For example: (2+3)x(4+5) becomes x(+(2,3),+(4,5)).
Thus, replacing parentheses and commas with spaces gives x + 2 3 + 4 5 as the decomposed result.
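Mechanically, that substitution step can be sketched in a few lines of Python (the function name `to_prefix` is just made up for illustration, not from any library):

```python
def to_prefix(expr):
    """Replace the parentheses and commas of a func(a,b)-style
    expression with spaces, then normalize the whitespace."""
    for ch in "(),":
        expr = expr.replace(ch, " ")
    return " ".join(expr.split())

print(to_prefix("x(+(2,3),+(4,5))"))  # x + 2 3 + 4 5
```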

Do you think my methodology is plausible and valid?
(04-29-2014 12:47 AM)Matt Agajanian Wrote: [ -> ]The way I see it is: if I interpret each one- or two-number operation as func(arg1, arg2...) or func(arg), then, after writing the expression out in this form, I replace the parentheses and commas with spaces for readability and syntax, and the end result is a prefix expression.

For example: (2+3)x(4+5) becomes x(+(2,3),+(4,5)).
Thus, replacing parentheses and commas with spaces gives x + 2 3 + 4 5 as the decomposed result.

Do you think my methodology is plausible and valid?

Matt, you're exploring the ideas that underlie parsing of formal languages, finite state machines, compilers, etc. For example, your idea can be implemented very easily using a parser toolkit like ANTLR (ANother Tool for Language Recognition). In particular, ANTLR will let you write a grammar which will parse infix expressions and spit out prefix notation with virtually no programming at all. In fact, while implementing a parser and interpreter for a domain-specific language I've been working on, my diagnostic dumps of the ANTLR parse trees looked very much like your example. ANTLR also has an associated template engine which allows you to format the output so it could look exactly like your example, with no programming at all.

See http://www.antlr.org/ and http://www.stringtemplate.org/ for more information. If you do decide to investigate this, Terence Parr's books, "The Definitive ANTLR 4 Reference" and especially "Language Implementation Patterns" would get you up to speed very quickly. While ANTLR itself is implemented in Java, it can produce run-times (i.e. parsers to be used in compilers or interpreters) in many other languages.

I've often thought that ANTLR would be an easy way to implement 'cross-compilers' to translate, e.g. BASIC and other languages into RPN or RPL.
(04-29-2014 03:08 AM)Les Bell Wrote: [ -> ]your idea can be implemented very easily using a parser toolkit

Or you can simply use the Shunting-yard algorithm.
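For a subset with just left-associative binary operators and parentheses, a minimal Python sketch of the algorithm (names are my own, not from any library) might look like:

```python
# Precedence table for the operators this sketch understands.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def shunting_yard(tokens):
    """Convert a tokenized infix expression to a postfix token list."""
    output, stack = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop operators of higher or equal precedence first.
            while stack and stack[-1] in PREC and PREC[stack[-1]] >= PREC[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()  # discard the "("
        else:  # operand
            output.append(tok)
    while stack:
        output.append(stack.pop())
    return output

print(shunting_yard("( 2 + 3 ) * ( 4 + 5 )".split()))
# ['2', '3', '+', '4', '5', '+', '*']
```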

Cheers
Thomas
(04-29-2014 03:08 AM)Les Bell Wrote: [ -> ]Matt, you're exploring the ideas that underlie parsing of formal languages, finite state machines, compilers, etc. For example, your idea can be implemented very easily using a parser toolkit like ANTLR (ANother Tool for Language Recognition). In particular, ANTLR will let you write a grammar which will parse infix expressions and spit out prefix notation with virtually no programming at all. In fact, while implementing a parser and interpreter for a domain-specific language I've been working on, my diagnostic dumps of the ANTLR parse trees looked very much like your example. ANTLR also has an associated template engine which allows you to format the output so it could look exactly like your example, with no programming at all.

See http://www.antlr.org/ and http://www.stringtemplate.org/ for more information. If you do decide to investigate this, Terence Parr's books, "The Definitive ANTLR 4 Reference" and especially "Language Implementation Patterns" would get you up to speed very quickly. While ANTLR itself is implemented in Java, it can produce run-times (i.e. parsers to be used in compilers or interpreters) in many other languages.

I've often thought that ANTLR would be an easy way to implement 'cross-compilers' to translate, e.g. BASIC and other languages into RPN or RPL.

Thanks Les! I'll look those up and see what I can uncover.
(04-29-2014 04:16 AM)Thomas Klemm Wrote: [ -> ]Or you can simply use the Shunting-yard algorithm.

Or write a recursive-descent parser - that's always a fun exercise (my first-ever language implementation was a recursive-descent parser for an xBASE interpreter, written in PL/I). But if Matt wants to extend his ideas further, then ANTLR is powerful enough to take him a long way, and the Language Implementation Patterns book (which is reasonably inexpensive on Kindle) covers everything required for someone interested in "our" kinds of languages, such as types (including type safety, type casting and type promotion), scoping rules, context-dependent grammars and lots of fun stuff.
(04-29-2014 04:16 AM)Thomas Klemm Wrote: [ -> ]
(04-29-2014 03:08 AM)Les Bell Wrote: [ -> ]your idea can be implemented very easily using a parser toolkit

Or you can simply use the Shunting-yard algorithm.

Cheers
Thomas

Thomas, that Shunting-yard algorithm looks quite promising. I'll look into that as well. And Les, now that I know that ANTLR book has a Kindle version, I can read it at my leisure (pun intended).
(04-29-2014 04:40 AM)Matt Agajanian Wrote: [ -> ]And Les, now that I know that ANTLR book has a Kindle version, I can read it at my leisure (pun intended).

Matt, here's a basic ANTLR grammar that will parse simple algebraic expressions (e.g. it parses your '(2+3)*(4+5)' example perfectly). From this grammar, ANTLR will generate a lexical analyzer, a parser and related classes, such as a listener that can walk the resultant parse tree and perform operations on it.

Code:

grammar Expr;

options {
  language = Java;
}

// Newlines are matched (and skipped) as part of the WS rule below;
// a fragment rule cannot carry the 'skip' command itself.

WS
  : ( ' '
    | '\t'
    | ( '\r\n'    // DOS/Windows
      | '\r'      // Mac
      | '\n'      // Unix
      )
    ) -> skip
  ;

fragment
LETTER
  : ('a'..'z' | 'A'..'Z')
  ;

ID
  :  LETTER (LETTER | '0'..'9' | '.')*
  ;

FLOAT
  :  '0'..'9'+ ('.' '0'..'9'+)? ( ('E'|'e')( '+' | '-')? '0'..'9'+)?
  ;

expression
  : term (( '+' | '-' ) term) *
  ;

term
  : unaryfactor (( '*' | '/' ) unaryfactor) *
  ;

unaryfactor
  : '-' factor
  | factor
  ;

factor
 : ID
 | FLOAT
 | '(' expression ')'
 ;

And here's a simple Java program that will use the lexer and parser to parse an expression typed into it, and will then print the tree:

Code:

import java.io.IOException;

import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

/**
 * @author les
 */
public class ExpressionParser {

    /**
     * @param args
     * @throws IOException 
     */
    public static void main(String[] args) throws IOException {
        ANTLRInputStream input = new ANTLRInputStream(System.in);
        
        ExprLexer lexer = new ExprLexer(input);
        
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        
        ExprParser parser = new ExprParser(tokens);
        
        ParseTree tree = parser.expression();
        
        System.out.println(tree.toStringTree());


    }

}

I've only just started using ANTLR 4 myself (all my work previously was with ANTLR 3.3) so I haven't gone any further, but I'll probably play around with this example some more when I get time to learn ANTLR 4.

Anyway, there's the beginning of some actual code that can parse infix and emit prefix or postfix notation, if you want to play with it.
(04-29-2014 04:16 AM)Thomas Klemm Wrote: [ -> ]Or you can simply use the Shunting-yard algorithm.

Thomas beat me to it, but this one-hit wonder from Dijkstra appears to be a milestone in the algorithmic approach to this problem. The only (obvious) thing the vanilla algorithm description doesn't seem to cover is function calls with multiple arguments. For plain expressions it seems to work fine.

Unlike many wiki pages, the one Thomas referenced is actually pretty usable. A reasonable implementation can be worked out by understanding what's written on the page.

IBM described another method for this in an early FORTRAN (pre-FORTRAN IV) manual. I am sorry, but I cannot remember the particulars, including whether IBM or Dijkstra got there first. But I do remember their approach involved consuming the expression and parenthesizing everything until the order of evaluation was unequivocal, and then converting it into object code. I realize that is not much of a description, but you can find more online.

You are in an area now where there is a basic decision to be made. Do you want to understand things at a low level and write your own implementation from the algorithmic description, or do you want to write a high-level driver program that uses other people's library functions or tools to accomplish what you want? Of course, there is always the possibility of developing even the algorithm yourself, but for classical problems like this most things have been settled already.
(04-29-2014 08:55 AM)HP67 Wrote: [ -> ]Thomas beat me to it but this one-hit wonder from Dijkstra appears to be a milestone of algorithmic approach to this.

Ah, but there's the rub - it's a one-hit wonder - which topped the charts in 1961, but the world has moved on since then. How do you add support for complex numbers? Vectors? Arrays? Flow control such as function definition, if-then-else and while? Local vs. global variables? Classes? And so it goes... The decision really is, does Matt want to learn a generally useful approach to parsing and language translation, or one algorithm for one specific problem? Hand-coding the shunting-yard algorithm is a basic exercise in coding stacks and queues, but ANTLR gets more work done, faster, using much more powerful techniques. Much better for exploring the underlying ideas and turning them into something useful.
(04-29-2014 12:35 PM)Les Bell Wrote: [ -> ]
(04-29-2014 08:55 AM)HP67 Wrote: [ -> ]Thomas beat me to it but this one-hit wonder from Dijkstra appears to be a milestone of algorithmic approach to this.

Ah, but there's the rub - it's a one-hit wonder - which topped the charts in 1961,

I meant it's the only thing Dijkstra has contributed that was actually useful.

(04-29-2014 12:35 PM)Les Bell Wrote: [ -> ]but the world has moved on since then.

Really? I haven't seen anything formalized.

(04-29-2014 12:35 PM)Les Bell Wrote: [ -> ]How do you add support for complex numbers? Vectors? Arrays? Flow control such as function definition, if-then-else and while? Local vs global variables? Classes? And so it goes

I won't put words in your mouth but it sounds like you're suggesting complex numbers, vectors, and arrays didn't exist in Dijkstra's time but they did. The shunting-yard algorithm is not an implementation, it's an algorithm. It works, and it's been extended usefully. I don't see any problems with the algorithm breaking on anything you mentioned.

As for your comments about function definition, local vs. global variables, classes, etc., none of that is in context here. The shunting-yard algorithm is about translating an infix expression to a postfix expression. That's all. You've complicated the discussion by conflating scanning with parsing, and by suggesting the OP use a bigger hammer where it isn't even clear a hammer is needed in the first place.

(04-29-2014 12:35 PM)Les Bell Wrote: [ -> ]. . . The decision really is, does Matt want to learn a generally-useful approach to parsing and language translation, or one algorithm for one specific problem?

The 800-pound gorilla dressed as a "generally-useful approach" you seem to be suggesting is other people's tools. That is one of the best ways I know to never learn anything at all and to never become a good coder.

And that is why I asked "Do you want to understand things at a low level and write your own implementation from the algorithmic description, or do you want to write a high-level driver program that uses other people's library functions or tools to accomplish what you want?"

(04-29-2014 12:35 PM)Les Bell Wrote: [ -> ]Hand-coding the shunting-yard algorithm is a basic exercise in coding stacks and queues, but ANTLR gets more work done, faster, using much more powerful techniques. Much better for exploring the underlying ideas and turning them into something useful.

That's just a matter of opinion, and it's one that I strongly disagree with. But I agree it's a matter of choice that should at least be articulated. Most coders today are cut-and-paste jockeys. All the big and popular languages have so many libraries that you really don't have to be much of anything to "write" a program. You can be really productive until something goes wrong.

On the other hand, if you want to understand things nothing beats writing your own programs and your own tools.
(04-29-2014 01:28 PM)HP67 Wrote: [ -> ]I meant it's the only thing Dijkstra has contributed that was actually useful.

Wow! I guess you missed Dijkstra's shortest-path algorithm, which is used in OSPF routing, his invention of semaphores, the "GO TO Considered Harmful" paper, his work on formal methods and his ACM Turing Award. The shunting-yard algorithm is but a small part of his work and very far from the most useful.

(04-29-2014 01:28 PM)HP67 Wrote: [ -> ]Really? I haven't seen anything formalized.

You mean you haven't seen the Dragon Book ("Compilers: Principles, Techniques, & Tools" by Aho, Ullman, et al.)? That book was significantly updated in the 20 years between the first and second edition, and there have been significant developments since then. You haven't seen yacc and lex, the tools used in Kernighan and Pike's classic "The UNIX Programming Environment"? Nor Bison, the GNU equivalent?

I'm not going to respond in detail to the rest of your post; it simply reflects your view of coding as a craft.

I took it that Matt was thinking and learning about how basic language translation might be done, not looking for a specific algorithm. He's experimenting with his own ideas, and ANTLR (and similar tools) provide a "playpen" for that learning activity, along with guidance, in the form of the books I recommended. I expect Matt is already a competent coder, but now he's thinking about well-developed ideas in the realm of computer science.

The key point is that the shunting yard algorithm does one thing only - parse infix expressions and emit a prefix representation. However, before you can parse the expression, it has to be tokenized (and yes, I do know the difference between scanning and parsing, thank you!), and then once you've got your infix expression, now what?

If Matt is interested in implementing something useful with it, he'll need to extend it and place it in the context of additional features, which could be any or all of named variables (symbol tables), types, function definition, flow control, etc. The ANTLR approach makes this a natural extension.

However, if he's more interested in exploring the ideas involved, then learning (by example, with ANTLR) how a grammar defines a language and how a parser translates it would be a rewarding activity. Trying new ideas becomes a simple matter of editing the grammar, rather than rewriting reams of low-level code.

To paraphrase, I'll simply say that with the shunting yard algorithm, Matt has been given a fish; but ANTLR is a complete fishing rod, hooks, bait, pots, pans and a vegetable garden with all the things he'll need to cook that fish and any others he catches.
Exploring the Programs in the EXAMPLES Directory
(HP 48G Series User's Guide, page 29-19)
  • run the TEACH command
  • this creates a directory EXAMPLES in your HOME directory
  • within PRGS you will find the program ->RPN

I recommend single-stepping through the program (see page 29-8).

For your example you will get:
'(2+3)*(4+5)'
->RPN
{ 2 3 + 4 5 + * }


Cheers
Thomas
Yet another approach is to use the compiler and disassemble the result:

C

File rpn.c:
Code:
int f() {
    int a = 2;
    int b = 3;
    int c = 4;
    int d = 5;
    return (a+b)*(c+d);
}

Compile and disassemble:
gcc -g -c -o rpn.o rpn.c
objdump -S rpn.o


Code:
0000000000000000 <f>:
int f() {
   0:   55                      push   %rbp
   1:   48 89 e5                mov    %rsp,%rbp
        int a = 2;
   4:   c7 45 f0 02 00 00 00    movl   $0x2,-0x10(%rbp)
        int b = 3;
   b:   c7 45 f4 03 00 00 00    movl   $0x3,-0xc(%rbp)
        int c = 4;
  12:   c7 45 f8 04 00 00 00    movl   $0x4,-0x8(%rbp)
        int d = 5;
  19:   c7 45 fc 05 00 00 00    movl   $0x5,-0x4(%rbp)
        return (a+b)*(c+d);
  20:   8b 45 f4                mov    -0xc(%rbp),%eax
  23:   8b 55 f0                mov    -0x10(%rbp),%edx
  26:   8d 0c 02                lea    (%rdx,%rax,1),%ecx
  29:   8b 45 fc                mov    -0x4(%rbp),%eax
  2c:   8b 55 f8                mov    -0x8(%rbp),%edx
  2f:   8d 04 02                lea    (%rdx,%rax,1),%eax
  32:   0f af c1                imul   %ecx,%eax
}
  35:   c9                      leaveq
  36:   c3                      retq


Java

File rpn.java:
Code:
class rpn {
    int f() {
        int a = 2;
        int b = 3;
        int c = 4;
        int d = 5;
        return (a+b)*(c+d);
    }
}

Compile and disassemble:
javac rpn.java
javap -c rpn


Code:
  int f();
    Code:
       0: iconst_2
       1: istore_1
       2: iconst_3
       3: istore_2
       4: iconst_4
       5: istore_3
       6: iconst_5
       7: istore        4
       9: iload_1
      10: iload_2
      11: iadd
      12: iload_3
      13: iload         4
      15: iadd
      16: imul
      17: ireturn


Python

File rpn.py:
Code:
def f():
    a, b, c, d = 2, 3, 4, 5
    return (a+b)*(c+d)

Load and disassemble:
Code:
Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from rpn import f
>>> from dis import dis
>>> dis(f)
  2           0 LOAD_CONST               5 ((2, 3, 4, 5))
              3 UNPACK_SEQUENCE          4
              6 STORE_FAST               0 (a)
              9 STORE_FAST               1 (b)
             12 STORE_FAST               2 (c)
             15 STORE_FAST               3 (d)

  3          18 LOAD_FAST                0 (a)
             21 LOAD_FAST                1 (b)
             24 BINARY_ADD
             25 LOAD_FAST                2 (c)
             28 LOAD_FAST                3 (d)
             31 BINARY_ADD
             32 BINARY_MULTIPLY
             33 RETURN_VALUE

Cheers
Thomas
(04-29-2014 11:31 PM)Les Bell Wrote: [ -> ]The key point is that the shunting yard algorithm does one thing only - parse infix expressions and emit a prefix representation.

The shunting-yard algorithm doesn't parse anything. It is a relatively dumb state machine. If the input token stream represents a valid arithmetic infix expression the machine produces a representative postfix expression as a token stream. It is up to the implementer to write his own scanner and various other pieces. As far as the algorithm goes, and even as far as implementing it goes, parsing is not part of the discussion.

But yes, that is all the OP asked for: To convert an infix expression to a postfix expression. Therefore Thomas' answer is the right one. Telling the OP to go learn a parser generator toolkit in Java is confusing, harmful, and most of all doesn't answer the question.

(04-29-2014 11:31 PM)Les Bell Wrote: [ -> ]To paraphrase, I'll simply say that with the shunting yard algorithm, Matt has been given a fish; but ANTLR is a complete fishing rod, hooks, bait, pots, pans and a vegetable garden with all the things he'll need to cook that fish and any others he catches.

All the OP asked was how to convert an infix expression into a postfix expression. He doesn't need anything you mentioned to understand that. What Thomas pointed him to was about the best answer we could give. Indeed, your answer to use a parser generator not only doesn't answer the question, it goes one further and totally obscures the exact knowledge the OP asked about.

I'm sure it wasn't intentional but you essentially thread-jacked a discussion on "how do infix expressions get converted to postfix expressions" into "let's write a parser using automated parser generator tools." Your approach would be acceptable in a one semester course where students are supposed to "write" their own compiler. They can't do that from scratch, so they use all the tools you mention. But none of that is what the OP asked about. It's totally off-topic.

Neither the Dragon Book (in its various versions) nor lex, yacc, Bison, etc. have advanced the state of the art of converting infix expressions to postfix expressions. They have nothing to do with that at all. There is a big difference between building a parse tree for an abstract syntax or grammar and the conversion of a mathematical infix expression to postfix, which is the subject of this thread. And also no, Dijkstra hasn't made any impact on our work, nor has he impressed any of us who write software for a living.
(04-30-2014 10:43 AM)HP67 Wrote: [ -> ]The shunting-yard algorithm doesn't parse anything. It is a relatively dumb state machine. If the input token stream represents a valid arithmetic infix expression the machine produces a representative postfix expression as a token stream. It is up to the implementer to write his own scanner and various other pieces. As far as the algorithm goes, and even as far as implementing it goes, parsing is not part of the discussion.

Wow. For anyone who knows what a parser is and does, not to mention the relationship between lexers, finite state machines and regular languages, I don't really need to add anything to that ball of wax. For those who don't, just read the first sentence of the linked Wikipedia page on the shunting-yard algorithm.

I will just note that Matt didn't ask, "What is an algorithm for converting infix into prefix notation?". He described his ideas and asked if they were "plausible and valid". I replied that they were, and described a way in which they could be easily implemented (including the scanner [lexer]), experimented with and taken further, if he chose to do so. I stand by that.
Ok, back to planet earth and the original topic.


(04-29-2014 12:47 AM)Matt Agajanian Wrote: [ -> ]The way I see it is if I interpret each one or two number operation as func(arg1, arg2...) or func(arg), then, after writing this form of the expression out, I replace the parenthese and commas with spaces for readability and syntax, the end result is a prefix expression.

For example: (2+3)x(4+5) becomes x(+(2,3),+(4,5)).
Thus, replacing parentheses and commas with spaces gives x + 2 3 + 4 5 as the decomposed result.

Do you think my methodology is plausible and valid?

It's incomplete because it doesn't address areas including, but not limited to, operator and function-call precedence and functions with variable numbers of arguments. In your example, all operators are binary. But that is only a subset of normal operators. For example, what about -2 + 3? What about 5! + 4?

How would you process unparenthesized expressions like

2 + 7 * 8

or

2 ^ 3 + 3

The IBM algorithm I mentioned earlier approaches this from a point of parenthesizing everything according to the rules of mathematics and then evaluating it after ambiguities have been removed. Dijkstra's method handles this by delaying the output of subexpressions while higher precedence operators are still on the stack.

What's neat about the Dijkstra algorithm is you can use it to generate numerous different kinds of outputs. You can create a postfix stream and since postfix doesn't have any order of operation ambiguity, you can actually evaluate the postfix stream as you're creating it and just output the result rather than the postfix.
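As a rough sketch of that last point, here is a Python version (names and the precedence table are my own, not from any particular text) that keeps the shunting-yard operator stack but applies each operator as soon as it would have been emitted, so unparenthesized input like 2 + 7 * 8 is evaluated directly without ever materializing the postfix stream:

```python
# Precedence table; '^' is treated as right-associative.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}

def apply_op(op, operands):
    """Pop two operands, apply op, push the result."""
    b, a = operands.pop(), operands.pop()
    operands.append({"+": a + b, "-": a - b, "*": a * b,
                     "/": a / b, "^": a ** b}[op])

def evaluate(tokens):
    """Evaluate an unparenthesized infix token stream on the fly."""
    operands, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Apply any stacked operator that binds at least as tightly;
            # the tok != "^" test keeps '^' right-associative.
            while ops and (PREC[ops[-1]] > PREC[tok]
                           or (PREC[ops[-1]] == PREC[tok] and tok != "^")):
                apply_op(ops.pop(), operands)
            ops.append(tok)
        else:
            operands.append(float(tok))
    while ops:
        apply_op(ops.pop(), operands)
    return operands[0]

print(evaluate("2 + 7 * 8".split()))  # 58.0
print(evaluate("2 ^ 3 + 3".split()))  # 11.0
```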

Edit: That should have been "How would" not "Wow would". Edited, but too late since already quoted by Matt. Oops! Wink
(04-30-2014 02:57 PM)HP67 Wrote: [ -> ]Ok, back to planet earth and the original topic.


(04-29-2014 12:47 AM)Matt Agajanian Wrote: [ -> ]The way I see it is if I interpret each one or two number operation as func(arg1, arg2...) or func(arg), then, after writing this form of the expression out, I replace the parenthese and commas with spaces for readability and syntax, the end result is a prefix expression.

For example: (2+3)x(4+5) becomes x(+(2,3),+(4,5)).
Thus, replacing parentheses and commas with spaces gives x + 2 3 + 4 5 as the decomposed result.

Do you think my methodology is plausible and valid?

It's incomplete because it doesn't address areas including, but not limited to, operator and function-call precedence and functions with variable numbers of arguments. In your example, all operators are binary. But that is only a subset of normal operators. For example, what about -2 + 3? What about 5! + 4?

Wow would you process unparenthesized expressions like

2 + 7 * 8

or

2 ^ 3 + 3

The IBM algorithm I mentioned earlier approaches this from a point of parenthesizing everything according to the rules of mathematics and then evaluating it after ambiguities have been removed. Dijkstra's method handles this by delaying the output of subexpressions while higher precedence operators are still on the stack.

What's neat about the Dijkstra algorithm is you can use it to generate numerous different kinds of outputs. You can create a postfix stream and since postfix doesn't have any order of operation ambiguity, you can actually evaluate the postfix stream as you're creating it and just output the result rather than the postfix.

Dijkstra's method sounds like I'll have a field day with it!! Thanks!
What are you actually trying to do, Matt?
(05-01-2014 05:47 AM)Les Bell Wrote: [ -> ]What are you actually trying to do, Matt?

Actually, I just thought I'd throw my idea out to you folks to see if I was on the right track with a concept that I drew up. I'm not necessarily looking to embark on a full-scale project. I was just pondering the concept as a practical, manual exercise for parsing infix expressions into postfix statements.
(05-02-2014 12:57 AM)Matt Agajanian Wrote: [ -> ]Actually I just thought I'd throw my idea out to you folks to see if I was on the right track of a concept that I drew up. I'm not necessarily looking to embark on a full-scale project. I was just pondering the concept for a practical way as a manual exercise to parse infix expressions into postfix statements.

Ah, OK. The general idea is that a parser constructs a tree for the expression. For your example of "(2+3)x(4+5)", the tree starts at the top with "x" and it has two children. The left child is "+" and it has two children: "2" on the left and "3" on the right. Similarly, the right child is also "+" and it has two children: "4" and "5". Something like (please excuse bad ASCII art!):

        x
      /   \
     +     +
    / \   / \
   2   3 4   5

Having constructed the tree, you walk around it, starting with the root node; for each node you process the left child and then the right child, respectively, along with the content of the node itself.

If your processing is to output the content of each node before processing the children, then the result will be to print the prefix version of the expression (and this is the default behaviour of ANTLR v3, as I mentioned, although I notice that ANTLR v4 spits out a bit more diagnostic information, making it harder to see).

If your processing is to process the children and then output the content of the node, then it will output the postfix version of the expression.

And, of course, if your processing for each node is to perform the specified operation on the two children and return the result, then it will evaluate the expression, and you've just built an interpreter.
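Those three walks can be sketched in Python over a hand-built stand-in tree (the `Node` tuple and function names below are my own invention, not ANTLR's API, and '*' stands in for your 'x'):

```python
from collections import namedtuple

# A tiny stand-in for a parse tree; leaves have left == right == None.
Node = namedtuple("Node", "value left right")

def leaf(v):
    return Node(v, None, None)

# The tree for (2+3)*(4+5): '*' at the root, a '+' node on each side.
tree = Node("*", Node("+", leaf("2"), leaf("3")),
                 Node("+", leaf("4"), leaf("5")))

def prefix(n):
    """Emit the node's content before its children -> prefix notation."""
    if n.left is None:
        return n.value
    return f"{n.value} {prefix(n.left)} {prefix(n.right)}"

def postfix(n):
    """Emit the children before the node's content -> postfix notation."""
    if n.left is None:
        return n.value
    return f"{postfix(n.left)} {postfix(n.right)} {n.value}"

def evaluate(n):
    """Apply each node's operator to its evaluated children -> an interpreter."""
    if n.left is None:
        return float(n.value)
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    return ops[n.value](evaluate(n.left), evaluate(n.right))

print(prefix(tree))    # * + 2 3 + 4 5
print(postfix(tree))   # 2 3 + 4 5 + *
print(evaluate(tree))  # 45.0
```

The only thing that changes between the three functions is where the node's own content is emitted relative to its children, which is exactly the point above.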

The grammar I posted builds a parser that does all that, plus unary minus. The upper-case rules also build a lexer which breaks text up into the right kinds of tokens. ID and FLOAT allow recognition of variable names and floating point numbers, while WS and NEWLINE let it deal with white space between the tokens. As far as I can see, it all works, and would only need (according to the ANTLR 4 book) a few (~8) lines of code to make it pretty-print infix or postfix output, or do the evaluation. I might try that this coming weekend, if I get time.

From there, it's easy enough to add things like x ^ y, variable assignment, etc. Everything falls into the tree pattern, although sometimes a node has just one child (e.g. unary minus) and sometimes three or more.

I figured you were just exploring the ideas here, but it would actually be practical to automatically translate programs for some of the infix calculators into RPN or RPL. Anyway, have fun - parsing and language translation has some of the most elegant ideas in computer science.