Re: Yes, I'm serious ! Message #107 Posted by James M. Prange (Michigan) on 5 Dec 2005, 5:17 p.m., in response to message #106 by Valentin Albillo
Hi Valentin,
Quote:
"Other than the problem of possibly overwriting a variable with
the same name, I don't see any reason why whatever name I'm
solving for couldn't be used."
It has to do with nesting. Both FNROOT and INTEGRAL can be nested,
i.e., an FNROOT can solve a function which itself calls FNROOT,
and INTEGRAL can integrate a function which itself calls INTEGRAL,
and further FNROOT can solve INTEGRALs and INTEGRAL can integrate
FNROOTs.
That being so, were you to use a normal variable, say "X", that
"X" would have different meanings and would hold
different values depending on the level of nesting where
it was used. That cannot be accomodated with a single, normal user
variable, which is a unique location in memory, while FVAR is
actually a *keyword* which holds the value of your solved or
integrated variable at any and *all* levels of nesting,
transparently to the user.
So, if you were to single-step your function while being solved or
integrated by a nested call, you'd see that FVAR isn't the same
value from one level of nesting to another, while the keyword
(i.e.: formal variable in this case) is one and the same to you:
FVAR.
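A rough way to picture this in Python (my own sketch, not HP's implementation: `fnroot`, `FVAR`, and the depth argument are invented names, and real FNROOT doesn't use plain bisection) is to keep a stack of values, one per active nesting level, rather than a single storage cell:

```python
fvar_stack = []

def fnroot(lo, hi, f):
    """Bisection root-finder (a sign change on [lo, hi] is assumed); the
    trial value is pushed so that f can read it through FVAR()."""
    def g(v):
        fvar_stack.append(v)   # enter one nesting level
        try:
            return f()
        finally:
            fvar_stack.pop()   # leave it on the way back out
    flo = g(lo)
    for _ in range(100):
        mid = (lo + hi) / 2.0
        fmid = g(mid)
        if flo * fmid > 0:
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2.0

def FVAR(depth=1):
    """Formal variable `depth` nesting levels in (1 = innermost call)."""
    return fvar_stack[-depth]

# Nested use: the inner fnroot solves y^2 = X (so y = sqrt(X)), and the
# outer solves sqrt(X) = 0.5, giving X = 0.25.
X = fnroot(0.0, 2.0,
           lambda: fnroot(0.0, 2.0, lambda: FVAR() ** 2 - FVAR(2)) - 0.5)
print(X)  # close to 0.25
```

Each call pushes its trial value on entry and pops it on exit, so the same name transparently means the right value at every nesting level.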
"I gather that FVAR represents the X in your FNF(X)? Sort of like
a reserved variable in RPL?"
That's correct, FVAR is the X in FNF(X), the Y in FNF(Y), in other
words, the formal variable. But internally it's not a variable but
a keyword which holds a set of values, one for each level
of nesting. I don't know if that corresponds to a reserved
variable in RPL but seems unlikely.
I had a look at FNROOT and INTEGRAL in the Math Pak
Owner's Manual, and it looks to me as if FVAR and IVAR
are more like RPL's variable of integration and variable for
plotting. "Formal variable" does seem like a good name for them.
The FVALUE and FGUESS stored by FNROOT, and IVALUE and IBOUND
stored by INTEGRAL, are examples of what I'd call "reserved
variables" in RPL.
Quote:
"You're saying that solving and integrating are occurring at the
same time?"
Why, yes. In Karl's particular example, though, that's
not the case: you're actually integrating a
function which isn't itself being solved; the solving procedure is
only performed twice, to compute the limits of integration, and it's
completely over before the integration process proper starts.
But have a look at this example (already published at this Forum
as part of a Test Suite for advanced HP models, search the
archives for "Turtle and Hare"), where you're solving for a root
of a function that itself is an integral of an implicit function
(!), i.e.: you're Solving an Integral whose integrand requires
Solve:
Test 7: Solving a definite integral of an implicit function
/X
|
Find X in [0,1] such that | y(x).dx = 1/3,
|
/0
where y(x) is the implicit function defined by:
y^5 + y = x
using a precision of 1E-3 for the integral. HP-71B code:
FNROOT(0,1, INTEGRAL(0, FVAR, 1E-3, FNROOT(0,1, FVAR^5+FVAR-IVAR))-1/3)
gives 0.854136725005 (precision = 1E-3)
or 0.854138746461 (precision = 1E-6)
As you may see, solving and integrating are absolutely intertwined
in this example. You can also see that the first FVAR appearing in
that expression corresponds to the variable being solved by the
outer FNROOT, while the 2nd and 3rd FVAR correspond to
the variable being solved by the inner FNROOT. You
wouldn't be able to do that using a regular, user variable like
'X', say, because in this case both 'X' would have different
meanings. Of course, IVAR is the formal integration variable and
you can see as well that it has to be different from FVAR because
they refer to wholly unrelated values.
Actually, it took me a while to see what was going on in that
example.
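The nested computation can be sketched in Python as well (an illustration only, assuming bisection for both FNROOTs and composite Simpson's rule for INTEGRAL; these are not the HP-71B's actual algorithms):

```python
def bisect(f, lo, hi):
    """Simple bisection; assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    for _ in range(60):
        mid = (lo + hi) / 2.0
        fmid = f(mid)
        if flo * fmid > 0:
            lo, flo = mid, fmid
        else:
            hi = mid
    return (lo + hi) / 2.0

def y(x):
    # Inner FNROOT: the y in [0, 1] with y^5 + y = x.
    return bisect(lambda t: t ** 5 + t - x, 0.0, 1.0)

def simpson(f, a, b, n=200):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# Outer FNROOT: X in [0, 1] with  integral from 0 to X of y(x) dx = 1/3.
X = bisect(lambda u: simpson(y, 0.0, u) - 1.0 / 3.0, 0.0, 1.0)
print(X)  # close to 0.854139, matching the HP-71B results above
```

Every evaluation of the outer function triggers a full integration, and every evaluation of the integrand triggers a full inner root-search, which is exactly the intertwining described above.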
Anyway, in the first program following, I use only two distinct
names, and the 48 and 49 series manage to keep track of what they
mean. Of course this makes for truly obfuscated code, so normally
I use distinct names for distinct variables. Not only does using
distinct names make things clearer to me, but it avoids any
possible problem with the calculator getting them mixed up.
%%HP: T(3)A(D)F(.);
@ Overwrites any 'X' or 'IERR' in current directory!
@ Takes a number argument to set an error tolerance;
@ for example, 3 specifies an error tolerance of 10^-3.
@ Sets display mode to FIX while running, but normally restores
@ original mode.
@ Results from the BYTES command:
@ 48 series checksum: # B2BCh
@ 49 series checksum: # B3E6h
@ Size: 197
\<<
RCLF SWAP FIX
'X' PURGE
\<<
\-> x
\<<
'X^5.+X-x'
'X'
{ 0. 1. }
ROOT
\>>
\>>
\-> x
\<<
\<<
0. X
'x(X)'
'X'
\.S
1. 3. / -
\>>
'X'
{ 0. 1. }
ROOT
\>>
SWAP STOF
\>>
With an error tolerance of 10^-3, this returns
.854136725005, with 3.33478395859E-4 stored as IERR.
With an error tolerance of 10^-6, this returns
.85413874646, with 3.33369684455E-7 stored as IERR.
In case anyone's interested, here are some approximate timings, in
seconds:
       Error tolerance 10^-3    Error tolerance 10^-6
48SX:         267                      548
48GX:         181                      376
49G:          223                      455
49g+:          81.2                    166
Maybe this next example will look a bit clearer (but maybe not):
%%HP: T(3)A(D)F(.);
@ Overwrites any 'X', 'Z', or 'IERR' in current directory!
@ Takes a number argument to set an error tolerance;
@ for example, 3 specifies an error tolerance of 10^-3.
@ Sets display mode to FIX while running, but normally restores
@ original mode.
@ Results from the BYTES command:
@ 48 series checksum: # 2FB6h
@ 49 series checksum: # 2EECh
@ Size: 197
\<<
RCLF SWAP FIX
\<<
\-> a
\<<
'Z^5.+Z-a'
'Z'
{ 0. 1. }
ROOT
\>>
\>>
\-> f
\<<
\<<
0. X
'f(Y)'
'Y'
\.S
1. 3. / -
\>>
'X'
{ 0. 1. }
ROOT
\>>
SWAP STOF
\>>
This returns the same as the previous program, except of course
that the "inner" root 'Z' doesn't get overwritten by the "outer"
root 'X'.
Quote:
By the way, the above expression is a command-line expression, not
a program,
Okay, I wrote mine on the PC, originally as two separate programs,
but I decided that I may as well just combine them into one.
Maybe you do this sort of thing frequently, but it strikes me as
an unusual case. Ordinarily, I use the solver instead of the ROOT
command to find roots and intersections. I don't use integration
often, but I've always just typed the arguments into the command
line and pressed the integrate key. For this case, it seems
especially unfortunate that ROOT isn't a function, because the
integrate function on the 48/49 series requires the integrand to
be an algebraic.
Quote:
and uses no local/global variables at all, flags, modes, or
whatever. So, it will work as-is, no need to change modes, be
aware of the status of some flags, or keep care that you don't
overwrite some variable.
I see very little problem with using local variables; they're
local to the defining procedure and are abandoned when
the procedure finishes. A local variable will definitely never
overwrite any other variable. The only thing to be careful of with
them is that while a local variable exists, there's no way to
access anything else (including a previously bound local variable)
with the same name. For example, if you're going to use e or i, or
for that matter, SIN, in the procedure, then don't use it for a
local name.
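A loose Python analogy (my own example; the RPL semantics differ in detail, since RPL resolves names at run time) of how an inner binding hides an outer one of the same name while it exists:

```python
def outer():
    e = "outer"           # outer binding of the name e
    def inner():
        e = "inner"       # shadows outer's e for as long as it is bound
        return e
    return inner(), e     # outer's e is untouched afterwards

print(outer())  # ('inner', 'outer')
```

While the inner `e` is in scope, there's simply no way to reach the outer one by that name, which is the point made above about e, i, and SIN.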
Hmm, on second thought, there is another possible problem with
local names. Consider the sequence:
0 \-> x \<< 'x' \>> [ENTER]
That briefly stores 0 in the local variable x, puts the quoted
local name 'x' on the stack, and abandons the variable. Any
attempt to evaluate this 'x' while no local variable named 'x' is
defined results in an "Undefined Local Name" error. Note however,
that if I key in another 'x', then this new x is compiled as a
global name, so evaluating the new 'x' doesn't error out.
Although the variable of integration can be either a global or
local name, I believe that the system changes it to a new local
name to store values with, so any existing variable with the same
name is ignored, and is not overwritten by the integration. I
suppose that it could be called a "formal variable", rather like
your FVAR and IVAR, except that I can use whatever name I choose.
Yes, sometimes global variables are used (and possibly
overwritten), as with ROOT and the solver. I typically use
one-letter names for these, and have never had any problem with
confusing them with the names of variables that I really do want
to keep long-term.
"Reserved variables" are global variables such as IERR that the
system uses for its own purposes. There are quite a few of these,
so there's some danger that the user could inadvertently choose a
reserved name for his own variable, but I've never found it to be
a problem. Rather like you wouldn't usually store anything with
the names FVALUE, FGUESS, IVALUE, or IBOUND, knowing that the
system would overwrite them the next time FNROOT and INTEGRAL were
used.
Using global variables has a possible advantage that they're saved
until overwritten or purged, and of course that there's a name to
go with each value. For the solvers, there may be several
variables, some with values set by the user, and some solved by
the calculator. The solution is placed on the stack, tagged with
its name, but having them all stored as global variables with
their names displayed in the menu seems useful to me. In the case
of the ROOT command, the result is stored in the variable as well
as being pushed onto the stack; I suspect that they had the solver
in mind when they designed ROOT. For the integrate function, IERR
is a supplementary result; it very likely won't be used in
subsequent calculations, so there wouldn't be much sense in
pushing it onto the stack, but it may be worth checking.
Regarding modes, as long as you don't have any modes that could
affect the result, or can safely assume that they're already in
the state you need, fine. I'm in the habit of forcing (and
restoring, of course) modes when I know they'll make a difference,
and even sometimes when I'm not absolutely sure but suspect that
they will. Of course, when I'm not writing a program and I'm
reasonably sure that a mode is already the way I need it, there's
no need to force it.
Quote:
"Isn't RPN (mostly) postfix too? How do the vast majority of
people find that?"
Repellent, of course. The difference is that while most people find
RPN repellent except those few who do like it, RPL is found
repellent almost universally, even among RPN fans.
Or maybe the difference is that while most people find RPL
repellent except for those few who do like it, "Classic RPN" is
found repellent almost universally, even among RPL fans?
But I really don't know which of RPL or Classic RPN has more fans.
I do expect that there are a lot of 12C users out there.
Quote:
As for prefix-postfix, RPN is indeed postfix for all the parts
that need to be postfix, namely mathematical operations. Utility
operations such as changing flags, modes, or storing/recalling
numbers which have no mathematical nature have no reason to be
postfix and they aren't.
I think that the idea is that any result should be available as an
argument to subsequent operations. For a stack-based system,
putting all results on the stack and using all postfix operations
strikes me as the most elegant method to achieve this (as long as
the stack can be deep enough).
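The principle can be shown with a minimal postfix evaluator (a toy sketch, not how RPL is implemented): every result goes back on the stack, so it is immediately available as an argument to whatever comes next.

```python
import operator

def rpn(tokens):
    """Evaluate a postfix token sequence; returns the final stack."""
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    stack = []
    for t in tokens:
        if t in ops:
            b = stack.pop()          # second operand
            a = stack.pop()          # first operand
            stack.append(ops[t](a, b))   # result goes back on the stack
        else:
            stack.append(float(t))
    return stack

print(rpn("3 4 + 2 *".split()))  # [14.0]
```

The intermediate result of `3 4 +` is never named or stored anywhere; it simply sits on the stack until `*` consumes it.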
Quote:
To make them postfix as well is perverting the original intent of
the elegant RPN's postfix notation, cluttering up the stack with
all kinds of non-mathematical 'operands' and 'arguments' which are
nothing but control parameters which don't belong in the stack to
begin with.
Note that RPL operations can take "symbolic" arguments and return
"symbolic" results, which are just RPN sequences of a special
type, very similar to SysRPL programs. The start and stop values
of FOR loops, for example, often depend on previous calculations.
In my program above, I expect the user to have the value for FIX
in the command line or on the stack, where FIX in the program can
find it, saving the user the slight trouble of switching to alpha
mode and keying in FIX. Sure, we could store anything deemed
non-mathematical in a named variable (for use with prefix
operations) instead of putting it on the stack, but
having to do it that way strikes me as a lot of extra
bother.
Quote:
Doing that, in the name of 'consistency', only results in
unnatural, unreadable code, unnecessary and constant growth and
shrinking of the stack, and redundant stack manipulations,
throwing out the window the innately elegant paradigm so well
served by classic RPN.
Ah well, I expect that we'll never agree on this. To me, it seems
more like it's for functionality, not just for consistency. Is
keeping track of a growing and shrinking stack all that difficult?
The stack grows and shrinks in Classic RPN too, although to a much lesser extent.
What's so hard about using, for example, 3 FIX instead of FIX 3? I
put 3 on the stack and it grows one level, but FIX takes the 3 off
so it's right back where it was. The consistency helps too; I
always know where any operation will take its argument from
(though not always which order is needed for multi-argument
operations that I seldom use). As long as the hardware to make a
very deep stack is available, why not take advantage of it? With
RPL, the user is free to choose to take results off of the stack
for storage in named variables, leave the results on the stack for
as long as they're needed, or whatever mixture of these he's
comfortable with.
Quote:
"Good for you! I'm glad that you're so familiar with your
calculator!"
James, please ... there's no need to be 'so familiar' with my
calculator to write down:
Print Integrate( FindLimit1, FindLimit2, Precision, Function )
I can do it in my dreams, and you probably could, too !! Does it
actually feel so difficult or unfathomable to you !?
Well, I suppose that if I were familiar with the keywords and
which order the arguments needed to be in, I'd catch on soon
enough. It looks as if functions can't be used as postfix
commands, and everything is either prefix or algebraic notation.
Now that I've glanced at the owners manuals, I wouldn't mind
having a 71B to play around with, although I'll stick with the 48
series for "real work". Besides, having a 71B available while I
read the HP-71B Hardware Internal Design
Specification might be handy. But I expect that a 71B
would cost more than I could justify to myself.
But it always amazes me that so many seem to find RPL difficult.
After playing with a display sample of the 28S in the store for a
few minutes, I knew that it was a lot easier for me than switching
between "CAL", "PRO", and "RUN" on my Sharp or using short
algebraic "keystroke" programs on my little Radio Shack model. To
me, RPL has seemed so very intuitive right from day one that it's
hard for me to imagine how anyone could find it overly
complicated.
But I can understand that if someone has a Classic RPN model that
does everything he wants from a calculator and he's quite familiar
with it, then getting an RPL model and getting used to the
differences would probably be a mistake, except perhaps as a
hobby.
Actually, I sometimes get irritated with some of the changes in
the 49 series. To be sure, some of the new features are nice, and
I can happily ignore a new command when I don't have the foggiest
notion of what it's good for, but when the 49 series returns
something that looks a lot more complicated to me than what the 48
series returns for the same commands, it makes me wonder what they
were thinking of with this new CAS stuff.
Quote:
"a x b x c x d x e * + * + * + * +"
which couldn't even be used with a 4-register stack.
You're right, your constant use of RPL has really spoiled you if
you can come up with that particularly inefficient, multi-level
scheme.
Sure; with RPL, I can enter a polynomial as an algebraic. Why
would I bother factoring it? Anyway, all I remembered of Horner's
method was that it was factoring a polynomial to eliminate all
powers; I didn't remember what use it had beyond that, until you
pointed it out. Leaving it in ascending order, then factoring it
and converting it from an algebraic expression to an RPN sequence
was just an exploratory first step in my seeing what else Horner's
method would be useful for on an RPN calculator.
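For what it's worth, the factored form behind that RPN sequence, a + x*(b + x*(c + x*(d + x*e))), is just Horner's rule with coefficients in ascending order, one multiply-add per coefficient:

```python
def horner(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + coeffs[2]*x^2 + ...
    via Horner's rule, starting from the highest-order coefficient."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c    # one multiply and one add per coefficient
    return acc

print(horner([1.0, 2.0, 3.0], 2.0))  # 1 + 2*2 + 3*4 = 17.0
```

Evaluated this way, no intermediate powers of x are ever formed, and only two values are live at a time, which is why it suits a shallow RPN stack far better than the nine-level sequence quoted above.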
Regards, James