Since the HP Prime CAS is based on Xcas, why not implement Variable Precision Floating Point routines in CAS Mode? Since the HP Prime has distinct Home and CAS environments, it makes sense to have two distinct Floating Point implementations: a traditional IEEE Double Precision one in the purely Numerical Home environment, and desirably a Variable Precision implementation in CAS mode. It's really a Great Task, but we are talking about XXI Century calculators with Hardware good enough for such an accomplishment! On every smartphone or tablet one can run Xcas at its Full Potential, including Variable Precision Floating Point, so why Not on the HP Prime?

Licensing. The arbitrary precision libraries in use are all either GPL or LGPL. In the current state, we can't use those. Another solution of some kind will need to be found to implement that type of feature.

Xcas uses GMP+MPFR; they are licensed under the LGPL, so inclusion in closed-source code is allowed, but it requires that everyone be able to rebuild the firmware from the object files of the closed part of the source code and the source code of GMP+MPFR. I really hope that we will have long-float support on the Prime (and also interval arithmetic), because this is already available on the TI-Nspire with the latest version of Xcas's port (KhiCAS).

(06-07-2015 06:47 PM)parisse Wrote: Xcas uses GMP+MPFR; they are licensed under the LGPL, so inclusion in closed-source code is allowed, but it requires that everyone be able to rebuild the firmware from the object files of the closed part of the source code and the source code of GMP+MPFR. I really hope that we will have long-float support on the Prime (and also interval arithmetic), because this is already available on the TI-Nspire with the latest version of Xcas's port (KhiCAS).

Dear Bernard,

thanks very much for the attention (and my apologies for taking so long to re-establish contact).

I would like to suggest a Long Term Project: the addition of [ Inverse Symbolic Calculation ] features (by means of PSLQ routines) to Giac/Xcas, which could eventually also be Ported to the HP Prime in the future.

Just for reference, I am providing a few links to relevant Implementations of Numerical [ Constant Recognition ] features from some other CAS packages and Web Driven Resources.

[ Inverse Symbolic Calculator Plus ]

[ Plouffe's Inverter - OEIS Wiki ]

[ SymPy Number Identification ]

[ Maple Identify ]

[ Mathematica RootApproximant ]

Another very interesting Numeric related feature is Closed Form [ Sequence ] identification, as provided by the [ Online Encyclopedia of Integer Sequences ] and by Mathematica's [ FindSequenceFunction ], which would also constitute a very desirable addition for Future versions of Giac/Xcas (and subsequent ports to the HP Prime).

I have only recently become aware of [ KhiCAS ] for the TI-Nspire, and will surely benefit from this Extremely Valuable resource, publicly made available by your More than Admirable Effort and Long Term Dedication to the Symbolic Algebra cause.

I have been "captivated" by Numerical Analysis since I was 13, when HP-67s and TI-59s were the most Powerful computational resources affordable by simple Math Fascinated mortals.

Since then I have been continuously "plugged into" Numerical and Symbolic Calculations, following all sorts of Computational "Waves" over the last four decades... (from the HP-41C, through the HP-48, muMATH, Derive, Maple, Mathematica, TI-89, HP-50g, Netbooks, Tablets, Smartphones, and now the TI-Nspire & HP Prime).

It goes without saying that Powerful Calculators, Precious Book Gems, Extremely Valuable Masterpieces of Software and the Dedication of Passionate and Forward Stimulating Teachers made me what I am, and I owe a Great debt of profound Respect to all such individuals, who have contributed to the Advance of Numerical and Symbolic Computation.

Thanks very much for all the attention,

with my Most Sincere and Profound Grateful Best Wishes,

Prof. Ricardo Duarte

Adding algorithms on multi-precision floats is certainly within the scope of Giac, but I'm not that convinced that an advanced inverter with table lookups really is. Of course, if someone else wants to add this functionality, they are welcome; after all, Giac is a GPL-ed C++ library that anyone can extend.

I don't know if this would be of any help to anybody, but some 20+ years ago, I wrote the arbitrary precision floating point library that was used in Fractint. The copyright on the library was exceedingly loose (give credit, etc...). I had to google to find it myself, but it can be found at

http://www.nahee.com/spanky/www/fractint/arb-prec.html
Originally, it was written mostly in 80x86 assembly and some C (in Fractint, speed was everything), but I later rewrote the assembly in C for general use and ports to X Windows. The library had arbitrary-length binary floating point, decimal floating point, and binary fixed-point formats. (Fixed point was especially useful for generating those deep-zoom fractals.)

If I needed a library today, I'd use one of the professional-quality GPL/LGPL libraries that are available, but if anybody needs something without those licenses, feel free to use mine. Just keep in mind that it was written for a specific purpose (Fractint) and as such does not have IEEE rounding, NaNs, error checking and such. (Who cares if a pixel is off by 1?)

The documentation has the details of use, but to those I'll add:

Quote: *** No publicly making fun of my amateurish code. *** :-)

Also keep in mind that I have not looked at the code in many years. I have no idea what you might find. ;-)

-wes

I would like to chip in to say that variable precision arithmetic, if attempted, should be a form of interval arithmetic.

Interval arithmetic allows the capture of numerical uncertainty, so that unstable calculations do not lead to the introduction of garbage. A classic example is the cross product:

A*B - C*D

If A*B is close to C*D, many digits of precision can be annihilated. In a fixed-number-of-digits implementation, you wind up "pulling up" garbage digits from the bottom to fill those removed from the top, without really knowing it.

Cross products like this appear all the time, e.g. in a regular complex number multiply.

The other reason is, of course, that you need dynamic precision in examples like this, to temporarily extend the working space for the A*B multiply.
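The cancellation described above is easy to reproduce. Here is a small sketch using Python's standard-library decimal module (the values are illustrative choices of my own, not from any post in this thread): two nearly equal products computed at a fixed 8 digits lose the true difference entirely, and temporarily extending the working precision recovers it.

```python
from decimal import Decimal, getcontext

getcontext().prec = 8                     # fixed 8-digit working precision
a, b = Decimal("1.2345678"), Decimal("2.3456789")
c, d = Decimal("1.2345679"), Decimal("2.3456787")

approx = a * b - c * d                    # every operation rounded to 8 digits
print(approx)                             # -> 0E-7: the true difference is gone

getcontext().prec = 30                    # temporarily extend the working space
exact = a * b - c * d
print(exact)                              # -> 1.234569E-8
```

With 8 digits both products round to the same value, so the subtraction returns exactly zero; nothing in the result warns you that every digit has been annihilated.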

I have implemented a version of dynamic precision interval arithmetic that I use for my own calculations. The implementation follows the ideas of Oliver Aberth (1988), who suggests representing the interval as an "extra digit".

Aberth explains how you can represent an interval [a,b], which would otherwise be two numbers, in only slightly more space than just one number:

First, represent [a,b] as a midpoint and a range: [m-r, m+r].

You can think of this representation as "m", the number to display, and "r", the error or uncertainty. Treating `r' as an error term allows it to be contained in a single digit.

eg.

M = 3.14159

R = 0.00002

says we have [3.14157,3.14161]

Here we arrange the `r' term to coincide with the least significant digit of m.

Suppose now the error term increases, due to an unstable calculation, to:

M = 3.14159

R = 0.00020

= [3.14139,3.14179]

Then the final 9 of m is swamped by our error and can be dropped. So instead we adjust our range and midpoint as follows:

M = 3.1415

R = 0.0003

= [3.1412,3.1418]

So we've lost a digit of m due to the increase in error. You may also notice that our error term has increased by one in order to accommodate rounding, because we _must_ ensure our range does not decrease.
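The renormalisation step just described can be sketched in a few lines. This is a hypothetical Python illustration of the idea only, not Hugh's actual code; the function name and the exact rounding rule are my own assumptions.

```python
def normalise(m_digits, r, exp):
    """m_digits: decimal digits of the midpoint mantissa (most
    significant first); r: error in units of the last digit;
    exp: shared decimal exponent (M and R use the same one).
    If r no longer fits in a single digit, drop the last mantissa
    digit and round the error UP, so the interval never shrinks."""
    while r > 9 and len(m_digits) > 1:
        dropped = m_digits.pop()     # least significant digit of m
        r = -(-r // 10)              # error in new-last-digit units, rounded up
        if dropped:
            r += 1                   # cover the truncated midpoint digit
    return m_digits, r, exp

# M = 3.14159, R = 0.00020  ->  M = 3.1415, R = 0.0003 (as in the post)
print(normalise([3, 1, 4, 1, 5, 9], 20, 0))  # -> ([3, 1, 4, 1, 5], 3, 0)
```

Note the error always rounds upward, matching the requirement that the resulting interval must contain the old one.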

In my implementation, I do not use single decimal digits as illustrated above, but bunches of 4 digits. My BCD implementation is based on 10000 rather than 10: each "digit" is actually 4 decimal digits, from 0 to 9999.

This implementation has many advantages. The biggest is that your arithmetic handles 4 digits at a time instead of just one. Each of my 4-digit "digits", called a "4dec", is stored internally as a 16-bit _binary_ number. Accordingly, arithmetic on each 4dec proceeds as normal machine binary operations, with carry at 10000.

Even embedded hardware has a 16-bit x 16-bit -> 32-bit hardware integer multiply, which is a major boon to the multiplier.

This technique brings the performance of 4dec BCD into a ballpark comparable with software binary floats, rather than being way, way slower.
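A minimal sketch of the 4dec idea (the names and the little-endian limb layout are mine; a real implementation would be in C with 16-bit limbs plus sign and exponent handling):

```python
BASE = 10000                     # one "4dec" limb = four decimal digits

def add4dec(a, b):
    """Add two equal-length little-endian limb arrays."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry        # plain binary add, carry at 10000
        out.append(s % BASE)
        carry = s // BASE
    if carry:
        out.append(carry)
    return out

def mul_limb(a, k):
    """Multiply a limb array by one limb: this is where the
    16x16 -> 32-bit hardware multiply pays off."""
    out, carry = [], 0
    for x in a:
        p = x * k + carry        # at most 9999*9999 + carry, fits in 32 bits
        out.append(p % BASE)
        carry = p // BASE
    if carry:
        out.append(carry)
    return out

# little-endian limbs: [5678, 1234] is the integer 12345678
print(add4dec([9999, 9999], [1, 0]))   # -> [0, 0, 1]    (99999999 + 1)
print(mul_limb([5678, 1234], 2))       # -> [1356, 2469] (12345678 * 2)
```

Each limb operation is one ordinary machine add or multiply, which is exactly why this runs about four times fewer limb steps than digit-at-a-time BCD.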

BUT there's another big win for range arithmetic: the error term is now a 4dec and can thus accommodate a wider range before rounding. In other words, there is less truncation of M as R increases; R could grow 10-fold without M losing a 4dec, as it would with the simple BCD above.

Back to representation: we store, e.g.

M = 3.1415 x 10^-3

R = 0.0003 x 10^-3

internally as:

3.14153 exponent -3

and print out as either:

3.1415+/-3 x 10^-3 or just 3.141 x 10^-3

Since normally we suppress all uncertainty in display.

This also illustrates that the exponent of both M and the range is the same number.

The bottom line is that you get to store interval arithmetic, more usefully as a term with an error range, in just ONE EXTRA DIGIT of storage.

For sure, this is how calculators should work and, in fact, how they should have worked for the last 20 years.

I'd be happy to advise on this kind of implementation. My own code works for the usual arithmetic and some scientific functions. I have to go back to the project to finish the other scientific functions (and test them).

The implementation of scientific functions is a bit more involved than normal, but not a lot. You have to have an error term to feed back into your range.

For example, say you use a Taylor series for sin(x): you can use the truncated term as an error bound, which you merge with any existing error range introduced by the series arithmetic itself. This gives you a final upper bound on your error.

Everything has to have an error upper bound and then you're good.
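As a hedged sketch of the Taylor-series idea, with plain Python doubles standing in for the BCD range arithmetic: the first omitted term bounds the truncation error (the series for sin is alternating with decreasing terms when |x| <= 1), and a crude per-operation rounding allowance of my own models "the error introduced by the series arithmetic itself".

```python
import math

def sin_with_bound(x, terms=6):
    """Return (value, error_bound) for sin(x), assuming |x| <= 1 so
    the Taylor series is alternating with decreasing terms."""
    total, term = 0.0, x
    n = 1
    for _ in range(terms):
        total += term
        term *= -x * x / ((n + 1) * (n + 2))   # next series term
        n += 2
    # alternating series: truncation error <= first omitted term;
    # add an assumed per-operation double-rounding allowance
    rounding = 2 * terms * 2.3e-16 * abs(total)
    return total, abs(term) + rounding

v, err = sin_with_bound(0.5)
assert abs(v - math.sin(0.5)) <= err           # the bound really does hold
```

In a real range-arithmetic implementation the rounding contribution comes out of the interval operations themselves rather than a hand-waved constant, but the shape is the same: everything feeds an upper bound into R.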

Giac/Xcas has both, based on MPFR and MPFI. You need to have both, because interval arithmetic gives, most of the time, intervals that are overly pessimistic. Indeed, rounding errors are normally independent, which means that if you make, say, n additions, the error is most of the time of order sqrt(n)*epsilon, while interval arithmetic will return an interval of size n*epsilon.
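Parisse's point can be illustrated with a quick toy simulation of my own, treating each rounding error as an independent uniform deviate in [-eps, +eps]:

```python
import random

random.seed(1)                          # reproducible toy run
eps, n = 1e-6, 10000                    # per-operation error, number of additions

trials = []
for _ in range(200):
    # model each addition as contributing an independent error in [-eps, +eps]
    drift = abs(sum(random.uniform(-eps, eps) for _ in range(n)))
    trials.append(drift)

typical = sum(trials) / len(trials)     # observed typical error, ~ sqrt(n)*eps
interval_bound = n * eps                # the worst case interval arithmetic reports
print(typical < interval_bound / 10)    # -> True: the interval is very pessimistic
```

Here the typical accumulated error lands near sqrt(n)*eps = 1e-4, two orders of magnitude below the n*eps = 1e-2 worst-case interval width.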

Parisse's last comment noted, I think the key message from Hugh's post is that he has a variable-precision library which is entirely his own code and which he is willing to license to HP on terms to be agreed.

I hope that the calculator team are at least able to make contact to discuss options and the amount of work required.

The decNumber library the 34S uses has a very liberal licence too: ICU 1.8.1. I think this page:

https://ssl.icu-project.org/repos/icu/ic...cense.html sums it up (above the horizontal lines). Not all that many functions are implemented in the library, but it is a solid basis and supports any precision.

- Pauli

Implementing range arithmetic as an "extra digit" is an opportunity not to be missed, given the small amount of additional overhead compared to the enormous gain of having an error bound for _every_ calculation.

There is virtually no performance hit with this method compared to a regular BCD implementation. For example, basic arithmetic calculates the error as it handles the end of the mantissa.

I searched for ages for the ideal way for calculators to do their math. I did not want to store two numbers in place of one, and I did not want the overhead of representing reals as continued fractions.

I also wanted a drop-in replacement for the regular BCD I've used in the past: something that protects me from unstable results and is compatible.

Today's calculators may be relatively powerful, but they still require an embedded solution. I develop apps for mobiles and, more recently, wearable devices. Today's smartphones have x-GHz processors and y-GBs of RAM, but you can't go wasting them, because power consumption is your real enemy. If phones, or indeed calculators, ran at their advertised speed for more than a short time, the battery would be drained quite rapidly.

The Prime has some support for interval arithmetic already, but it is limited to portions of the Advanced Graphing app.