HP Forums
Large integer algorithms - Printable Version

+- HP Forums (https://www.hpmuseum.org/forum)
+-- Forum: Not HP Calculators (/forum-7.html)
+--- Forum: Not quite HP Calculators - but related (/forum-8.html)
+--- Thread: Large integer algorithms (/thread-4132.html)



Large integer algorithms - Claudio L. - 06-11-2015 02:36 PM

Has anyone here done any testing of, or have any knowledge about, not-so-large integer arithmetic algorithms?
I'm analyzing the possibility of ditching the mpdecimal library (for newRPL) and developing my own algorithms for arbitrary-precision integers and reals.

Why? A few reasons:
  • The library was designed for VERY large numbers (in the millions of digits)
  • Its memory management isn't well suited to embedded projects
  • Its algorithms are optimized for processors with DIV instructions

It works, but I'm wondering if we can do better on our own.

But the real question is about algorithms.
These "big" libraries all use advanced, state-of-the-art algorithms for large numbers (integer or floating point, it doesn't matter), so they are hard to beat, but...
The truth is, these algorithms usually carry a big overhead, and at a small number of digits they can underperform even the most naive methods.

newRPL will use up to 2000 digits, with 36 digits being the default. For example, I've seen some tests on the internet where Newton-Raphson division breaks even with standard long division at around 1000 digits.
Has anyone here done or seen any research or testing of algorithms for < 2000 digits? Most tests I've seen online show results for 0, 5000, 10000 digits, etc., so they completely miss the interval I'm looking at; I want several data points between 0 and 2000 digits to really compare.
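
For context, the Newton-Raphson division in question computes an approximate reciprocal with the iteration x = x*(2 - d*x): each step doubles the number of correct digits, but costs two full-precision multiplications, which is exactly why there's a break-even point against long division. A minimal sketch of the recurrence (in plain double precision just to show the idea; a big-integer version would work on scaled integers with big multiplications):

Code:
#include <stdio.h>

/* Newton-Raphson reciprocal: x <- x * (2 - d * x).
   Quadratic convergence: each step roughly doubles the correct digits,
   at the cost of two full-precision multiplications per step. */
static double nr_reciprocal(double d, double x, int steps)
{
    for (int i = 0; i < steps; i++)
        x = x * (2.0 - d * x);
    return x;
}

int main(void)
{
    double x = 0.1;                    /* crude first estimate of 1/7 */
    for (int i = 1; i <= 5; i++) {
        x = nr_reciprocal(7.0, x, 1);
        printf("step %d: %.15f\n", i, x);
    }
    return 0;                          /* converges to 0.142857142857... */
}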


RE: Large integer algorithms - Marcus von Cube - 06-13-2015 07:00 AM

The decNumber library supports arbitrary precision. We use it in the 34S. External (register) formats have a fixed number of digits, but the internal format allows for thousands of digits if needed. The memory format for the latter isn't very memory efficient; it's better suited to processing. Hence the distinction between the two.


RE: Large integer algorithms - Claudio L. - 06-13-2015 06:54 PM

(06-13-2015 07:00 AM)Marcus von Cube Wrote:  The decNumber library supports arbitrary precision. We use it in the 34S. External (register) formats have a fixed number of digits, but the internal format allows for thousands of digits if needed. The memory format for the latter isn't very memory efficient; it's better suited to processing. Hence the distinction between the two.

Thanks, we used decNumber way back in the HPGCC days, and I'm well aware of its pros and cons. mpdecimal is superior in many ways, but it has the same basic traits when it comes to memory management. The internal format is quite similar, storing 9 decimal digits per 32-bit integer, i.e. a radix of 10^9 (decNumber is configurable at compile time, but this is its default format on a 32-bit processor).
Some things are strange: the words are ordered with the most significant word first, but little-endian processors can be more efficient with the opposite ordering. Most-significant-first only makes sense for printing a number, not for calculations.
For example, with least-significant-word-first storage you could add two words at once in a single 64-bit operation and then propagate the carry; with the words inverted in memory you're forced to do two separate 32-bit operations instead.
So a simple addition could be almost twice as fast using the opposite order.
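
To illustrate (a hypothetical sketch, not mpdecimal's actual code): with base-10^9 limbs stored least significant first, a little-endian CPU can load a pair of 32-bit limbs as one 64-bit word and add both in a single operation, because each limb is below 10^9 and the low pair can never spill into the high half:

Code:
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define RADIX 1000000000u   /* 10^9: nine decimal digits per 32-bit limb */

/* Add two equal-length big integers, limbs stored least significant first.
   n must be even. Returns the outgoing carry (0 or 1).
   Assumes a little-endian CPU so the 64-bit load puts limb i in the
   low half and limb i+1 in the high half. */
static uint32_t add_limbs_paired(uint32_t *r, const uint32_t *a,
                                 const uint32_t *b, size_t n)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i += 2) {
        uint64_t av, bv;
        memcpy(&av, &a[i], 8);
        memcpy(&bv, &b[i], 8);

        /* Each limb is < 10^9, so the two low limbs sum to < 2^32 and
           never overflow into the high half: one add does both limbs. */
        uint64_t s = av + bv + carry;
        uint32_t lo = (uint32_t)s;
        uint32_t hi = (uint32_t)(s >> 32);

        if (lo >= RADIX) { lo -= RADIX; hi++; }  /* decimal carry lo -> hi */
        carry = (hi >= RADIX);
        if (carry) hi -= RADIX;

        r[i]     = lo;
        r[i + 1] = hi;
    }
    return carry;
}

With most-significant-word-first storage the same loop would need two separate 32-bit adds per pair, which is where the near-2x difference comes from.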

I wrote a couple of basic arithmetic routines, and they work, but I'm realizing it's a big task to do them all and guarantee proper rounding in all corner cases, etc.
I think I'll stick with mpdecimal for the time being so the project can progress in other areas.

Perhaps I'll try to optimize a few key routines but keep the bulk of the library.