modular exponentiation?
06-25-2018, 04:31 PM (This post was last modified: 06-25-2018 04:44 PM by Thomas Klemm.)
Post: #21
 Thomas Klemm Senior Member Posts: 1,447 Joined: Dec 2013
RE: modular exponentiation?
Finally a solution for Java that might be accepted:

Map Reduce
Code:
import java.math.BigInteger;
import java.util.stream.IntStream;

public class SumModPow {
    private static final BigInteger M = BigInteger.valueOf(10).pow(10);

    public static void main(String[] args) {
        System.out.println(
            IntStream.rangeClosed(1, 1000)
                .mapToObj(n -> {
                    BigInteger m = BigInteger.valueOf(n);
                    return m.modPow(m, M);
                })
                .reduce(BigInteger.ZERO, BigInteger::add)
                .mod(M)
        );
    }
}

Parallel Streams
With this small change we can run the calculation in parallel:
Code:
import java.math.BigInteger;
import java.util.stream.IntStream;

public class SumModPow {
    private static final BigInteger M = BigInteger.valueOf(10).pow(10);

    public static void main(String[] args) {
        System.out.println(
            IntStream.rangeClosed(1, 1000)
                .parallel() // add this line
                .mapToObj(n -> {
                    BigInteger m = BigInteger.valueOf(n);
                    return m.modPow(m, M);
                })
                .reduce(BigInteger.ZERO, BigInteger::add)
                .mod(M)
        );
    }
}
This would not be possible with a classic for-loop, at least not without explicit threading.

Will it be faster on a MacBook with 4 cores?
Guess what: it is even a bit slower.

These are the results for the real time of 20 runs:

Sequential
Code:
         0.2100 #
         0.2200 ####
         0.2300 #####
         0.2400 #######
         0.2500 #
         0.2600 ##
---------------+---------+
               0        10
min      =          0.2130
max      =          0.2690
mean     =          0.2388
sigma    =          0.0130
mode     =          0.2400
median   =          0.2385

Parallel
Code:
         0.2300 #
         0.2400 #########
         0.2500 ########
         0.2600 ##
---------------+---------+
               0        10
min      =          0.2360
max      =          0.2620
mean     =          0.2498
sigma    =          0.0074
mode     =          0.2400
median   =          0.2490

We have to increase the upper limit to 10000000 to notice the difference:

Sequential
Code:
real    0m35.134s
user    0m36.712s
sys     0m0.298s

Parallel
Code:
real    0m17.546s
user    0m56.692s
sys     0m0.420s

During this run the CPU was used up to 350% by this program:
Code:
  PID USER      PRI  NI  VIRT   RES S CPU% MEM%   TIME+  Command
32002 tom         0   0  9.6G  253M ? 341.  1.5  0:44.54 /usr/bin/java -cp parallel-10000000 SumModPow
That's why the real-time is smaller than the user-time.

Premature Optimization
Quote:Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.
We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%.
-- Donald Knuth

My Python program above runs in about 0.045s.
It's not[1] optimised, but doing so wouldn't gain that much.

It took me much longer to write the Java program.
And today the biggest cost is our time: not only the time you spend implementing a piece of code, but also the time you or others must spend debugging and maintaining it.

Contrary to what others might think I consider this a reasonable interview question.
Because it could lead the discussion into different directions.

[1]: Ok, it's a bit optimized in that instead of a list comprehension (using []) the corresponding generator expression (using ()) is used. Thus the whole list doesn't have to be kept in memory; instead, summation and producing the arguments can be interleaved. But even this doesn't change much.
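For comparison, here is a minimal sketch of what such a generator-based Python program might look like (my reconstruction from the description above, not the original code): the generator expression feeds the three-argument (modular) form of the built-in pow directly into sum(), so no intermediate list is built.

```python
# Sum of n^n (mod 10^10) for n = 1..1000, using a generator expression
# so the terms are produced and summed one at a time.
print(sum(pow(n, n, 10**10) for n in range(1, 1001)) % 10**10)
# → 9110846700
```

The result matches the value reported elsewhere in this thread.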
06-25-2018, 05:42 PM (This post was last modified: 06-25-2018 05:44 PM by StephenG1CMZ.)
Post: #22
 StephenG1CMZ Senior Member Posts: 870 Joined: May 2015
RE: modular exponentiation?
(06-25-2018 02:44 AM)Thomas Okken Wrote:
(06-25-2018 02:04 AM)brickviking Wrote:  Man, am I glad I've never tried to apply for a professional programmer's job. I've never even heard of binary multiplication chains. I've certainly heard of powers-of-two multiplication by bit-shifting though. I don't know if this is the same thing, nor would I have any idea how to apply that to the OP.

It's not the same thing. The idea is that, for example,

a^10 = (a^5)^2 = (a*a^4)^2 = (a*(a^2)^2)^2

meaning: you can compute integral powers by a sequence of multiplications and squares. This is more efficient than a sequence of multiplications; the repeated-squaring approach takes O(log n) operations, as opposed to n-1 multiplications for the simplistic approach.
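As a sketch of the idea (mine, not from the post; `ipow` is a made-up name), square-and-multiply fits in a few lines of Python:

```python
def ipow(a, n):
    """Compute a**n using O(log n) multiplications (square-and-multiply)."""
    result = 1
    while n > 0:
        if n & 1:        # low bit of the exponent set: multiply this square in
            result *= a
        a *= a           # square the base
        n >>= 1          # move to the next bit of the exponent
    return result
```

Each loop iteration consumes one bit of the exponent, which is where the O(log n) bound comes from.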

Is this the kind of knowledge that you would require a job applicant to have, where you would toss their application in the trash if they don't implement an integral power this way? I doubt it. I've never been asked this kind of question in a job interview (and I am a programmer, not a scientist or engineer who dabbles in programming). The only time in my life where knowing the repeated-squaring algorithm was useful to me was when I implemented Y^X in Free42.

(06-25-2018 02:04 AM)brickviking Wrote:  I'm a bit lost as to what modular exponentiation is, too. I guess I haven't been kicking around the maths courses long enough.

Taking something to the power of something, and then taking the remainder of that, modulo something else. When the exponents get large, this requires clever algorithms to get accurate results. Again, not something programmers tend to encounter. Mathematicians, maybe, but anyone else? Seems like a weird interview topic to me, for any job that isn't pure mathematics.
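Concretely, the clever part is combining square-and-multiply with a reduction mod m after every single multiplication, so intermediate values never grow beyond m². A sketch (mine; `modpow` is a made-up name):

```python
def modpow(a, e, m):
    """a**e % m without ever forming the huge power a**e."""
    result = 1
    a %= m
    while e > 0:
        if e & 1:
            result = result * a % m   # reduce immediately after multiplying
        a = a * a % m                 # reduce immediately after squaring
        e >>= 1
    return result
```

Python exposes exactly this operation as the three-argument form of the built-in pow(a, e, m).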

(06-25-2018 02:04 AM)brickviking Wrote:  (Post 249)

I have to ask: why do you do that?

Oops: New Reply is inserting a Quote!

I too would have flunked that math test...
My A level math covered modular arithmetic and exponentiation, but it was too long ago to remember them ever being combined.

The straightforward for loop has the advantage of clarity and easily predictable performance (assuming an average execution time for each loop), whereas the mathematical shortcut relies on memory of a trick - and is harder to test for errors if your memory lets you down.

Back at the start of my software engineering career, I don't remember doing well on any math algorithms in interviews, but in the job itself I did implement a 100% performance gain in a sqrt function. The trick there was to choose a good initial guess for the root.

I also implemented DSP software in assembler, which I understood well enough to code and test - but couldn't begin to talk about in interviews (apart from explaining the code, rather than the math).

Stephen Lewkowicz (G1CMZ)
ANDROID HP Prime App broken offline on some mobiles
06-25-2018, 10:09 PM
Post: #23
 ijabbott Senior Member Posts: 926 Joined: Jul 2015
RE: modular exponentiation?
(06-24-2018 09:08 PM)Valentin Albillo Wrote:  3) Actually, I've been sometimes in charge of selecting programmers to hire and if I were to pose this same question to the people applying for the job and some of them would produce code which computed the integer powers N^N by performing N multiplications in a loop, both their code and their application would've been thrown to the garbage bin at once, for being so utterly inefficient.

I'd choose people who are resourceful, even if they don't know every algorithm off by heart. I've been programming professionally for 34 years and never had any reason to use modular exponentiation, let alone implement it efficiently. (If I specialized in cryptography, it would be a different matter!). I only really discovered the fast binary algorithm for modular exponentiation through solving challenges on HackerRank, since I figured there probably had to be a faster than O(n) algorithm so I went looking on Wikipedia to find it.

— Ian Abbott
06-25-2018, 10:21 PM
Post: #24
 brickviking Senior Member Posts: 334 Joined: Dec 2014
RE: modular exponentiation?
(06-25-2018 02:44 AM)Thomas Okken Wrote:
(06-25-2018 02:04 AM)brickviking Wrote:  (Post 249)

I have to ask: why do you do that?

It's a post counter, and I use it to give me a rough idea of what threads I visited and what order I replied in. It also lets me know how many posts I've actually made on a particular forum; although most forums now have a handy post counter near the username, some older forums don't (i.e. the good old newsgroup days).

And besides, it makes a great conversation starter. Or at least a puzzler. You're not the first to ask me, and I'll be willing to bet you won't be the last.

Now, let's see if I've got a rough idea of the mathematical concept behind this.

(Post $$3^3*3^2+2^2+3$$)

Regards, BrickViking
HP-50g |Casio fx-9750G+ |Casio fx-9750GII (SH4a)
06-25-2018, 10:45 PM
Post: #25
 Bill Duncan Member Posts: 81 Joined: Jan 2016
RE: modular exponentiation?
(06-25-2018 10:09 PM)ijabbott Wrote:
(06-24-2018 09:08 PM)Valentin Albillo Wrote:  3) Actually, I've been sometimes in charge of selecting programmers to hire and if I were to pose this same question to the people applying for the job and some of them would produce code which computed the integer powers N^N by performing N multiplications in a loop, both their code and their application would've been thrown to the garbage bin at once, for being so utterly inefficient.

I'd choose people who are resourceful, even if they don't know every algorithm off by heart. I've been programming professionally for 34 years and never had any reason to use modular exponentiation, let alone implement it efficiently. (If I specialized in cryptography, it would be a different matter!). I only really discovered the fast binary algorithm for modular exponentiation through solving challenges on HackerRank, since I figured there probably had to be a faster than O(n) algorithm so I went looking on Wikipedia to find it.

I'm with you on that. I didn't know what "modular exponentiation" was myself. I just intuitively knew what I wanted, did "man dc" and found the phrase there..

My colleague was just looking for someone that could "think" and basically know how to get there..

Cheers!
06-28-2018, 12:42 AM
Post: #26
 John Keith Senior Member Posts: 626 Joined: Dec 2013
RE: modular exponentiation?
(06-24-2018 09:18 AM)Didier Lachieze Wrote:  On the HP Prime in CAS mode:
$\LARGE irem( \sum_{n=1}^{1000} (powmod(n,n,10^{10})), 10^{10})$
or, if you want to copy/paste to the emulator:

irem(Σ(powmod(n,n,10^10),n,1,1000),10^10)

Wow, almost instantaneous!

My HP 50 version,

Code:
\<< PUSH CASDIR MODULO \-> m
  \<< 10 DUP ^ MODSTO 0 1 1000
    FOR n n DUP POWMOD
      IF DUP 0 <
      THEN MODULO +
      END +
    NEXT MODULO MOD m MODSTO POP
  \>>
\>>

is rather slow and clunky due to the HP 50's quirky POWMOD function. It takes about 130 seconds to run, or about 2.5 seconds on the emulator.

John
06-28-2018, 06:07 PM
Post: #27
 ijabbott Senior Member Posts: 926 Joined: Jul 2015
RE: modular exponentiation?
(06-28-2018 12:42 AM)John Keith Wrote:  My HP 50 version,

Code:
\<< PUSH CASDIR MODULO \-> m
  \<< 10 DUP ^ MODSTO 0 1 1000
    FOR n n DUP POWMOD
      IF DUP 0 <
      THEN MODULO +
      END +
    NEXT MODULO MOD m MODSTO POP
  \>>
\>>

is rather slow and clunky due to the HP 50's quirky POWMOD function. It takes about 130 seconds to run, or about 2.5 seconds on the emulator.

I wonder why POWMOD is so slow? Does it do POW followed by MOD?

— Ian Abbott
06-28-2018, 09:40 PM
Post: #28
 John Keith Senior Member Posts: 626 Joined: Dec 2013
RE: modular exponentiation?
(06-28-2018 06:07 PM)ijabbott Wrote:  I wonder why POWMOD is so slow? Does it do POW followed by MOD?

No, it works in the usual way but it is more primitive than the Prime's version, thus requiring the check for negative values. Also integer comparisons are very slow for some reason. About 20 seconds of that run time is taken by the < function.

Additionally, the Prime is almost 100× as fast as the HP 50, which accounts for most of the difference.
06-28-2018, 09:56 PM
Post: #29
 Thomas Klemm Senior Member Posts: 1,447 Joined: Dec 2013
RE: modular exponentiation?
(06-28-2018 09:40 PM)John Keith Wrote:  Also integer comparisons are very slow for some reason. About 20 seconds of that run time is taken by the < function.

You could probably just ditch the following:
Code:
IF DUP 0 < THEN MODULO + END

I have no HP 50g thus I can't verify my assumption.
But the final MODULO MOD should take care of possible negative values.
06-29-2018, 12:47 AM
Post: #30
 John Keith Senior Member Posts: 626 Joined: Dec 2013
RE: modular exponentiation?
(06-28-2018 09:56 PM)Thomas Klemm Wrote:  You could probably just ditch the following:
Code:
IF DUP 0 < THEN MODULO + END

I have no HP 50g thus I can't verify my assumption.
But the final MODULO MOD should take care of possible negative values.

You're right, it returns the correct value in 93.4 seconds without the test clause.
07-08-2018, 09:50 PM
Post: #31
 Bill Duncan Member Posts: 81 Joined: Jan 2016
RE: modular exponentiation?
Has anyone written a POWMOD-like function for the HP-48? Just wondering before I go off and try it myself.
07-09-2018, 02:32 AM
Post: #32
 Joe Horn Senior Member Posts: 1,687 Joined: Dec 2013
RE: modular exponentiation?
(07-08-2018 09:50 PM)Bill Duncan Wrote:  Has anyone written a POWMOD -like function for the HP-48? Just wondering before I go off and try it myself..

There's a "Fast Powering algorithm for exponentiation (mod n)" program called PWR in Anthony Shaw's "MODULO" package for the 48: https://www.hpcalc.org/details/6015

<0|ɸ|0>
-Joe-
07-09-2018, 05:18 AM
Post: #33
 Thomas Klemm Senior Member Posts: 1,447 Joined: Dec 2013
RE: modular exponentiation?
(07-08-2018 09:50 PM)Bill Duncan Wrote:  Has anyone written a POWMOD -like function for the HP-48?

Cf. Modular Arithmetic for the HP-48

Since PLUS is only used by TIMES it can be inlined.
And then you can keep TIMES in a local variable leading to:
Code:
@POWER ( a b n -- a ^ b % n )
« @TIMES ( a b n -- a * b % n )
  « 1000000 → a b n k
    « a k / IP a k MOD
      b k / IP b k MOD
      → x y u v
      « x u * k SQ *
        x v * k *
        y u * k *
        y v *
        4 →LIST
        n MOD
        @PLUS ( a b -- a + b (mod n) )
        « → a b
          « a b +
            IF n OVER ≤
            THEN DROP a n - b +
            END
          »
        »
        STREAM
      »
    »
  »
  → n TIMES
  « 1 ROT ROT
    WHILE DUP
    REPEAT → r a b
      « IF b 2 MOD
        THEN a r n TIMES EVAL @ r → a * r (mod n)
        ELSE r
        END
        a DUP n TIMES EVAL @ a → a ^ 2 (mod n)
        b 2 / IP
      »
    END
    DROP2
  »
»

(07-09-2018 02:32 AM)Joe Horn Wrote:  There's a "Fast Powering algorithm f exponentiation (mod n)" program called PWR in Anthony Shaw's "MODULO" package for the 48: https://www.hpcalc.org/details/6015

Quote:MODPND
Returns a (mod n). This function assumes that a and n have been entered using the # symbol, which allows for large integers (up to about 10^18). By using MODPND in PRODN, numbers whose product do not exceed 10^18 can be multiplied (mod n). If these programs were implemented in machine language, larger integer arithmetic could be made feasible. For me, small numbers have plenty of interesting properties, so I'm not so concerned about this.

#9876543210987654
#1234567890123456 ---> #90000006

Thus I assume that it will work only up to 10^9.

Quote:Just wondering before I go off and try it myself.

Just go for it! And post your result.

Cheers
Thomas
07-09-2018, 09:55 AM
Post: #34
 Gerald H Senior Member Posts: 1,452 Joined: May 2014
RE: modular exponentiation?
Here's a POWMOD programme for the 38G:

07-09-2018, 01:37 PM
Post: #35
 Thomas Klemm Senior Member Posts: 1,447 Joined: Dec 2013
RE: modular exponentiation?
(03-15-2015 08:22 AM)Gerald H Wrote:  The programme MMOD finds

X * H MOD M.

MMOD

X MOD 1000000►I:
X-Ans►X:
H MOD 1000000►J:
H-Ans:
((X*Ans MOD M+((X*J MOD M+I*Ans MOD -M)
MOD -M))MOD -M+I*J )MOD M►X:

Stealing this idea I reduced the TIMES program:
Code:
@POWER ( a b n -- a ^ b % n )
« @TIMES ( a b n -- a * b % n )
  « 1000000 → a b n k
    « a k MOD a OVER -
      b k MOD b OVER -
      → y x v u
      « x u * n NEG MOD
        x v * n MOD
        + n NEG MOD
        y u * n MOD
        + n NEG MOD
        y v *
        + n MOD
      »
    »
  »
  → n TIMES
  « 1 ROT ROT
    WHILE DUP
    REPEAT → r a b
      « IF b 2 MOD
        THEN a r n TIMES EVAL @ r → a * r (mod n)
        ELSE r
        END
        a DUP n TIMES EVAL @ a → a ^ 2 (mod n)
        b 2 / IP
      »
    END
    DROP2
  »
»
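The splitting trick behind TIMES can be sketched in Python (my illustration, with the made-up name `mulmod`; Python's arbitrary-precision integers don't actually need it, but calculators with 12-digit floats do): split each operand at k = 10^6 so every partial product stays around 10^12, and reduce mod n between the partial sums.

```python
def mulmod(a, b, n, k=10**6):
    """a * b % n via partial products that each stay small,
    mirroring the reduced RPL TIMES program (illustrative only)."""
    x, y = divmod(a, k)        # a = x*k + y
    u, v = divmod(b, k)        # b = u*k + v
    # a*b = x*u*k^2 + (x*v + y*u)*k + y*v, reduced mod n term by term
    t = x * u % n * (k * k % n) % n
    t = (t + (x * v + y * u) % n * k) % n
    return (t + y * v) % n
```

Each partial product is reduced before being combined, which is the same reason the RPL version interleaves MOD with the additions.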

Thanks to Gerald and kind regards
Thomas
10-26-2018, 06:08 PM
Post: #36
 wynen Junior Member Posts: 12 Joined: Sep 2014
RE: modular exponentiation?
I just want to mention, that my favorite HP-16C is also able to solve the original problem.

With the modular exponentiation function, which I presented in this thread http://www.hpmuseum.org/forum/thread-11674.html in the "General Software Library", it is possible to compute the last (up to) 18 digits of $$\Bigg(\sum_{n=1}^{1000} n^{n} \bmod 10^{10}\Bigg) \bmod 10^{10}$$ very easily.

There is only one small problem left: The computation lasts for about 8 hours.

Best regards
Hartmut
10-26-2018, 07:37 PM
Post: #37
 rprosperi Senior Member Posts: 4,473 Joined: Dec 2013
RE: modular exponentiation?
(10-26-2018 06:08 PM)wynen Wrote:  There is only one small problem left: The computation lasts for about 8 hours.

Wow. I'd hate to see a big problem....

--Bob Prosperi
10-27-2018, 12:24 AM
Post: #38
 Albert Chan Senior Member Posts: 1,247 Joined: Jul 2018
RE: modular exponentiation?
Here is a poor man's version, using old luajit 1.18
No 64-bits integer (only 53-bits float), no built-in powmod ...
PHP Code:
function powmod1e10(x,y)            -- x^y % 1e10
    if y == 1 then return x end     -- assumed 0 <= x < 1e10
    local p = math.floor(y/2)
    local t = powmod1e10(x, p)
    local b = t % 1e5               -- last 5 digits
    t = (2*t - b) * b % 1e10        -- t*t % 1e10
    if y == p+p then return t end
    if x <= 9e5 then return t * x % 1e10 end
    b = x % 1e5                     -- odd y, big x
    return ((x-b) * (t%1e5) + b*t) % 1e10
end
PHP Code:
function modsum(n)                  -- sum(i^i, i=1..n) % 1e10
    local t = 0
    for i = 1,n do t = (t + powmod1e10(i, i)) % 1e10 end
    return t
end

Some benchmark from my 20 years old Pentium 3:

modsum(1e3) ==> 9110846700 (0.00 seconds)
modsum(1e4) ==> 6237204500 (0.04 seconds)
modsum(1e5) ==> 3031782500 (0.57 seconds)
modsum(1e6) ==> 4077562500 (7.10 seconds)
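The squaring line t = (2*t - b) * b % 1e10 relies on the identity t² ≡ (2t − b)·b (mod 10¹⁰) for b = t mod 10⁵: writing t = h + b with h a multiple of 10⁵ gives t² = h² + 2hb + b², and h² is a multiple of 10¹⁰, so only 2hb + b² = (2t − b)·b survives the reduction. A quick sanity check of the identity (my sketch, not part of the post):

```python
# Verify t*t % 1e10 == (2*t - b)*b % 1e10 for b = t % 1e5,
# including the boundary cases t = 0 and t = 1e10 - 1.
for t in (0, 99999, 100000, 9876543210, 9999999999):
    b = t % 10**5
    assert t * t % 10**10 == (2 * t - b) * b % 10**10
```

This keeps every intermediate below about 2·10¹⁵, inside the 53-bit float range the post mentions.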
10-27-2018, 05:46 AM (This post was last modified: 10-27-2018 05:51 AM by Ángel Martin.)
Post: #39
 Ángel Martin Senior Member Posts: 1,184 Joined: Dec 2013
RE: modular exponentiation?
(06-25-2018 03:43 PM)Thomas Okken Wrote:
(06-25-2018 02:49 PM)Bill Duncan Wrote:  When did this forum get so serious?

AFAIK, it's a place that we can share ideas, pass on a little knowledge, learn new things and most of all, have some fun!

[...]

So seriously, let's get back to having fun with the friendly exchange of ideas in the forum?

I honestly have no clue what you are complaining about. Sure, sometimes discussions get heated and even name-calling may happen... but I see no trace of anything like that in this thread. Where do you think we went wrong?

I for one understand Bill's sentiments. More often than not there's a certain sword of Damocles hanging overhead, always ready to be let loose. So I too claim the right to goof up.

Modular exponentiation, of all things, is a tricky one to grasp - more so if you had no exposure to modular math. Thank you Thomas for your informative responses and clarifications, I'm sure they helped many folks.

Cheers,
ÁM
10-27-2018, 05:55 AM (This post was last modified: 10-27-2018 05:56 AM by Ángel Martin.)
Post: #40
 Ángel Martin Senior Member Posts: 1,184 Joined: Dec 2013
RE: modular exponentiation?
(10-26-2018 06:08 PM)wynen Wrote:  I just want to mention, that my favorite HP-16C is also able to solve the original problem.

There is only one small problem left: The computation lasts for about 8 hours.

This piqued my curiosity: I'm going to try running your program on V41 Turbo mode, using the 16C Simulator Module... will keep you all posted.