# HP Forums

Full Version: New Sum of Powers Log Function
Hi All,

I just published a new function on my web site. The function is SopLog, which is short for “Sum of Powers Logarithm”. The function is defined as SopLogN(S) = x, where N is an integer base (the number of integers to add), S is the sum of the terms, and x is the power to which each integer is raised. This means that:

S = sum i^x for i=1 to N

The power x is calculated as the root of the following function:

S – sum i^x for i=1 to N
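For readers who want to experiment, here is a minimal brute-force sketch (my own code, not from the article) that solves for x by bisection. It relies on the sum being strictly increasing in x, and assumes S > 1 so a root exists:

```python
def soplog(n, s, tol=1e-12):
    """Sum of Powers Logarithm: solve s = sum(i**x, i=1..n) for x (assumes s > 1)."""
    f = lambda x: s - sum(i**x for i in range(1, n + 1))   # f is decreasing in x
    lo, hi = -1.0, 1.0
    while f(lo) < 0: lo *= 2      # widen left until the sum drops below s
    while f(hi) > 0: hi *= 2      # widen right until the sum exceeds s
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```

For example, soplog(100, 250) gives about 0.245589, and soplog(100, 5050) gives 1, since 5050 = 1 + 2 + ... + 100.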

You can download the article here. The article contains listings for SopLog in Excel VBA, Matlab, Python, C++, HP-71B BASIC, Generic legacy Pocket Computer BASIC, HP-41C, HP-41CX (with and without the Advantage module), HP-67/97, and HP-15C.

The article also shows an empirical relationship between x, S, and N.

The SopLog function is good for benchmarking. I leave it to other math-oriented folks to find other uses for SopLog.

Enjoy!

Namir
Nice article, Namir! Thanks for sharing.

I don't know what a practical application of this new function might be, but meanwhile, here is my contribution for the SHARP PC-1211, inspired by the generic program you give for BASIC pocket machines.

Since it is a venerable and aged pocket computer, I added a way to get an estimation, for any user in a hurry who may not have the time or patience to wait the several minutes needed to compute the exact x value by (numerous) iterations.

Code:
```
1:Y=1:FOR I=2 TO N:Y=Y+I^Z:NEXT I:Y=S-Y:RETURN
2:INPUT "N.TERMS N=";N,"TOT SUM S=";S:RETURN
3:PAUSE "NEW SOPLOG.N.(S) = X ":PAUSE "S=SUM I^X FOR I=1 TO N":GOTO 2
4:L=LOG N,X=10^(-.314706469-.965654959*LOG L)*LN S-1.1689028*(1-EXP -1.6467435L:GOTO 6
5:H=€-3*(1+ABS X,Z=X:GOSUB 1:F=Y,Z=X+H:GOSUB 1:D=HF/(Y-F,X=X-D,R=R-1:IF RIF ABS D>TGOTO 5
6:BEEP 1:PRINT "SOPLOG";N;"(";S;E$;X:PRINT E$;X:RETURN
8:"Z"GOSUB 3:X=1,T=€-7,R=€3,E$=")=":GOSUB 5:GOTO 8
9:"A"GOSUB 3:E$=")¥":GOSUB 4:GOTO 9
```

In DEF mode, press [shift][Z] for the exact value obtained by the iterative process, or press [shift][A] for an immediate estimation.

Code:
```
>                        [mode] DEF
>                        [shift][ Z ]
NEW SOPLOG.N.(S) = X
S= SUM I^X FOR I=1 TO N
N.TERM N=_               10 [enter]
TOT SUM S=_              250 [enter]
                         <... 2'38" ...>
SOPLOG10.(250.)=1.784518 [enter]
)=1.784518859
```
Code:
```
                         [shift][ A ]
NEW SOPLOG.N.(S) = X
S= SUM I^X FOR I=1 TO N
N.TERM N=_               1€3 [enter]
TOT SUM S=_              7500 [enter]
SOPLOG1000.(7500.)¥3.356 [enter]
)¥3.358863814€-01
```

P.S.:
¥ is the shift-Y (yen) character, which I use to indicate approximate equality.
€ is the standard bold Exp key, which indicates powers of ten on this pocket computer.
(03-29-2021 04:53 PM)Namir Wrote: [ -> ]S = sum i^x for i=1 to N

We can solve for x, without actually summing N terms.

As a rough approximation, s ≈ ∫(t^x, t=1/2 .. n+1/2)
Since sum(k, k=1..n) = (n²+n)/2, a guess for x is log(s)/log(n) - 1

To get a more accurate x, we can use the Euler–Maclaurin formula, for f(t) = t^x:
In operator shorthand, we have s = Σf = (D^-1 - 1/2 + D/12 - D^3/720 + D^5/30240 - ...) f

Dropping the higher-order derivatives, we have:
Code:
```
RHS = lambda p,n: ((n+1)**(p+1)-1)/(p+1) - ((n+1)**p-1)/2 + p*((n+1)**(p-1)-1)/12
solvex = lambda n,s: findroot(lambda p: RHS(p,n)/s-1, log(s,n)-1)
roughx = lambda n,s: findroot(lambda X: ((n+.5)**X-0.5**X)/(s*X)-1, log(s,n)) - 1
```
Note: we solve RHS/LHS - 1 = 0, because findroot likes numbers in the "normal" range.
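As a quick sanity check (my own, plain Python), the truncated Euler–Maclaurin RHS above reproduces Faulhaber's sums for p = 1, 2, 3, because the first omitted correction term involves differences of f''', which vanish when f''' is zero or constant:

```python
# Same RHS as above: integral + trapezoid correction + first-derivative term
RHS = lambda p, n: ((n+1)**(p+1)-1)/(p+1) - ((n+1)**p-1)/2 + p*((n+1)**(p-1)-1)/12

n = 100
for p in (1, 2, 3):
    direct = sum(k**p for k in range(1, n + 1))
    print(p, direct, RHS(p, n))   # agree, up to float rounding
```

For p ≥ 4 the dropped -D^3/720 term starts to contribute a small error.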

For comparison, I copied SopLog.pdf Table 1, for the solved (s,x):

>>> from mpmath import *
>>> n=100
>>> SX = (100,0),(150,0.110121),(250,0.245589),(500,0.424944),(750,0.527995)
>>> SX += (1000,0.600423),(1100,0.624305),(1200,0.646061),(1300,0.666037)
>>>
>>> for s,x in SX: print '%g\t%f\t%f\t%f' % (s,x, solvex(n,s), roughx(n,s))
...
100     0.000000    0.000000    0.000000
150     0.110121    0.110121    0.110134
250     0.245589    0.245589    0.245605
500     0.424944    0.424944    0.424956
750     0.527995    0.527995    0.528004
1000    0.600423    0.600423    0.600430
1100    0.624305    0.624305    0.624311
1200    0.646061    0.646061    0.646067
1300    0.666037    0.666037    0.666042
(03-29-2021 10:47 PM)Albert Chan Wrote: [ -> ]Since sum(k, k=1..n) = (n²+n)/2, a guess for x is log(s)/log(n) - 1

If we have LambertW, we can get a much better guess, without a solver.

s ≈ ∫(t^x, t=1/2 .. n+1/2) = (n+1/2)^(x+1)/(x+1) - (1/2)^(x+1)/(x+1)

If we drop the last term, and let N=n+1/2, X=x+1, we have

s = N^X / X
ln(s) = X*ln(N) - ln(X)

We want to match W(a) = z      → a = z * e^z      → ln(a) = z + ln(z)

ln(1/s) = X*ln(1/N) + ln(X)
ln(1/s*ln(1/N)) = X*ln(1/N) + ln(X*ln(1/N))

→ X = W(ln(1/N)/s) / ln(1/N) = -W(-ln(N)/s) / ln(N)

Turns out, the LambertW -1 branch is the one we need.
Let's try this out, for s = Σ(k, k=1 .. n)

>>> guessx = lambda n,s: -lambertw(-ln(n+.5)/s,-1) / log(n+.5) - 1
>>> for n in range(1000,5001,1000):
...         s = n*(n+1)/2
...         print n, log(s)/log(n)-1, guessx(n,s)
...
1000    0.899801360605113    0.999999961026799
2000    0.908873017486397    0.99999999120301
3000    0.913467137635656    0.999999996300753
4000    0.916458516421875    0.999999997995799
5000    0.918641366353455    0.999999998752945
Thanks Albert!

Is there any implementation of the LambertW function (the version with two parameters) on vintage calculators like the HP-41C, HP-67, and HP-15C?

Namir

The WP 34S has an implementation of Lambert's W function -- it's in XROM so it's a keystroke program that ought to convert to other devices without too many issues.

Pauli
Ángel's SandMath has Lambert W in MCODE.
The French Silicium forum has various implementations of the Lambert W function for the HP Prime, SHARP PC-1211 and HP-15C.

For example:
[attachment=9287]
Code:
```
001- LBL A  CF 8  STO 0  1  +  LN  1  ENTER  e^x  +  LN  ÷  GTO 0
014- LBL B  CF 8  STO 0  1  e^x  ×  2  ENTER  CHS  ENTER  RCL 0  x²  LN  -  √x  ×  -  GTO 0
032- LBL C  STO 0  re↔im  STO 1  re↔im  LN  ENTER  ENTER  LN  -  x↔y  LSTx  ÷  1/x  +
047- LBL 0
048-     CF 0  PSE  ENTER  ENTER  e^x  ENTER  R↑  ×  +  LSTx  FS? 8  GSB 1
060-     RCL-0  ENTER  ABS  RND  x>0?  SF 0
075-     R↓  x↔y  ÷  -  LSTx  ABS  RND  x=0?  CF 0
075-     R↓  FS? 0  GTO 0
078- RTN
079- LBL 1  re↔im  RCL-1  re↔im  RTN
```

Set desired precision with FIX 1~9.
Enter argument x (or complex z) in first stack level X: and press:
f- A for real value of \( W_0(x) \) in main positive branch
f- B for real value of \(W_{-1}(x) \) in negative branch
f- C for complex value of \( W(z) \)
Hi, Namir

The algorithm for the LambertW -1 branch is the same as for W0, just with a different guess; see here

y = W-1(a) is real only for a in [-1/e, 0)

A simple way is to just iterate for it: start with y = log(-a), then iterate y = log(a/y) ...
If converged, y = log(a/y)   ⇒ e^y = a/y   ⇒ y*e^y = a

>>> a = mpf(-0.005) # example
>>> y = log(-a)
>>> for i in range(5): y = log(a/y); print y
...
-6.9657066586895
-7.23931642716726
-7.27784415235106
-7.28315205198181
-7.28388110922369

We could speed up convergence with Newton's method

f(y) = y - log(a/y)
f'(y) = 1 + 1/y

y = y - (y-log(a/y))/(1+1/y) = y * (1+log(a/y))/(1+y)

>>> y = log(-a) # same guess
>>> for i in range(5): y *= (1+log(a/y))/(1+y); print y
...
-7.3536234060935
-7.28404934413218
-7.28399713512886
-7.28399713509908
-7.28399713509908

>>> y*exp(y) # confirm y = W-1(a)
-0.005
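For porting to calculators without a built-in W, the Newton iteration above can be packaged as a small standalone function (my sketch, plain Python; valid for -1/e < a < 0, and convergence degrades as a approaches -1/e):

```python
import math

def lambertw_m1(a, tol=1e-15, maxiter=50):
    """Real W-1 branch: solve y*e^y = a for the y <= -1 root, -1/e < a < 0."""
    y = math.log(-a)                 # initial guess, good as a -> 0-
    for _ in range(maxiter):
        y_new = y * (1 + math.log(a/y)) / (1 + y)   # Newton step on y - log(a/y)
        if abs(y_new - y) <= tol * abs(y_new):
            return y_new
        y = y_new
    return y
```

For the example above, lambertw_m1(-0.005) gives -7.28399713509908, matching the session.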
Thank you all for your feedback.

My preliminary testing using lambertw in Matlab and Python shows some interesting features. The decimal part is close to the results of SopLog, but one has to manipulate the integer part. I will share more details after I look at the newer messages.

Namir
(03-30-2021 01:44 AM)Albert Chan Wrote: [ -> ]If we have LambertW, we can get a much better guess, without solver.

s ≈ ∫(t^x, t=1/2 .. n+1/2) = (n+1/2)^(x+1)/(x+1) - (1/2)^(x+1)/(x+1)

If we drop the last term, and let N=n+1/2, X=x+1, we have ...

We could estimate the dropped term, and add it back.

Code:
```
def guess2(n,s):
    t = -log(n+.5)
    p = lambertw(t/s,-1) / t
    s += 0.5**p/p
    return lambertw(t/s,-1) / t - 1
```

The estimated x above has error similar to roughx(n,s), but runs about twice as fast (on my machine).

(03-30-2021 01:44 AM)Albert Chan Wrote: [ -> ]Turns out, LambertW -1 branch is the one we need.

To illustrate the reason, by example:

>>> n, s = 100, 1000
>>> t = -log(n+.5)
>>> p = lambertw(t/s,-1) / t             # W-1 resulting p
>>> p, (n+.5)**p/p, .5**p/p
(1.60037803179321, 1000.0, 0.2060704060225)
>>>
>>> p = lambertw(t/s, 0) / t             # W0 resulting p
>>> p, (n+.5)**p/p, .5**p/p
(0.00100464230172027, 1000.0, 994.686243766351)

Both solve p for s = (n+.5)**p/p.
But the assumption that the last term can be dropped is false for W0.
An even faster version eliminates the second LambertW call, using a Taylor series approximation.

W(x+h) ≈ W(x) + W'(x) * h

w*e^w = x
(w+1)*e^w dw = dx
dw/dx = e^-w / (w+1) = w/(w*x+x)

Code:
```
def guess3(n,s):
    t = -log(n+.5)
    x = t/s
    w = lambertw(x,-1)
    p = w/t
    h = t/(s+0.5**p/p) - x
    return p-1 + p/(w*x+x) * h
```

To speed up the search, solvex uses a log scale.

With the secant method and a tolerance of 1e-5, we expect 5*1.6 = 8 "correct" decimals.
("correct" from the solving-the-root-of-the-formula point of view, not the actual x)

Code:
```
RHS = lambda p,n: ((n+1)**(p+1)-1)/(p+1) - ((n+1)**p-1)/2 + p*((n+1)**(p-1)-1)/12
solvex = lambda n,s: findroot(lambda p: log(RHS(p,n)/s), log(s,n)-1, tol=1e-5)
```
Albert,

Your guess3() function provides good approximation to the SopLog function.

Here is the SopLog implementation in Matlab:

Code:
```
function x = soplog(N, S)
%SOPLOG implements the SopLog function.
  toler = 1e-8; % default tolerance
  x = 1; % initial guess for x
  maxiter = 1000;
  for iter = 1:maxiter
    h = 0.001 * (1 + abs(x));
    f0 = sumFx(N, S, x);
    fp = sumFx(N, S, x + h);
    fm = sumFx(N, S, x - h);
    diff = 2 * h * f0 / (fp - fm);
    x = x - diff;
    if abs(diff) < toler, break; end
  end
end

function y = sumFx(N, S, x)
  sumx = 1;
  for i = 2:N
    sumx = sumx + i^x;
  end
  y = S - sumx;
end
```

Here is my copy of your Python guess3() plus some test calls:

Code:
```
from mpmath import *
import numpy as np

def guess3(n,s):
    t = -np.log(n+.5)
    x = t/s
    w = lambertw(x,-1)
    p = w/t
    h = t/(s+0.5**p/p) - x
    return p-1 + p/(w*x+x) * h

print(guess3(100,1500))
print(guess3(1000,5000))
print(guess3(1000,500))
```

Here is my Matlab version of your guess3() function which I call soplogw3:
Code:
```
function x = soplogw3(n, s)
%SOPLOGW3 LambertW-based approximation of the SopLog function.
  t = -log(n+.5);
  x = t/s;
  w = lambertw(-1, x);
  p = w/t;
  h = t/(s+0.5^p/p) - x;
  x = real(p-1 + p/(w*x+x) * h);
end
```

I do believe we must use the -1 branch of the Lambert function. W0(x) does not give the same answers.

Here are three test values:

Code:
```
soplog(100,1500)         -> 0.701661431056288
print(guess3(100,1500))  -> 0.70166598312237
soplogw3(200,2500)       -> 0.701665983122370
soplog(1000,5000)        -> 0.267187615565215
print(guess3(1000,5000)) -> 0.267188163707435
soplogw3(1000,5000)      -> 0.267188163707435
soplog(1000,500)         -> -0.118482712209941
soplogw3(1000,500)       -> -0.118485900562627
print(guess3(1000,500))  -> -0.118485900562627
```

The soplog() is my original Matlab implementation, guess3() is your latest implementation, and soplogw3() is the Matlab equivalent of your guess3(). The results show that using the Lambert W function yields results that match to 5 or 6 decimals. Pretty good!
Albert,

Going back to an earlier post in this thread. Here is my Python copy of your code that uses the findroot() function:

Code:
```
from mpmath import *
import numpy as np

def roughx(n,s):
    return findroot(lambda X: ((n + .5) ** X - 0.5 ** X) / (s * X) - 1, np.log(s)/np.log(n)) - 1

def RHS(p,n):
    return ((n+1)**(p+1)-1)/(p+1) - ((n+1)**p-1)/2 + p*((n+1)**(p-1)-1)/12

def solvex(n,s):
    return findroot(lambda p: RHS(p,n)/s-1, np.log(s)/np.log(n)-1)

n=100
SX = (100,0),(150,0.110121),(250,0.245589),(500,0.424944),(750,0.527995)
SX += (1000,0.600423),(1100,0.624305),(1200,0.646061),(1300,0.666037)

for s,x in SX: print('%g\t%f\t%f\t%f' % (s,x, solvex(n,s), roughx(n,s)))
```

The output is:

Code:
```
100     0.000000    0.000000    0.000000
150     0.110121    0.110121    0.110134
250     0.245589    0.245589    0.245605
500     0.424944    0.424944    0.424956
750     0.527995    0.527995    0.528004
1000    0.600423    0.600423    0.600430
1100    0.624305    0.624305    0.624311
1200    0.646061    0.646061    0.646067
1300    0.666037    0.666037    0.666042
```

Here is a Matlab script that implements the Matlab version of the above three Python functions:

Code:
```
clc
close
clear

n=100; s=1500;
fprintf("SopLog(%i,%i) = %f12\n", n, s, soplog(n,s));
fprintf("Solvex(%i,%i) = %f12\n", n, s, solvex(n,s));
fprintf("Roughx(%i,%i) = %f12\n", n, s, roughx(n,s));

n=100; s=5000;
fprintf("SopLog(%i,%i) = %f12\n", n, s, soplog(n,s));
fprintf("Solvex(%i,%i) = %f12\n", n, s, solvex(n,s));
fprintf("Roughx(%i,%i) = %f12\n", n, s, roughx(n,s));

n=1000; s=5000;
fprintf("SopLog(%i,%i) = %f12\n", n, s, soplog(n,s));
fprintf("Solvex(%i,%i) = %f12\n", n, s, solvex(n,s));
fprintf("Roughx(%i,%i) = %f12\n", n, s, roughx(n,s));

function x = roughx(n,s)
  x = fsolve(@(x) ((n + .5)^x - 0.5^x) / (s * x) - 1, log(s)/log(n)) - 1;
end

function x = RHS(p,n)
  x = ((n+1)^(p+1)-1)/(p+1) - ((n+1)^p-1)/2 + p*((n+1)^(p-1)-1)/12;
end

function x = solvex(n,s)
  x = fsolve(@(p) RHS(p,n)/s-1, log(s)/log(n)-1);
end
```

The output is:

Code:
```SopLog(100,1500) = 0.70166112 Solvex(100,1500) = 0.70166112 Roughx(100,1500) = 0.70166612 SopLog(100,5000) = 0.99757912 Solvex(100,5000) = 0.99757912 Roughx(100,5000) = 0.99757912 SopLog(1000,5000) = 0.26718812 Solvex(1000,5000) = 0.26718812 Roughx(1000,5000) = 0.26718812```

The arbitrarily selected values of N and S show that Solvex() and Roughx() give the same digits. While both functions use Matlab's fsolve, they do not require loops to perform any summation. Of course, the same is true for the Python versions.

At the end of the study that I posted on my web site, I discuss some variants of the SopLog function. These variants deal with scaling the powers up/down across terms, so the power to which each integer is raised varies from term to term.

Another variation of the SopLog function is to specify the number of integers AND the integer increment between consecutive terms. And of course you can combine this variant with the scaled up/down powers.
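To make the increment variant concrete, here is a hypothetical sketch (my own naming and code, not from the article) where the base of each term advances by incr instead of 1, solved by the same bisection idea:

```python
def soplog_incr(n, s, incr, tol=1e-12):
    """Hypothetical variant: solve s = sum((1 + j*incr)**x, j=0..n-1) for x (s > 1)."""
    f = lambda x: s - sum((1 + j*incr)**x for j in range(n))   # decreasing in x
    lo, hi = -1.0, 1.0
    while f(lo) < 0: lo *= 2
    while f(hi) > 0: hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```

With incr=1 this reduces to plain SopLog; with incr=2 it sums odd bases, so soplog_incr(10, 100, 2) gives 1, since 1 + 3 + ... + 19 = 100.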

Namir
(03-31-2021 01:27 PM)Namir Wrote: [ -> ]I do beleive we must use the solution path of -1 with the Lambert function. The W0(x) does not give the same answers.
Although not a proof, I had shown the reason for this here

Here is another way. If x = 0, we have s = Σ(1, k=1..n) = n
This is when both branches of LambertW give the same value.

p = -W(-ln(N)/s) / ln(N), where N = n+1/2

With s=n ⇒ p=x+1=1, we do not need the +1/2 correction:

s = ∫(1, t=1/2 .. n+1/2) = ∫(1, t=0 .. n)

p = -W(-ln(n)/n) / ln(n) = ln(n) / ln(n) = 1            // W both branches, see identities

For n ≠ s, we have p ≠ 1, with 2 solutions for p.
The dropped term 0.5**p/p is a decreasing function, so we want the maximum p.

In other words, we pick the most negative value of W(-ln(N)/s), i.e. the -1 branch.

(03-31-2021 01:27 PM)Namir Wrote: [ -> ]Here is the SopLog implementation in Matlab ...

Code:
```
    f0 = sumFx(N, S, x);
    fp = sumFx(N, S, x + h);
    fm = sumFx(N, S, x - h);
    diff = 2 * h * f0 / (fp - fm);
    x = x - diff;
    ...
```

Without a function for the derivative, Newton's method is very inefficient.
We should consider the secant method (or an improved secant method) instead.

Also, since the curve is very close to LambertW's, we should use a log scale.

>>> n, s = 1000, 5000
>>> x = log(s,n)-1
>>> x1, y1 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x1, y1
(mpf('0.23299000144533966'), mpf('-0.20890713759105706'))

>>> x = guess3(n,s)
>>> x2, y2 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x2, y2
(mpf('0.26718816370743487'), mpf('3.3544082779438775e-6'))

>>> x = x2 - y2 * (x2-x1)/(y2-y1)
>>> x3, y3 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x3, y3
(mpf('0.26718761459859175'), mpf('-5.9153427886634265e-9'))

>>> x = x3 - y3 * (x3-x2)/(y3-y2)
>>> x4, y4 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x4, y4
(mpf('0.26718761556521503'), mpf('-2.2204460492503136e-16'))
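The refinement walk-through above can be bundled into one routine (my sketch, using mpmath; the starting points are the rough log guess and a compact form of guess3 from this thread):

```python
from mpmath import lambertw, log, fsum

def soplog_secant(n, s, tol=1e-12, maxiter=20):
    """Solve s = sum(k**x, k=1..n): LambertW-based guess, then secant on log scale."""
    y = lambda x: log(fsum(k**x for k in range(1, n + 1)) / s)
    t = -log(n + .5)
    w = lambertw(t/s, -1)
    p = w/t
    x1 = log(s)/log(n) - 1                       # rough log guess
    x2 = p - 1 - p/((w + 1)*(s*p*2**p + 1))      # LambertW-based guess (guess3)
    y1, y2 = y(x1), y(x2)
    for _ in range(maxiter):
        if y2 == y1:
            break
        x1, y1, x2 = x2, y2, x2 - y2*(x2 - x1)/(y2 - y1)   # secant step
        y2 = y(x2)
        if abs(x2 - x1) < tol * (1 + abs(x2)):
            break
    return x2
```

soplog_secant(1000, 5000) converges to 0.267187615565215, matching the last secant step above.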

---

It turns out mpmath implements the Euler–Maclaurin asymptotic expansion of sums (sumem). Summation continues as long as each additional term improves the sum by at least 1 decimal.

>>> x = solvex(n, s) # my version, without higher derivatives
>>> x, fsum(k**x for k in range(1,n+1))
(mpf('0.26718762849802313'), mpf('5000.0003957177305'))

>>> x = findroot(lambda x: log(sumem(lambda k: k**x, [1,n])/s), log(s,n)-1)
>>> x, fsum(k**x for k in range(1,n+1))
(mpf('0.26718761309749891'), mpf('4999.9999244928886'))
Revised guess3(n,s), with less code.
Code:
```
def guess3(n,s):
    t = -log(n+.5)
    w = lambertw(t/s,-1)
    p = w/t
    return p-1 - p/((w+1)*(s*p*2.**p+1))
```

Note that guess3 has a weakness: when t/s < -1/e, LambertW returns complex values.
Even if w is real, if n ≫ s > 1 we have p = x+1 ≈ 0, and the two-term Taylor series estimate is still bad.

Code:
```
from mpmath import *

RHS = lambda p,n: ((n+1)**(p+1)-1)/(p+1) - ((n+1)**p-1)/2 + p*((n+1)**(p-1)-1)/12
solvex = lambda n,s: findroot(lambda p: log(RHS(p,n)/s), log(s,n)-1, tol=1e-5)
exactx = lambda n,s: findroot(lambda x: log(fsum(k**x for k in range(1,n+1))/s), log(s,n)-1)
```

>>> n, s = 100, 13             # t/s ≈ -0.35463, slightly above -1/e ≈ -0.36788
>>> print exactx(n,s), solvex(n,s), guess3(n,s), log(s,n)-1
-0.622429308602986      -0.622506385172213      -0.544275714709962      -0.443028323846582
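One possible guard (my own addition, not from the thread) is to test the LambertW argument before committing to the -1 branch, falling back to the rough log-based guess outside its real domain:

```python
from mpmath import lambertw, log, e

def guess_safe(n, s):
    """guess3 with a real-domain guard: W-1(a) is real only for -1/e <= a < 0."""
    t = -log(n + .5)
    a = t/s
    if a <= -1/e:                     # W-1 complex (or w = -1, dividing by zero)
        return log(s)/log(n) - 1      # fall back to the rough log guess
    w = lambertw(a, -1)
    p = w/t
    return p - 1 - p/((w + 1)*(s*p*2**p + 1))
```

For n=100, s=12 (just past -1/e), this returns the log guess of about -0.4604 instead of a complex number.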

---

Thus, if we want a fully converged x, we should start with solvex.
We can use y = log(fsum(k**x for k in range(1,n+1))/s) to estimate the next x.
Note: the x correction has the opposite sign of y.

>>> n, s = 1000, 5000 # previous post example
>>> x = solvex(n,s)
>>> x1, y1 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x1, y1
(mpf('0.26718762849802312'), mpf('7.9143542807225195e-8'))

>>> x = x1 - abs(x1)*y1/2
>>> x2, y2 = x, log(fsum(k**x for k in range(1,n+1))/s)
>>> x2, y2
(mpf('0.26718761792493534'), mpf('1.4440531777112248e-8'))

>>> x = x2 - y2 * (x2-x1)/(y2-y1)
>>> x
mpf('0.26718761556521503')
Do any of your previous analyses and approximations work on the following variants of the SopLog summations?

Code:
```
def sdSum(n,s,x,scale):
  sumx = 1
  factor = 1
  for i in range(n,2,-1):
    sumx += i ** (x * factor)
    factor *= scale
  return sumx - s

def siSum(n,s,x,scale):
  sumx = 1
  factor = 1
  for i in range(2,n+1):
    sumx += i ** (x * factor)
    factor *= scale
  return sumx - s

def siSum2(n,s,x,incr):
  sumx = 1
  factor = 1
  j = 2
  for i in range(2,n):
    sumx += j ** (x * factor)
    j += incr
  return sumx - s

def siSum3(n,s,x,incr,scale):
  sumx = 1
  factor = 1
  j = 2
  for i in range(2,n+1):
    sumx += j ** (x * factor)
    j += incr
    factor *= scale
  return sumx - s
```

The "scale" can be above or below 1. The "incr" is an integer >= 1.
(04-01-2021 06:05 PM)Namir Wrote: [ -> ]Does any of your previous analysis and approximations work on the following variants of the SopLog summations?

It depends on the function, and its argument. Euler–Maclaurin might not work.

Example, on your sdSum (I think you meant range(n,1,-1), or range(2,n+1) in reverse)
Code:
```
exact = lambda x,n,scale: 1 + fsum(k**(x*scale**(n-k)) for k in xrange(2,n+1))
guess = lambda x,n,scale: 1 + sumem(lambda k: k**(x*scale**(n-k)), [2,n])
```

>>> x, n, scale = 0.5, 100000, 0.5
>>> exact(x,n,scale), guess(x,n,scale)
(100337.09650943844, 100318.790872778)

Try a big x: the guess is so wrong, it "summed" negative!

>>> x = 1.5
>>> exact(x,n,scale), guess(x,n,scale)
(31728482.871558648, -38398120.2216635)
(04-01-2021 11:05 PM)Albert Chan Wrote: [ -> ]It depends on the function, and its argument. Euler–Maclaurin might not work. ...

I am looking to solve for x given all the other parameters (n, s, scale), leading to a zero value for s - sum_of_series.