Pseudo and Quasi Random Generators
04-15-2018, 05:59 AM
Post: #1
Pseudo and Quasi Random Generators
The following is a description of an idea for pseudo-random and quasi-random sequence generators that are simple enough to use on a calculator. These sequences can be generated quickly in a few steps. The math behind them is at least as good (I think) as that of other methods.

Quasi-random and pseudo-random generators for the HP50g based on multiples of irrational numbers.

The term "pseudo-random" applies to numbers that (are intended to) mimic independent selections from the interval (0,1) or [0,1) or (0,1] or [0,1]. Treatment of the endpoints does not affect the statistics, but it can matter when programming, as choosing either 0 or 1 can sometimes lead to problems. The term "quasi-random" applies to selections from the unit interval (as above) whose distribution differs from the uniform distribution as little as possible. In the quasi-random case, the selections are generally correlated.

Discrepancy is a measure of how well a finite point set approximates an interval. In more than one dimension, there are different possible generalizations of an interval; for mathematical simplicity, I'll just use the unit cube in k dimensions. The usual formula is to compute g(y) = A(y)/N - y, with A(y) being the number of points from 0 to y. Other choices are possible, but they all yield similar results. One can take the maximum of Abs(g(y)) over the unit interval, or the root mean square of g (the square root of the integral of g(y)^2), or the average of Abs(g), or the like. From the mathematical theory of uniform distribution, the expected discrepancy of N "randomly" distributed points in a k-dimensional unit cube is proportional to 1/Sqrt(N). One can do better. The best one can do in k dimensions is discrepancy proportional to Sqrt(Log(N)^(k-1))/N.
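As an illustration (not from the original post), the one-dimensional star discrepancy Max(Abs(g(y))) can be computed from the sorted sample, since the extremes of A(y)/N - y occur at the sample points themselves. A minimal Python sketch, applied here to fractional parts of multiples of Sqrt(2):

```python
import math

def star_discrepancy_1d(points):
    """Star discrepancy of points in [0,1): max over y of |A(y)/N - y|,
    where A(y) counts points <= y. The extremes occur at the sorted
    sample points, so only 2N candidates need checking."""
    pts = sorted(points)
    n = len(pts)
    d = 0.0
    for i, x in enumerate(pts):
        # just at/after the i-th point A(y)/N = (i+1)/n; just before, i/n
        d = max(d, (i + 1) / n - x, x - i / n)
    return d

n = 1000
weyl = [(math.sqrt(2) * m) % 1 for m in range(1, n + 1)]
d = star_discrepancy_1d(weyl)
print(d)  # well below the ~1/sqrt(N) expected of random points
```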

The reason one wants small discrepancy is the Koksma–Hlawka inequality, which states that the error of integrating a function with a set of points is bounded by the discrepancy of the point set multiplied by the variation of the function. (There are generalizations.) The point is that the error estimate of an average (integral) can be written as the product of a term dependent only on the function and a term dependent only on the points used to sample that function. Low-discrepancy point sets are thus useful.

Now to put a bit of the above to work. There is an important sequence in number theory (and, by the above, numerical analysis) consisting of the fractional parts of multiples of an irrational. X(m)=Frac(Sqrt(2)*m) is a simple example. This sequence goes under the names "Kronecker Sequence", "Weyl Sequence", or "Richtmyer Sequence", after those who respectively proved the sequence dense in the unit cube, uniformly distributed in the unit cube, and giving an integration error of order 1/N in the unit cube. The k-dimensional version of the sequence just uses a vector of square roots of the first k primes. For various reasons, one gets slightly better results using the prime 2 and those of the form 4*m+1.
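A sketch of the k-dimensional version in Python (the prime list follows the post's suggestion of 2 and primes of the form 4*m+1; the function name is mine):

```python
import math

# Primes 2 and those of the form 4*m+1, as suggested in the text.
PRIMES = [2, 5, 13, 17, 29, 37]

def weyl_point(m, k):
    """m-th point of the k-dimensional Weyl sequence:
    Frac(m * Sqrt(p)) for the first k chosen primes."""
    return [(m * math.sqrt(p)) % 1 for p in PRIMES[:k]]

# first few 2-D points, each coordinate in [0,1)
pts = [weyl_point(m, 2) for m in range(1, 5)]
```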

An important point is that the sequences X(m)=Frac(P1*m) and Y(m)=Frac(P2*m) are statistically independent; the points (X(m), Y(m)) are uniformly distributed in the unit square. (And this generalizes to any number of dimensions.) As an aside, the definition of "statistically independent" sequences in the unit cube is analogous to that of independent random variables in probability theory. Let S1 = Sum(m=1,N)(F(X(m))), S2 = Sum(m=1,N)(G(Y(m))), and S12 = Sum(m=1,N)(F(X(m))*G(Y(m))). Then the two sequences X(m) and Y(m) are statistically independent if, in the limit as N goes to infinity, (S1/N)*(S2/N) = S12/N.
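The definition above can be checked numerically. A small Python sketch with F(x) = x and G(y) = y (so both means tend to 1/2 and the product mean to 1/4), using Sqrt(2) and Sqrt(5) as the two irrationals:

```python
import math

n = 100_000
s1 = s2 = s12 = 0.0
for m in range(1, n + 1):
    x = (m * math.sqrt(2)) % 1   # X(m) = Frac(P1*m)
    y = (m * math.sqrt(5)) % 1   # Y(m) = Frac(P2*m)
    s1 += x
    s2 += y
    s12 += x * y

# For independent sequences, (S1/N)*(S2/N) and S12/N share a limit (1/4 here).
print((s1 / n) * (s2 / n), s12 / n)
```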

Now for some computational formulas. Let P be the vector of fractional parts of the square roots of (2, 5, 13, 17, 29, 37, ...) up to k primes. Then the Weyl sequence of length N (actually a set if N is fixed) is Frac(m*P) with m running from 1 to N. This can be computed very quickly, with minimal (but not zero) round-off, by the recursion X(m)=Frac(X(m-1)+P) (where X and P are vectors and their subscripts are ignored for typographic reasons). The X sequence is usually initialized at all zeros, but any real starting vector is just as good. Thus there is a single real addition followed by a reduction mod 1. This should introduce an error of at most 1/2 ULP per step. ULP means "unit of least precision", or 1 bit base 2. Using Kahan addition can help reduce even this.
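The additive recursion can be written as a generator; a minimal Python sketch (generator name and argument names are mine):

```python
import math

PRIMES = [2, 5, 13, 17, 29, 37]

def weyl_stream(k, x0=None):
    """k-dimensional Weyl sequence via the additive recursion
    X(m) = Frac(X(m-1) + P): one real addition and one mod-1
    reduction per coordinate per step."""
    p = [math.sqrt(q) % 1 for q in PRIMES[:k]]
    x = list(x0) if x0 is not None else [0.0] * k  # any start vector works
    while True:
        x = [(xi + pi) % 1 for xi, pi in zip(x, p)]
        yield list(x)

gen = weyl_stream(2)
first = next(gen)  # equals [Frac(Sqrt(2)), Frac(Sqrt(5))] up to rounding
```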

So, what have we got? We don't have a Maxim gun (Google the reference), but we have a k-dimensional sequence of length N which fills the unit k-cube very evenly. This can also be seen as k statistically independent sequences of length N (where N can be as big as one has the money to compute). For shuffling S items, one needs k=S-1, and the first permutation uses [X(1,2), X(1,5), ...] as the numbers for selecting the element to be swapped in the Knuth permutation algorithm (using the prime as the second subscript to make it easy to see what's happening). This will generate a set of permutations which samples the set of all permutations. However, the sequence X(m)=Frac(m*Sqrt(2)) does not exhibit independence among its terms. (Any set of N points from this sequence, when sorted, has only three sizes of gaps.)
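A sketch of the shuffle described above, in Python: the m-th permutation of S items consumes the k = S-1 coordinates of the m-th Weyl point, one per swap of the Knuth (Fisher–Yates) algorithm. The function name is mine:

```python
import math

PRIMES = [2, 5, 13, 17, 29, 37, 41]

def weyl_shuffle(items, m):
    """Knuth shuffle of S items, driven by the m-th point of the
    (S-1)-dimensional Weyl sequence, one coordinate per swap."""
    s = len(items)
    u = [(m * math.sqrt(p)) % 1 for p in PRIMES[:s - 1]]
    out = list(items)
    for i in range(s - 1, 0, -1):
        j = int(u[s - 1 - i] * (i + 1))  # u < 1 guarantees 0 <= j <= i
        out[i], out[j] = out[j], out[i]
    return out

perm = weyl_shuffle(list(range(5)), 1)
```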

A small adjustment of this sequence gives one that seems a better example of what one expects a "random" sequence to look like. We take the sequence Y(m)=Frac(m^2*P). Over the years, there have been conjectures (but unfortunately few proofs) that the gaps of this sequence are Poissonian (rather than having only 3 values). However, what is in the literature is at least as good as the evidence used to justify most other random number generators. It's possible to compute this sequence quickly too. Noting that (m+1)^2 = m^2 + 2*m + 1 gives the recursion. We use an auxiliary variable (A for mnemonic reasons) and compute thusly:

Y := Frac(Y + A)
A := Frac(A + Frac(2*P))

with Y initialized to 0 and A initialized to P, so that the m-th output is Frac(m^2*P).
By storing Frac(2*P) and doing the additions in the first part separately (and the fractionalization separately), one can get by with three 1-ULP roundings. Again, Kahan addition may help.
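A Python sketch of this quadratic recursion for a single prime (the generator name is mine; the auxiliary variable A carries Frac((2m+1)*P) between steps):

```python
import math

def quad_stream(p=math.sqrt(2) % 1, y0=0.0):
    """Quadratic sequence Y(m) = Frac(m^2 * P) via the recursion
    (m+1)^2 = m^2 + 2m + 1, with auxiliary variable A = Frac((2m+1)*P)."""
    two_p = (2 * p) % 1          # Frac(2*P), stored once
    a, y = p, y0                 # A starts at P, Y at the seed
    while True:
        y = (y + a) % 1          # Y := Frac(Y + A)  -> Frac(m^2 * P)
        a = (a + two_p) % 1      # A := Frac(A + Frac(2*P))
        yield y

g = quad_stream()
vals = [next(g) for _ in range(4)]  # approx. Frac(m^2*Sqrt(2)), m = 1..4
```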

The sequences may be initialized with X(0) or Y(0) set to some "random" number other than 0. I use the highest-precision value that whatever computer I am using gets from its clock. Save this value for debugging.

The sequence Y(m) seems (numerically, and by some limit theorems) independent along the m direction (for a single prime). The sequences are clearly independent across different primes. (I call using a single prime the "downstream" direction and across primes the "crossstream" direction, assuming the English Department lets me keep the sss.) For simulating a game (or something time-dependent, where the computation assumes the number of steps is proportional to the "physics" time), the Y(m) sequences may be of use. Numerically, I get a discrepancy of about 1/Sqrt(N) for the small number of cases I've examined. This looks more like random than low-discrepancy.
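The discrepancy claim can be checked numerically; a Python sketch comparing the star discrepancy of Y(m)=Frac(m^2*Sqrt(2)) against 1/Sqrt(N) (I make no claim about the asymptotics, only the order of magnitude at this N):

```python
import math

def star_discrepancy_1d(points):
    """Star discrepancy via the sorted-sample formula."""
    pts = sorted(points)
    n = len(pts)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(pts))

n = 10_000
p = math.sqrt(2) % 1
quad = [(m * m * p) % 1 for m in range(1, n + 1)]
d = star_discrepancy_1d(quad)
print(d, 1 / math.sqrt(n))  # same order of magnitude, unlike the linear sequence
```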

I'm out of town for about 2 months. Later, I'll post some (edited and revised) programs to generate these. However, they really are simple computations.
04-16-2018, 06:20 AM
Post: #2
RE: Pseudo and Quasi Random Generators
A very interesting read, thanks.
