New Approach for Random Number Generation???
New Approach for Random Number Generation??? - Namir - 03-21-2019 12:06 PM

Hi All!

I thought of a new class of random numbers! Pick two different approximations of the same function, call them f1(x) and f2(x). Perform the following:

Code:
Using a seed of X in [0,1]:
  Diff = |f1(X) - f2(X)|
  L = int(log10(Diff))
  X = frac(Diff / 10^(L-4))

Functions f1(x) and f2(x) should NOT return extremely small or big values for arguments in [0, 1]. The last calculation step above extracts numerical noise that is suitable for use as random values.

Here is sample code in Excel VBA using two approximations for the Normal CDF:

Code:
Option Explicit

The resulting numbers have a mean close to 0.5 and a standard deviation close to 0.28. You can use simpler approximations for the Normal or other functions. Ultimately, your choices for f1(x) and f2(x) are HUGE!!!

Namir

RE: New Approach for Random Number Generation??? - ttw - 03-21-2019 12:47 PM

I think (though it needs some analysis) that numeric noise is rather predictable. Given any two input floating-point numbers, the roundoff error is easily obtainable. Knuth has a pretty good analysis in Volume 2.

RE: New Approach for Random Number Generation??? - Albert Chan - 03-21-2019 01:53 PM

(03-21-2019 12:06 PM)Namir Wrote: your choices for f1(x) and f2(x) are HUGE!!!

Letting the user pick f1 and f2 without any statistical analysis is probably a bad idea. With the above setup, it is conceivable that for some seeds the period is low, possibly even 1.

Also, I think you mean L = floor(log10(Diff)), x = FRAC(Diff / 10^(L-4)). In other words, replacing the first 5 significant digits of Diff with a decimal point.

RE: New Approach for Random Number Generation??? - Namir - 03-21-2019 03:14 PM

(03-21-2019 01:53 PM)Albert Chan Wrote: (03-21-2019 12:06 PM)Namir Wrote: your choices for f1(x) and f2(x) are HUGE!!!

Selecting the functions f1 and f2 (they should be approximations to the same target function) should of course be done along with testing them. The selection is a wide-open field, and it can be a proverbial minefield if the choices are not made wisely and tested. In other words, with freedom comes responsibility.

In calculating L, I used two separate statements for the sake of clarity.

RE: New Approach for Random Number Generation??? - Albert Chan - 03-21-2019 07:17 PM

With seed = 0.123, I created a million "random" bytes = int(256*(x = rand34(x))). Testing the bytes with ent.exe, the sequence failed the Chi-square test.

Quote:
If the percentage is greater than 99% or less than 1%, the sequence is almost certainly not random.

Entropy = 7.999695 bits per byte.
Optimum compression would reduce the size of this 1000000 byte file by 0 percent.
Chi square distribution for 1000000 samples is 422.05, and randomly would exceed this value less than 0.01 percent of the times.
Arithmetic mean value of data bytes is 127.3923 (127.5 = random).
Monte Carlo value for Pi is 3.140892564 (error 0.02 percent).
Serial correlation coefficient is 0.000456 (totally uncorrelated = 0.0).

BTW, rand34() *crashed* with a seed of 0.0, due to a log10() domain error.

RE: New Approach for Random Number Generation??? - Namir - 03-21-2019 07:51 PM

Here is another example, this time using two calculator-oriented PRNGs! This is somewhat ironic :-)

Code:
Option Explicit

Again, the idea is to get the numerical noise out of the low decimal places.

Namir
Heretic Math Hobbyist

RE: New Approach for Random Number Generation??? - Namir - 03-21-2019 11:52 PM

(03-21-2019 07:17 PM)Albert Chan Wrote: With seed = 0.123, I created a million "random" bytes = int(256*(x = rand34(x)))

I updated the SUB go in the first message to handle zero random numbers.
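For concreteness, here is a minimal VBA sketch of the generator described in this thread, using Albert Chan's corrected update rule x = frac(Diff / 10^(floor(log10(Diff)) - 4)). As a worked example of that rule: if Diff = 0.000123456789, then L = -4, Diff / 10^(L-4) = 12345.6789, and the fractional part 0.6789 is what survives after the first five significant digits are dropped. The two Normal CDF approximations chosen here (the Abramowitz & Stegun 26.2.17 polynomial and a logistic fit) and the names NormCDF1, NormCDF2, and NextRand are illustrative assumptions, not Namir's original listing:

Code:
Option Explicit

' Abramowitz & Stegun 26.2.17 polynomial approximation to the Normal CDF (x >= 0)
Private Function NormCDF1(ByVal x As Double) As Double
    Dim t As Double, poly As Double
    t = 1# / (1# + 0.2316419 * x)
    poly = t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 + _
           t * (-1.821255978 + t * 1.330274429))))
    NormCDF1 = 1# - Exp(-x * x / 2#) / Sqr(2# * 3.14159265358979) * poly
End Function

' Logistic approximation to the Normal CDF
Private Function NormCDF2(ByVal x As Double) As Double
    NormCDF2 = 1# / (1# + Exp(-1.702 * x))
End Function

' One step of the generator: drop the first 5 significant digits of
' |f1(x) - f2(x)| and return the remaining noise as the next value.
Public Function NextRand(ByVal x As Double) As Double
    Dim Diff As Double, L As Double, scaled As Double
    Diff = Abs(NormCDF1(x) - NormCDF2(x))
    If Diff <= 0# Then Diff = 0.123456789   ' guard: avoid the log10(0) domain error
    L = Int(Log(Diff) / Log(10#))           ' VBA's Int() floors, so this is floor(log10(Diff))
    scaled = Diff / 10# ^ (L - 4)
    NextRand = scaled - Int(scaled)         ' fractional part
End Function

Iterating x = NextRand(x) from a seed in (0, 1) yields the stream; the guard against Diff = 0 sidesteps the log10() domain error Albert Chan reported, and, as his ent.exe results show, any such stream still needs statistical testing before use.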
RE: New Approach for Random Number Generation??? - ttw - 03-23-2019 02:34 AM

You could sample the last bit or byte of two separate computations and combine them by adding or differencing mod 2 or mod 256. These should come from floating-point computations where some bits get discarded. There's no reason to use the same computation. I don't know how good it would be, but one could pick off the last few bits of some of the numbers being used in a matrix computation (which uses lots of floating point), then add them mod 2^k with k bits. The last few bits should be almost independent, and adding several of them smooths out the distribution.

Note that using the most significant bits or bytes doesn't work because of Benford's law (discovered by Simon Newcomb).
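A rough VBA illustration of ttw's suggestion follows; it is a sketch under assumptions, with the two source computations (a sine and an exponential of a counter) and the 40-bit shift picked arbitrarily rather than taken from any tested design:

Code:
Option Explicit

' Pull one low-order byte out of the fractional part of v by shifting
' its trailing bits up past the leading (Benford-biased) digits.
Private Function LowByte(ByVal v As Double) As Long
    Dim w As Double
    w = Int((v - Int(v)) * 2# ^ 40)           ' 40 low-order fraction bits, as an integer
    LowByte = CLng(w - Int(w / 256#) * 256#)  ' reduce mod 256 without Long overflow
End Function

' Combine the low bytes of two separate computations by adding mod 256.
Public Function NoiseByte(ByVal n As Long) As Long
    Dim a As Long, b As Long
    a = LowByte(Sin(n * 0.987654321))         ' computation 1 (arbitrary choice)
    b = LowByte(Exp(-1# / (n + 2)))           ' computation 2 (arbitrary choice)
    NoiseByte = (a + b) Mod 256
End Function

Keeping only the low-order bits is consistent with the Benford's-law caveat: the leading digits of floating-point results are biased toward small values, while the trailing bits are much closer to uniform.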