[VA] SRC #011 - April 1st, 2022 Bizarro Special
04-10-2022, 09:20 AM (This post was last modified: 04-11-2022 08:10 AM by EdS2.)
Post: #21
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
A marvellous and remarkable feat, Valentin - well done, and thank you for sharing your approach, and your other approach.

The double pseudo-random number generator is very nice indeed. I would never have guessed there'd be room to generate (and use) six random numbers.

Thanks also Albert for your subsequent workings. It's particularly nice to see your arc-SOHCAHTOA method in use, and so soon after posting.

It feels to me that uniform sampling would be just as accurate as random sampling, although as it turns out it would be more expensive in terms of machinery. Because in this presentation we're allowed first to decide how many steps to run, a uniform approach is natural. A random sampling approach has the great advantage that it can keep running and continue to make progress, without needing to complete some particular number of steps.

But it seems to me that there might be an interesting way to use uniform sampling with an unbounded count, using some sort of space-filling reordering of the numbers in the interval. Perhaps a simple bitwise operation, if implemented on a 16C or a conventional machine.

In the past I've used(*) int(x+phi) as an iterator, as a way to 'fill space' in a deterministic way. It's not uniform. And it's very much not random. But it is perhaps approximately as pseudo-random as the R-D approach seen here... and simpler conventionally, but less simple in a world where we have an R-D function!

(*) oops, I meant frac(x+phi) of course!
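
(Purely as an illustration, here's a minimal Python sketch of that frac(x+phi) iterator, the additive recurrence behind golden-ratio low-discrepancy sequences; the names and the printed check are my own, not from the original post.)

Code:
# Minimal sketch of the frac(x + phi) iterator: a deterministic,
# non-random, roughly space-filling sequence in [0, 1).
PHI = 0.6180339887498949   # fractional part of the golden ratio

def golden_sequence(n, x=0.0):
    """Yield n points of the additive recurrence x -> frac(x + PHI)."""
    for _ in range(n):
        x = (x + PHI) % 1.0
        yield x

print(["%.4f" % v for v in golden_sequence(8)])
# The points spread out over [0, 1) instead of clustering, even though
# each one is completely determined by the previous one.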
04-10-2022, 04:29 PM
Post: #22
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-10-2022 09:20 AM)EdS2 Wrote:  It feels to me that uniform sampling would be just as accurate as random sampling, although as it turns out it would be more expensive in terms of machinery.

I am not so sure random sampling is accurate at all.
This is like throwing darts to estimate pi. Try running it twice Big Grin

Uniform sampling might not need as many points, if we extrapolate the results.
Below, FNA(A,B,C) is Aitken's delta-squared process, extrapolating from 3 known points.

No tricks. Just sum all the forces between the 2 planets, divided up into N^3 and M^3 tiny cubes.
(If either N or M is even, it takes advantage of quadrant symmetry, and does just 1 corner.)

Code:
10 DEF FNA(A,B,C)=C-(C-B)^2/(C-B-(B-A))
20 INPUT "N,M ? ";N,M
30 N3=1/N @ N1=N3/2 @ N2=.5 @ IF MOD(N,2) THEN N2=1
40 M3=1/M @ M1=M3/2 @ M2=1.5-N2 @ IF MOD(M*N,2) THEN M2=1
50 T=TIME @ S=0
60 FOR X1=N1 TO 1 STEP N3 @ FOR X2=M1 TO 1 STEP M3 @ X=1+X2-X1
70 FOR Y1=N1 TO N2 STEP N3 @ FOR Y2=M1 TO M2 STEP M3 @ Y=Y2-Y1
80 FOR Z1=N1 TO N2 STEP N3 @ FOR Z2=M1 TO M2 STEP M3 @ Z=Z2-Z1
90 S=S+X/(X*X+Y*Y+Z*Z)^1.5
100 NEXT Z2 @ NEXT Z1 @ NEXT Y2 @ NEXT Y1 @ NEXT X2 @ NEXT X1
110 DISP S/(M*N)^3/(N2*M2)^2,TIME-T @ GOTO 20

>RUN
N,M ? 2,2
 .942585572032      .11
N,M ? 4,4
 .929717192068      5.66

>FNA(1, .942585572032, .929717192068)
 .925999798263

We get F to 3-digit accuracy, with 2^6/4 + 4^6/4 = 16 + 1024 = 1040 points.

With N=M, F is over-estimated (unrealistically many points with cos(θ) = 1).
The extrapolated result removes (most of) this built-in bias.

Because we are doing F(1,1), F(2,2), F(4,4), we can also do Richardson extrapolation.
Again, we get F to 3-digit accuracy (third column):

1
.942585572032      .923447429376
.929717192068      .925427732080      .925559752260
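
(For illustration, a small Python sketch that should reproduce the table above, assuming the error shrinks like h^2, h^4, ... with the grid step halved at each level, i.e. the standard Richardson/Romberg scheme; it also checks the Aitken FNA value. Names are mine.)

Code:
# Richardson extrapolation of F(1,1), F(2,2), F(4,4), assuming the error
# behaves like h^2, h^4, ... with the grid step halved at each level.
F = [1.0, 0.942585572032, 0.929717192068]     # values from the runs above

R = [[f] for f in F]
for i in range(1, len(F)):
    for j in range(1, i + 1):
        prev, up = R[i][j - 1], R[i - 1][j - 1]
        R[i].append(prev + (prev - up) / (4**j - 1))   # kill the h^(2j) error term
for row in R:
    print('  '.join('%.12f' % v for v in row))
# last entry ~ 0.925559752260, close to the exact 0.9259812605 quoted later

def aitken(a, b, c):                          # FNA from line 10 of the program above
    return c - (c - b)**2 / ((c - b) - (b - a))
print("Aitken:", "%.12f" % aitken(*F))        # ~ 0.925999798263, as above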
04-11-2022, 10:12 AM (This post was last modified: 04-11-2022 10:18 AM by J-F Garnier.)
Post: #23
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
Thanks Valentin, for the conclusion.

A nice "challenge" for the April 1st Fools' Day Smile
I enjoyed the reading, really.

It reminded me of a quote from "les Shadoks" (a French nonsense-humour animated cartoon of the late sixties that became a cult classic):
"If you have 999 chances in 1000 that the thing will fail, hurry to do the first 999 tests, because the 1000th will probably be the right one."

J-F
04-12-2022, 06:37 PM (This post was last modified: 04-12-2022 06:48 PM by J-F Garnier.)
Post: #24
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
I had to come back to Valentin's solution, because I was puzzled: how did Valentin manage to get (almost) 3 correct figures whereas I was unable to get more than one?
Our solutions are so close; we even used the same sequence ENTER SQRT * to compute the power 3/2.
Do 1312 samples make such a big difference compared to 999?
Is Valentin's random generator so much better than mine?

Then I understood. I had made a big, stupid mistake. I set the initial seed of my random generator to PI.
That was no good, because the seed must be between 0 and 1 (both excluded).
I wanted to keep PI as part of the seed because PI is a transcendental number (and what is more random than a transcendental number?), so I changed my sequence
PI STO 00 ; rnd seed
to
PI 1/X STO 00 ; rnd seed
And my program immediately worked much better, even better than Valentin's.
My program worked so much better that I was puzzled again.

And again, the light came.
Valentin's generator suffers from a big flaw: the sequence [previous seed] ->DEG FRAC, with inputs between about 0.175 and 1 (so about 82.5% of the time) gives a pseudo-random number with only 8 decimal places.
On the contrary, my sequence [previous seed] PI * FRAC always gives 9 decimal places.
So my generator provides numbers that are 10 times more dense (in the mathematical sense of the word) 82.5% of the time and so is more efficient by a factor of 8.25.

The consequence is that my (corrected) program needs far fewer samples.
A more detailed analysis revealed that the number of samples needed is reduced by a factor of only 7.25, not 8.25, because numbers ending with a 0 in the last place don't count.
So I needed just 1312 (the number Valentin estimated for his generator) divided by 7.25, i.e. just 181 samples. What a difference!

Running my very slightly modified program (1/X inserted at step 5) with 181 samples quickly gives me this answer, actually more accurate than Valentin's result:
--> 0.92616 [19182] at less than 0.0002 from the exact value 0.92598 [12605] !

Of course, my program using my version of the random number generator can't fit in a 10C.
And that's fortunate for Rd. Albizarro, otherwise he wouldn't have had time to enjoy a good dinner !

J-F

P.S. If you are tempted to take all this too seriously, just experiment with either program version (mine or Valentin's) on a fast 15C or 41C emulator, for instance, and you will soon get it.
04-12-2022, 09:46 PM (This post was last modified: 04-12-2022 09:48 PM by Albert Chan.)
Post: #25
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-12-2022 06:37 PM)J-F Garnier Wrote:  Valentin's generator suffers from a big flaw: the sequence [previous seed] ->DEG FRAC, with inputs between about 0.175 and 1 (so about 82.5% of the time) gives a pseudo-random number with only 8 decimal places.
On the contrary, my sequence [previous seed] PI * FRAC always gives 9 decimal places.

I think Valentin "losing" 2 digits (when integer part get removed) is more random, not less.
Any patch of small seed will quickly get randomized. (it get multiply by 57.29..., not 3.14...)

Also, gaining least-significant random digits means very little when we sum the forces.

Say we sum 1000 point-mass forces, and expect to get a total of around 926.

If we solve FNF(X,X,X) = 1/(3^1.5 * X^2) = 926, we get X ≈ 0.0144.
Hitting even 1 sample within this sphere already pushes the sum past 926.

The singularity makes Monte Carlo integration unsuitable.

10 DEF FNF(X,Y,Z)=X/(X*X+Y*Y+Z*Z)^1.5
20 INPUT "N ? ";N @ S=0
30 FOR I=1 TO N @ S=S+FNF(1+RND-RND,RND-RND,RND-RND) @ NEXT I
40 DISP S/N @ GOTO 20

>RUN
N ? 1000
 .924145126462
N ? 1000
 .899430246256
N ? 1000
 .919886932633
N ? 1000
 1.11304106775

If I stop at the first 1000 samples, I get F = 0.924.
But that's just lucky; the result cannot be repeated.

With RND giving 12 random digits and 4000 samples, we can barely get 1 digit.
04-13-2022, 11:24 AM (This post was last modified: 04-13-2022 02:06 PM by Werner.)
Post: #26
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-10-2022 04:29 PM)Albert Chan Wrote:  I am not so sure random sampling is accurate at all.
[..]
If I stop at the first 1000 samples, I get F = 0.924.
But that's just lucky; the result cannot be repeated.

With RND giving 12 random digits and 4000 samples, we can barely get 1 digit.
Valentin pulled an April fool's joke on us all after all.
Just try the exact same program with seed 0.5 instead of 1.
Then your result is 0.848...
Running it with seed 0.5 and 10,000 points, I get 1.02899..
100'000 points (equivalent program on Free42) -> 1.1189...
Seed 1 just happened to get very close to the correct result ;-)
ie. I got it, J-F ;-)

Cheers, Werner

41CV†,42S,48GX,49G,DM42,DM41X,17BII,15CE,DM15L,12C,16CE
04-13-2022, 03:05 PM (This post was last modified: 04-13-2022 03:13 PM by J-F Garnier.)
Post: #27
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-13-2022 11:24 AM)Werner Wrote:  Seed 1 just happened to get very close to the correct result ;-)
ie. I got it, J-F ;-)

Let's introduce the Valentin-Bizarro conjecture: for any seed, there is at least one value for the number of samples that makes the sum as close as desired to the exact value. :-)

J-F
04-13-2022, 10:10 PM
Post: #28
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-13-2022 03:05 PM)J-F Garnier Wrote:  
(04-13-2022 11:24 AM)Werner Wrote:  Seed 1 just happened to get very close to the correct result ;-)
ie. I got it, J-F ;-)

Let's introduce the Valentin-Bizarro conjecture: for any seed, there is at least one value for the number of samples that makes the sum as close as desired to the exact value. :-)

You both got it ! And that's not a conjecture, that's a theorem ... Smile

Next, My Comments.

Best regards.
V.

  
All My Articles & other Materials here:  Valentin Albillo's HP Collection
 
04-13-2022, 10:42 PM
Post: #29
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
 
Hi, all,

Thanks a lot to those of you who posted some comments, namely Albert Chan, EdS2, ijabbott, Massimo Gnerucci, rawi, Ren, rprosperi, vaklaff, Werner, and last but certainly not least, J-F Garnier, much appreciated. Now, for my own final Comments:

My Comments

1) On my solution's algorithm and poor-man's random number generator

As I've said, there's no way to implement a deterministic cubature algorithm for a non-trivial definite 6-D integral like the one featured here in just 79 bytes of RAM (including both program and required data storage) and with no subroutines, so I had to use the poor-man's RNG which I advocated (and sent to Richard Nelson for publication in PPC CJ, where it was indeed eventually published 40+ years ago) as the fastest & smallest one which was still capable of producing decent results.

As you can see in the above linked vintage letter (page 7), it can be used in the HP-41, HP-67/97 and any calculators featuring a built-in radians-to-degrees conversion, and is particularly useful for games and simulations, due to its speed and simplicity.

This RNG generates uniformly distributed pseudo-random numbers in the interval 0-1 and the seed can be any integer or real number except 0, Pi or its multiples. It essentially uses a multiplier equal to 180/Pi, and as you can see in the vintage letter, a trial test generating and analyzing 1,000 values produced a decent uniform distribution with mean = 4.4 and standard deviation = 2.9 where the theoretical values are 4.5 and 3.0, respectively, close enough. I also couldn't find its period back then after generating 3,000 values.
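
(A quick re-run of that kind of test, sketched in Python with binary floats standing in for the calculator's 10-digit BCD arithmetic, and assuming the statistic analyzed is the first decimal digit of each value; the vintage figures aren't expected to reproduce exactly.)

Code:
# Spirit of the vintage test: generate 1,000 values with s -> FRAC(s * 180/Pi)
# and look at the distribution of the first decimal digit of each value.
import math

m, s = 180 / math.pi, 1.0
digits = []
for _ in range(1000):
    s = (s * m) % 1.0
    digits.append(int(10 * s))            # first decimal digit, 0..9

mean = sum(digits) / len(digits)
var = sum((d - mean) ** 2 for d in digits) / len(digits)
print("mean %.2f (theory 4.5),  std dev %.2f (theory %.2f)"
      % (mean, var ** 0.5, (99 / 12) ** 0.5))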

J-F Garnier had the correct idea and almost succeeded in duplicating my program, as seen in his post, but he couldn't fit his program in the HP-10C and, more importantly, he didn't discover the possibility of using the ->DEG instruction available in the HP-10C, which essentially uses 180/Pi = 57.2957795+ as the multiplier, and so he used Pi = 3.14159265+ instead. The problem with such a low-valued multiplier is that it's very prone to ascending runs. For instance, if the seed ever becomes a low value such as 0.01, you'll get a long ascending run:
    0.0100 -> 0.0314 -> 0.0987 -> 0.3101 -> 0.9741
that is, 5 consecutive values in increasing order, which means a linear dependence on the previous value and damages the overall randomness, which is probably why J-F's program couldn't achieve a sufficiently accurate answer. If the seed eventually gets smaller than 0.01, you'll get even longer ascending runs (7 values, 10 values, ...).

My method has this problem too, but to a far lesser extent because the ->DEG multiplier (57.30+) is much bigger than J-F's 3.14+ and thus any ascending runs are far shorter. For instance, with the same seed:
    0.0100 -> 0.5730
and the next number generated may or may not be in ascending order. As a matter of fact, the worst that can happen to a method which uses just a multiplier (unlike linear congruential generators, which use multiplication, addition and modulus operations) is that if the seed ever becomes exactly 0, then the method will be stuck in an infinite loop, always producing 0, and, as I've explained above, whenever the seed becomes very small you'll get a long ascending run if the multiplier is also small, like J-F's Pi.
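
(To illustrate the point, a small Python sketch of the two multiplicative generators starting from that same 0.01 seed; binary floats only approximate the 10-digit BCD arithmetic of the real machines, and the helper name is mine.)

Code:
# Length of the initial ascending run of s -> FRAC(m*s), starting from seed 0.01,
# for J-F's multiplier Pi and for the ->DEG multiplier 180/Pi.
import math

def ascending_run(m, s=0.01, n=30):
    vals = [s]
    for _ in range(n):
        s = (s * m) % 1.0                 # multiply and keep the fractional part
        vals.append(s)
    k = 1
    while k < len(vals) and vals[k] > vals[k - 1]:
        k += 1                            # count the initial strictly increasing values
    return k, vals[:6]

for name, m in (("Pi     ", math.pi), ("180/Pi ", 180 / math.pi)):
    k, head = ascending_run(m)
    print(name, "run of", k, "increasing values:", ["%.4f" % v for v in head])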


2) On non-deterministic cubature methods

When numerically computing definite integrals in a single variable, it's quite common and efficient to use deterministic methods that evaluate the function being integrated at a number of well-chosen arguments (such as 16-point Gaussian quadrature, say), which for reasonably well-behaved integrands are both fast and accurate.

However, computing a double integral in two variables to the same level of accuracy would roughly need 16^2 = 256 integrand evaluations and, in general, computing a multiple integral in D variables will require grosso modo about 16^D evaluations, which for the 6-D integral featured here would be 16^6 ~ 17 million evaluations.

When the number of evaluations needed to compute the integral grows exponentially with the dimension D, then the integration method suffers from the so-called curse of dimensionality, which means that deterministic methods are utterly inefficient for high-dimensional integrals (such as the ones appearing in mathematical/computational finance, where integrals having hundreds (D > 100) and even thousands (D > 1000) of variables aren't uncommon) and in practice it is mandatory to resort to non-deterministic Monte Carlo (MC) methods, which do not suffer the curse of dimensionality but converge very slowly.

Matter of fact, simple MC methods converge as slowly as 1/√N (N being the number of evaluations), which means that to get one additional correct digit (10x accuracy) we must use 100x the number of evaluations, but we can resort instead to Quasi-Monte Carlo (QMC) methods, which attempt to speed up the convergence to 1/N (i.e. to increase the accuracy 10x you have to increase the number of evaluations 10x, not 100x) by using low-discrepancy sequences (aka quasi-random sequences) instead of sequences of (pseudo-)random numbers, as MC uses.

The gains in speed and accuracy that QMC methods afford over simple MC (let alone deterministic methods) for multidimensional integration are extremely noticeable, e.g. requiring as little as 500 integrand evaluations to compute a 25-D test integral within 0.01, as compared to 220,000 evaluations using MC.
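
(A toy illustration only, nothing like the 79-byte program: plain MC versus a Halton low-discrepancy sequence on an easy smooth 6-D integral whose value is known exactly. The test function and sample sizes are arbitrary choices of mine.)

Code:
# Toy comparison: Monte Carlo vs quasi-Monte Carlo (Halton sequence) on the
# 6-D integral of x1*x2*...*x6 over the unit cube, whose exact value is (1/2)^6.
import random

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

PRIMES = [2, 3, 5, 7, 11, 13]                     # one base per dimension
f = lambda x: x[0]*x[1]*x[2]*x[3]*x[4]*x[5]
exact = 0.5 ** 6

random.seed(1)
for n in (1000, 10000):
    mc  = sum(f([random.random() for _ in range(6)]) for _ in range(n)) / n
    qmc = sum(f([halton(i, b) for b in PRIMES]) for i in range(1, n + 1)) / n
    print(n, "MC error %.1e" % abs(mc - exact), "  QMC error %.1e" % abs(qmc - exact))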

Mind you, none of this would fit in 79 bytes at all, so I did the best I could given the circumstances !! Smile


3) On the gravitational force F between two cubical planets

In the case of spherical planets in contact (m1=1, m2=1, d=1, G=1), the gravitational force F is 1, but if the planets are cubical and in contact, we have F<1 because they have part of their mass in the corners, which are farther away.

If instead of being in contact the cubes were at a distance d > 1 or even d >> 1, then F would quickly approach 1/d^2 and the cubes would act more and more like spheres in that their mass could be considered as a point mass at their centers, like in the spherical case. Indeed, by the time the centers of the cubic planets are 4 or more units apart (d >= 4), they can be practically considered spherical as far as gravity is concerned.
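
(For a quick numerical check of that claim, here is a small Python sketch along the lines of Albert's uniform-grid program: each unit cube is cut into n^3 cells treated as point masses, and the resulting force is compared with the point-mass value 1/d^2. The cell count and the distances are my own choices.)

Code:
# Force between two unit cubes (unit masses, G=1) with centres d apart,
# each cut into n^3 equal cells treated as point masses at their centres.
from itertools import product

def cube_cube_force(d, n=6):
    h = 1.0 / n
    c = [-0.5 + (i + 0.5) * h for i in range(n)]   # cell-centre coordinates
    s = 0.0
    for x1, y1, z1, x2, y2, z2 in product(c, repeat=6):
        dx, dy, dz = d + x2 - x1, y2 - y1, z2 - z1
        s += dx / (dx*dx + dy*dy + dz*dz) ** 1.5   # x-component of the pair force
    return s * h ** 6                              # each cell pair has mass h^3 * h^3

for d in (1.0, 2.0, 4.0):
    print(d, "cubes: %.5f" % cube_cube_force(d), "  point masses: %.5f" % d ** -2)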


4) On real-life applications of computing the gravitational field of a cubical object

In the past few years, a number of spacecraft have been sent to visit diverse astronomical objects, from 1 Ceres (dwarf planet, 939 km mean diameter, visited by Dawn) to 101955 Bennu (asteroid, 490 m mean diameter, visited by OSIRIS-REx, which first orbited, then successfully touched down on its surface and later departed for Earth). Ceres is big enough to have a reasonably spherical shape, but Bennu is markedly "squarish":
    [Image: SRC11-11.jpg]

and other irregular objects also visited by spacecraft include two-lobed comet 67P/Churyumov–Gerasimenko, which is much further away from sphericity:
    [Image: SRC11-10b.jpg]

As another such instance, the asteroid 433 Eros (~17 km mean diameter) also has a highly irregular shape and was visited by the NEAR Shoemaker spacecraft, which was initially put on a relatively distant ~320-360 km elliptical orbit. At that distance, Eros' gravitational field could be treated as if the mass of the asteroid were concentrated at its center, but later, when NEAR was moved to a much closer orbit and eventually landed on the asteroid, it was necessary to compute a more accurate gravitational field, lest the spacecraft impact the asteroid at a potentially dangerous speed.

The problem is compounded if, as is usually the case, the object not only has an irregular shape but is also rotating. In the case of a cubic planet with a side equal to Earth's diameter, can an artificial satellite orbit it ? For starters, it will feel a stronger gravitational attraction near the cube's corners and there will be additional resonances as the planet rotates. Moreover, again due to the corners, the satellite won't follow a closed elliptical orbit but will instead be subject to rapid precession, and in general the orbit won't close at all.

If both the planet and the satellite rotate in the same direction, with the satellite orbiting a few planetary radii high, the differential rotation will perturb the orbit so much when the satellite is near the cube's corners that eventually it will collide with the planet, like this:
    [Image: SRC11-08b.jpg]

Thus, launching satellites into low orbits around non-spherical bodies requires careful calculation to overcome the perturbations, and there are a number of academic publications on the gravity field of a cube, with the resulting formulae being used in real life to compute the gravitational field of a body of irregular shape by superimposing on the object a 3D grid of cubic blocks, iteratively reducing their size until the desired accuracy is attained, like this (approximately, you get the idea) :
    [Image: SRC11-09b.jpg]

Such methods can be tested as I did here, by applying them to a case whose solution is known, i.e. a spherical planet, whose shape is approximated by iteratively filling up its volume with cubic blocks of diminishing size as per the algorithm, then integrating the gravitational force over all the cubic blocks. This can also be applied to non-homogeneous bodies by using cubic blocks small enough for them to be considered individually homogeneous, then having their individual densities vary as needed.
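
(A crude sketch of such a test in Python, with a single fixed block size instead of iterative refinement; the grid resolution and the distances are arbitrary. A homogeneous unit-radius sphere is filled with small cubic blocks treated as point masses, and the force they exert on a distant test mass is compared with the known point-mass value 1/d^2.)

Code:
# Approximate a homogeneous unit-radius sphere by a grid of small cubic blocks
# (each treated as a point mass at its centre) and compare the gravitational
# force it exerts on a unit test mass at distance d with the ideal 1/d^2.
def cube_block_force(d, n=20):
    h = 2.0 / n                                   # block edge; the grid spans [-1,1]^3
    total_m, fx = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -1 + (i + 0.5) * h
                y = -1 + (j + 0.5) * h
                z = -1 + (k + 0.5) * h
                if x*x + y*y + z*z <= 1.0:        # keep only blocks inside the sphere
                    m = h ** 3                    # block mass (density 1)
                    total_m += m
                    dx = d - x
                    r2 = dx*dx + y*y + z*z
                    fx += m * dx / r2 ** 1.5      # component toward the test mass
    return fx / total_m                           # normalise to unit total sphere mass

for d in (1.5, 2.0, 4.0):
    print(d, "blocks: %.6f" % cube_block_force(d), "  point mass: %.6f" % d ** -2)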

And just in case you'd think that cubical planets wouldn't be taken seriously by anyone ...  Smile
    [Image: SRC11-14b.jpg]



Well, that's more than enough !

If you want to comment on my OP, my Original Solution and/or my ersatz RNG, you're welcome to post it to this very thread. But for comments on general Monte Carlo or quasi-Monte Carlo methods or general space exploration, please create another thread so that this one may remain on-topic and focused on my OP. Thanks !

This will be my last SRC for a long while, hope you enjoyed it. Thanks for your interest and

Best regards.
V.

P.S.: A final question:  knowing that the Bizarro given name "Nitnelav" is unisex (like the English given names Morgan, Cameron or Hayden, say), what do you think ?  Is Rd. Albizarro a Bizarro-man or a Bizarro-woman ? Smile

  
All My Articles & other Materials here:  Valentin Albillo's HP Collection
 
04-14-2022, 05:51 AM
Post: #30
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-13-2022 10:42 PM)Valentin Albillo Wrote:   P.S.: A final question:  knowing that the Bizarro given name "Nitnelav" is unisex (like the English given names Morgan, Cameron or Hayden, say), what do you think ?  Is Rd. Albizarro a Bizarro-man or a Bizarro-woman ? Smile

Gender X

Greetings,
    Massimo

-+×÷ ↔ left is right and right is wrong
04-14-2022, 12:49 PM (This post was last modified: 04-15-2022 06:34 PM by Albert Chan.)
Post: #31
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-12-2022 06:37 PM)J-F Garnier Wrote:  my sequence [previous seed] PI * FRAC always gives 9 decimal places.

Amazingly, the above random generator is so bad that it is more likely to get the right answer Smile

With s = FRAC(DEG(s)), for (1+RND-RND) we get the expected triangular distribution.

>>> def r(): global s; s*=m; s-=int(s); return s
...
>>> m, s, t = 57.295779513082323, 1, [0]*20
>>> for i in range(100000): t[int(10*(1+r()-r()))] += 1
...
>>> for i in range(20): print (i+.5)/10, '*' * int(t[i]/200+0.5)
...
0.05 **
0.15 ********
0.25 ************
0.35 *****************
0.45 ***********************
0.55 ****************************
0.65 *********************************
0.75 **************************************
0.85 *******************************************
0.95 ************************************************
1.05 ***********************************************
1.15 ******************************************
1.25 **************************************
1.35 ********************************
1.45 ***************************
1.55 **********************
1.65 *****************
1.75 *************
1.85 ********
1.95 ***

With s = FRAC(PI*s), and the same seed of 1, we get this instead:

>>> m, s, t = 3.1415926535897931, 1, [0]*20
>>> for i in range(100000): t[int(10*(1+r()-r()))] += 1
...
>>> for i in range(20): print (i+.5)/10, '*' * int(t[i]/200+0.5)
...
0.05
0.15
0.25
0.35 *******************
0.45 ************************
0.55 ************************
0.65 **************************************
0.75 ****************************************************
0.85 ****************************************************
0.95 **************************************************************
1.05 *******************************************
1.15 ********************************************
1.25 *********************************************
1.35 **************************
1.45 *********************
1.55 *********************
1.65 ********
1.75
1.85 *********
1.95 ************

>>> t
[0, 0, 0, 3857, 4859, 4853, 7512, 10460, 10454, 12339, 8511, 8838, 9051, 5188, 4293, 4155, 1554, 0, 1715, 2361]
>>> sum(t[0:10]), sum(t[10:])
(54334, 45666)

This distribution is slightly more concentrated toward the singularity, but with zero chance of getting too close !

Update: I was wrong.

JFG's rand generator, s = FRAC( PI*s ): F converges to 0.941.
VA's rand generator, s = FRAC(DEG(s)): F converges to 0.925, almost dead-on (true F = 0.926).

The only problem is that it takes a lot of random numbers for F to converge.
04-14-2022, 02:48 PM
Post: #32
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-14-2022 12:49 PM)Albert Chan Wrote:  Amazingly, the above random generator is so bad that it is more likely to get the right answer Smile

>>> t
[0, 0, 0, 3857, 4859, 4853, 7512, 10460, 10454, 12339, 8511, 8838, 9051, 5188, 4293, 4155, 1554, 0, 1715, 2361]
>>> sum(t[0:10]), sum(t[10:])
(54334, 45666)

Ok, ok, my generator is not so good...

Comparing your results with an actual 10-digit BCD machine (HP-15C, actually the ultra fast 15C emulator from HP - just a few seconds for the 100,000 iterations):
>>> t
[ 0, 0, 0, 3857, 4859, 4853, 7512, 10460, 10454, 12339, 8511, 8838, 9051, 5188, 4293, 4155, 1554, 0, 1715, 2361]
( 0, 0, 0, 4064, 4910, 4930, 7568, 10477, 10290, 12258, 8228, 8821, 9377, 5204, 4274, 4051, 1543, 0, 1775, 2230 - real 15C)
>>> sum(t[0:10]), sum(t[10:])
( 54334, 45666 )
( 54497, 45503 - real 15C)
so no big difference due to the platform, and the same empty classes.

J-F
04-14-2022, 03:39 PM (This post was last modified: 04-14-2022 03:49 PM by Albert Chan.)
Post: #33
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-14-2022 02:48 PM)J-F Garnier Wrote:  so no big difference due to the platform, and the same empty classes.

Thanks for checking on a decimal machine.

We can explain the empty classes by finding min(1+RND-RND).
Here RND = (s = FP(PI*s)), and calculations are assumed to go from left to right.
In other words, the result of the 1st RND is the seed of the 2nd RND.

To minimize the expression, the 2nd RND must be close to 1 (without quite reaching it).
PI times the 1st RND cannot be close to 2 or 3 (before taking FP), because the 1st RND must also be small; the best case is a 1st RND just under 1/PI, which makes the 2nd RND just under 1.

min(1+RND-RND) = 1 + 1/PI - 1 = 1/PI ≈ 0.31831

So the 3 classes covering [0,0.1), [0.1,0.2), [0.2,0.3) must be empty.

---

By the same logic, with RND = (s = FP(DEG(s))), min(1+RND-RND) = PI/180 ≈ 0.01745.
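
(A quick empirical check of both bounds, sketched in Python with binary floats; as assumed above, the second RND is seeded by the first. The helper name is mine.)

Code:
# Empirical minimum of (1 + RND - RND) for both generators, where the
# result of the 1st RND is the seed of the 2nd RND.
import math

def min_of_sum(m, s=1.0, n=100000):
    lo = 2.0
    for _ in range(n):
        r1 = s = (s * m) % 1.0
        r2 = s = (s * m) % 1.0
        lo = min(lo, 1.0 + r1 - r2)
    return lo

print("PI     : min %.5f   bound 1/PI   = %.5f" % (min_of_sum(math.pi), 1 / math.pi))
print("180/PI : min %.5f   bound PI/180 = %.5f" % (min_of_sum(180 / math.pi), math.pi / 180))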
04-15-2022, 06:49 AM
Post: #34
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
Very nice twist in the tail with the carefully sculpted RNG, Valentin! And your comments on that topic, and the expected number of trials for various dimensions of problem, led to a very interesting read on The Unreasonable Effectiveness of Quasirandom Sequences. Thanks!
04-24-2022, 10:02 PM
Post: #35
RE: [VA] SRC #011 - April 1st, 2022 Bizarro Special
(04-04-2022 10:34 PM)Valentin Albillo Wrote:  .
Ren Wrote:I think a parallel can be seen in Star Trek: Lower Decks interactions with the Paklid.
    Thanks for the tip. I know nothing about the plethora of new ST series, I only watched the Original Series, Deep Space 9, Voyager and Enterprise (as well as all the movies), so no idea what Lower Decks is about or the aliens (I suppose they're aliens) you mention, sorry.

Just FYI, the Pakleds also appeared in one episode of TNG and 18 episodes of DS9. But Lower Decks is worth a watch (IMHO) if you get around to it.

— Ian Abbott