I really want to do an RNG test where I take about 10 million rolls from 1-100 and measure both the marginal probability of each number showing up (it'll probably be close enough to 1%) and the probability of each number showing up conditioned on what the previous number was. It's pretty trivial to code...if I knew how to do it easily.
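For what it's worth, a minimal sketch of that test in Python, using the stdlib random module (itself a Mersenne Twister) as a stand-in for the game's RNG -- you'd swap in the actual roll function. The chi-square cutoffs in the comments are approximate 95th percentiles, and the function names are mine, not anything from the game's source:

```python
import random

def run_test(roll, n=10_000_000, sides=100):
    """Tally marginal counts and first-order transition counts for n rolls,
    then return chi-square statistics against the uniform hypothesis."""
    counts = [0] * sides
    trans = [[0] * sides for _ in range(sides)]  # trans[prev][cur]
    prev = roll() - 1
    counts[prev] += 1
    for _ in range(n - 1):
        cur = roll() - 1
        counts[cur] += 1
        trans[prev][cur] += 1
        prev = cur

    # Marginal test: each face should appear about n/sides times (df = 99).
    exp = n / sides
    chi_marginal = sum((c - exp) ** 2 / exp for c in counts)

    # Transition test: given any previous roll, the next roll should be
    # uniform over that row's total (df = 100 * 99 = 9900).
    chi_transition = 0.0
    for row in trans:
        total = sum(row)
        if total:
            e = total / sides
            chi_transition += sum((c - e) ** 2 / e for c in row)
    return chi_marginal, chi_transition

if __name__ == "__main__":
    random.seed(0)
    chi_m, chi_t = run_test(lambda: random.randint(1, 100), n=1_000_000)
    # Rough 95% cutoffs: ~123 for df=99, ~10,130 for df=9900.
    print(f"marginal chi2 (df=99): {chi_m:.1f}")
    print(f"transition chi2 (df=9900): {chi_t:.1f}")
```

If the RNG really does have memory, the transition statistic should blow well past its cutoff even when the marginal one looks perfectly fine.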
But that isn't necessarily the only flaw. There are other forms of cyclic behavior. A very brief look at the RNG gives me pause: the root RNG is a linear congruential generator, which is an awful basis, though there's extra manipulation going on that may improve it. How good is it? Not sure. It claims a basis in Berkeley's random.c...but as of when? The web shows the 1995 Berkeley code in /dev/random.c, and suffice it to say, it looks NOTHING like the code in z-rand.c. Personally, I'd rather see something with proven-good performance: a fast version of the Mersenne Twister, which can readily be found as freeware, or some of the more recent members of the same family, such as WELL:
WELL's a tiny lil guy...
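To illustrate the kind of cyclic memory a bare LCG carries: with a power-of-two modulus, the low-order bits have tiny periods -- the lowest bit simply alternates forever. This sketch uses the old glibc rand()-style constants purely as an example; I'm not claiming these are the exact parameters in z-rand.c:

```python
def lcg_stream(seed, n, a=1103515245, c=12345, m=1 << 31):
    """Yield n successive states of a classic power-of-two-modulus LCG."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x

# With a and c both odd, the lowest bit of every successive state flips:
# period 2, pure deterministic memory.
low_bits = [x & 1 for x in lcg_stream(42, 16)]
print(low_bits)  # strictly alternating 0s and 1s
```

A good generator's low bits should be as unpredictable as its high bits, which is part of why the Mersenne Twister / WELL family is preferable as a foundation.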
I've definitely noticed too many cases of cluster behavior for me to think there isn't some memory in the RNG.